STUDENT MOBILITY IN POLICY AND POVERTY CONTEXT: TWO ESSAYS FROM
WASHINGTON
by
Stephani Lynn Wrabel
______________________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(EDUCATION)
May 2017
Copyright 2017 Stephani Lynn Wrabel
Acknowledgements
There are so many people who provided the support, guidance, advice, and even laughs
that helped me throughout this graduate school adventure, and especially during the dissertation
process. First and foremost, I need to thank the Office of Superintendent of Public Instruction in
Washington. Had they not provided me access to their extensive student enrollment data, this
dissertation would not have been possible.
I owe an abundance of appreciation to my dissertation committee who has helped shape
the course of my personal and professional trajectory. To my chair and faculty advisor, Morgan,
there are no words sufficient to express my gratitude for the opportunities, learning, and guidance you have provided me. Thank you for giving so much of yourself to ensure
my success during and beyond my tenure at USC. Because of your support and efforts, I now
possess the ability to conduct policy-relevant research that will address critical issues in
education. You ensured that I achieved the goal that brought me to this PhD program, and for
that I am forever indebted to you. Katherine Strunk, you were an invaluable member of my
committee. You removed roadblocks, introduced new perspectives, and always pushed me to
advance the quality of my research. Your insightful challenges were always accompanied by the
support and encouragement necessary to make them reality. I hope to pay forward the
graciousness you have shown me. Ron Astor, thank you for welcoming me into your research
family and being a challenger, cheerleader, and champion of my work ever since. As a second
year PhD student, I could never have known just how much I would learn and grow, both
professionally and personally, from working with you and your team. I appreciate, more than
you know, all that you have done to make certain my success.
I have been lucky to share this research process with many colleagues during my time at
USC—from fellow PhD students to researchers across the country, and both the Building
Capacity and Welcoming Practices teams. I consider myself fortunate to have shared in these
experiences with you all. Thank you for being part of this academic journey with me.
The most unexpected benefit of joining the USC PhD program is the lifelong friendships
I have gained. The Sunday Night Dinner crew, some of the most brilliant women I have come to
know, offered sustenance, laughs, and an ear or shoulder when needed. You are exemplars of
human kindness and how to champion fellow women while simultaneously striving to reach new
heights of your own. My fellow 9th floor research assistants provided endless camaraderie, a few
too many coffee breaks, shortcuts and fun Stata codes, and have become a network of peers for
whom I will forever be grateful. A few individuals have logged their fair share of mileage with
me traveling to conferences, during early morning runs, and even for some fun free-time
adventures. I have such fond memories from these last few years and I look forward to the new
memories we create.
There is a crew of people to whom I had to say a temporary goodbye for this academic
pursuit. They were, and remain today, some of the finest people this world has to offer. Brooke,
Katy, Eden, Althea, and Stewart, each of you encouraged me to chase this dream (with the caveat
that I return east once I completed the degree). Thank you for making it hard to leave, for
encouraging me throughout this process, and for being some of the many reasons I look forward
to coming home.
Finally, I recognize my family. Owen, my brother and a powerhouse of inspiring
speeches, thank you for sharing your words with me at times when I was at a loss for words of
my own. Most importantly, Mom and Dad, thank you for feeding my curiosity and fostering my
love of learning. You have encouraged and supported me every time I decided I was ready for a
new challenge. Your unshakeable faith in my capacity to achieve the next dream has played a
critical role in my success. Thank you for everything.
Table of Contents
Acknowledgements………………………………………………………………………………..ii
Table of Contents……………………………………………...……………………………….….v
List of Appendices ………………………………………………………………………………vii
List of Tables …………………………………………………………………………………...viii
List of Figures …………………………………………………………………………………….x
Essay 1: Making All Students Count: Exploring the Inclusion of Mobile Students in
Accountability………………………………………………………………….………….1
1.1 Introduction …………………………………………………………………………...1
1.2 Conceptual Framework …………………………….....................................................4
1.3 The Full Academic Year Regulation …………………………………………………7
1.4 Including Mobile Students in Performance Calculations …………………………...12
1.5 Evaluating the Inclusion of Mobile Students ………………………………………..17
1.6 Analytic Methods ……………………………………………………………………21
1.7 Results …………………………………………………………………………….…24
1.8 Discussion …………………………………………………………………………...37
1.9 Policy Recommendations ……………………………………………………………40
1.10 Limitations …………………………………………………………………………43
Essay 2: Who, When, and Where: The Mobility of Students across School Poverty
Contexts ………………………………………………………………………………....66
2.1 Introduction ………………………………………………………………………….66
2.2 Mobility Literatures …………………………………………………………………68
2.3 Data ………………………………………………………………………………….75
2.4 Analytic Strategy …….……………………………………………………………...80
2.5 Results ……………………………………………………………………………….85
2.6 Discussion and Implications …………………………………………...……………95
2.7 Limitations …………………………………………………………………………..99
References…………………………………………………………………………...………… 110
List of Appendices
Appendix 1.A State Accountability, Full Academic Year Definitions…………………………...45
Appendix 1.B Washington Testing Windows 2010-2014 ……………………………………….47
Appendix 1.C Important Dates in Washington…………………………………………………..48
Appendix 1.D Federal Accountability Practices and Measures…………………………….…..50
Appendix 1.E Washington Accountability Rules ………..………………………………………53
Appendix 1.F Definition of Included, Excluded, and Ineligible Students by Inclusion Practice 54
List of Tables
Table 1.1 Student Demographics……………………………………………………………….. 55
Table 1.2 School Demographics………………………………………………………………... 55
Table 1.3 Number and Proportion of Students Classified as
Included, Excluded, and Ineligible.................................................................................. 56
Table 1.4a Differences between Included and Excluded Students, Traditional ……………….. 57
Table 1.4b Differences between Included and Excluded Students, Last Year …………………. 58
Table 1.5a Differences between Included and Excluded Students, January Practice………….. 59
Table 1.5b Differences between Included and Excluded Students, Testing Window Instruction
Practice………………………………………...……………………………………….. 60
Table 1.5c Differences between Included and Excluded Students, All Schools and All Schools
Instruction Practice…………………………………………………………………...... 61
Table 1.6 Average Change in Proficiency Rate Compared to Performance in the Traditional
Practice (in percentage points), by Practice and Subgroup……………………………..62
Table 1.7 Average Change in Proficiency Rate Over Time (in percentage points), by Practice
and Subgroup………………………………………………………………..………….. 63
Table 1.8 Mean and Standard Deviation of Value-Added Measures, by Subject, Years of Lagged
Achievement, and Inclusion Practice……………………………………..……………...64
Table 1.9 Average Change in School Value Added Relative to Traditional Practice, by Inclusion
Practice…………………………………………………………………………………..64
Table 1.10 Proportion of Traditional Practice Low-Performing Schools Identified as Low-
Performing with Alternative Inclusion Practices…………..………………………...….65
Table 2.1 Student Characteristics, student-year observations……..…………………......……101
Table 2.2 School Enrollment Demographics…………………………………………………...102
Table 2.3 School Mobility Rates……………………………………………………….…….…103
Table 2.4 Relative Risk Ratio of Moving to a Q1, Q2, or Q3 Poverty School (Relative to a Q4
Poverty School)…………………………………………………………………………104
List of Figures
Figure 2.1 Unadjusted Distributions of School Mobility Rates, by Poverty-Quartiles………...107
Figure 2.2 Arrival of New Students, by Week after September 1st (Proportion)……………..…108
Figure 2.3 Arrival of New Students, by Week after September 1st (Count)…………………..…109
Essay 1: Making All Students Count: Exploring the Inclusion of Mobile Students in
Accountability
A driving force behind the introduction of standards-based accountability in the US is the
public interest in improving both the educational opportunities offered to and the outcomes
attained by all students (Kim & Sunderman, 2005; Porter, Linn, & Trimble, 2005). In an effort to
achieve these goals, the US Department of Education (DoED) leveraged the 2001 reauthorization
of the Elementary and Secondary Education Act (ESEA), known as the No Child Left Behind
Act of 2001 (NCLB), and federal education monies (Title I funds
1
) to ensure states implemented
federally-driven accountability mandates. Since NCLB, the DoED has utilized similar practices
in Race to the Top Grants (RTTT) (2009) and ESEA Flexibility Waivers (2011) to incentivize
the advancement of accountability practices beyond those of NCLB. The successful passage of
the 2015 Every Student Succeeds Act (ESSA) ensures that standards-based accountability will
continue to be a primary mechanism employed by the DoED and state education agencies to
improve educational outcomes for the nation's students.

[1] Title I funds are the allocations set aside for schools serving large proportions of low-income students.
Accountability policy is designed through a process of negotiation between decision
makers at multiple levels of the government. The DoED establishes the federal framework of
accountability that identifies broad expectations and regulations for states to follow.
Policymakers and practitioners in each state then create a state-specific system of accountability
that is responsive to the federal mandates and meets local needs. Once a state’s proposed system
of accountability is approved by the DoED, school-level personnel (i.e., teachers and
administrators) must ensure students are provided with the curriculum and instruction necessary
to meet the annual performance goals identified by policymakers.
Policymakers, in an attempt to ensure that schools focused on all students and not just the highest-performing students in the school, incorporated participation regulations in the
accountability policy framework that were not negotiable by state policymakers. The
participation rate requirements serve to ensure that schools 1) focus on all students and 2) do not make performance appear higher by excluding the lowest-performing students from testing
(Forte, 2010). Schools across the country were responsive to this regulation; very few schools
failed accountability regulations due to participation rates after the first few years of
implementation (Davidson, Reback, Rockoff, & Schwartz, 2015).
This participation requirement within the policy creates a public perception that schools
are incorporating the assessment results of 95% of students in their performance measures. In
reality, the Full Academic Year (FAY) regulation in federal accountability policy allows schools
to exclude a subset of students each year without requiring schools to publicly acknowledge the
rate or impact of this exclusionary practice. The FAY regulation has the potential, therefore, to
undermine the participation regulations. The FAY regulation establishes a cutoff date for when
students must be enrolled in a school in order to be included in that school’s annual performance
measures. Unlike the requirement to publicly report adherence to the 95% participation rate,
there is no way of knowing, in current data systems or through public documents, the proportion
of a school’s enrollment excluded due to the FAY requirement (and therefore not included in the
school’s annual performance measures (e.g., proficiency rates)).
Descriptive statistics from Ohio provide the only available evidence on the proportion of
students excluded from accountability measures, but this research is limited to district-level
reporting. As high as 22% of a district’s enrollment in Ohio was excluded from district-level
accountability measures in 2011 (Community Research Partners, 2012).[2] Under NCLB, any student who switched schools within a district after the FAY cutoff date was incorporated into district-level performance measures. Thus, the 22% of district-level enrollment excluded represents students who switch across districts when moving. Given that the majority of student mobility occurs within-district (Eadie, Eisner, Miller, & Wolf, 2013; Hanushek et al., 2004a; Offenberg, 2004), it is certain that school-level exclusion rates exceed 22% of enrollment in the most extreme cases.

[2] This report suggests that 4.5% of students across the state are excluded from district performance measures. Ohio uses an October 31st cutoff for inclusion.
The ostensible purposes of the FAY cutoff point are to 1) ensure schools are not held
responsible for performance of an individual student that is attributable to time the student spent
enrolled in a different school and 2) provide schools with relief from potential sanctions they
might receive if responsible for large numbers of late-arriving students. However, there is no
research that substantiates the concerns the FAY regulation is intended to address. In the limited
instances in which the FAY regulation is acknowledged in research, the practice is either
questioned for how it creates variation in exclusion practices across states (e.g., Davidson et al.,
2015) or researchers recommend finding ways to incorporate late-enrolling (i.e., mobile) students
into accountability systems (Offenberg, 2004; Özek, 2012) because the current regulation creates
incentives to overlook the needs of individuals who enroll after the FAY cutoff date. This study attempts to address that evidence gap and is guided by three research questions:
1. Under current systems of accountability, how many students are excluded from
accountability measures each year and how do excluded students differ from students
who are included?
2. How does instituting alternative FAY practices alter the number of included students and
any differences between students who are and are not included?
3. To what extent does school performance change when instituting alternative FAY
practices, relative to current practice?
The research presented herein is the first to provide empirical evidence on the exclusion practices
created by the FAY regulation as well as to test the necessity of the regulation in accountability
policy. Moreover, the evidence provided from this research will offer states alternatives to
current FAY practices that consider the needs of late-enrolling students while simultaneously
reducing the presumed penalty for schools serving large proportions of them. In the next section,
I present the conceptual framework that guides this work.
Conceptual Framework
I use the principal-agent problem to frame the research on the FAY accountability
regulation. At the most basic level, principal-agent theory relies on the actions of two actors: the
principal and the agent (Holmstrom & Milgrom, 1987, 1991). Principals are the supervisors or "bosses" while agents are the subordinates hired to carry out the principal's interests (Manna,
2006). The principal-agent relationship rests on two assumptions. First, agents do not share the
same objectives or goals as principals and, second, the agent possesses knowledge to which the
principal does not have access, creating information asymmetry between the pairing (Holmstrom
& Milgrom, 1991; Waterman & Meier, 1998). These competing goals and the advantage of
additional information may lead the agent to act in ways that are counter to the interests of the
principal. To address this principal-agent problem, a contract is developed that specifies the
expectations of the principal and what an agent receives in exchange for the efforts made
towards reaching the principal’s goals.
If a principal were capable of directly observing the agent’s effort, the principal could
directly reward or pay for that effort (Holmstrom & Milgrom, 1991; Prendergast, 1999).
However, the practice of education is not captured in a simple principal-agent model where the
agent is tasked with a single responsibility. Educating students requires teachers and school
leaders to address numerous responsibilities and tasks over the course of an academic year.
Holmstrom and Milgrom (1991) suggest that in the presence of multiple tasks, incentives must
serve not only as a motivating factor but also help direct the allocation of agent resources to each
of their various responsibilities. Thus, a principal uses the contract to outline the responsibilities
of the agent while incorporating incentives that motivate the agent to act diligently in the
principal’s interest (Ladd & Zelli, 2002; Prendergast, 1999). The two constraints on any contract
are that the agent must find the task adequately desirable to take on the responsibility and the
rewards must be sufficient for the amount of effort required to meet the desired goals (Ladd &
Zelli, 2002).
In accountability policy, for example, the contract provides the agent with funds that can
be utilized to support the operation of the schools for which the agent is responsible. In exchange
for that money, the agent also accepts the responsibility of working towards the goals the
principal has identified: improving achievement of all students and closing achievement gaps
between key subgroups of students. The incentive for agents is the threat of sanctions if
performance expectations are not met. Specifically, schools not meeting annual performance
targets are labeled low-performing schools and required to implement a set of interventions
intended to address shortcomings in performance.[3] The assumption in the principal-agent
relationship is that the incentives written into the policy (i.e., avoiding negative performance
labels and associated sanctions) induce schools to prioritize efforts aligned with achieving the
intended goals of the policy. The policy structure of standards-based accountability also assumes
schools receive essential and sufficient information from this annual performance assessment to identify areas for improvement and can respond in kind to address performance needs in subsequent years.

[3] The sanctions identified vary by state. Under NCLB, sanctions increase in severity with each additional year of below-expectation performance. Under ESEA Flexibility, schools in the lowest 5% and the next lowest 10% were identified as low-performing schools in the state and required to implement reform programs.
To be sure, federally driven accountability has improved the educational attainment of
students, particularly students from historically marginalized backgrounds (Dee & Jacob, 2011;
Wong, Cook, & Steiner, 2015). However, research has also identified the unintended, negative
behaviors brought about by the design of the policy. Incentives written into any contract can also
lead to unintended, negative behaviors by agents, particularly when the behaviors of agents are
not directly observable (Holmstrom, 1999). In principal-agent theory, these unintended behaviors
are referred to as moral hazards. In some cases moral hazards, or the induction of behaviors by
an agent that are inappropriate from the principal’s perspective, might go undetected by the
principal. In essence, the agent appears to satisfy the goals of the contract while subverting the
principal’s intentions. A substantial body of research on accountability systems has identified the
ways in which the incentives of the policy have also induced unintended, negative behaviors by
the adults enacting the policy. Examples of negative, unintended responses to the accountability
incentives include shifting the instructional focus to those students most likely to help a school
meet annual performance targets in a given year (Booher-Jennings, 2005; Krieg, 2008; Neal &
Schanzenbach, 2010) or outright cheating by school personnel to avoid performance sanctions
(Maxwell, 2013).
A subset of these unintended, negative behaviors of accountability design revolve around
excluding students from performance measures in an effort to raise the school’s publicly reported
performance. Such gaming behaviors include the unnecessary or inflated labeling of students as
requiring special education services (Cullen & Reback, 2006; Heilig & Darling-Hammond,
2008; Jacob, 2005; Jennings & Beveridge, 2009), enacting disciplinary sanctions around the
testing window (Figlio & Getzler, 2006; Jacob, 2005), or retaining low-achieving students in
untested grades (Jacob, 2005). This shifting of educators’ focus takes resources away from
students who were either highest performing and assumed to meet the performance standard or
those students who were lowest performing yet considered unlikely to meet performance targets.
These types of behaviors were not intended, but they were motivated by the structure of the
policy.
I argue that the FAY regulation in accountability policy also incentivizes unintended
behaviors in two principal-agent relationships established in accountability policy. Those
relationships are between the DoED and state departments of education as well as the
relationship between state departments of education and the schools they oversee. In the next
section, I describe the FAY regulation in full, present the unintended behaviors incentivized by
the FAY regulation, and demonstrate two cases of how states responded to the FAY regulation in
the NCLB legislation. I conclude the next section by documenting the potential unintended
behaviors at the school-level that the FAY regulation creates.
The Full Academic Year Regulation
Every state is required to identify a FAY enrollment cutoff date: the deadline by which a
student must be enrolled in a school in order to be included in that school’s annual performance.
In the original NCLB legislation, states were permitted to exclude any student who had not
attended the school for a FAY (No Child Left Behind Act of 2001, 2001). An amendment to the
original legislation subsequently required states to exclude students from school performance
calculations who were not FAY enrollments (Making adequate yearly progress, 2003). There is
only one stipulation on how a state sets its FAY regulation: a state may not set a FAY cutoff date
that requires a student to be enrolled for 365 calendar days or more (Blagojevich, Ruiz, & Dunn,
2005). With this stipulation in mind, policymakers identified the FAY date appropriate for their
state context, though that FAY cutoff designation did require approval from the DoED. Nineteen
(19) states set September 30th/October 1st as their FAY cutoff date. In such states, students must
be enrolled in a school as of the chosen date through the end of the annual testing period in order
to be incorporated into school accountability measures. Five states: Hawaii, Illinois, Iowa, New
Mexico, and Wisconsin, require a student to have been enrolled in the same school during the
prior academic year, typically by the prior year’s testing period, through test administration of
the current academic year.
Whereas some states select specific calendar-date cutoffs, others identify a specific
number or proportion of instructional days a student must be present to be considered a FAY
student. For example, Kentucky identifies any student to be a FAY enrollment if she is present
for any 100 days of the academic year.[4] The District of Columbia mandates a student must be present for 85% of the instructional days between the Fall enrollment date (typically October 1st) and the beginning of annual testing.

[4] The 100 days are based on the number of days a student is enrolled in a school. Absences, both excused and unexcused, do not factor into the identification strategy. A student may be enrolled for 101 days and absent for 12 of those days. The student is still considered enrolled for more than the 100-day minimum and would be labeled an FAY student.
The DoED shows that Michigan requires students to be present on “count days” in
September and February, but available documentation does not clarify whether the enrollment
must be continuous between those days. Some states relax the identification of included students
by providing alternative definitions to “continuous enrollment.” For example, Washington
defines continuous enrollment as having no enrollment gaps that exceed 29 calendar days.
Oklahoma restricts the enrollment gap to 10 consecutive days.[5] The variation in the state-determined FAY cutoffs leads to potential state-to-state differences in which and how many students are included in school performance measures. In other words, mobile students receive different levels of inclusion and recognition in schools based upon where they live or to where they move. For a full listing of state FAY rules as registered with the DoED, see Appendix 1.A.

[5] Documentation does not specify whether this is calendar or business days.
The DoED has extended its regulation of the FAY cutoff in the new ESSA legislation and identified a minimum length of enrollment for students to be incorporated into accountability systems.[6] Now referred to as partially-enrolled students rather than FAY students, these students can be included if they have been enrolled for at least 50% of the year. It appears as though every state is already in compliance with this ESSA regulation if using the current FAY cutoff practice. However, ESSA provides the opportunity for state policymakers to reconsider how mobile students are incorporated into accountability measures and potentially move away from the more conservative exclusion practices currently in place.[7]

[6] To my knowledge, this decision was not based on research and no justification was provided for the rationale as part of the ESSA legislation.

[7] Conservative in this context means excluding more students rather than fewer, at a presumed benefit to schools, by setting earlier FAY definitions.
Incentives of the FAY Regulation
State-level incentives. Whereas the intent of the FAY regulation is to act in a manner fair
to schools, both states and schools can respond to the policy in ways that may not align with the
good intentions of the regulation. If states are particularly concerned about the performance of
their schools or the requirement of including mobile students into performance measures, they
may establish conservative FAY cutoff dates to exclude larger numbers of students from
accountability measures (Davidson et al., 2015). The historic practices of states on the FAY
regulation are not well-documented in public education records. However, documents from both
Illinois and California show a move towards more conservative practices in response to the
NCLB legislation. Before NCLB, Illinois identified the FAY cutoff date as September 30th of the
current academic year. The state changed the FAY designation to May 1 of the previous
academic year starting in 2005-2006 (i.e., following the NCLB allowance of excluding late-
enrolling students). Since then, students in Illinois must be enrolled for close to a full calendar
year to be part of the school accountability system. Illinois policymakers were explicit that they
did not overextend the FAY cutoff beyond the federal limit (Blagojevich et al., 2005) but they
essentially used the most conservative regulation allowable.
California introduced a state accountability system prior to the passage of NCLB. In this
system, schools were accountable for the performance of any mobile student who had moved
within-district that year. Following the introduction of NCLB, the state elected to align their state
practice with the federal legislation (SB 722, June 2003). Subsequently, schools in California
excluded any mobile student who arrived in the school after October 1st from accountability,
regardless of from where that student moved. Thus, California began excluding larger
proportions of mobile students under NCLB policy than previously practiced within the state.
While the case of California may demonstrate interest in consistency between state and federal
policy more than a response to incentives in the legislation, this case shows that states do adjust
practices when the DoED creates new policy regulations. And while the shifts in state regulation
are allowable by law, these changes are creating practices that were unintended by the
policymakers designing the federal regulations.
School and district-level incentives. Schools may also be responsive to the FAY
regulation in ways unintended by both the DoED and the state departments of education. In the
limited research that has explored the FAY cutoff and the subsequent exclusion of mobile
students from accountability performance measures, Özek (2012) found that excluding late-
arriving students negatively impacts the achievement of those students in the year of exclusion.
Specifically, this study suggests that students who arrive in the lowest performing schools in
Florida just after the FAY cutoff demonstrated significantly lower performance in the year of
exclusion compared to peers who enrolled in a low-performing school just before the FAY
deadline. In schools that were not on the cusp of being identified as low-performing, the
excluded students did not experience these differences in achievement. This effect may be
explained by two separate but equally concerning practices. First, schools may potentially
manipulate enrollment dates to ensure lower performing students are not officially enrolled until
after the FAY cutoff date. Second, schools may focus instruction and resources on those students
they know will be included in accountability measures at the expense of the late-enrolling
students. Regardless of the mechanism that creates the disparities in achievement, Özek’s
findings suggest that schools respond in unintended ways to the FAY regulation.
Unfortunately, analyzing gaming or strategic behaviors of schools with respect to the
FAY regulation is relatively difficult. In addition to the two unintended, negative behaviors
identified by Özek, the FAY regulation may create an additional incentive that is not easily
identified or observed. Namely, school personnel may encourage low-performing students to
switch schools after the FAY regulation but prior to the testing period. This out-mobility of
students would neither affect the performance scores of the sending (influencing) school nor
would the late-arriving student be included in performance scores of the receiving schools. Such
behaviors by school personnel would not be easily identified through data, as they can occur at any point between the FAY cutoff and testing and would likely look similar to the mobility of students exiting the school of their own accord.
In accountability policy, there is an understanding that negative unintended or unexpected
consequences may arise, but the expectation is that these negatives are always outweighed by the
benefits brought about by the policy (Fuhrman, 2004). However, there is no research
substantiating the need for the FAY regulation. Thus, we cannot definitively say that the FAY
regulation is necessary and that it justifies the current exclusion practices that prior research
suggests are harmful to the excluded students (Özek, 2012).
Including Mobile Students in School Performance Calculations
The current FAY regulations create the potential for unintended, negative behaviors of
the state policymakers and adult personnel within schools. As Özek (2012) suggests, these
behaviors may detract from the learning and opportunities provided to excluded students.
Researchers who study mobile students and accountability policy suggest the need to incorporate
mobile students into accountability measures to ensure their educational outcomes are a priority
for schools (Eadie, Eisner, Miller, & Wolf, 2013; Offenberg, 2004; Rennie Center for Education
Research & Policy, 2011; Rumberger, 2003). The literature on teacher evaluation systems and
the measurement of growth and value-added scores have already begun to address the inclusion
of mobile students (Ehlert, Koedel, Parsons, & Podgursky, 2014; SAS EVAAS, 2015). School-
level accountability research has not yet made these advances.
In order to advance the thinking of policymakers and practitioners on the inclusion of
mobile students in accountability systems, I present five potential alternatives to the current FAY
practices and identify how each alternative addresses the three unintended behaviors the current
regulations create (i.e., manipulating enrollment records, shifting resources, influencing out-
mobility). In the end, I seek to identify a practice that incorporates the largest number of mobile
students each year, reducing the incentives for adults to behave in unintended ways, while
remaining fair to the schools serving mobile students. These practices shift FAY cutoff dates to
test the most liberal cutoff currently allowed, remove the FAY regulation entirely, or shift the
weight given to mobile students in accountability measures while still incorporating their
performance.
January
The first alternative FAY is the January practice. Similar to the most common FAY practice of using October 1st, this practice is reliant on a date-specific cutoff. However, it would extend the deadline for late enrollment to January 15th. Under this practice, a student must be continuously enrolled from January 15th through the end of the testing period to be included in school performance measures. More than half of mobile students in Washington arrive in their new school prior to this date.[8] Additionally, January 15th is a close approximation to the most inclusive a state could be under ESSA's partially-enrolled student classification and would demonstrate the potential of realistic short-term changes states could make to the FAY regulation. With regard to incentives, this alternative does not change the incentives identified previously. The January practice simply shifts the date around which schools might manipulate enrollment records, shift the allocation of resources, or influence the out-mobility of the lowest performing students prior to the testing window. Thus, the practice is more inclusive but may not get away from creating unintended behaviors at the school level.

[8] Data from the state of Washington are used in this study. Details on those data are presented in the next section. The mobility statistic is derived from analysis within this dissertation.
Testing Window
The second FAY alternative removes the calendar-based cutoff date and uses the annual
testing window to determine eligibility. In this case, a student is only assigned to the school in
which she is either 1) tested or 2) enrolled for the duration of the testing window when no test
score is on file.[9] By removing the pre-testing cutoff, this practice addresses the incentive regarding how
schools would treat late-arriving students. All students enrolling prior to the testing window
would be incorporated in accountability systems, and thus, there would be no rationale for
schools to allocate instructional resources only to the subset of the student body present by the
traditional FAY cutoff date. However, both manipulation of enrollment records around the
testing window and the incentive to push low-performing students out of the school remain in
place under this alternative.
Testing Window Instruction
Using the recommendation of Offenberg (2004) and Özek (2012), the Testing Window
Instruction practice builds upon the Testing Window practice, but adjusts each student’s
performance by the proportion of instructional time she was enrolled in the school prior to
annual testing. The purpose of this adjustment is to maintain the current policy’s intent to not
hold schools accountable for performance that should be attributed to time spent in other schools.
The Testing Window Instruction practice should not incentivize the manipulation of enrollment
records because delaying enrollment by one day will not substantially change the weight given to
any individual student in the accountability system. Second, there is potential that the practice
will reduce the incentive to reallocate resources as all late-arriving students would be included.
However, there is potential that because teachers know the order in which their students arrive,
they will allocate more time to those students who have been present for the majority or entirety
of the academic year. Finally, the incentive to push out low-performing students would remain in place, as students who exit schools are not attributed to sending schools in accountability.
[9] A student who is enrolled for the duration of a testing window but does not have a test score would still count in the calculation of performance measures. This is standard in current accountability practices and intended to disincentivize schools from encouraging low-performing students to stay home on testing days (through disciplinary action or other practices).
While the Testing Window Instruction practice addresses some of the unintended
behaviors incentivized by the current FAY regulations, the practice also creates an added benefit
not present when using date-based cutoffs. Testing dates are set by grade-level (see Appendix
1.B) but the FAY regulation cutoff is not. Thus, elementary schools get more days with their
late-enrolling students prior to testing than high schools do in current FAY regulations. Because
an instruction adjustment takes into account the different testing dates, the new practice would
allow for fair application of the regulation across all grades.
All Schools
Each of the previous alternative practices has failed to address the incentive to push out low-performing students. One way to remove that incentive is to include any student who was
ever enrolled in the school in that school’s performance measures. A student who enrolls for one
day would count the same as a student who was enrolled for the entire year. The group of
students that would get added in to performance measures under this practice is students who
have exited the school prior to the testing window and have a valid test score on file. In this
practice, schools would not benefit when manipulating enrollment records because even one day
of enrollment means the student is included in that school’s calculation. The incentive to
reallocate instructional resources to stable students would be reduced because all students are
included. And this FAY alternative should remove the push-out motivation because once a
student is present for one day, she is counted in all performance measures. Unfortunately, this
practice may be the least fair to schools that serve large proportions of mobile students, because
schools with high mobility would be responsible for the performance of larger numbers of
students than more stable schools would be (McEachin & Polikoff, 2012). As such, state
policymakers may find it difficult to garner support for this particular regulation when incorporating it into future accountability systems.
All Schools Instruction
To maintain a reduced incentive to push-out low-performing students and potentially
allow for a more fair system for schools, the All Schools Instruction practice adjusts the
performance weight assigned to students in the All Schools practice. Similar to the adjustment
made in the Testing Window Instruction practice, a student’s performance would be adjusted by
the amount of instructional time she was enrolled in a particular school prior to the first day of
testing. Again, a student would count using this FAY practice if she was present for at least one
day even if not enrolled during the testing window, but the student’s performance would not
carry the same amount of weight as an individual who was enrolled in the school for the entire
year. Moreover, every school that served the student during the year would be accountable for
that proportion of the student’s performance. A student could count at 45% of performance in
one school and at 55% of her performance in a second school in that same year.
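To make this weighting concrete, the short sketch below computes an instruction-weighted proficiency rate for one school. It is an illustration under assumed, hypothetical data: each student contributes her instruction share as a weight, so a mid-year mover splits her contribution across the schools she attended.

```python
# Hypothetical (instruction_share, proficient) pairs for students a school served.
# A mover who spent 45% of instructional time here contributes weight 0.45;
# her remaining 0.55 would be attributed to the other school she attended that year.
students = [
    (1.00, True),   # stable student, proficient
    (1.00, False),  # stable student, not proficient
    (0.45, True),   # mid-year arrival, proficient
]

weighted_proficient = sum(share for share, proficient in students if proficient)
total_weight = sum(share for share, _ in students)
proficiency_rate = weighted_proficient / total_weight  # (1.0 + 0.45) / 2.45 ≈ 0.59
```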
The manipulation of student records by a handful of days is not likely to occur in this
practice because delaying enrollment by one or two days will not substantially change the weight
given to any individual student in the accountability system. Similarly, because a school does not
know how long a student will be present for or the quality of instruction received while at other
schools attended in the year, a school is incentivized to focus instructional resources on both
mobile and stable students in preparation for annual assessments. By contrast, this practice does
not entirely remove the push-out incentive. Schools may still be interested in influencing the out-
mobility of the lowest performing students, as those students would carry less weight in the
school’s performance measures. There may, in fact, be no practice of inclusion that completely
addresses the push-out incentive that is also fair to schools. However, empirical evidence, such
as that presented below, could help inform judgments about the fairness of different practices
(i.e., practices that appear “unfair” to schools serving more mobile students may not be unfair
based on their actual impacts on school performance judgments).
The proposed alternatives are not an exhaustive list of ways to consider mobile students
in accountability measures. For example, mobile students could be identified as a separate
accountability subgroup or included only if a within-district school change is made. Using a
separate subgroup of mobile students would not drastically increase the number of mobile
students included compared to these proposed alternatives. Using the October 1st (Traditional)
cutoff, less than 2% of all schools would be held accountable for a stand-alone, mobile student
subgroup (because there are not many schools that have large enough numbers of mobile
students to qualify given minimum subgroup size requirements). Thus, it may not be an effective
way of including mobile students in future iterations of accountability policy.
Other Excluded Students
A few other groups of students are excluded from all performance measures of
accountability for reasons that are unrelated to the FAY regulation. These include being
identified as New, Non-English Proficient Students or receiving a medical exemption from
testing. Approximately 1% of unique students in tested grades are classified as exempt each year
based on these additional classification rules. I remove these individuals from the analyses
presented in this paper because evaluating these exclusionary practices is beyond the scope of
this research.
Evaluating the Inclusion of Mobile Students
This research relies on an administrative panel of data from the Washington Office of
Superintendent of Public Instruction (WA OSPI) for academic years 2009-2010 (2010) through
2013-2014 (2014). Student-level data files include identifiers for a student’s race/ethnicity, grade
level, eligibility for free or reduced price lunch (FRLE), English language learner (ELL) status,
and whether the student has a disability (SWD). Standardized test scores are available for
reading and math.[10]
Along with test scores, the state provides the test type, the test attempt status
(e.g., tested, unexcused absent, previously passed), and an indicator is provided that identifies
whether a student has met the proficiency target for her grade in each subject. In the student
enrollment files, students are linked to each school and the associated school district attended
during the academic year. The student data files also mark dates of enrollment and dates of exit
from schools and districts. The enrollment data is utilized to identify students as FAY
enrollments for each practice of including students in accountability measures. Publicly available
data from WA OSPI and the National Center for Education Statistics (NCES) are utilized to
identify school characteristics such as Title I status, school size (starting enrollment), location,
grade-levels served, and school type (e.g., public, vocational, virtual, etc.).

[10] Washington state uses reading test scores for the English/language arts components of accountability.
Changes in Testing and Accountability
Washington utilized the Measurement of Student Progress (MSP), the High School
Proficiency Exam (HSPE), or an End-of-Course exam in Algebra 1 (EOC) as the annual
assessments through 2013. For students in grades 3-8, Washington piloted their transition to the
Smarter Balanced (SBAC) exams in 2014. Scale scores and indicators of proficiency levels are
not provided for students who took the SBAC exams. Analyses focusing on the number of
excluded students and demographics of excluded students are not affected by these changes.
Enrollment dates and test attempt status, provided for all years, are used for the purposes of
determining Included, Excluded, and Ineligible students. This information was provided for all
individuals, including SBAC-tested students. School performance measures, in 2014, are
affected by the change in tests. The state was granted a waiver to exclude pilot schools from
annual accountability calculations (Delisle, 2014). Districts identified approximately one-third of
Title I schools across the state to pilot the SBAC exam (Dorn, 2013). Therefore, the 2014 school
performance measures used in this study are based on approximately two-thirds of Title I schools
in the sample for each practice of inclusion.
Important Dates and Measures
There are four dates in Washington that matter for accountability policy (see Appendix
1.C for how these dates were constructed in the dataset). The first date is the fourth instructional
day of September, the date by which schools must have current enrollment records in order to
indicate who is enrolled for the start of the academic year. Because it is unclear whether school
changes prior to this date reflect mobility as opposed to the correction of records, I use the 4th instructional day, calculated for each school, to represent the first day of instruction at a school.
The second date is the FAY regulation currently in practice, which is October 1st. The third date
is the start of the testing period for math and/or ELA. The final date that is used in this research
is the last day of the testing period for math and/or ELA. I provide a list of all testing dates, by
subject and grade, from Spring 2010 through Spring 2014 in Appendix 1.B.
In two of the five proposed alternatives to the current FAY regulation, a student’s
performance is adjusted by the proportion of time enrolled between the 4th instructional day of
September and the first day of the testing window. Instructional time was calculated for each
school a student enrolled in and for each subject, independently. For a student to be marked as
receiving 100% of instructional time in a subject, the student had to be enrolled from the fourth
instructional day continuously to the start of the testing window. The total number of school days
these students were present represents the total possible instructional days available in that year.
This number was calculated with consideration for US holidays and an annual winter break, but
it does not include the possibility of district-specific holidays. For students who enroll after the
fourth instructional day, their instructional time is determined as the number of weekdays,
respective of US holidays and winter breaks, between their first day of enrollment and the start
of the testing window as a proportion of total possible instructional days offered in that school.
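To illustrate the calculation, the sketch below computes one student's instructional-time proportion under these rules. It is a simplified Python illustration with hypothetical dates and an abbreviated holiday list, not the code used for the analyses in this study.

```python
from datetime import date, timedelta

def weekdays_between(start, end, holidays):
    """Count weekdays in [start, end), skipping dates in `holidays`."""
    count, day = 0, start
    while day < end:
        if day.weekday() < 5 and day not in holidays:
            count += 1
        day += timedelta(days=1)
    return count

# Hypothetical dates for one school and subject.
fourth_instructional_day = date(2013, 9, 9)   # school's 4th instructional day
testing_window_start = date(2014, 5, 5)       # first day of the math testing window
holidays = {date(2013, 11, 28), date(2013, 12, 25)}  # abbreviated; winter break omitted

# Denominator: days available to a continuously enrolled student.
total_possible = weekdays_between(fourth_instructional_day, testing_window_start, holidays)

# Numerator: days available to a student who enrolled mid-year.
enrollment_date = date(2014, 1, 21)
student_days = weekdays_between(enrollment_date, testing_window_start, holidays)

instruction_share = student_days / total_possible  # equals 1.0 for stable students
```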
Sample of Schools
I limit the sample of schools involved in this analysis in a few ways. First, I focus on
traditional public schools and exclude special education, alternative, virtual, home- or hospital-
based, and career/technical schools as well as juvenile detention centers. Second, I focus only on
those traditional, public schools that receive Title I funds (60%) as they are the subset of schools
in a state under the threat of sanctions in accountability policy. Third, a school had to have at
least 30 students who met the state’s current FAY regulations (Appendices 1.D and 1.E provide
details on federal accountability regulations and the specific rules used in Washington). Schools
must meet this minimum enrollment of FAY students for performance to be disaggregated and
publicly reported in Washington. So long as a school met this threshold in at least one year, it
was included in the analyses. The final sample included 4912 school-year observations that
enrolled over 1.5 million student-year observations across all five years in the panel. Descriptive statistics on these students and schools are presented in Tables 1.1 and 1.2, respectively.
Operationalizing Inclusion
I defined specific parameters for which students were or were not included in
accountability measures for the five proposed inclusion practices discussed previously. I also
identify these parameters for two of the current FAY practices that states implement. The
Traditional practice refers to utilizing an October 1st FAY cutoff date and the Last Year practice refers to schools requiring students to be enrolled at the end of the prior academic year in order
to be included. For each of these inclusion practices, I define how students are identified as
included or excluded. I also label a group that is “ineligible” for inclusion for each FAY practice.
A general way to think about these three groups of students is that included students are
incorporated in accountability measures, excluded students are actively enrolled in a school but
not included in accountability measures, and ineligible students are not actively enrolled in
schools during the testing window or they do not have a test score on file for the year. However,
the specific definitions change across inclusion practices and the number of ineligible students
does not remain constant across each practice. I will acknowledge these changing definitions
throughout the remainder of the paper where it is essential for understanding the information
being presented. Otherwise, all information on how each group was identified, for the purposes
of this paper, is available in Appendix 1.F.
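As a concrete illustration of these definitions, the following sketch classifies a single student-school record under the Traditional practice. The function and field names are hypothetical, and the logic is a simplification of the general definitions above; the full, practice-specific rules, including Washington's 29-day enrollment-gap allowance, are those in Appendix 1.F.

```python
from datetime import date

def classify_traditional(enroll_date, exit_date, has_score,
                         fay_cutoff=date(2013, 10, 1),
                         test_end=date(2014, 5, 23)):
    """Classify one student-school record under the Traditional practice.

    Simplification: treats enrollment as a single continuous spell and
    ignores the 29-day gap allowance described in the text.
    """
    exit_date = exit_date or date.max            # None means still enrolled
    enrolled_through_testing = exit_date >= test_end
    if not enrolled_through_testing or not has_score:
        return "ineligible"                      # not enrolled for testing, or no score on file
    if enroll_date <= fay_cutoff:
        return "included"                        # enrolled by the FAY cutoff through testing
    return "excluded"                            # enrolled for testing but arrived after the cutoff

# A student arriving in November is enrolled and tested but excluded.
print(classify_traditional(date(2013, 11, 4), None, has_score=True))  # -> "excluded"
```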
Analytic Methods
Research Question 1
The first research question in this paper asks: Under current systems of accountability,
how many students are excluded from accountability measures each year and who are these
students? To answer this question I review two current FAY regulations: the Traditional practice
and the Last Year practice. I rely upon descriptive statistics, simple correlation analysis, and
standard tests of significance to investigate the number of students excluded from current
accountability practices each year and the demographic differences between included and
excluded students. Throughout this analysis, I will also present school exclusion rates, calculated
independently for each practice of inclusion and subject as follows:[11]

\[
\text{Exclusion Rate} = \frac{\#\,\text{Excluded Students}}{\#\,\text{Included Students} + \#\,\text{Excluded Students}}
\]

[11] Ineligible students are not factored into this equation.
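In data, this rate might be tabulated as in the brief sketch below (pandas, with hypothetical column names and toy records):

```python
import pandas as pd

# One row per student-school-year, already classified for a given practice and subject.
records = pd.DataFrame({
    "school_id": [101, 101, 101, 102, 102],
    "status": ["included", "included", "excluded", "included", "ineligible"],
})

# Ineligible students are not factored into the rate (footnote 11).
eligible = records[records["status"] != "ineligible"]
exclusion_rate = (
    eligible.groupby("school_id")["status"]
    .apply(lambda s: (s == "excluded").mean())   # excluded / (included + excluded)
)
print(exclusion_rate)  # school 101 -> 1/3; school 102 -> 0.0
```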
Research Question 2
My second research question asks how the results of Question 1 change when
implementing the alternative inclusion practices. To answer this question, I use the same
methods as Question 1 but focus on the January, Testing Window, Testing Window Instruction,
All Schools, and All Schools Instruction practices. All results are discussed relative to the
Traditional inclusion rates unless explicitly stated otherwise.
Research Question 3
The third research question focuses on how the changes in inclusion/exclusion identified
in the results from Question 2 will alter the performance measures used in accountability systems
(Appendix 1.D provides background on accountability practices and measures that guide this
work). I outline how Washington determines proficiency rates in Appendix 1.E. I follow this
methodology in my calculations and calculate a new proficiency rate for each school based on
each inclusion practice. The second performance measure I calculate is a school value-added
measure (VAM). Ehlert and colleagues (2015) argue that a two-step value added measure is the
most appropriate method of calculating VAMs for the purposes of evaluation because it “levels
the playing field” or removes any correlation between a school’s VAM and the demographics of
the students served. In doing so, schools with similar demographic profiles will be found
throughout the distribution rather than clustered at specific points within the distribution. I
calculate a VAM for every school that has at least 20 students with test score data and is
accountable for the all students subgroup.[12] I begin this analysis by removing the variance in test scores that is attributable to student and school characteristics using Eq (1):

Eq (1)

\[ Y_{iasy} = \beta_0 + \beta_1 Y_{iasy-1} + \beta_2 X_{iasy} + \beta_3 S_{iasy} + \varepsilon_{iasy} \]

where the test score Y of student i in subject area a at school s in year y is regressed on a student's prior achievement (\(Y_{iasy-1}\)), a vector of student demographic characteristics (\(X_{iasy}\)), and a vector of school characteristics (\(S_{iasy}\)). The student demographics include race, gender, income, language, migrant, and disability status, and a student's within-year mobility status. School covariates include the school-level aggregates of race, income, language, migrant, and disability status as well as lagged school-aggregate achievement. School covariates are based on all individuals within the whole school, not just those students in tested grades. Prior achievement indicators include measures for both math and reading. Additionally, prior achievement for students in tenth grade is the eighth grade test score, as 9th grade students are not tested in math and reading.[13] The remaining variance in student-level achievement (\(\varepsilon_{iasy}\)) is carried over to the second stage of the VAM calculation. Specifically, Eq (2) is defined as:

Eq (2)

\[ \varepsilon_{iasy} = \delta_{sy} + u_{iasy} \]

where the remaining variance in student achievement is measured as a function of a school fixed effect (\(\delta_{sy}\)) and an error term (\(u_{iasy}\)).

[12] A school may have met the 30 student threshold (in the all students subgroup) but not have 30 students to include in a VAM measure. This is largely due to missing information on prior test scores. The VAMs are not always calculated based on the full sample of students included in accountability measures.
I measure the school VAMs using one and two years of lagged achievement. By doing
so, I can calculate one-year VAMs for 2011-2014 and two-year VAMs for years 2012, 2013, and
2014. Research has shown that incorporating additional years of prior achievement data can
improve the stability of VAM measures (McCaffrey, Sass, Lockwood, & Mihaly, 2009). I
calculate these measures each year for each practice of inclusion and separately for reading and
math. In the Testing Window Instruction and All Schools Instruction practices, I weight the first
step to account for a student's length of enrollment in the school prior to testing. Thus, schools
are only held accountable for a portion of a mobile student's performance in the VAM.
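For the two practices that prorate accountability by enrollment length, the weighted first step can be implemented as weighted least squares, with each student's share of instructional time as the weight. A short, hedged extension of the sketch above; the frac_time column is hypothetical.

# Weight each student's contribution to step 1 by the fraction of
# instructional time spent in the school before testing (hypothetical column).
# Assumes df, rng, and smf from the previous sketch.
df["frac_time"] = rng.uniform(0.1, 1.0, size=len(df))
step1_w = smf.wls("score ~ prior_score + low_income + sch_low_income",
                  data=df, weights=df["frac_time"]).fit()
df["resid_w"] = step1_w.resid
vam_w = df.groupby("school")["resid_w"].mean()  # step 2 proceeds as before
vam_w = vam_w - vam_w.mean()

Under this weighting, a school is credited for only the portion of a mobile student's performance that corresponds to the instruction it actually provided.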
I compare the newly calculated proficiency rates and VAMs to the same measures
calculated based on the Traditional practice. Descriptive statistics, standard tests of significance,
and correlation analyses are used to identify and assess differences. I examine the stability of
schools classified into the bottom 15% of schools. I separate out the bottom 5% and the next
lowest 10% of schools to reflect the identification of low-performing schools that is used in the
most recent accountability systems. This analysis is conducted to determine whether including
mobile students in accountability changes the identification of the lowest-performing schools in
the state.
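The stability analysis reduces to ranking schools within each year under each practice, cutting the distribution at the 5th and 15th percentiles, and measuring overlap. The following is a minimal sketch under assumed column names (a year column plus one performance column per inclusion practice), not the actual analysis code.

import pandas as pd

def bottom_groups(perf: pd.Series) -> pd.Series:
    """Label each school 'bottom5', 'next10', or 'other' within one year."""
    pct = perf.rank(pct=True)  # lower performance -> lower percentile rank
    return pd.cut(pct, bins=[0, 0.05, 0.15, 1.0],
                  labels=["bottom5", "next10", "other"], include_lowest=True)

def bottom5_overlap(df: pd.DataFrame, base: str, alt: str) -> float:
    """Share of bottom-5% schools under `base` still bottom-5% under `alt`,
    computed year by year and averaged."""
    shares = []
    for _, g in df.groupby("year"):
        in_base = bottom_groups(g[base]) == "bottom5"
        if in_base.any():
            shares.append((bottom_groups(g[alt])[in_base] == "bottom5").mean())
    return float(pd.Series(shares).mean())

For example, bottom5_overlap(df, "prof_traditional", "prof_all_schools") would return the share of Traditional-practice bottom-5% schools still identified under the All Schools practice (both column names hypothetical).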
Results
I begin my analysis by presenting descriptive statistics on the more than 1.5 million
student-year observations (see Table 1.1) and 4,912 school-year observations (see Table 1.2) in
my sample. Students enrolled in tested grades in traditional public, Title I schools are primarily identified as White
(50.6%) and Latino (27.8%). Almost two-thirds (63.0%) of students are eligible for free- or
reduced-lunch (FRLE). Students with disabilities (SWDs) make up 14.1% of the enrollment and
English language learners (ELLs) comprise 11.5% of the state enrollment in Title I public
schools. The average standardized test score of students in the sample is -0.162 in reading and -
0.165 in math across all five years.
14
There are a total of 4912 schools across five years (school-year observations) that are
14
Standardized test scores were based on all students enrolled in the state and not just students in Title I, traditional
public schools. As such, the average achievement score of these students is not zero (0) as would be expected in a
standardized measure.
25
held accountable for student performance in at least one year of the data panel.
15
Across the five-
year panel, the average school has a starting enrollment of 488 students. The majority of schools
are located in urban communities (31.1%) with suburban (26.8%), rural (25.4%), and town
schools (16.7%) accounting for the remainder of Washington schools. Elementary schools make
up 63.0% of the sample, followed by middle schools (19.6%), high schools (14.0%), and schools
serving other grade configurations (e.g., K-8) (3.3%). Finally, the average school mobility rate is
22.2%. This suggests that for a school that serves a total of 200 students in tested grades over the
course of the academic year, 44 of those students arrive late or leave early.

[15] A total of 1,221 unique schools make up the sample of schools.
Research Question 1
The first question of this study asks: Under current systems of accountability, how many
students are excluded from accountability measures each year and how do excluded students
differ from students who are included?
Under current systems, how many students are excluded from accountability
measures each year? In the text, I present the average number of excluded students annually and
provide year-by-year exclusion counts in Table 1.3. Across all practices, the math rates of
exclusion are higher due to differences in the testing windows. For a student to be present for the
testing window in math, she must be enrolled in a school for more consecutive days than is
required for reading. The later in the year testing occurs, the more opportunity and time
students have to change schools, qualifying them for exclusion.
Statewide under the Traditional FAY practice in reading, 18,836 students or 7.6% of
students enrolled in a school with reading test scores on file are excluded from accountability
each year. In math, 19,659 or 7.9% of students are excluded from accountability measures
annually. The range of exclusion for reading measures within an individual school extends from
no students to as much as 50% of students in tested grades; the average exclusion rate is 6.9%. If
the state instead used the Last Year practice, the number of excluded students would more than
double in both subjects. In reading, an average of 51,418 or 17.1% of enrolled students with a
test on file would be excluded each year under the Last Year practice. The numbers are similar in
math where 52,242 or 17.4% of students actively enrolled with test scores on file would be
excluded from accountability. Within specific schools, the proportion of an enrollment that is
excluded would reach over 80% of the enrollment present at testing, with the average rate being
18%. These numbers give credence to what researchers have suggested about the FAY regulation
(Davidson et al., 2015); states can essentially assist schools in excluding large proportions of
students simply by setting earlier FAY cutoffs. The general public has little awareness that
exclusion occurs at this rate because the numbers are not reported anywhere.
How do excluded students differ from students who are included? There are
significant and meaningful differences in the demographic and achievement profiles between
students who are included or excluded under the Traditional FAY practice (see Table 1.4a and
1.4b for complete details). Consistent across both subjects, White students are overrepresented in
the included students group by a difference of seven percentage points (i.e., included students are
51.1% White and Excluded students are 44.2% White). By contrast, excluded students are
disproportionately Black, Pacific Islander, low-income, SWDs, and lower achieving than their
included peers. In reading, the average included student demonstrates achievement of -0.133sd
whereas the average excluded student demonstrates achievement of -0.474sd. This
is a standardized mean difference of more than one-third of a standard deviation. Importantly,
these differences in achievement are used to justify the exclusion of mobile students from
accountability measures (i.e., because including them would negatively affect school
performance measures in an unfair way).
The Last Year practice also creates differences between included and excluded students.
In reading, excluded students are disproportionately low-income and lower achieving. In math,
Black, low-income, and lower achieving students are underrepresented in accountability
measures (i.e., excluded at higher rates). The magnitudes of the differences identified between
included and excluded students under the Last Year practice are smaller than the differences
found between included and excluded students in the Traditional practice. For example, the
difference in reading scores between included and excluded students is approximately 0.129sd in
the Last Year practice as compared to a 0.341sd difference under the Traditional practice. Prior
research on mobility shows that different types of students move at different times of the year
(Hanushek et al., 2004a). The Last Year practice captures a full calendar year of mobility
whereas the Traditional practice captures approximately a half-year's worth of mobility (only
mobility that occurs in the middle of a school year). What these results confirm is that students
who move mid-year (Traditional practice excluded students) are more disadvantaged than
students who move between the end of the prior school year and October 1st.
Research Question 2
The second question of this study asks: How does instituting alternative FAY practices
alter the number of included students and any differences in which students are or are not
included? I compare all of the numbers presented below to the Traditional practice. As can be
seen in Table 1.3, when using the Testing Window, Testing Window Instruction, All Schools, or
All Schools Instruction practices, the number of ineligible students is reduced compared to the
Traditional practice. The difference in these numbers comes from two sources: first, allowing
students who are enrolled for part, but not all, of the testing window to be included if they have a
28
test score on file (the Testing Window practice) and, second, by including students who are not
actively enrolled in a Title I, public school during the testing window but have a test score on
record (both All Schools practices). Because the total number of students eligible for inclusion
changes relative to the Traditional practice, the numbers presented below are discussed in terms
of the additional students included rather than the number or proportion of students excluded.
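The eligibility rules separating these practices reduce to a handful of date comparisons. The function below is a simplified, hypothetical rendering of the rules as described in this chapter; the argument names and the treatment of enrollment spells are my own assumptions, and the actual determination also involves the fourth-instructional-day adjustment described in Appendix 1.C.

from datetime import date

def fay_included(entry_date: date, exit_date: date | None, test_start: date,
                 test_end: date, has_score: bool, practice: str,
                 oct1: date, jan15: date) -> bool:
    """Classify one enrollment spell under the FAY practices described in the
    text. Simplified for illustration; real eligibility also depends on the
    fourth-instructional-day rule and continuous-enrollment checks."""
    exit_date = exit_date or date.max  # no exit date recorded: still enrolled
    through_testing = exit_date >= test_end
    if practice == "traditional":            # enrolled Oct 1 through testing
        return entry_date <= oct1 and through_testing
    if practice == "january":                # enrolled Jan 15 through testing
        return entry_date <= jan15 and through_testing
    if practice == "testing_window":         # any part of the window, score on file
        return has_score and entry_date <= test_end and exit_date >= test_start
    if practice == "testing_window_instruction":
        # as above, plus at least one instructional day before testing begins
        return has_score and entry_date < test_start and exit_date >= test_start
    if practice in ("all_schools", "all_schools_instruction"):
        return has_score                     # any score on record counts
    raise ValueError(f"unknown practice: {practice}")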
How do the numbers of excluded students change? Relative to the Traditional
practice, each of the proposed inclusion practices sees large increases in the number of students
incorporated into accountability measures each year (see Table 1.3). The smallest increase comes
from moving the FAY cutoff from October 1st to January 15th, essentially the least
conservative allowance under ESSA regulations. If the state instead used the January practice,
the number of included students would increase by 3.5% over the Traditional practice. In
reading, an average of 9,400 additional students actively enrolled with a test record on file would
be included each year under the January practice. The numbers are similar in math where an
additional 9,529 students actively enrolled with test scores on file would be included in
accountability each year.
The remaining four practices all see annual increases of over 20,000 students in both
reading and math accountability measures. The Testing Window practice would increase the
number of included students by 21,488 students (7.9%) and 22,257 students (8.2%), in reading
and math respectively. This regulation does not take into account whether the student enrolled
prior to annual testing. Thus, schools would be held accountable for the learning of students who
received no instruction in that school that year. The Testing Window Instruction practice
removes students from accountability who were not enrolled in the accountable school for at
least one instructional day prior to testing. This FAY practice still sees increases of 20,855
students in reading (7.7%) and 21,582 students in math (8.0%) each year, on average. In other
words, fewer than one student per school, on average, included under the Testing Window
practice had not been enrolled in that school prior to testing. Finally, the All Schools and the
All Schools Instruction practices both include the same number of additional students each year,
relative to the Traditional FAY practice as they are both based on the same group of students. In
reading, an average of 25,675 additional students, or 9.5%, are included into accountability
measures. In math, 26,487 additional students, or 9.8%, are added into math performance
measures each year. Worth noting, the All Schools practices also have the smallest number of
ineligible students relative to all other inclusion practices because students who exited a school
prior to testing are eligible for inclusion in accountability measures in these practices.
How does instituting alternative FAY practices alter differences in which students
are or are not included? In Tables 1.5a-1.5c, I present the demographic characteristics of
students who are Included and Excluded in the January, Testing Window Instruction, and both
All Schools practices. Demographic characteristics are not presented for the Testing Window
practice because there are no excluded students in this practice.[16] Additionally, the All Schools
and All Schools Instruction practices are both defined by inclusion of the same students and only
differ by the weight these students are given in accountability measures. As such, I present the
demographics for the All Schools practices only once.

[16] Excluded students should not be confused with Ineligible students.
The differences in over- and underrepresented students in the January practice are similar
to the patterns identified in the Traditional practice. Students who are White and Asian are
overrepresented in accountability measures while Black, Pacific Islander, low-income, and lower
achieving students are underrepresented. However, while the demographic differences reduce in
magnitude compared to the Traditional practice, the differences in achievement between
included and excluded students increase in magnitude. For example, the difference in average
reading achievement between included and excluded students in the January practice is 0.368sd
whereas the difference between included and excluded in the Traditional practice is 0.341sd. In
math, the respective differences are 0.445sd and 0.399sd. These differences suggest that students
who move in the fall (after October 1st but before January 15th) perform better on the annual
assessments in the year of moving than their peers who move after January 15th of the same
academic year.
In the Testing Window Instruction and both All Schools practices, I identify two different
patterns of representation relative to the Traditional practice. First, Black students are no longer
underrepresented in reading and math performance measures. Second, Latino students become
underrepresented in the All Schools practices. The standardized mean difference between Latino
included and excluded students is 0.104sd. The most notable difference between included and
excluded students in these last three practices revolves around achievement in the year of
mobility, where the standardized mean difference in achievement scores is greater than one-half
of a standard deviation. In the All Schools Instruction practice, students who are Included
demonstrate an average score of -0.162sd whereas excluded students demonstrate an average
achievement of -0.695sd. This suggests that students who enroll in traditional, Title I, public
schools after the testing window has begun (i.e., at the end of the academic year) are lower
achieving than their peers who move earlier in the academic year.
Research Question 3
The third and final research question of this study seeks to identify how including
previously excluded students into accountability measures alters school performance relative to
currently-reported school performance. As the above results suggest, there are stark differences
in achievement between students who are included and those who are excluded from
accountability in current systems. The analysis that follows will demonstrate whether the incorporation
of previously excluded students leads to meaningful differences in the identification of schools
as low-performing. For this analysis, I focus solely on the proposed alternative FAY regulations
and compare that performance to the school’s Traditional practice performance. I exclude
comparisons between the proposed FAY alternatives and the Last Year practice as well as
between the Last Year and Traditional practice for two reasons: the Last Year practice is much
less common in current state practice and the paper intends to inform practices aimed at
including more students rather than fewer. Moreover, my presentation of exclusion numbers
under the Last Year practice was meant to demonstrate the ways states, rather than schools,
respond to the federal regulation. I begin with an analysis of proficiency rates in reading and
math.
Proficiency rates. In my analysis of proficiency rates, I look at two pieces of
information. First, I examine how performance on proficiency changes relative to the Traditional
practice, within a year. This represents the initial proficiency rate change a school would expect
if changing from a Traditional FAY inclusion practice to a less conservative inclusion practice.
Second, I examine how performance on proficiency rates would change from one year to the
next. This information provides the annual rate of improvement a school could expect to
experience using the less conservative inclusion practices.
Within-year proficiency rate changes. In Table 1.6, the first column provides the
average school proficiency rate for each subgroup under the Traditional practice in both reading
and math across the five years of the panel. Each subsequent column provides the average
difference in proficiency when calculating school performance using alternative inclusion
practices in the same year. These results are presented for each significant subgroup by practice
and subject area. Each cell in the table represents the average across all five years of the data.
The first takeaway from this analysis is that, with the exception of the All Schools
practice, all within-year differences demonstrate slight decreases in proficiency for each practice,
subgroup, and subject area relative to the Traditional practice. For the all students subgroup in
reading, schools would experience an average within-year decrease in proficiency rate of 0.386
percentage points using the January practice or an average decrease of 0.681 percentage points
under the All Schools Instruction practice. Overall these results suggest that schools would
experience an initial downward shift in proficiency rates for all subgroups and in both subject
areas when moving to less conservative FAY practices. However, across all practices, the
correlation of proficiency rates within subgroups across practices in a year is 0.997 or higher.
This is true in both reading and math. Thus, relative to all schools in the sample, a school’s rank
position would remain virtually identical when changing the FAY practice.
The second takeaway from this analysis is that the All Schools practice creates the largest
within-year difference in proficiency rates relative to the other proposed alternatives. For
example, the average within year change in proficiency for the Low-income subgroup is -5.649
percentage points in the All Schools practice. The average within-year change in proficiency for
the low-income subgroup of the other proposed inclusion practices ranges from -0.348 to -0.925
percentage points. This difference is not unexpected given that the All Schools practice
incorporates the largest number of mobile students, who on average demonstrate significantly
lower levels of achievement than traditionally included students. Also, this practice does not take
into account how long a student is enrolled in a school. A school may have a student for 2 days
who is counted equally in the calculation of performance as a student present for 85 days. Once
adjusting for instructional time, as shown by the All Schools Instruction practice, the within-year
change in proficiency is much less pronounced and much more similar to that of the other
proposed practices.
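Concretely, the instructional-time adjustment can be implemented as a weighted proficiency rate, in which each student's 0/1 proficiency indicator is weighted by the share of instructional days spent in the school. The sketch below is hypothetical; the column names and exact weighting details are assumptions consistent with, but not copied from, the calculations in this chapter.

import pandas as pd

def weighted_proficiency(df: pd.DataFrame) -> pd.Series:
    """Instructional-time-weighted proficiency rate per school.
    Expects columns: school, proficient (0/1), frac_time (0 to 1)."""
    weighted = df["proficient"] * df["frac_time"]
    return (weighted.groupby(df["school"]).sum()
            / df["frac_time"].groupby(df["school"]).sum())

# A student present for 2 of 85 instructional days carries weight 2/85, so a
# brief enrollment barely moves the rate -- unlike the unweighted All Schools
# practice, where that student counts the same as a full-year student.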
Proficiency rates over time. What the prior results demonstrate is the initial one-time dip
in annual performance measures with the switch to more inclusive FAY regulations. However,
the larger concern may be how these measures react to the inclusion of mobile students over
time. States (or schools) may resist the change to including mobile students into accountability
systems if the performance from year to year is substantively different from what has been
realized historically under the Traditional practice. As such, I examine the average rates of
proficiency change schools experience over time. I do this for each of the proposed practices,
including the Traditional practice for comparison purposes. The results are presented in Table
1.7.
The first cell in Table 1.7 represents the average improvement in the proportion of a
school’s students identified as reading proficient from one year to the next (e.g., from 2010 to
2011 or 2013 to 2014). The average school proficiency rate for the all students subgroup in
reading under the Traditional practice is 66.0% (see Table 1.6). The rate of change presented in
Table 1.7 suggests that the following year’s average proficiency rate would be 67.1% (an
addition of 1.01 percentage points of proficient students) under the Traditional practice. In math,
the average Latino subgroup proficiency rate is 58.1%. Under the Traditional practice, the
average school would expect to identify 61.8% of students proficient the next year. Using the All
Schools Instruction practice, the same school would also expect to identify 61.8% of students
proficient in the following year.
I find that the year-to-year or annual changes in proficiency rates are similar in each
practice and subject across all subgroups of students. These annual changes within subgroup and
subject across all practices are all correlated at 0.873 or higher. For example, the Black student
subgroup demonstrates an average of 1.458 percentage points of improvement in reading
proficiency each year when using the Traditional practice. Across the other four practices of
inclusion, the Black student subgroup demonstrates between a 1.270 and a 1.510 percentage
point improvement from one year to the next. In math, the Black student subgroup experiences
an average proficiency rate improvement of 3.977 percentage points in the Traditional practice
each year and between a 3.905 and a 4.035 percentage point improvement in the other practices
from one year to the next. What these annual changes in proficiency results suggest is that while
there may be a one-time shock to school proficiency rates by introducing the inclusion of mobile
students, the performance of school proficiency rates over time will be similar to what has been
experienced under the Traditional practice, regardless of which alternative practice is chosen.
Value-added measures. The average VAM score across every practice of inclusion is
essentially zero because each VAM is mean-centered by year. In Table 1.8, I present the standard
deviation of each VAM score calculated. Overall, the standard deviations of the VAM measures
range between 0.135 and 0.197 and are smallest in the All Schools Instruction models, which
include the largest number of students. Prior VAM analyses provide evidence that the standard
deviations of school effectiveness measures typically range between 0.11 and 0.22 (see Lipscomb,
Gill, Booker, & Johnson, 2010 for examples). As such, the estimates calculated in this study are
on-par with prior research utilizing VAMs.
In Table 1.9, I show the average change between a school’s VAM under the Traditional
practice and each of the more inclusive practices. Across both the one-year and two-year VAMs
in both reading and math, the differences are essentially zero. In fact, the one-year VAMs, within
subject and across practices, are correlated at or above 0.987. This also holds true for the VAMs
based on two years of lagged student and school achievement scores.[17] These results suggest that
regardless of inclusion practice, there are very few differences between a school’s measure of
effectiveness under the Traditional practice and the measures based on each of the five
alternative inclusion practices proposed herein. The next section of this paper focuses on changes
between the Traditional and the alternative inclusion practices in the identification of schools as
low-performing.

[17] These correlations align with research on VAMs that suggests there is high correlation in estimates across
different modeling techniques (Ehlert, Koedel, Parsons, & Podgursky, 2014).
Low-performing schools. What this research has demonstrated up to this point is how
including mobile students into accountability systems changes the average point estimates of
specific performance measures commonly utilized. But the average Title I school in many states
is not at risk of being identified as low-performing. The focus of accountability systems has
shifted to the bottom 5% and the next lowest 10% of schools within a state. Based on this
identification strategy, I identify what proportion of schools that fell into the bottom 5% on
Traditional performance measures remain in the state's bottom 5% using the alternative inclusion
practices. I report these statistics for the all students subgroup proficiency rate and the one- and
two-year VAMs in both subjects for each inclusion practice. I repeat these calculations for the
next lowest 10% of schools and present all results in Table 1.10.
Washington state has a total of 4,875 schools held accountable under the Traditional
practice. Across the five-year panel, at least 244 schools must be identified as part of the bottom
5% of schools and 488 schools must be identified in the next 10% of schools.[18] My calculations
identify 245 schools as members of the bottom 5% because of a tie in performance scores. Not
all schools are held accountable for every significant subgroup and other schools are not included
in the calculation of VAMs because of too many missing data points or unavailable data (e.g.,
prior achievement scores for students in third grade). This means that for some of the
performance measures, fewer than 245 or 488 schools are identified. Instead, 5% or 10% of the
schools that have those performance measures are identified.

[18] Analyses were conducted by year to identify the lowest-performing schools and then stacked for the sake of
presentation.
Focusing first on the bottom 5% of schools, I find that across every practice and every
measure there is a shift in which schools would be identified as low-performing. However, these
shifts affect relatively few schools across the five years of data. Using proficiency rates, for
example, approximately 5% of schools (or fewer than 3 per year) labeled low-performing under
the Traditional practice would not be identified as such using the January, Testing Window,
Testing Window Instruction, or All Schools Instruction practices. Using the All Schools practice,
no more than 5 schools per year exit this low-performance designation. The VAM scores
demonstrate that between 4% and 16% of schools (or 8 per year at most) labeled low-performing in
the Traditional practice would no longer be identified in the bottom 5% of schools using any of
the alternative inclusion practices.
The schools identified as being part of the next lowest 10% of schools are often provided
less severe sanctions than those schools in the bottom 5%. However, these schools are still
assigned the label of low-performing schools and those labels are publicly reported. What is seen
in Table 1.10 is again a picture of relatively consistent identification with a few changes to the
schools labeled as low-performing. For example, if a state selects VAMs with one year of lagged
achievement data as its measure for identifying low-performing schools, between 6% and 22% of the
schools the Traditional practice identifies for intervention would not be identified as belonging to
the 10% classification of schools using an alternative inclusion practice. While a change in 23%
of schools is larger than what was seen in the lowest 5% of schools classification, these measures
are offered with some reservation. Take the two-year reading VAM measure for the All Schools
Instruction practice. Approximately 35% of the 117 schools in this measure, or 42 schools across
3 years,[19] would be reclassified relative to the Traditional practice classification. Fifteen (15) of
these 42 schools would be moved out of the 10% classification and into the bottom 5%
classification, another 15 schools would move from the bottom 5% classification into the bottom
10% classification, and the remaining 12 schools (or approximately 4 schools per year) would be
introduced into the bottom 10% of schools from schools that were not part of the Traditional
practice’s bottom 10%. In other words, schools switch between low-performance classifications
to the same extent or more so than moving into the low-performing classification from outside of
it.

[19] Two-year VAMs were calculated for 2012, 2013, and 2014.
Discussion
This study attempted to answer the question of how many students are excluded from
accountability measures because of their mid-year mobility, how this exclusion differs across
schools, and how reconfiguring the ways that mobile students are incorporated into
accountability alters both the number of students included in accountability systems and the
current school performance metrics. Detailed enrollment data from the state of Washington made
answering these questions possible. Designers and implementers of accountability systems have
assumed the need to exclude mobile students from annual performance measures based on a
body of research documenting the lower achievement of mobile students compared to their stable
peers (see: Hanushek et al., 2004a; Mehana & Reynolds, 2004; Raudenbush, 2004; Xu et al.,
2009). The research in this paper confirms that the excluded students are, on average, lower
achieving than their included peers. However, systems of accountability currently in place were
intended to focus on the achievement and outcomes of all students. The FAY regulation
contradicts that intent by placing the predominant focus on stable students. More inclusive FAY
regulations would incorporate mobile students into the accountability focus.
Every year, approximately 20,000 students are excluded from accountability measures in
Washington because they switch schools after October 1st in the given academic year. For states
that set the prior academic year as the enrollment deadline for the current year's accountability,
the number of excluded students more than doubles relative to the October 1st exclusion practice.
This practice of excluding students has remained largely overlooked in the public and policy
discourse. By shifting the FAY regulation to one of the proposed practices, an additional 3.5-
9.8% of students in the state would be added into the annual accountability measures, relative to
the Traditional FAY regulation practice.
At the student-level, prior research on this exclusion practice has shown it to be harmful
to the individual students who are being excluded (Özek, 2012). This study added a school-level
analysis to determine if schools benefit from this exclusion practice. Two performance measures
were examined as current and forthcoming systems of accountability are likely to rely upon
multiple metrics of school performance annually. Proficiency rates remain the most common
measure of academic performance used across the US and have been the focus of accountability
since NCLB. More recent accountability practices, such as ESEA Waiver systems, have moved
beyond the use of a single status measure of performance (e.g., proficiency rates) towards
utilizing measures of growth or effectiveness.
Moving to more inclusive FAY regulations will have an immediate impact on the average
school. There is an initial, downward adjustment to a school’s proficiency rate in the year a
switch is made from the Traditional FAY practice to a more inclusive FAY practice. After that
initial switch, schools experience similar year-to-year rates of progress in the improvement of
proficiency rates for all subgroups in both reading and math. All proposed practices of inclusion
demonstrate similar patterns in the year-to-year progression of proficiency rates. In other words,
schools may experience a one-time “cost” to changing the FAY inclusion regulation in the state
but that cost is not sustained over time when measuring reading and math proficiency. Finally,
the average differences between a school’s Traditional practice VAM and alternative inclusion
practice VAMs are essentially zero. These results suggest no sustained change in performance
when moving from the Traditional FAY practice to more inclusive practices.
Since RTTT, and continuing in the ESSA accountability systems, low-performing
schools have been identified based on their performance relative to other schools within the state.
Schools ranked in the bottom 5% (Priority Schools) and the next lowest 10% of schools (Focus
Schools) on various measures of academic performance are labeled as low-performing. This
practice of using a specific threshold target means that there will always be schools labeled as
low-performing. In an ideal case, schools would not be identified as such solely based on the
inclusion or exclusion of mobile students. The FAY exclusion practice instituted in NCLB was
predicated on an assumption that schools would "lose" by including students who switch
schools mid-year. As such, I investigated what proportion of schools identified as low-
performing in the Traditional practice would also be reclassified using one of the alternative
measures of inclusion.
The large majority of schools that would be identified as low-performing, either as part of
the Priority or Focus schools group using the Traditional FAY regulation, would also be
considered low-performing with the inclusion of mobile students. Conversely, schools that
were not identified as low-performing under the Traditional practice continue to avoid sanctions
and interventions with the switch to more inclusive FAY regulations. Each year there is a small
group of schools (5-12% of Priority schools) that would experience a change in classification if
switching from the October 1st FAY cutoff to one of the proposed practices. Most of the changes
reflect schools transferring between the Priority school status and the Focus school status. These
schools were already identified as low-performing but switched the specific low-performance
classification. A handful of schools either exit the low-performance status entirely or enter low-
performance status, relative to the performance measured using the Traditional practice. But
these changes would only reflect the initial switch to a more inclusive practice relative to what
might be expected if continuing the status quo of mobile student exclusion.
Policy Recommendations
The collective evidence presented in this paper suggests that including mobile students in
school-level accountability measures does not substantially change school-level performance in
systems identifying the lowest 15% of schools for intervention and supports. Based on these
findings, I make two recommendations for changes around the inclusion of mobile students in
accountability measures. The first recommendation I make is an alteration of how participation
rates are reported. States could independently report the proportion of students tested each year
as well as the proportion of the school’s enrollment the annual performance measures are based
on. The additional information would increase the transparency of the current exclusion practice
and provide additional information for families in understanding school-level performance.
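To make the distinction concrete, the two rates answer different questions about the same school: how many enrolled students were tested versus how many students the reported measures are actually based on. A toy calculation with entirely assumed counts:

# Hypothetical counts for one school; all numbers are assumptions.
tested = 190        # students who sat for the annual assessment
enrolled = 200      # students enrolled at testing
fay_count = 170     # FAY students whose scores enter the performance measures

participation_rate = tested / enrolled   # the 95% requirement is checked here
inclusion_rate = fay_count / enrolled    # what the first recommendation would add
print(f"participation: {participation_rate:.0%}, inclusion: {inclusion_rate:.0%}")
# participation: 95%, inclusion: 85% -- a school can meet the participation
# requirement while its reported measures omit a sizable share of students.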
School participation rates are currently being discussed in state departments of education
due to the testing opt-out movement.[20] In a letter to Chief State School Officers from the DoED,
the argument is made that including all students in accountability measures is essential to
providing “local leaders, educators, and parents with the information they need to identify the
resources and supports that are necessary to help every student succeed in school and in a career”
(Whalen, 2015). However, this continued emphasis on testing 95% of students to ensure
accountability measures focus on all students misses the fact that the FAY regulation nullifies the
intent of the participation regulation. Even if opt-out students are not counted as part of the
participation rates, these rates will still likely overstate the proportion of students annual school
performance is based on. Disaggregating participation rates from inclusion rates provides this
additional information for stakeholders and may improve the allocation of resources and supports
to those students and communities with the most need.

[20] Families select not to participate in annual state testing, often as a push against the continued use of
standardized testing in education or for other personal reasons (see Chingos (2015) or Klein (2015) for
additional information).
The second and more substantial recommendation I make is to move to more inclusive
FAY practices. The preponderance of evidence in this research suggests that schools in
Washington realize similar results on accountability measures when using the Traditional
practice and the more inclusive FAY regulations. These findings suggest that the FAY regulation
does not provide much, if any, benefit or assistance to schools serving large proportions of
mobile students. However, prior research provides some evidence that the exclusionary practice
is harmful to students (Özek, 2012). As such, I recommend either the All Schools or All Schools
Instruction practices. Essentially, these two inclusion practices incorporate the largest number of
students, hold all schools accountable for all of the students they serve in a year, and reduce the
potential for unintended behaviors incentivized by the current FAY regulations. Moreover,
establishing such regulations at the federal level would treat mobile students more equitably in
accountability systems from one state to the next.
An additional rationale for moving towards a more inclusive FAY regulation is that
including mobile students in accountability systems creates alignment between school and
teacher accountability practices. Porter and Chester (2002) argue that strong accountability
systems hold all stakeholders (i.e., students, teachers, schools) accountable in symmetrical ways,
aligning measures so that the incentives of each system work together rather than against one
another. Current teacher accountability practices, namely the use of VAMs in performance
evaluations, already incorporate mobile students and attribute their learning to individual
teachers. The developers of the Tennessee Value-Added Assessment System specifically
highlight the importance of incorporating mobile students in performance measures so as to not
create sample bias as well as to ensure that all students receive the same amount of attention in
the education system (SAS EVAAS, 2015). This same sample bias and lack of focus on mobile
students is a concern for school performance systems that exclude non-FAY students.
Researchers should extend the analysis presented here to other states to understand how
specific accountability design choices respond to the inclusion of mobile students. Although I
believe these results will hold across states with respect to the specific measures tested, the use
of state-specific performance indexes or the introduction of non-academic performance measures
in ESSA accountability (e.g., attendance, student engagement) may create slight modifications
to the trends identified. Similarly, states also have different testing windows and have seen
changes to those testing windows as the transition to new assessments takes place (e.g., Smarter
Balanced or the PARCC exams). The role of the testing window timing in these performance
calculations should not be underestimated. States that have earlier testing windows may include
larger proportions of students than those identified in Washington and may see fewer or smaller
changes when altering FAY regulations. Policymakers should pay close attention to these types
of analyses to ensure the next generation of accountability is responsive to empirical evidence.
Limitations
As with all research studies, the analysis presented herein is not without limitation. First,
more accurate testing window data may alter the inclusion and performance identified using the
Traditional practice FAY cutoff. As previously stated, the testing windows in Washington cover
numerous weeks of the academic year though the testing only requires a few school days.
Identifying school-specific testing dates would increase the accuracy of the findings presented.
However, I attempted to reduce the influence of the exact testing date in the Testing Window,
Testing Window Instruction, and All Schools Instruction practices by including students who
were present for a portion of the testing period and had a test record on file. The results in the
Traditional, Last Year, and January practice may be most influenced by non-school specific
testing dates.
Second, this study examined the individual measures of performance (e.g., proficiency
rates, VAMs). Many states rely upon a performance index in their ESEA Waivers that weights
various measures in different ways to create a single school score (Polikoff, McEachin, Wrabel,
& Duque, 2014). These indexes are permissible under ESSA accountability rules. The point
estimates on singular measures may only provide introductory evidence to what a state might
experience when changing the FAY regulation. More analysis is needed regarding the inclusion
of mobile students in index-based accountability systems as well as any non-academic measures
of performance (e.g., student engagement) states select to use in ESSA systems.
Third, the research presented in this paper is not intended to argue for or against the use
of specific measures in accountability systems. However, I would be remiss to not address the
fact that the types of measures used may not all capture the performance of mobile students to
the same extent. For example, two-step VAMs using multiple years of lagged achievement data
may inadvertently remove mobile students from the metric. Mobile students are less likely to
have complete academic records (Scherrer, 2013) or they may have missed testing in prior years
due to mobility. Students who arrive from a different state lack comparable prior test score
data. Without complete information, students are excluded from the calculations of these
measures. Thus, state-level policymakers should consider how the measures they select to use
and the ways in which schools are subsequently held accountable for those measures will
function in communities serving highly mobile enrollments.
Finally, I caution the reader on the interpretation of results. I did not use a causal
framework in this analysis nor do I believe one was appropriate given the questions that guided
the study. The results presented in this paper should not be interpreted as the effect of student
mobility on school performance measures. My findings only demonstrated how the inclusion of
additional students into accountability measures each year shifts current performance levels and
the schools that would be identified as low-performing. Regardless, the results herein are the first
to provide empirical evidence on the relationship between the FAY regulation and the
performance measures used in accountability systems.
This study was the first to investigate the questions raised by researchers with regard to
mobile students in accountability policy and also to identify how changing current practices
would alter the identification of low-performing schools in a state. Overall, the findings from this
study suggest that schools do not receive substantial benefit from the current exclusion practices
that have been part of accountability policy for more than a decade. For policymakers to continue
touting accountability systems as focusing on and emphasizing the opportunities and outcomes
of all students, a shift needs to be made in how we currently treat mobile students in federally
mandated and state-implemented practices.
Appendix 1.A
State Accountability, Full Academic Year Definitions
AL September 1 to the first day of testing (suspended students considered to be enrolled).
AK October 1st
AZ Enrolled at the start of school year (within the first two weeks of instruction) and enrolled
during the first day of administration of the Arizona Instrument to Measure Standards (AIMS)
AR October 1st
CA From the first Wednesday in October
CO Continuously enrolled on or before October 1 through the testing window for schools;
enrolled for 12 or more months for LEAs
CT October 1 to test administration
DE Continuously from September 30 through May 31
DC Continuously enrolled at least 85% of the days between the fall enrollment date in October
and the first day of the testing window (typically late April)
FL Enrolled as of Survey 2 conducted the second week of October and Survey 3 conducted the
second week of February
GA Continuous enrollment from the fall FTE count through the spring testing window
HI From May 1 of the previous school year to May 1 of the current school year
ID Enrolled continuously in a public school within Idaho from the end of the 1st 8 weeks (or 56
calendar days) of the school year through the spring testing window
IL From May 1 of previous school year
IN October 1, for 162 days
IA First day of a testing window in the previous school year to the first day of the testing window
in the current school year
KS September 20 enrollment
KY Enrolled in the school any one-hundred (100) instructional days
LA Enrolled on October 1 and the test administration date
ME October 1st
MD From September 30 through the dates of testing
MA From October 1
MI Must be enrolled for the 2 most recent semi-annual student count days (the fourth Wednesday
in September and the second Wednesday in February)
MN October 1st
MS Same school on 6 of 7 end of month records
MO Last Wednesday in September and the first day of the testing window without having
transferred out of the school or LEA for more than 1/2 the eligible days between the two dates
MT From first Monday in October through the March test administration.
NE Enrolled from the last Friday in September until the end of the assessments or end of the
school year
NV October through test administration
NH 1st business day of October
NJ July 1 to the test administration date
NM Enrolled from 120th day prior year to 120th day current year, for a period not to exceed 365
days
NY Continuously enrolled from the first Wednesday in October until the test administration
NC 140 days in membership as of the first day of EOG testing
ND Student has been enrolled for a period equal to or exceeding 173 days
OH Continuous enrollment from the October enrollment accounting period through the March or
May test administration
OK Continuously enrolled beginning within the first ten days of the school year and has not
experienced an enrollment lapse of ten or more consecutive days
OR One-half the instructional days prior to testing date
PA October 1 to the close of the testing window
RI October 1 to the end of that prior school year
SC Any student who is continuously in membership in a school at the time of the 45-day
enrollment count until the time of testing
SD Student continuously enrolled from October 1 through the last day of the testing window
TN Continuous enrollment at least one day of the first reporting period consisting of the first 20
days of the school year and reported Oct. 31 until test administration
TX Last Friday in October
UT Continuous enrollment for no less than 160
VT Continuously enrolled from the first day of school to the last.
VA Student enrolled by September 30 through the test administration
WA Students whose enrollment from October 1 through the test administration is continuous and
uninterrupted.
WV Fifth instructional day
WI From test administration, has been continuously enrolled since the third Friday of the
September enrollment report of the previous academic year.
WY Enrolled on October 1 and on the first day of the official testing window
Note. Each state must define what constitutes a full academic year and include only those students who meet this definition in
the state's proficiency calculation used for adequate yearly progress (AYP) determinations. This ensures that schools are held
accountable for the proficiency of only those students considered to have received a full year of instruction at the school. A dash
(-) indicates that the data are not available. A "†" symbol means not applicable.
Source: State Accountability Workbook: http://www2.ed.gov/admins/lead/account/stateplans03/index.html (6/9/2010)
Appendix 1.B
Washington Testing Windows 2010-2014
Year   Administration        Reading              Math
2010   Paper, Grades 3-8     May 12 - 28          May 12 - 28
       Online, Grades 3-8    --                   --
       High School           March 16 - 18        April 13
2011   Paper, Grades 3-8     May 2 - 19           May 2 - 19
       Online, Grades 3-8    May 2 - June 3       May 2 - June 3
       High School           March 17             Last 3 weeks
2012   Paper, Grades 3-8     April 25 - May 17    April 25 - May 17
       Online, Grades 3-8    April 25 - June 1    April 25 - June 1
       High School           March 15             Last 3 weeks
2013   Paper, Grades 3-8     April 24 - May 16    April 24 - May 16
       Online, Grades 3-8    April 24 - May 31    April 24 - May 31
       High School           March 14             Last 3 weeks
2014   Paper, Grades 3-8     April 23 - May 15    April 23 - May 15
       Online, Grades 3-8    April 23 - June 6    April 23 - June 6
       High School           March 20             Last 3 weeks
Note. The End-of-Course exams used to measure high school math proficiency
must be administered within three weeks of the end of the course. These dates were
locally determined based on school calendars.
Appendix 1.C
Important Dates in Washington
The Washington data files do not provide important academic calendar dates for schools.
These variables had to be constructed based on the student data files, specifically using the
school enrollment date and the school exit date. Importantly, districts manage how initial
enrollments dates are entered into the annual files for students present on the first day of the year.
There are two common practices. First, a student is assigned the first day of the current academic
year as the date of entry in the school, regardless of where the student was enrolled the prior
school year. For example, a student’s first day is marked as September 2, 2012 in the 2012-2013
academic year even though she was enrolled in that same school for the 2011-2012 academic
year. The second practice assigns a student an enrollment date that reflects the first day the
student was ever enrolled in that school. For example, a student’s first day is marked as March
11, 2010 in the 2012-2013 year because she has been in the same school since that initial arrival
in Spring 2010. Based on these data entry methods, I identified the modal enrollment date
between August 1st and September 30th of each calendar year for each grade level served in the
school. I then looked for date agreement across grades to identify the first day of school in each
year of the data panel. In cases where agreement across grades was not found, I used publicly
available school and district websites to identify the first day of the academic year.
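A sketch of this modal-date procedure follows; the column names (grade, entry_date) are assumed, and where grades disagree the procedure above falls back to published school calendars.

import pandas as pd

def infer_first_day(students: pd.DataFrame) -> pd.Timestamp | None:
    """Infer one school's first day of school from student entry dates.
    Expects columns: grade, entry_date (datetime64). Returns None when
    grades disagree (resolved by hand against published calendars)."""
    fall = students[students["entry_date"].dt.month.isin([8, 9])]
    modal = fall.groupby("grade")["entry_date"].agg(lambda s: s.mode().iloc[0])
    return modal.iloc[0] if modal.nunique() == 1 else None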
The process of identifying the first day of the year was repeated to identify the last day of
the year. Exit data for students in each grade level were tracked between May 1st and
June 30th. Again, districts enact two common processes for marking student exit dates. The
first practice leaves the exit date blank for any student who remained in the school through the
last day. The second practice assigns all students present for the last day of school an exit date
that reflects that particular date. The first and last days occasionally vary across grades within
schools, typically with the highest grade level having an earlier departure than students in lower
grade levels. In this case, grade levels within schools were assigned different first and last days
for the same academic year. The first and last days of school identified through data analysis
were checked against academic calendars published on school and district websites to verify
accuracy.
Washington allows schools to have the period of time from the first day of the year
through the fourth instructional day in September to organize enrollment data. This flexibility
provides schools time to enroll unexpected, new students into the system and also remove from
the system students who were expected to be present but who did not show up. The fourth
instructional day was determined independently for each school based on that school’s first day
of instruction. For a school that began on September 1st, the fourth instructional day was
identified as the fourth weekday in September, excluding the US Labor Day holiday.
Essentially, this date counts as the start of the academic year for the inclusion practices that take
into account the amount of instruction a student was provided before the testing window.
State documentation identifies annual testing windows for each subject by grade level
and test-type (i.e., paper or online administrations). For some grades and tests, the testing
window is only a one- or two-day period. For others, the testing window may be as long as a full
calendar month. It is plausible that schools only test over a three- or five-day period within the
month-long window. However, school specific-testing dates are not available, an inherent
limitation of this research. With more detailed testing windows, the results herein may differ
slightly.
Instructional time was calculated independently for each subject. For a student to be
marked as receiving 100% of instructional time in a subject, the student had to be enrolled from
the fourth instructional day continuously to the start of the testing window. The total number of
school days these students were present represents the total possible instructional days available
in that year. This number was calculated with consideration for US holidays and an annual
winter break (typically late December until the first weekday after January 1st). For students who
enroll after the fourth instructional day, instructional time is determined as the number of
weekdays, excluding US holidays and winter breaks, between their first day of enrollment and
the start of the testing window, expressed as a proportion of the total possible instructional days
offered in that school.
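The instructional-time share described above can be computed from weekday counts net of holidays, for example with NumPy's business-day utilities. The dates and holiday list below are hypothetical placeholders, not the actual Washington calendars.

import numpy as np

# Hypothetical holiday list; the procedure above nets out US holidays
# and an annual winter break.
HOLIDAYS = np.array(["2013-11-28", "2013-12-23", "2013-12-24", "2013-12-25",
                     "2013-12-26", "2013-12-27", "2013-12-30", "2013-12-31",
                     "2014-01-01", "2014-01-20", "2014-02-17"], dtype="datetime64[D]")

def instructional_share(entry: str, fourth_day: str, window_start: str) -> float:
    """Fraction of possible instructional days a student was enrolled before
    testing: weekdays between max(entry, fourth instructional day) and the
    start of the testing window, net of holidays."""
    start = max(np.datetime64(entry, "D"), np.datetime64(fourth_day, "D"))
    total = np.busday_count(np.datetime64(fourth_day, "D"),
                            np.datetime64(window_start, "D"), holidays=HOLIDAYS)
    present = np.busday_count(start, np.datetime64(window_start, "D"),
                              holidays=HOLIDAYS)
    return present / total if total > 0 else 0.0

print(instructional_share("2014-01-06", "2013-09-09", "2014-04-23"))  # late arriver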
Appendix 1.D
Federal Accountability Practices and Measures
Accountability systems have relied heavily on the use of school-level proficiency rates to
determine the performance designations of schools. NCLB required states to set annual
performance targets, separately for math and English/language arts, which schools were required
to meet each year. These annual targets increased over time with the expectation that 100% of
students would make grade-level proficiency by 2014. To meet proficiency, a school had to
demonstrate that, at minimum, the proportion of the enrollment currently grade-level proficient
was as much as or more than the proficiency rate target. In some cases, schools could meet the
annual proficiency targets using alternative methods; these were policy allowances that permitted
schools to avoid sanctions for low-performance through adjustments for one-year performance
aberrations or making satisfactory progress towards meeting the proficiency rate even if still
below expectation (e.g., Safe Harbor) (see: Polikoff & Wrabel, 2013 for additional information).
States established proficiency targets separately for elementary, middle, and high schools,
but under NCLB the targets had to be the same across schools serving the same grade levels.
With the introduction of the ESEA Waivers, states were provided the option to re-establish
proficiency rate targets. States could select to set state-wide proficiency targets similar to NCLB
or identify school- and/or subgroup-specific proficiency rates. These proficiency rate targets and
participation rate requirements make up the Annual Measureable Objectives (AMOs) that
schools were required to meet in order to avoid being labeled as low-performing. The research
presented in this paper does not directly explore the relationship between exclusion rates and the
AMOs. This is because the ways in which low-performing schools are identified in more recent
and upcoming accountability systems has moved away from the use of these established AMOs.
However, it is important to acknowledge that schools with proficiency rates right around the
performance target may have reacted to the FAY regulations in different ways than schools that
expected to meet the proficiency AMOs.
The introduction of RTTT and ESEA Waivers brought additional measures of school
performance into accountability systems. Variants on student or school growth are one example
of the additional performance measures introduced. Growth measures are considered
improvements over status measures such as proficiency rates because they account for factors
outside the school’s control and focus on a school’s contribution to student learning (Harris,
2010). Many states (e.g., North Carolina) that maintained a state-level accountability system in
addition to NCLB accountability used such measures even before the introduction of RTTT and
ESEA Waivers. State policies and practices are commonly developed based on prior state
practice rather than starting with an entirely new framework (Furgol & Helms, 2012; Wrabel,
Saultz, Polikoff, McEachin, & Duque, 2016). As such, the use of these three performance
measures: proficiency rates, achievement gaps, and growth, are expected components in state
ESSA accountability systems.
Subgroup Definitions
A central component of NCLB legislation was the disaggregation of performance by key
demographic groups of students within each school. This disaggregation served to fulfill the
policy goal of focusing on all students within a school and not masking the performance of one
group with that of another. This disaggregation has been carried forward in accountability
systems post-NCLB. Along with students who are English language learners, from low-income
families, and who have disabilities, schools must separately report the performance for the major
racial/ethnic groups within the state in both English/language arts and math. Schools must also
report aggregated performance in the form of the “all students” subgroup.
Schools need not report performance for every subgroup identified by the state. Only when a subgroup
meets a minimum number of FAY students is it considered significant. The modal subgroup size
set by states is 30 individuals. A school that enrolls 29 Latino FAY students is neither required to
report the performance of this group separately nor is it held accountable if this subgroup does
not meet expected performance targets. However, once that school enrolls one additional FAY
Latino student, the school becomes responsible for publicly reporting that subgroup’s
performance and accountable for this group meeting performance expectations. When the student
subgroup is not large enough to require disaggregated reporting, these students are included in
the all students subgroup.
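To make the minimum-n rule concrete, the following is a minimal sketch in Python of how a state-set threshold of 30 FAY students determines which subgroups a school must separately report; the record structure and subgroup labels are hypothetical placeholders rather than any state's actual data format.

```python
# Minimal sketch of the minimum subgroup-size rule described above.
# Student records and subgroup labels are hypothetical placeholders.
from collections import Counter

MIN_N = 30  # modal state-set minimum subgroup size

def reportable_subgroups(fay_students):
    """Return subgroups with enough FAY students to require separate reporting.

    `fay_students` is a list of dicts with a 'subgroup' label for each
    full-academic-year student; all students always count toward the
    'all students' group regardless of subgroup size.
    """
    counts = Counter(s["subgroup"] for s in fay_students)
    return {group for group, n in counts.items() if n >= MIN_N}

# Example: 29 Latino FAY students -> not separately reportable; 30 would be.
students = [{"subgroup": "Latino"}] * 29 + [{"subgroup": "White"}] * 45
print(reportable_subgroups(students))  # {'White'}
```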
Research on federal and state accountability systems has identified that the more
subgroups a school must disaggregate performance by, the more likely the school is to be
identified as low-performing (Balfanz, Legters, West, & Weber, 2007; Krieg & Storer, 2006;
Sims, 2013). Excluding mobile students from accountability measures may reduce the number of
subgroup performance targets a school is responsible for meeting. This provides potential benefit
to schools, particularly if the subgroup performs below performance targets. By rolling the
performance of students into the all students subgroup, the school can potentially avoid being
labeled a low-performing school.
Identifying Low-performing Schools
Each accountability policy, at the federal and state level, identifies low-performing
schools in different ways. NCLB regulations stated that any school failing to meet any one AMO
was considered a low-performing school. After two consecutive years of being labeled low-
performing, schools were assigned sanctions to address this performance. The number of schools
labeled low-performing increased each year of NCLB because the performance targets continued
to increase in difficulty and schools were unable to make expected progress. To be fair, the goal
of 100% grade-level proficiency established in NCLB was considered unrealistic even from the
early years of implementation (Linn, 2003).
Learning from this criticism of NCLB, states that received RTTT grants were asked to
identify low-performing schools in a new way. Specifically, schools in the bottom 5% of
performance were labeled persistently low-achieving schools. This moved away from set
benchmarks and labeled a set proportion of schools within a state each year as low-performing.
A similar method of identification was then incorporated into the DoED's next round of accountability in
the ESEA Waivers. Every state had to identify, at minimum, the bottom 5% of schools as
Priority schools and the next lowest 10% of schools as Focus schools. States determined the
measures used to identify the bottom 5% and 10% of schools using a wide variety of metrics,
including proficiency rates, gaps in achievement, and growth measures. Schools labeled as
Priority or Focus were required to institute interventions and practices aimed at addressing
performance issues.
The metrics established in ESSA align with the more recent accountability trends and
states must identify the bottom 15% of schools for “significant intervention” (Every Student
Succeeds Act, 2015). Once again a school will be labeled as Priority and Focus depending on
where it falls in this distribution. The specific measures used by states to identify these low-
performing schools are still being identified. States can use separate measures to identify Priority
and Focus schools rather than relying solely on one type of accountability measure. For example,
Priority schools may be those with the lowest overall levels of achievement (e.g., lowest
proficiency rates) while Focus schools might be identified as having the lowest growth scores. I
highlight the practices used to identify low-performing schools because the inclusion of mobile
students in performance measures has the potential to alter which Title I schools fall into the
bottom 15% of the state’s distribution. Thus, understanding how including mobile students in
accountability systems prior to finalizing ESSA accountability systems will help states think
through the measures used and whether those practices align with goals of the state
accountability system.
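As a concrete illustration of percentile-based identification, the sketch below labels the bottom 5% of a simulated distribution of school proficiency rates as Priority and the next lowest 10% as Focus. The data are synthetic, and the choice of proficiency rates as the ranking measure is only one of the many metrics states could use.

```python
# Hypothetical illustration of percentile-based identification of
# low-performing schools (e.g., ESEA Waiver Priority/Focus labels).
import numpy as np

rng = np.random.default_rng(0)
proficiency = rng.uniform(0.2, 0.95, size=1000)  # simulated school proficiency rates

priority_cut = np.percentile(proficiency, 5)   # bottom 5% -> Priority
focus_cut = np.percentile(proficiency, 15)     # next lowest 10% -> Focus

labels = np.where(proficiency <= priority_cut, "Priority",
         np.where(proficiency <= focus_cut, "Focus", "Not identified"))
print({label: int((labels == label).sum()) for label in set(labels)})
```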
Appendix 1.E
Washington Accountability Rules
1. Full-Academic Year Enrollment and Continuous Enrollment
a. Washington State considers a student to be a Full-Academic Year student if she is
continuously enrolled from October 1st through the end of the testing period. Full-
Academic Year is measured independently for English/language arts and
mathematics because the testing periods end at different times in the academic year.
b. Continuous enrollment is defined as having no enrollment gaps in a school that
are 30 or more calendar days in length.
2. Subgroup Size
a. Washington uses 30 or more students to identify a subgroup as significant for the
purposes of accountability.
3. Meeting Proficiency Rate Targets
a. Washington uses an index to determine whether a school meets the state-established
proficiency targets each year because there are numerous grade configurations
(e.g., K-5, K-8, 3-8) that must be accounted for, and different grade levels have
different annual proficiency targets. Below is an example of the index.
b. This method of calculating proficiency rates reduces the possibility that a high-performing
grade level masks the performance of a lower-performing grade level, especially if
grade-level enrollment varies in size.
Index Example for Reading Proficiency in 2009, All-Student Group

Grade   # Enrolled (A)   # Proficient (B)   % Proficient (C=B/A)   Target (D)   Distance from Target (E=C-D)   Weight (F=A/Sum A)   G=E*F
3       75               48                 64.000                 76.1         -12.100                        0.254                -3.076
4       72               53                 73.611                 76.1         -2.489                         0.244                -0.607
5       75               59                 78.667                 76.1         2.567                          0.254                0.653
6       73               56                 76.712                 65.1         11.612                         0.247                2.874
Total   295 (Sum A)      216 (Sum B)                                                                                               -0.158 (Sum G)

This school did not meet the All-Student Reading Proficiency requirement in 2009.
The "Sum G" value represents the proficiency index. A score of zero (0) or greater represents
meeting the AMO whereas a score below zero (0) represents failing to meet the AMO.
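The index in the example can be reproduced with a few lines of code. The sketch below, using the numbers from the table above, computes each grade's distance from its target, weights it by the grade's share of enrollment, and sums the results.

```python
# Sketch of Washington's proficiency index using the example above:
# each grade's distance from its target is weighted by its enrollment share.
grades = [  # (grade, enrolled, proficient, target %)
    (3, 75, 48, 76.1),
    (4, 72, 53, 76.1),
    (5, 75, 59, 76.1),
    (6, 73, 56, 65.1),
]

total = sum(enrolled for _, enrolled, _, _ in grades)
index = sum(
    (100 * proficient / enrolled - target) * (enrolled / total)
    for _, enrolled, proficient, target in grades
)
print(round(index, 3))  # about -0.158; below zero, so the AMO is not met
```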
Appendix 1.F
Definition of Included, Excluded, and Ineligible Students by Inclusion Practice

Traditional
  Included: Continuously enrolled from October 1st through the end of the testing window.
  Excluded: Continuously enrolled after October 1st, before the start of testing, through the testing window.
  Ineligible: Left prior to the start of testing; were not enrolled for the entire testing window; enrolled in the school after the testing window began.

Last Year
  Included: Continuously enrolled from the beginning of the prior-year testing window through the end of the current-year testing window.
  Excluded: Continuously enrolled after the beginning of the prior year's testing window, before the start of testing, through the current year's testing window.
  Ineligible: Left prior to the start of testing; were not enrolled for the entire testing window; enrolled in the school after the testing window began.

January
  Included: Continuously enrolled from January 15th through the end of the testing window.
  Excluded: Continuously enrolled after January 15th, before the start of testing, through the testing window.
  Ineligible: Left prior to the start of testing; were not enrolled for the entire testing window; enrolled in the school after the testing window began.

Testing Window
  Included: Enrolled for the entire testing window, or enrolled for some portion of the testing window and has a test score on file.
  Excluded: No students.
  Ineligible: Left prior to the start of testing; were enrolled for part of the testing window but have no test on record; enrolled in the school after the testing window ended.

Testing Window Instruction
  Included: Present for at least one instructional day prior to the testing window and either enrolled for the entire testing window or enrolled for some portion of the testing window with a test score on file.
  Excluded: Present during the testing window with a test score on file but not present for at least one instructional day prior to the start of testing.
  Ineligible: Left prior to the start of testing; were enrolled for part of the testing window but have no test on record; enrolled in the school after the testing window ended.

All Schools and All Schools Instruction
  Included: Present for at least one instructional day prior to the testing window and either enrolled for the entire testing window or has a test score on file, regardless of enrollment at the time of testing.
  Excluded: Present during the testing window with a test score on file but not present for at least one instructional day prior to the start of testing.
  Ineligible: Left the school system prior to testing; arrived in the school system after the testing period.
Table 1.1
Student Demographics (N=1,532,280)

Variable                               Mean     sd
White                                  0.506    0.500
Latino                                 0.278    0.448
Black                                  0.063    0.243
Asian                                  0.062    0.241
American Indian/Alaskan Native         0.023    0.151
Pacific Islander or Hawaiian Native    0.013    0.114
Two-or-More Races                      0.054    0.227
Low-Income                             0.630    0.483
English language learner               0.115    0.319
Students with disabilities             0.141    0.348
Average Reading Score                  -0.162   0.987
Average Math Score                     -0.165   0.953
Table 1.2
School Demographics (n=4,912 school-year observations)

Variable                       Mean     sd
% White                        0.529    0.529
% Latino                       0.260    0.260
% Black                        0.057    0.057
% Asian                        0.054    0.054
% American Indian              0.032    0.032
% Pacific Islander             0.011    0.018
% Two or more races            0.049    0.047
% Low-income                   0.633    0.165
% English learner              0.134    0.145
% Students with disabilities   0.142    0.048
Urban                          0.311    0.463
Suburban                       0.268    0.443
Rural                          0.254    0.436
Town                           0.167    0.373
Elementary                     0.630    0.483
Middle                         0.196    0.397
High School                    0.140    0.347
Other Configuration            0.033    0.179
Mobility Rate                  0.222    0.085
Starting Enrollment            488      310
Table 1.3
Number and Proportion of Students Classified as Included, Excluded, and Ineligible

Reading                    2010             2011             2012             2013             2014
Traditional
  Included               231,434 (92.7%)  262,723 (93.2%)  278,096 (93.3%)  289,809 (93.5%)  293,614 (93.7%)
  Excluded                18,315 (7.3%)    19,093 (6.8%)    19,916 (6.7%)    20,297 (6.5%)    19,769 (6.3%)
  Ineligible              15,678           17,622           16,019           15,624           15,365
Last Year
  Included               --               232,366 (82.5%)  245,808 (82.5%)  257,465 (83.0%)  262,007 (83.6%)
  Excluded               --                49,450 (17.5%)   52,204 (17.5%)   52,641 (17.0%)   51,376 (16.4%)
  Ineligible             --                17,622           16,019           15,624           15,365
January
  Included               239,926 (96.1%)  271,902 (96.5%)  287,645 (96.5%)  299,776 (96.7%)  303,429 (96.8%)
  Excluded                 9,823 (3.9%)     9,914 (3.5%)    10,367 (3.5%)    10,330 (3.3%)     9,954 (3.2%)
  Ineligible              15,678           17,622           16,019           15,624           15,365
Testing Window
  Included               250,978 (100%)   284,783 (100%)   299,973 (100%)   311,928 (100%)   315,452 (100%)
  Excluded                     0 (0.0%)         0 (0.0%)         0 (0.0%)         0 (0.0%)         0 (0.0%)
  Ineligible              14,449           14,655           14,058           13,802           13,296
Testing Window w/ Instruction
  Included               250,696 (99.9%)  284,076 (99.8%)  299,296 (99.8%)  311,138 (99.7%)  314,745 (99.8%)
  Excluded                   282 (0.1%)       707 (0.2%)       677 (0.2%)       790 (0.3%)       707 (0.2%)
  Ineligible              14,449           14,655           14,058           13,802           13,296
All Schools
  Included               255,971 (99.9%)  289,199 (99.9%)  304,008 (99.8%)  315,874 (99.8%)  319,000 (99.8%)
  Excluded                   180 (0.1%)       434 (0.1%)       510 (0.2%)       573 (0.2%)       508 (0.2%)
  Ineligible               9,276            9,805            9,513            9,283            9,240
All Schools w/ Instruction
  Included               255,971 (99.9%)  289,199 (99.9%)  304,008 (99.8%)  315,874 (99.8%)  319,000 (99.8%)
  Excluded                   180 (0.1%)       434 (0.1%)       510 (0.2%)       573 (0.2%)       508 (0.2%)
  Ineligible               9,276            9,805            9,513            9,283            9,240

Math                       2010             2011             2012             2013             2014
Traditional
  Included               231,755 (92.4%)  262,439 (93.0%)  277,778 (93.1%)  289,521 (93.2%)  293,321 (93.4%)
  Excluded                19,127 (7.6%)    19,868 (7.0%)    20,611 (6.9%)    21,141 (6.8%)    20,578 (6.6%)
  Ineligible              15,698           18,724           17,086           16,628           16,293
Last Year
  Included               --               231,848 (82.1%)  245,597 (82.3%)  257,113 (82.8%)  261,730 (83.4%)
  Excluded               --                50,459 (17.9%)   52,792 (17.7%)   53,549 (17.2%)   52,169 (16.6%)
  Ineligible             --                18,724           17,086           16,628           16,293
January
  Included               240,489 (95.9%)  271,750 (96.3%)  287,388 (96.3%)  299,564 (96.4%)  303,267 (96.6%)
  Excluded                10,393 (4.1%)    10,557 (3.7%)    11,001 (3.7%)    11,098 (3.6%)    10,632 (3.4%)
  Ineligible              15,698           18,724           17,086           16,628           16,293
Testing Window
  Included               252,098 (100%)   285,269 (100%)   300,325 (100%)   312,453 (100%)   315,956 (100%)
  Excluded                     0 (0.0%)         0 (0.0%)         0 (0.0%)         0 (0.0%)         0 (0.0%)
  Ineligible              14,482           15,762           15,150           14,837           14,236
Testing Window w/ Instruction
  Included               251,805 (99.9%)  284,529 (99.7%)  299,610 (99.8%)  311,604 (99.7%)  315,175 (99.8%)
  Excluded                   293 (0.1%)       740 (0.3%)       715 (0.2%)       849 (0.3%)       781 (0.2%)
  Ineligible              14,482           15,762           15,150           14,837           14,236
All Schools
  Included               257,126 (99.9%)  289,516 (99.8%)  304,365 (99.8%)  316,614 (99.8%)  319,626 (99.8%)
  Excluded                   189 (0.1%)       467 (0.2%)       555 (0.2%)       646 (0.2%)       605 (0.2%)
  Ineligible               9,265           11,048           10,555           10,030            9,961
All Schools w/ Instruction
  Included               257,126 (99.9%)  289,516 (99.8%)  304,365 (99.8%)  316,614 (99.8%)  319,626 (99.8%)
  Excluded                   189 (0.1%)       467 (0.2%)       555 (0.2%)       646 (0.2%)       605 (0.2%)
  Ineligible               9,265           11,048           10,555           10,030            9,961

Note. Sample is restricted to students in Title I schools. Percentages are shares of eligible (Included + Excluded) students.
Table 1.4a
Differences between Included and Excluded, Traditional

                            Reading                             Math
                            Included        Excluded            Included        Excluded
                            (n=1,355,676)   (n=97,390)          (n=1,354,814)   (n=101,325)
White                       0.511 (0.500)   0.442ᵃ (0.497)      0.510 (0.500)   0.433ᵃ (0.495)
Latino                      0.277 (0.448)   0.302 (0.459)       0.277 (0.448)   0.302 (0.459)
Black                       0.060 (0.237)   0.092ᵃ (0.289)      0.060 (0.237)   0.094ᵃ (0.292)
Asian                       0.065 (0.246)   0.040ᵃ (0.196)      0.065 (0.247)   0.049 (0.215)
American Indian             0.022 (0.147)   0.032 (0.175)       0.022 (0.147)   0.031 (0.173)
Pacific Islander            0.012 (0.109)   0.025ᵃ (0.155)      0.012 (0.109)   0.026ᵃ (0.158)
Two-or-more races           0.053 (0.224)   0.066 (0.249)       0.053 (0.224)   0.065 (0.246)
Low-income                  0.619 (0.486)   0.759ᵇ (0.428)      0.619 (0.486)   0.761ᵇ (0.426)
English language learner    0.114 (0.317)   0.137 (0.344)       0.115 (0.319)   0.159ᵃ (0.366)
Student with Disability     0.137 (0.344)   0.171 (0.376)       0.137 (0.344)   0.166 (0.372)
Standardized Achievement    -0.133 (0.980)  -0.474ᵇ (0.991)     -0.134 (0.954)  -0.533ᵇ (0.905)

Note. Standard deviations in parentheses. ᵃ standardized mean difference d > 0.10; ᵇ d > 0.25; ᵈ d > 0.50.
Reading achievement differences based on 1,241,091 Included students and 86,993 Excluded students. Math achievement differences based on 1,136,880 Included students and 84,009 Excluded students.
Table 1.4b
Differences between Included and Excluded Students, Last Year

                            Reading                             Math
                            Included        Excluded            Included        Excluded
                            (n=997,646)     (n=455,418)         (n=996,288)     (n=208,969)
White                       0.511 (0.500)   0.495 (0.500)       0.511 (0.500)   0.467 (0.499)
Latino                      0.282 (0.450)   0.273 (0.445)       0.282 (0.450)   0.281 (0.450)
Black                       0.055 (0.227)   0.078 (0.268)       0.054 (0.227)   0.084ᵃ (0.277)
Asian                       0.065 (0.247)   0.059 (0.235)       0.065 (0.247)   0.051 (0.221)
American Indian             0.020 (0.140)   0.029 (0.168)       0.020 (0.140)   0.024 (0.153)
Pacific Islander            0.011 (0.105)   0.016 (0.126)       0.011 (0.105)   0.022 (0.147)
Two-or-more races           0.056 (0.230)   0.049 (0.216)       0.056 (0.230)   0.070 (0.255)
Low-income                  0.609 (0.488)   0.670ᵃ (0.470)      0.609 (0.488)   0.725ᵃ (0.446)
English language learner    0.114 (0.317)   0.119 (0.324)       0.114 (0.317)   0.143 (0.350)
Student with Disability     0.133 (0.340)   0.153 (0.360)       0.133 (0.340)   0.158 (0.365)
Standardized Achievement    -0.114 (0.977)  -0.243ᵃ (0.996)     -0.111 (0.952)  -0.365ᵇ (0.940)

Note. Standard deviations in parentheses. ᵃ standardized mean difference d > 0.10; ᵇ d > 0.25; ᵈ d > 0.50.
Reading achievement differences based on 900,459 Included students and 427,623 Excluded students. Math achievement differences based on 804,252 Included students and 169,870 Excluded students.
Table 1.5a
Differences between Included and Excluded, January Practice

                            Reading                             Math
                            Included        Excluded            Included        Excluded
                            (n=1,402,678)   (n=50,388)          (n=1,402,458)   (n=53,681)
White                       0.508 (0.500)   0.448ᵃ (0.497)      0.507 (0.500)   0.436ᵃ (0.496)
Latino                      0.278 (0.448)   0.301 (0.459)       0.278 (0.448)   0.301 (0.459)
Black                       0.061 (0.239)   0.088ᵃ (0.283)      0.061 (0.239)   0.091ᵃ (0.288)
Asian                       0.064 (0.245)   0.038ᵃ (0.191)      0.065 (0.246)   0.047 (0.212)
American Indian             0.022 (0.148)   0.033 (0.179)       0.022 (0.148)   0.032 (0.176)
Pacific Islander            0.012 (0.111)   0.024ᵃ (0.154)      0.012 (0.111)   0.025ᵃ (0.157)
Two-or-more races           0.054 (0.225)   0.067 (0.250)       0.053 (0.225)   0.065 (0.247)
Low-income                  0.624 (0.484)   0.742ᵃ (0.437)      0.624 (0.484)   0.745ᵇ (0.436)
English language learner    0.115 (0.319)   0.134 (0.340)       0.117 (0.321)   0.159ᵃ (0.365)
Student with Disability     0.139 (0.346)   0.166 (0.372)       0.138 (0.345)   0.162 (0.369)
Standardized Achievement    -0.143 (0.982)  -0.511ᵇ (0.995)     -0.145 (0.955)  -0.590ᵇ (0.896)

Note. Standard deviations in parentheses. ᵃ standardized mean difference d > 0.10; ᵇ d > 0.25; ᵈ d > 0.50.
Reading achievement differences based on 1,283,315 Included students and 44,769 Excluded students. Math achievement differences based on 1,177,428 Included students and 43,461 Excluded students.
Table 1.5b
Differences between Included and Excluded Students, Testing Window Instruction Practice

                            Reading                             Math
                            Included        Excluded            Included        Excluded
                            (n=1,459,951)   (n=3,163)           (n=1,462,723)   (n=3,378)
White                       0.506 (0.500)   0.450ᵃ (0.498)      0.505 (0.500)   0.441ᵃ (0.497)
Latino                      0.279 (0.448)   0.318 (0.466)       0.279 (0.448)   0.317 (0.466)
Black                       0.062 (0.241)   0.075 (0.263)       0.062 (0.241)   0.077 (0.266)
Asian                       0.063 (0.243)   0.028ᵃ (0.164)      0.064 (0.245)   0.039ᵃ (0.194)
American Indian             0.023 (0.149)   0.035 (0.185)       0.023 (0.149)   0.032 (0.176)
Pacific Islander            0.013 (0.112)   0.027ᵃ (0.163)      0.013 (0.113)   0.029ᵃ (0.167)
Two-or-more races           0.054 (0.226)   0.067 (0.250)       0.054 (0.226)   0.065 (0.247)
Low-income                  0.628 (0.483)   0.725ᵃ (0.447)      0.629 (0.483)   0.721ᵃ (0.449)
English language learner    0.115 (0.319)   0.122 (0.327)       0.118 (0.323)   0.159ᵃ (0.365)
Student with Disability     0.140 (0.347)   0.175ᵃ (0.380)      0.139 (0.346)   0.170 (0.376)
Standardized Achievement    -0.157 (0.985)  -0.711ᵈ (1.012)     -0.163 (0.956)  -0.817ᵈ (0.868)

Note. Standard deviations in parentheses. ᵃ standardized mean difference d > 0.10; ᵇ d > 0.25; ᵈ d > 0.50.
Reading achievement differences based on 1,334,836 Included students and 2,962 Excluded students. Math achievement differences based on 1,227,373 Included students and 3,138 Excluded students.
Table 1.5c
Differences between Included and Excluded Students, All Schools and All Schools Instruction Practices

                            Reading                             Math
                            Included        Excluded            Included        Excluded
                            (n=1,484,052)   (n=2,205)           (n=1,487,247)   (n=2,462)
White                       0.507 (0.500)   0.451ᵃ (0.498)      0.505 (0.500)   0.441ᵃ (0.497)
Latino                      0.277 (0.448)   0.324ᵃ (0.468)      0.277 (0.448)   0.325ᵃ (0.468)
Black                       0.062 (0.242)   0.070 (0.256)       0.062 (0.242)   0.071 (0.257)
Asian                       0.063 (0.242)   0.033ᵃ (0.179)      0.064 (0.244)   0.048 (0.213)
American Indian             0.023 (0.150)   0.031 (0.173)       0.023 (0.150)   0.029 (0.167)
Pacific Islander            0.013 (0.112)   0.027ᵃ (0.163)      0.013 (0.113)   0.027ᵃ (0.163)
Two-or-more races           0.054 (0.226)   0.063 (0.244)       0.054 (0.226)   0.060 (0.237)
Low-income                  0.628 (0.483)   0.699ᵃ (0.459)      0.629 (0.483)   0.697ᵃ (0.460)
English language learner    0.114 (0.318)   0.124 (0.330)       0.117 (0.322)   0.170ᵃ (0.376)
Student with Disability     0.140 (0.347)   0.163 (0.370)       0.140 (0.347)   0.160 (0.366)
Standardized Achievement    -0.162 (0.987)  -0.695ᵈ (1.030)     -0.168 (0.956)  -0.815ᵈ (0.892)

Note. Standard deviations in parentheses. ᵃ standardized mean difference d > 0.10; ᵇ d > 0.25; ᵈ d > 0.50.
Reading achievement differences based on 1,357,647 Included students and 2,087 Excluded students. Math achievement differences based on 1,247,212 Included students and 2,256 Excluded students.
Table 1.6
Average Change in Proficiency Rate Compared to Performance in the Traditional Practice (in percentage points), by Practice and Subgroup
Subgroup                                   Traditional % Proficient   January   Testing Window   Testing Window Instruction   All Schools   All Schools Instruction
Reading
All Students (n=4874) 65.961 -0.386 -1.008 -0.410 -5.328 -0.681
White (n=4261) 72.995 -0.353 -0.907 -0.376 -5.231 -0.627
Black (n=784) 54.038 -0.529 -1.336 -0.575 -6.063 -0.960
Latino (n=2949) 55.509 -0.316 -0.956 -0.366 -4.833 -0.605
Asian (n=896) 72.356 -0.286 -0.717 -0.306 -3.706 -0.495
American Indian/Alaskan Native (n=156) 38.733 -0.192 -0.496 -0.239 -3.304 -0.431
Pacific Islander/Hawaiian Native (n=29) 45.654 -1.074 -1.597 -0.956 -5.679 -1.394
Two or More Races (n=406) 70.078 -0.257 -0.991 -0.337 -5.799 -0.753
Low-Income Students (n=4713) 59.373 -0.348 -0.925 -0.373 -5.649 -0.616
English Language Learners (n=1707) 33.115 -0.173 -0.444 -0.186 -2.500 -0.298
Students with Disabilities (n=2473) 35.643 -0.294 -0.848 -0.330 -3.310 -0.541
Math
All Students (n=4871) 53.102 -0.489 -1.331 -0.554 -4.484 -0.880
White (n=4260) 58.059 -0.414 -1.153 -0.439 -4.431 -0.755
Black (n=782) 35.062 -0.567 -1.486 -0.645 -4.238 -0.973
Latino (n=2944) 58.059 -0.404 -1.189 -0.461 -3.879 -0.721
Asian (n=902) 63.550 -0.418 -1.181 -0.553 -3.846 -0.761
American Indian/Alaskan Native (n=156) 25.407 -0.331 -0.798 -0.374 -2.507 -0.625
Pacific Islander/Hawaiian Native (n=31) 34.657 -0.811 -1.637 -1.092 -4.958 -1.554
Two or More Races (n=403) 57.308 -0.322 -1.235 -0.493 -5.085 -1.035
Low-Income Students (n=4710) 44.300 -0.436 -1.206 -0.461 -4.577 -0.753
English Language Learners (n=1722) 27.846 -0.258 -0.809 -0.304 -2.580 -0.424
Students with Disabilities (n=2469) 23.061 -0.241 -0.690 -0.252 -2.269 -0.448
† Last Year numbers based on 2011-2014 school performance measures
Table 1.7
Average Change in Proficiency Rate Over Time (in percentage points), by Practice and Subgroup
Subgroup                              Traditional   January   Testing Window   Testing Window Instruction   All Schools   All Schools Instruction
Reading
All Students 1.101 1.103 1.105 1.120 1.142 1.131
White 0.979 0.998 0.999 1.013 1.031 1.027
Black 1.458 1.484 1.270 1.510 1.478 1.459
Latino 1.897 1.905 1.866 1.956 1.965 1.960
Asian 2.296 2.327 2.384 2.340 2.619 2.384
American Indian/Alaskan Native 0.358 0.400 0.516 0.561 0.510 0.467
Pacific Islander/Hawaiian Native 0.894 1.396 1.698 0.578 0.971 1.337
Two or More Races 0.736 0.882 0.550 0.876 0.812 0.864
Low-Income Students 1.368 1.344 1.310 1.360 1.290 1.356
English Language Learners 7.803 7.842 7.845 7.886 7.501 7.840
Students with Disabilities 0.983 1.032 0.813 0.970 0.951 1.016
Math
All Students 2.859 2.847 2.814 2.853 2.791 2.878
White 3.605 3.600 3.539 3.598 3.338 3.604
Black 3.977 3.905 3.715 4.035 3.566 3.918
Latino 3.695 3.687 3.476 3.714 3.434 3.707
Asian 5.192 5.153 5.190 5.116 5.137 5.164
American Indian/Alaskan Native 1.630 1.689 1.661 1.734 1.820 1.960
Pacific Islander/Hawaiian Native -0.372 0.205 0.739 0.566 0.125 0.315
Two or More Races 0.221 0.010 -0.288 0.142 -0.404 -0.220
Low-Income Students 3.315 3.284 3.220 3.325 2.972 3.332
English Language Learners 6.866 6.873 6.760 6.882 6.381 6.824
Students with Disabilities 1.427 1.454 1.230 1.473 1.286 1.453
Table 1.8
Mean and Standard Deviation of Value-Added Measures, by Subject, Years of Lagged Achievement, and Inclusion
Practice
                              Reading 1-Year (n=3450)   Reading 2-Year (n=2269)   Math 1-Year (n=3445)   Math 2-Year (n=2198)
Traditional 0.000 0.000 0.000 0.000
(0.141) (0.155) (0.175) (0.197)
January 0.000 0.000 0.000 0.001
(0.141) (0.155) (0.174) (0.196)
Testing Window 0.000 0.000 0.001 0.001
(0.139) (0.153) (0.171) (0.193)
Testing Window Instruction 0.000 0.000 0.000 0.001
(0.139) (0.153) (0.171) (0.194)
All Schools 0.000 0.000 0.000 0.001
(0.134) (0.149) (0.166) (0.187)
All Schools Instruction 0.000 0.000 0.000 0.001
(0.135) (0.149) (0.165) (0.187)
Last Year 0.000 0.000 0.000 0.000
(0.143) (0.158) (0.176) (0.197)
Table 1.9
Average Change in School-Value Added Relative to the Traditional Practice, by Inclusion Practice
                    January   Testing Window   Testing Window Instruction   All Schools   All Schools Instruction   Last Year
One-Year
Reading (n=3450) -0.000 0.000 -0.000 -0.000 -0.000 -0.000
Math (n=3345) 0.000 0.000 -0.000 0.000 -0.000 -0.001
Two-Year
Reading (n=2268) -0.000 -0.000 -0.000 -0.000 -0.000 0.000
Math (n=2198) 0.001 0.001 0.001 0.001 0.001 -0.001
Table 1.10
Proportion of Traditional Practice Low-Performing Schools Identified as Low-Performing with Alternative Inclusion Practices

                                      January   Testing Window   Testing Window Instruction   All Schools   All Schools Instruction
Bottom 5%
Reading Proficiency
  All Student Proficiency (n=245)     95.1%     94.3%            96.7%                        89.4%         95.9%
Reading Growth
  One-Year (n=179)‡                   93.3%     89.9%            89.4%                        84.4%         83.8%
  Two-Year (n=116)†                   93.1%     86.2%            86.2%                        86.2%         86.2%
Math Proficiency
  All Student Proficiency (n=245)     95.5%     94.7%            96.3%                        92.2%         95.9%
Math Growth
  One-Year (n=172)‡                   94.8%     91.9%            92.4%                        87.2%         87.2%
  Two-Year (n=112)†                   95.5%     92.9%            92.9%                        87.5%         88.4%
Bottom 10%
Reading Proficiency
  All Student Proficiency (n=488)     92.6%     89.5%            94.1%                        80.9%         91.8%
Reading Growth
  One-Year (n=341)‡                   92.4%     83.9%            84.5%                        78.0%         78.6%
  Two-Year (n=117)†                   80.3%     70.1%            68.4%                        63.2%         64.1%
Math Proficiency
  All Student Proficiency (n=488)     93.9%     90.4%            95.1%                        85.0%         92.8%
Math Growth
  One-Year (n=333)‡                   93.7%     89.8%            89.8%                        83.8%         83.5%
  Two-Year (n=113)†                   85.0%     78.8%            82.3%                        69.9%         70.8%

‡ Based on years 2011-2014
† Based on years 2012-2014
Essay 2: Who, When, and Where: The Mobility of Students across School Poverty Contexts
Addressing the educational outcomes associated with poverty has long been recognized
as a challenge for the US public education system. The original Elementary and Secondary
Education Act (ESEA) legislation, passed in 1965, intended to address some of the discrepancies
in opportunities and resources for students of low-income families through additional funding for
high-poverty schools (Borman, 2005; Vinovskis, 2009). Following the release of A Nation at
Risk (1983), policymakers ramped up efforts to address the perceived inadequacies within the
school system, creating national education goals and promoting school reform efforts (Ladd,
2012; Vinovskis, 2009). School reform policies have pursued numerous avenues to
improvement, all with the intent to provide every student a quality education and reduce
pervasive achievement gaps (CPRE Policy Briefs, 1991; Darling-Hammond, 2007).
High-poverty schools are the target of intense school reform efforts and the focus of
many current education policies. Relative to more economically advantaged schools, high-
poverty schools deal with numerous challenges that impact daily operations and school
effectiveness. For example, high-poverty schools enroll larger proportions of low-performing
students, experience higher rates of absenteeism, and must consider the instability associated
with poverty (e.g., less access to health care, food insecurity, housing instability) in the home-
lives of their students (Berliner, 2009; Reardon, 2011; Romero & Lee, 2007). Moreover, funding
structures in many states provide fewer financial resources for high-poverty schools (Baker,
Sciarra, & Farrie, 2010), creating constraints (e.g., teacher retention) not present in more
economically advantaged communities (Branch, Hanushek, & Rivkin, 2013; Hanushek, Kain, &
Rivkin, 2004b; Orfield & Lee, 2004; Orfield, Losen, Wald, & Swanson, 2004). Even with
targeted reform efforts intended to address educational inequities, the disparities in educational
attainment between low- and high-poverty schools remain pervasive today.
Student mobility is yet another challenge high-poverty schools face that has not received
sufficient attention in policy or research. Mobility is concentrated and experienced at higher rates
amongst low-income students (de la Torre & Gwynne, 2009; Eadie, Eisner, Miller, & Wolf,
2013; Hanushek, Kain, & Rivkin, 2004a; Voight, Shinn, & Nation, 2012). Such mobility is
harmful to both mobile students and their stable classroom peers (Hanushek et al., 2004a;
Raudenbush, Jean, & Art, 2011; Stiefel, Schwartz, & Whitesell, 2013). Evidence from Chicago
further suggests that a high rate of late enrolling students is one aspect of a school’s context that
impedes school improvement efforts (Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010).
In other words, mobility can further exacerbate the many challenges high-poverty schools work
to address each year.
Prior research on mobility has not connected school-level poverty with student-level
mobility. Certainly, if mobility is a risk factor that disproportionately affects students from low-
income families, then the schools that serve them will also be disproportionately impacted.
Research has not identified the extent to which mobility differs by school-level poverty or how
the mobile students that schools are tasked with supporting each year differ across poverty contexts.
By bringing empirical evidence of these relationships to the forefront, policymakers, educators,
and researchers can begin to address how to best account for mobility in the design,
implementation, and evaluation of policy and practice. As such, the research presented herein
explicitly connects patterns of student mobility to school-level poverty and seeks to answer the
following questions:
1. How do school mobility rates vary by school-level poverty?
2. When do new students arrive in schools and do the patterns vary by school-level
poverty?
3. How do the mobile students that high-poverty schools work with differ from the
mobile students switching to or between lower-poverty schools?
This research is intended to inform the mobility and school poverty literatures by creating a
thorough, statewide account of which schools and students are most in need of support with
regard to mobility and educational opportunity. Such work has implications for current policies
such as the Every Student Succeeds Act, the implementation of the Common Core State
Standards, and the many interventions designed to improve outcomes of low-income students
and the schools that serve them.
Mobility Literatures
Before outlining the analytic strategy I use for studying school mobility as it relates to
school-level poverty, I provide a short overview of what is known about student and school
mobility to help position this research. First, mobility typically falls into one of two categories:
structural or non-structural mobility. Structural mobility is a move that is created or built into
the design of the education system. The most common structural move is a promotion to
middle or high school from the prior level of schooling. For example, a student must move to a
middle school to begin sixth grade because her elementary school only served students through
fifth grade. Structural mobility also includes moves caused by changes to school zones or
redistricting (Titus, 2007); the development of new schools to address expanding student
populations or overcrowding (Siegel-Hawley, 2013); and, in rare cases, school closure
following multiple years of low performance (Kirshner, Gaertner, & Pozzoboni, 2010).
Structural mobility is not the focus of this research study but it is important to note that even
within structural moves, not all mobility will be experienced in the same way. For example, the
promotion to a new grade level may be something a student has expected for a few years. A
sudden redistricting may be more disruptive as it is an unanticipated change to the student’s
academic trajectory.
Non-structural school changes are the focus of a substantial proportion of the transition
literature. Non-structural mobility results from a multitude of life circumstances. While there is
an underlying public assumption that school changes are often caused by residential changes, the
reality is that only one-third of non-structural changes are associated with residential relocation
(Gasper, DeLuca, & Estacion, 2010; Rumberger, 2003). Previous research suggests that school
changes not due to residential relocation may be a response to the death of a family member or
caretaker, parental divorce, or a loss of or change in employment, or may be required by school
disciplinary sanctions (Grigg, 2012; Gruman, Harachi, Abbott, Catalano, & Fleming, 2008; Rumberger,
2003). Students may also switch schools because they seek a specific type of academic program
(e.g., magnet programs) or seek better educational opportunities when space in a desired school
becomes available (de la Torre & Gwynne, 2009; Hanushek et al., 2004a; Temple & Reynolds,
1999). Whereas most structural mobility is experienced in the summer months and students begin
the first day of school enrolled in the new school, non-structural mobility occurs at any point in
the calendar year. Typically these moves are classified as between-year (occurring over the
summer) or mid-year moves (between the first and last day of the academic year). The focus of
this paper is on non-structural mid-year mobility.
Student Mobility
There is a robust body of literature that documents differences in students who are and
are not mobile. Students who are Black and Latino (Burkam, Lee, & Dwyer, 2009; Eadie et al.,
2013; Engec, 2006; Gasper et al., 2010; Grigg, 2012; Hanushek et al., 2004a; Haynie & South,
2005; Parke & Kanyongo, 2012; US Government Accountability Office, 2010; Xu, Hannaway,
& D’Souza, 2009), low-income (Alexander, Entwisle, & Dauber, 1996; de la Torre & Gwynne,
2009; Eadie et al., 2013; Hanushek et al., 2004a; Kerbow, 1996; Voight et al., 2012), English
language learners (Fong, Bae, & Huang, 2010; Xu et al., 2009), and students with disabilities
(Rennie Center for Education Research & Policy, 2011) are more frequently mobile than their
peers. Empirical evidence also suggests that migrant students (Branz-Spall, Rosenthal, & Wright,
2003; Hagan, MacMillan, & Wheaton, 1996), students experiencing homelessness or living in
the foster care system (Conger & Reback, 2001; Cutuli et al., 2013; Fantuzzo, LeBoeuf, Chen,
Rouse, & Culhane, 2012), and military-connected students (Bradshaw, Sudhinaraset, Mmari, &
Blum, 2010; Lyle, 2006; US Government Accountability Office, 2011) switch schools at higher
rates than students who are not members of these groups.
The experience of making unscheduled school moves is consistently found as negatively
associated with the academic outcomes of mobile students (Burkam et al., 2009; Gruman et al.,
2008; Heinlein & Shinn, 2000; Mehana & Reynolds, 2004; US Government Accountability
Office, 2010; Xu et al., 2009). Research suggests that mobility is associated with lower levels of
academic achievement (Burkam et al., 2009; Gruman et al., 2008; Heinlein & Shinn, 2000; US
Government Accountability Office, 2010; Xu et al., 2009), higher rates of grade repetition
(Alexander et al., 1996; Burkam et al., 2009; Simpson & Fowler, 1994), decreased engagement
in schools (Gruman et al., 2008), as well as increased risk of dropping out and lower rates of
graduation (Astone & McLanahan, 1994; Gasper et al., 2010; Haynie & South, 2005;
Rumberger, Larson, Palardy, Ream, & Schleicher, 1998). However, mobile students are not a
monolithic group of individuals and not all school moves can be classified as harmful for the
students who make them.
Identifying the causal effects of mobility, or of different types of moves, on student
outcomes is difficult given that researchers struggle to disentangle the separate impacts of a
student’s prior learning experience, the instigating mechanism behind the school move (e.g.,
divorce of parents), and the actual school transition in the analysis of post-move outcomes. That
is, the same experiences that may have created the need to switch schools mid-year may also be
associated with a student’s educational outcomes following a school move. Grigg (2012)
attempted to address some of these limitations by modeling the academic growth of students in
years of mobility against their own academic growth in years where the student was stable.
Moreover, he was able to model multiple types of moves (mid-year or between-year as well as
structural versus non-structural). His research suggests that a non-structural, mid-year move not
mandated by a school (i.e., not forced through disciplinary sanctions) is associated with a 6%
reduction in growth in the year of mobility. This is an average detrimental effect of mid-year
moves for all students who switch schools. But again, mid-year school moves arise from
numerous circumstances and an average effect does not distinguish between proactive versus
reactive moves, even if all moves were family initiated.
In an attempt to disentangle detrimental moves from advantageous moves, Hanushek and
colleagues (2004) again modeled the academic growth of students relative to their own growth in
years before and after a move. Moreover, they measured differences in these growth rate changes
for students who move within and across districts, within the same region or to a new region.
The underlying assumption behind this modeling choice was the belief that families switching
districts within a regional area place more weight on schooling options than families switching
schools within a district or across districts in two different regions. Thus, the different types of
moves may capture differences between detrimental and advantageous mobility. The results of
this research again identify a negative effect of mobility on achievement growth in the year of
moving. For students who moved across districts, that one-year cost of mobility was offset by
long-term gains in school quality as measured by improvements to the student’s annual rate of
growth. By contrast, within district moves were not associated with improvements to school
quality following a mid-year transition. These findings suggest that students move in different
ways, and some families are potentially making moves informed, in part, by the type of school a
student has the potential to enroll in following a school change.
These two studies provide strong evidence on the negative effects of mobility, at least in
the year of moving, on student achievement. However, these studies are not without limitation.
While Grigg’s work advances much of the research on mobility, it also suffers from a focus on a
single metropolitan region. His work is not unique in this way. Research on mobility has placed a
heavy focus on urban schools (Cullen, Jacob, & Levitt, 2005; e.g., Nakagawa, Stafford, Fisher,
& Matthews, 2002; Parke & Kanyongo, 2012; Raudenbush et al., 2011; Voight et al., 2012) or a
handful of districts within or across states (e.g., Carson, Esbensen, & Taylor, 2013; Yin,
Kitmitto, & Shkolnik, 2012).²¹ Even statewide analyses such as those presented by Hanushek
and colleagues (2004a) or Xu and colleagues (2009) place a focus on urban versus suburban
incidents of mobility.

²¹ These are solely patterns or rates of mobility in urban schools, not comparisons of urban
communities to suburban and rural districts within the same region or state.

The underlying assumption in most of this work is that low-income students are
concentrated in urban communities and thus, by studying urban schools, the results can inform
policies aimed at supporting high-poverty schools. However, in an attempt to examine student-
level mobility's connection with school-level poverty, focusing on urban locations may limit the
generalizability of findings. In fact, while it is true that high concentrations of poverty are
commonly found in urban centers, both low-poverty urban schools as well as high-poverty
suburban schools exist across the US (Logan, Minca, & Adar, 2012). As such, a statewide
analysis that controls for location but also looks at school-level poverty in urban, suburban, and
rural locations will advance the conversation around supporting high-poverty schools and the
prevalence of mobility.
School Mobility
By contrast to the extant research on student-level mobility, only a few studies or reports
document school-level mobility rates. School mobility refers to the change in enrollment a
school experiences between the first and last day of classes, including both students who enroll
late and students who depart before the academic year ends.²² Research on North Carolina public
schools suggests that mobility rates are highest, on average, in urban schools (28-34% turnover
rate) as compared to suburban (18-21%) and rural schools (13-16%) (Xu et al., 2009).²³ In the
District of Columbia, the district saw an overall net loss of 2% enrollment in 2011-2012;
however, school-by-school enrollment variations ranged from a net enrollment loss of 15% to
a net enrollment gain of 16% (Office of the State Superintendent of Education (OSSE), 2013).
This 16% net gain in students reflects 22% of students withdrawing from the school and 38% of
the school's cumulative student body enrolling mid-year. These reports from Washington, DC
illustrate that even within urban communities and single school districts, schools face
varying levels of annual instability.

²² Unless otherwise identified, the school mobility rates presented here were calculated as the number of
students who arrive to a school after the start of the year plus the number of students who depart early, as a
proportion of the school's cumulative enrollment during that year.
²³ Xu and colleagues (2009) use the term turnover rate when discussing school mobility rates. The exact
calculation of this rate is not defined in the paper.
Unfortunately, the schools that experience higher rates of instability tend to be amongst
the lowest-performing schools in a state. For example, of Massachusetts' lowest-performing 35
schools, 88% had a mobility rate above 21%, whereas the statewide incidence of mobility is 10.3%
(O'Donnell & Gazos, 2010; Rennie Center for Education Research & Policy, 2011).²⁴ Xu and
colleagues (2009) document that the unadjusted turnover rates between the highest- and lowest-
performing schools (i.e., top and bottom quartiles of students meeting state standards) in North
Carolina differ by 12 percentage points. To be fair, the prevalence of mobility in these schools is
potentially a contributing factor to the low levels of performance identified.
In rigorous analyses that attempt to identify the impact of mobility and instability on
schools, the evidence is consistent and concerning. Mobility adversely impacts mobile students
but also negatively affects stable students enrolled in schools which serve highly mobile
populations of students (Hanushek et al., 2004a; Raudenbush et al., 2011). In Chicago, being
enrolled in a school with a high mobility rate leads to annual achievement growth that is lower
by approximately one month of content coverage (Raudenbush et al., 2011).²⁵ These growth
rates are in comparison to students enrolled in schools with low rates of annual mobility.
Furthermore, this research suggests that the effects of enrolling in a highly mobile school are
cumulative for both mobile and stable students: three years of enrollment in a highly mobile
school is associated with a three-month lag in average growth. Similar findings from Texas also
suggest that school-level instability has negative effects on the academic growth of both stable
and mobile students, but these effects are larger for low-income students as well as students
identified as Black or Latino (Hanushek et al., 2004a).

²⁴ The state uses the term "churn rate" instead of mobility rate, though the calculation is the same as the
mobility rate definition provided in Footnote 22.
²⁵ Mobility in this research focuses on in-mobility, or the proportion of students who arrive late to a school as
a proportion of the school's cumulative enrollment.
The instability of student enrollments is neither the only turnover schools must address
nor the only instability that is associated with negative impacts on student achievement. At least
in elementary and middle schools, teachers are more likely to exit schools that serve highly
mobile communities, partially due to the weak relationships between teachers and parents
(Allensworth, Ponisciak, & Mazzeo, 2009). School context and working conditions, of which
student mobility is one piece, are leading factors in a teacher’s decision to leave a school
(Johnson, Kraft, & Papay, 2012; Ladd, 2011; Loeb, Darling-Hammond, & Luczak, 2005). High-
poverty schools especially struggle to recruit and retain high-quality teachers which reduces the
instructional consistency for students (Hanushek et al., 2004b; Lankford, Loeb, & Wyckoff,
2002; Shen, 1997; Shields et al., 2001). Importantly, teacher turnover negatively affects student
achievement, particularly in low-performing schools (Ronfeldt, Loeb, & Wyckoff, 2013).
The collective challenges created by high mobility and high poverty have the potential to
hinder many of today’s policy efforts focusing on curriculum and standards, equity in teacher
quality across schools, and increasing educational opportunities for low-income students. Until
now, research on mobility has not explicitly explored the connection between school-level
poverty and student-level mobility. This study begins to address this limitation by asking how
mobility differs across school-level poverty, not only in the rate of prevalence or in the timing of
moves, but also from where students are coming. By conducting this analysis at a state level and
utilizing data that allows for day-to-day deconstruction of mobility patterns, we can better
understand the dynamics of mobility across poverty contexts and begin to shape policies
responsive to this empirical evidence.
Data
This research relies on an administrative panel of data from the Washington Office of
Superintendent of Public Instruction (WA OSPI) for academic years 2009-2010 (2010) through
2013-2014 (2014). Student-level data files include identifiers for a student’s race/ethnicity, grade
level, eligibility for free- or reduced-price lunch (FRLE), whether the student has a disability, is
an English language learner, or is from a migrant family. Standardized test scores are available in
math and English/language arts (ELA) for students in tested grades (predominantly grades 3-8
and 10). In the student enrollment files, students are linked to each school and associated school
district attended during the academic year. The student data files also mark dates of enrollment
and dates of exit from schools and districts.
Publicly available data from WA OSPI and the National Center for Education Statistics
(NCES) provide school characteristics such as location (e.g., urban, suburban), grades served,
and enrollment characteristics. The school demographics discussed in the research that follows
were constructed using the administrative panel of data provided by OSPI but were verified
against the publicly available data sources. Both the publicly available data and the
administrative panel identify school type (e.g., public, vocational, virtual, etc.). I limit this
analysis to traditional public schools. Washington state did not open its first charter school until
2014-2015.²⁶ Thus, the school options families select from in Washington may not be
representative of what occurs in states with stronger school choice options.

²⁶ Future research should consider investigating mobility by various school types. However, that analysis is
beyond the scope of this current paper.
Sample
This analysis is restricted to students in Kindergarten (K) through fifth grade. The
primary reason for this restriction is that the types of schools available to students in early
grades are not as numerous as those offered to students once they reach middle or high school.
For students in upper grades, families can select from traditional public schools, alternative
education programs, or career and technical education programs. Students in upper grades can
also be enrolled in government or social services programs (e.g., juvenile justice centers). In other
words, the educational choices and mobility patterns for students in upper grades are more
complicated than they are for students in earlier grades. Thus, focusing on grades K-5 provides a
sample of students who have a more narrow set of educational options to choose from within the
state’s public school system (e.g., traditional public, special education, or virtual schools). Future
studies will look into mobility patterns in middle and high schools.
A second reason for the sample restriction is that, consistent with prior research (see:
Engec, 2006; Rumberger, 2003), the majority of school moves are experienced by students in
earlier grades in Washington. Fifty-nine (59) percent of all students who switch schools mid-year
in Washington are enrolled in grades K-5. These grade levels account for less than 47% of the
state’s total enrollment in that same period. All measures of school mobility (as defined below)
are restricted to the enrollments of students in grades K-5, regardless of whether the school
serves other grade levels. The correlation between a school’s K-5 mobility rate and the mobility
rate for students in all grades is 0.895.
Key Variables and Identifiers
The Washington data does not provide all of the indicators necessary to identify mobile
students within a year. Variables were created to make the analysis of school-level and student-
level mobility possible. Here I define key variables used throughout the analysis.
First and Last Day of School
The administrative data panel does not include the first or last day of the academic year. I
created these dates using the student dates of enrollment and exit. The first day of school was
identified using increasing bandwidths of days around September 1st. The modal enrollment date
for each grade in each school was identified for each bandwidth of days used (e.g., 10, 20, 30). I
continued to expand bandwidths until a date was identified that matched across grade levels
within the school. In cases in which an agreed-upon date was not identified, or one grade had a
different modal date than all others within a school, publicly available academic calendars were
checked for verification. The same process was repeated to identify the last day of school, using
June 10th as the initial date against which expanding bandwidths were set.
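The following is an illustrative sketch, not OSPI code, of this expanding-bandwidth search: the window around September 1st is widened until every grade's modal enrollment date inside the window agrees, and unresolved cases fall back to manual verification against published calendars. The function name and input structure are assumptions for illustration.

```python
# Illustrative sketch of the expanding-bandwidth modal-date search described above.
from collections import Counter
from datetime import date, timedelta

def modal_first_day(enroll_dates_by_grade, anchor=date(2010, 9, 1)):
    """enroll_dates_by_grade: dict mapping grade -> list of student enrollment dates."""
    for bandwidth in (10, 20, 30, 40):  # widen the window around the anchor date
        lo, hi = anchor - timedelta(days=bandwidth), anchor + timedelta(days=bandwidth)
        modes = set()
        for dates in enroll_dates_by_grade.values():
            in_window = [d for d in dates if lo <= d <= hi]
            if in_window:
                modes.add(Counter(in_window).most_common(1)[0][0])
        if len(modes) == 1:  # all grades agree on one modal enrollment date
            return modes.pop()
    return None  # unresolved; verify against the school's published calendar
```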
Enrollment Period
Washington defines the initial enrollment period in a school as the time from the first day
of the school year through the fourth instructional day in September. This is the period of time
when schools must identify which students are or are not enrolled and adjust their enrollment
records to correctly identify students as such. The fourth instructional day was uniquely
identified for each school based on that school's first day of instruction. All moves, arrivals or
departures, prior to a school's fourth day of September instruction are considered corrections to
academic records and not actual changes in enrollment.
Stable and Mobile Students
Stable students are the individuals who are enrolled in a school for the entire academic
year. In Washington, these are the students enrolled in a school as of the fourth instructional day
of September through the last instructional day in the year. Across all five years of data, 83.2%
of students receive a full year of instruction in the same school and are labeled as stable students.
I use two groups of mobile students in this research. The first group is Arrivers. These students
are individuals who show up in a traditional public school after the fourth instructional day in
September. Across the five years of data, 14.0% of students arrive late in a school. This group
breaks down into students who arrive from outside a traditional public school in Washington
(8.6%) and the group I call Switchers (5.4%). Switchers are the students who move from one
traditional public school within the state to a second traditional public school in the state during
the same academic year. The remaining 2.9% of students in this dataset are individuals who are
enrolled in a traditional public school at the start of the year and exit the traditional public school
system before the academic year ends.²⁷

²⁷ Values add to more than 100% due to rounding.
Mobility Rate
I measure a school’s mobility rate based upon the most common definition in prior
research (see: Alexander et al., 1996; Fong et al., 2010; O’Donnell & Gazos, 2010; Rennie
Center for Education Research & Policy, 2011; Thompson, Meyers, & Oshima, 2011). Here, a
school’s rate of mobility refers to the total number of late arriving students and early departing
students as a proportion of a school’s cumulative enrollment for the year.
Eq (1).
mobility rate = (# late-arriving students + # of early departing students)
total # of students enrolled in a year
A student who switches from one school (the sending school) to a second school (the receiving
school) during the academic year is counted in the numerator and denominator of both schools. If
a student arrives late and departs early, she is included twice in the numerator and once in the
denominator.
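A minimal sketch of Eq (1), assuming a simple per-student record with hypothetical late-arrival and early-departure flags, shows how a student who both arrives late and departs early is counted twice in the numerator but only once in the denominator.

```python
# Minimal sketch of Eq (1) using hypothetical per-student enrollment records.
def school_mobility_rate(students):
    """students: list of dicts with booleans 'arrived_late' and 'departed_early'.

    A student who both arrives late and departs early counts twice in the
    numerator but once in the denominator, as described above.
    """
    moves = sum(s["arrived_late"] + s["departed_early"] for s in students)
    return moves / len(students)

roster = (
    [{"arrived_late": False, "departed_early": False}] * 400  # stable students
    + [{"arrived_late": True, "departed_early": False}] * 60  # late arrivers
    + [{"arrived_late": True, "departed_early": True}] * 10   # counted twice
)
print(round(school_mobility_rate(roster), 3))  # 0.170
```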
School-Poverty
I determined school-level poverty by the proportion of low-income students enrolled in a
school as of the fourth instructional day in September. A student is considered low-income if she
is eligible for free or reduced-price meals. I define low-poverty schools as those schools in the
bottom quartile (Q1) of low-income enrollment, while high-poverty schools are those schools in
the top quartile (Q4) of low-income enrollment.²⁸ Mid-Range Poverty schools are schools in the
2nd or 3rd quartile of low-income enrollment. These quartiles were based on enrollment in schools
that serve students in at least one grade between kindergarten and fifth grade.

²⁸ This method of defining low- and high-poverty schools has been used in other studies on high-poverty
schools (Boyd, Lankford, Loeb, Rockoff, & Wyckoff, 2008; Clotfelter, Ladd, Vigdor, & Wheeler, 2007). The
current proposed definition of high-poverty schools in the Every Student Succeeds Act is the top quartile of
low-income enrollment in the state (Elementary and Secondary Education Act of 1965, As Amended by the
Every Student Succeeds Act-Accountability and State Plans, 2016).
Prior Achievement
I create a student-level and a school-level measure of prior achievement. For students, I
standardize math and reading test scores by grade, year, and test type (e.g., Measurement of
Student Progress). I then take the mean of the student's math and reading standardized test scores
to create an average prior performance score for the student. At the school level, I create the
average reading score using all students who were enrolled in the prior year and have a valid test
score. Each student's performance is adjusted by the proportion of the instructional days for
which she was enrolled.²⁹ I repeat this process for math and then take the mean of the school's
reading and math average scores to create an overall measure of school prior achievement.³⁰

²⁹ The average school achievement that adjusts for a student's length of enrollment and the non-adjusted
average school achievement are highly correlated (r=0.931).
³⁰ The analyses below were tested using only the reading score, only the math score, and the average of the
two scores. The results do not change. By using the average, rather than just one subject, I am able to
incorporate more students into the analytic sample.
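A hedged sketch of this two-step construction, assuming a pandas data frame with hypothetical column names, z-scores each test within grade, year, and test type and then averages a student's math and reading z-scores.

```python
# Sketch of the two-step standardization described above (column names assumed).
import pandas as pd

def add_prior_achievement(df: pd.DataFrame) -> pd.DataFrame:
    """df columns assumed: grade, year, test_type, math_score, read_score."""
    grp = df.groupby(["grade", "year", "test_type"])
    for col in ("math_score", "read_score"):
        # z-score each test within its grade/year/test-type cell
        df[col + "_z"] = grp[col].transform(lambda x: (x - x.mean()) / x.std())
    # average the two standardized subjects into one prior-performance score
    df["prior_achievement"] = df[["math_score_z", "read_score_z"]].mean(axis=1)
    return df
```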
Analytic Strategy
I begin all analyses with descriptive statistics on the students and schools used in this
analysis. For students, I identify demographics for all student-year observations and then
disaggregate the demographics by stable students and mobile students. For schools, I present
average enrollment characteristics including school mobility rates. I also break these
characteristics down for schools in each quartile of poverty. The statistics are presented for
school-year observations.
Research Question One
The first research question asks: How do school mobility rates vary by school-level
poverty? I begin by presenting mean differences and then use OLS regression to account for
demographic variables that may be associated with a school's mobility rate. With this in mind, I
model the relationship between school mobility and school characteristics as follows:
Eq (2).
School Mobility Rate_sy = β_0 + β_1 Poverty_Quartile_sy + β_2 X_sy + τ_y + ε_sy
where the mobility rate of school s in year y is a function of a school’s quartile of poverty, β
1
,
and a vector of school-specific characteristics. The focal variables of interest are the poverty-
quartiles where the individual coefficients for each quartile will provide the difference in
mobility rates, relative to high-poverty schools, after controlling for school demographics
associated with student mobility. These characteristics include school’s proportion of enrollment
belonging to each racial/ethnic group, the proportion of students identified as having a disability,
the proportion of enrollment identified as migrant, the school’s locale (e.g., urban, suburban),
whether the school is in eastern or western region of the state,
31
and the schools’ starting
enrollment size.
32
I include a year fixed-effect, !
!
, to address any variation in mobility rates that
may be attributed to annual shocks that occur statewide.
33
I cluster standard errors, !
!"
, at the
school-level. I repeat the above model for years 2011-2014 and include a measure of prior
school-level achievement. Typically, annual school performance reports do not become available
[31] Western WA is home to almost 75% of all schools in the state. Western WA has recently seen a population
growth rate higher than eastern WA for the first time in seven years, and this growth may be reflected in
school populations and mobility rates (Washington State Office of Financial Management, 2014).
[32] I do not control for a school's proportion of students who are English learners because it is collinear with
the proportion of Latino students in a school (r=0.868).
[33] The results are robust to alternative specifications. A district fixed effect was also tested. The results of the
focal variables (e.g., quartile of school-poverty) reduce in magnitude but the overall relationships identified
between these covariates and the school-level mobility rate hold. However, including a district fixed-effect
removes any school that is the only school within a district that serves students in grades K-5.
until after the academic year has already begun. A model that does not account for lagged
achievement may overlook the contribution of a school’s prior year performance to a family’s
decision in seeking a new school for their student in the middle of the current academic year.
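A minimal sketch of the Eq. (2) regression, assuming a DataFrame `df` with one row per school-year containing the mobility rate, a Q1-Q4 poverty_quartile label, the demographic controls, and a school_id for clustering; variable names are illustrative, not the author's original code:

```python
import statsmodels.formula.api as smf

model = smf.ols(
    "mobility_rate ~ C(poverty_quartile, Treatment(reference='Q4'))"  # Q4 = omitted group
    " + pct_black + pct_latino + pct_asian + pct_am_indian + pct_pac_isl"
    " + pct_two_or_more + pct_swd + pct_migrant + C(locale) + western_wa"
    " + starting_enrollment + C(year)",  # year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})  # school-clustered SEs
print(model.summary())

# For the 2011-2014 specification, one would restrict df to those years and add
# the lagged school prior-achievement control to the formula.
```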
Research Question Two
The second research question asks: When do new students arrive in schools and do the
patterns vary by school-level poverty? I investigate the timing of student moves in a purely
descriptive manner. Historically, research on student mobility has only indicated whether a
student moved during a Fall or Spring term or during the middle of the academic year because
exact dates of enrollment and exit were unavailable. This limited perspective on when mobility
occurs does not provide the complete picture of what schools experience in an academic year.
The data from Washington allows for day-to-day tracking of students across schools.
Rather than using statistical inference to analyze the timing of student moves, I present
two graphs that provide a visual representation of mobility patterns identifying when students
arrive in new schools, by poverty-quartiles. I use seven-day periods of time to examine the week-
by-week arrivals of students into schools. The graphs utilized provide a first-look investigation
into whether students move at certain times in the academic year and whether the patterns of
these arrivals differ by school poverty-quartiles. This analysis puts all five years of movers into
one graph. In order to do this, I adjust the date variables such that September 1st of each
academic year is day zero (0). In shifting the calendars to be on the same time scale, I can
compare moves across years that occur at key times (e.g., winter break) in the school year. By
poverty-quartile, I look at the proportion of late arriving students who enroll each week as well
as the raw counts of students who arrive in each poverty-quartile each week. The proportional
measures are poverty-quartile specific, identifying the proportion of late-arriving students that
enroll each week relative to all late-arrivals experienced in each school poverty-quartile in the
data panel.
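A sketch of this week-by-week tabulation, assuming a DataFrame `moves` with one row per late-arriving student: enrollment_date (datetime), start_year (the calendar year the academic year began), and the receiving school's poverty_quartile. Names are illustrative.

```python
import pandas as pd

def weekly_arrivals(moves: pd.DataFrame) -> pd.DataFrame:
    # Re-zero each academic year so September 1st is day 0, putting all five
    # years of movers on the same time scale.
    sept1 = pd.to_datetime(
        pd.DataFrame({"year": moves["start_year"], "month": 9, "day": 1})
    )
    moves = moves.assign(week=(moves["enrollment_date"] - sept1).dt.days // 7)

    counts = moves.groupby(["poverty_quartile", "week"]).size().rename("count")
    # Quartile-specific proportions: each week's share of all late arrivals
    # experienced by schools in that poverty quartile.
    props = counts / counts.groupby("poverty_quartile").transform("sum")
    return pd.DataFrame({"count": counts, "proportion": props})
```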
Research Question Three
My third research question asks: “How do the types of mobile students high-poverty
schools work with differ from the mobile students switching to or between lower-poverty
schools?” I rely on multinomial logistic regression to predict the quartile of poverty a student
moves to when switching schools within the traditional public school system. The basic model
for this analysis is:
Eq (4).

$$\log\frac{P_{isy}(q)}{P_{isy}(Q4)} = \alpha_q + \beta_{1q}\,\text{LowIncome}_{isy} + \beta_{2q}\,\text{SendingPovertyQuartile}_{isy} + \beta_{3q}(\text{LowIncome} \times \text{SendingPovertyQuartile})_{isy} + \beta_{4q} G_{isy} + \beta_{5q} S_{isy} + \beta_{6q} X_{isy} + \tau_y + \varepsilon_i$$

where P_isy(q) is the probability that student i from sending school s in year y moves to a Q1
poverty school, Q2 poverty school, or Q3 poverty school relative to moving to a Q4 poverty
school. I estimate the log odds of the probability of these outcomes, log[P_isy(q)/P_isy(Q4)], as
a function of the student being low-income, the sending school's quartile of poverty, and the
interaction between the student's income status and the sending school's quartile of poverty,
β_3q. The interaction term between student income and sending school poverty level is the
variable of focal interest. The coefficients on the interaction terms identify whether the
relationship between a student's sending school poverty-quartile and the log-odds of moving to
a particular poverty-
quartile receiving school varies by the student’s income status, above and beyond any main
effect of being low-income or moving from a school in a particular poverty-quartile.
I include a vector of grade indicators (G_isy) to identify whether patterns differ across
grade levels.[34] The vector of student controls (S_isy) includes race, English language learner
status, special education status, and migrant status. I also control for the sending school
characteristics (X_isy), which include the school's proportion of enrollment belonging to each
racial/ethnic group, the proportion of students identified as having a disability, the proportion of
enrollment identified as migrant, the school's locale (e.g., urban, suburban), whether the school
is in the eastern or western region of the state, and the school's starting enrollment size. Finally, I
use a year fixed-effect to account for year-specific influences that affect the ways a student
moves. These could include changes in housing policy or employment opportunities (e.g., a
drought changing working conditions in migrant communities) that are year specific. ε_i is a
random error term and is clustered at the student-level.
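A minimal sketch of Eq. (4) using statsmodels' multinomial logit, under illustrative assumptions: a DataFrame `df` of mid-year moves with an integer-coded outcome `receiving_q` (0 = Q4, which MNLogit treats as the base outcome; 1-3 = Q1-Q3) and an abbreviated set of controls. Names are not the author's original code.

```python
import numpy as np
import statsmodels.formula.api as smf

mnl = smf.mnlogit(
    "receiving_q ~ low_income * C(sending_quartile, Treatment(reference='Q4'))"
    " + C(grade) + C(race) + ell + swd + migrant"            # student controls
    " + sending_prior_ach + sending_pct_black + C(locale)"   # (abbreviated) sending-school controls
    " + western_wa + C(year)",                               # region + year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["student_id"]})  # student-clustered SEs

# Exponentiating the log-odds coefficients yields relative risk ratios of the
# kind reported in Table 2.4.
rrr = np.exp(mnl.params)
```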
I run a subset of this analysis for fourth and fifth grade students in 2011-2014. Prior
achievement data is not available for students in earlier grades. This final specification is used to
identify the patterns of students high- or low-poverty schools receive, with specific interest in the
income status of those students and where those students are moving from, above and beyond the
ways in which mobility patterns might be influenced by the prior achievement of those students.
To be fair, there is always a concern regarding endogeneity when attempting to identify a causal
impact of mobility on student achievement, or vice versa. However, this work is not intended to
imply a causal impact of low achievement on student mobility. Rather, this question is intended
[34] Prior research demonstrates that students are more frequently mobile in earlier grades (Engec, 2006).
Moreover, as I show in the descriptive characteristics of students who are stable relative to students who are
switchers, students in later grades (grades 2-5) are underrepresented in the switchers group. Thus,
controlling for these enrollment characteristics allows me to test whether there are different patterns of
mobility between students across grades.
to describe the ways in which high- and low-poverty schools differ in the types of mobile
students they receive.
Results
I begin with a brief discussion of the demographic characteristics of students and schools
(see Tables 2.1 and 2.2, respectively). I provide the statewide student demographics as well as
present the demographic characteristics of both stable students and switchers. Students in
Washington are predominantly White (57.2%) and Latino (21.7%) with fewer students
identifying as Asian (7.3%), two or more races (6.5%), Black (4.8%), American Indian/Alaskan
Native (American Indian) (1.6%), and Pacific Islander/Hawaiian Native (Pacific Islander)
(1.0%). More than half (51.9%) of elementary students are eligible for Free- or Reduced-Price
Lunch (low-income), 14.3% of students are identified as having a disability, 13.8% are English
language learners, and 1.8% of students are identified as Migrant. Students are distributed evenly
across all grades (K-5).
Compared to stable students, Latino, Black, two-or-more race, American Indian, and Pacific
Islander students are overrepresented among mobile students. Low-income
students, English language learners, students with disabilities, migrant students, and low-
achieving students are all overrepresented in the group of students who switch schools. Finally,
larger proportions of mobile students are found in kindergarten and first grade and students in
grades 2-5 make up smaller proportions of the mobile students than stable students in
Washington.
Table 2.2 provides descriptive characteristics on all schools in the state and schools in
each quartile of poverty (Q1-Q4). There are significant differences between high-poverty schools
and schools which fall into the other three poverty-quartiles. To highlight a few differences,
high-poverty schools serve smaller proportions of White and two-or-more race students as well
as larger proportions of students identifying as Latino, Black, American Indian, Pacific Islander,
English language learners, and from migrant families. The average Q4 school has a prior
achievement value two-thirds of a standard deviation unit (sd unit) below that of a low-poverty
school (-0.332 vs. 0.351). Even relative to Q3 schools, the prior achievement at high-poverty
schools is one-quarter of a sd unit lower in high-poverty (Q4) schools.
High-poverty schools are more frequently located in urban areas and less frequently in
suburban areas, compared to the three other quartiles of schools. By definition, the proportion of
enrollment identified as low-income increases across the school poverty-quartiles with Q1
schools serving an average of 19.6% low-income students and high-poverty schools serving an
average of 84.0% low-income students. The remaining demographic characteristics (i.e., percent
Asian, percent students with disabilities, schools located in towns, and starting enrollment) vary
across the poverty-quartiles. For example, High-Poverty schools have larger starting enrollments
than Q3 schools, smaller starting enrollments than Q1 schools, and no differences in average
starting enrollment to schools in Q2. Similarly, Q1 schools serve smaller proportions of students
with disabilities and Q3 schools serve larger proportions of students with disabilities than high-
poverty (Q4) schools. These descriptive statistics show that schools differ on many key
demographic categories at the start of the year, prior to the arrival of mobile students.
Research Question 1
The first research question asks: How do school mobility rates vary by school-level
poverty? There are significant differences in the average mobility rate of schools in each
poverty-quartile. The statewide mobility rate in elementary schools is 18.4% (see Table 2.2).
Low-poverty schools have an average of 11.8% mobility during the academic year, while Q2 and Q3
schools experience 17.0% and 20.2% mobility, respectively. These rates are all significantly
lower than the average 24.8% mobility experienced in high-poverty (Q4) schools. Figure 1
shows the distribution of the unadjusted school mobility rates by poverty-quartile. As mentioned
above, unadjusted mobility rates differ significantly by school-level poverty. With increasing
levels of low-income enrollments, schools experience higher rates of mid-year mobility. Table
2.3 shows the results of the OLS regression predicting school mobility rate. The baseline model
(Specification 1) shows that the unadjusted difference between Q1 and Q4 poverty schools is
13.028 percentage points. Similarly, Q4 schools have mobility rates 7.799 and 4.553 percentage
points higher than Q2 and Q3 poverty schools, respectively. These differences are all significant
(p<0.001).
I add school-level covariates into the model for Specification 2. Specification 3 reduces
the sample to years 2011-2014. Specification 4 is run on schools in 2011-2014 and adds in a
prior school achievement control. I find consistent results across these models and focus my
discussion on Specification 4 because this is the full model. Controlling for school demographic
characteristics and prior school achievement, significant differences remain in school mobility
rate across the poverty-quartiles, though the differences reduce in magnitude from the baseline
model. The difference in mobility rate between high-poverty (Q4) and low-poverty (Q1) schools
is 9.166 percentage points (p<0.001), a reduction of almost 4 percentage points from the
unadjusted differences. The differences between Q4 schools and Q2 and Q3 schools also reduce
in magnitude from the baseline model, with differences now at 5.894 and 3.406 percentage
points respectively. These results suggest that the demographic differences of school enrollments
alone do not explain the differences in mobility rates experienced by schools serving higher
proportions of low-income families.[35]
A few key demographic groups are consistently associated with differences in school
mobility rates after controlling for school poverty. Schools serving larger proportions of Black,
Pacific Islander, or two-or-more race students experience higher rates of mid-year mobility. As the
proportion of Asian students increases, a school's mobility is predicted to decrease. For example,
a 10 percentage point increase in Asian student enrollment is associated with a mobility-rate
decrease of 1.34 percentage points (p<0.001). Prior school achievement is also significantly
associated with differences in school mobility rates. I find that for a 0.10 sd unit decrease in
average achievement, mobility rates increase by 0.567 percentage points. Urban schools (β=0.949,
p<0.050) have higher rates of mobility relative to suburban schools, and schools in western WA
(β=1.370, p<0.010) also have significantly higher mobility rates than schools located in the
eastern part of the state. It is plausible that part of the explanation for why urban and western
WA schools experience higher rates of mobility is the availability of other schools in the
local area. Future research should investigate this further.
Research Question 2
The second research question asks: When do students move and does timing differ by
school-level poverty? Figure 2 shows when students arrive in Q1, Q2, Q3, and Q4 schools after
the fourth day of instruction in September. Along each x-axis, day 0 represents September 1st
and day 300 represents June 28th. The beginning of each calendar month is identified along the
[35] There is potential that these findings do suffer from unobserved variable bias. For example, the state of
Washington is home to six active military installations. Students from military-connected families switch
schools three times as frequently as their non-military-connected peers (Military Child Education Coalition,
2009; Segal & Segal, 2004). Additionally, students experiencing homelessness are not included as a
demographic group in this analysis due to data limitations. Homeless students are well-documented as
experiencing high rates of annual mobility (Fantuzzo, LeBoeuf, Chen, Rouse, & Culhane, 2012). Adding these
additional demographic groups may further explain differences in mobility rates.
x-axis. Each bar in the graph represents the number of switchers arriving each 7 calendar day
period. The y-axis is the proportion of late arriving students (in that poverty-quartile) that arrive
in a given week. In Figure 3, I repeat this graph but change the y-axis to the raw count of new
arriving students.[36]
Two things are worth noting in Figure 2. First, students arrive in schools every week of
the academic year. The gap week prior to January 1st is the winter/holiday break, when schools
are not in session. Second, the patterns and proportionality regarding when students arrive in
schools are similar across each poverty-quartile. The first week of each month appears to have
slightly higher rates of in-mobility than the later weeks. The most notable spike of new arrivals
occurs in the first two weeks of the new calendar year (January 1st – January 14th). In total,
11.5% of arrivers show up in schools during this time across the four quartiles. The final spike of
arrivals (mid-April) occurs in the three weeks prior to annual testing. This period coincides with
spring break for the schools that offer one. After this time, the number of arrivers declines
steadily until the end of the academic year.
What Figure 3 captures that Figure 2 does not is the volume of late arrivers experienced
in high-poverty schools compared to schools in low- and mid-range poverty-quartiles. Out of all
late arriving students across five years of data, Q4 schools receive 35.8% (n=48,338) of late
arrivers, Q3 schools receive 26.3% of late arrivers, Q2 schools receive 23.0% of late arrivers,
and Q1 schools receive 14.9% of late-arrivers. So while schools experience similar proportions
of their late-arriving students showing up each week, the raw number of movers high-poverty
[36] To ensure that the proportionality in the graphs is not distorted by differences in the number of students
enrolled in each poverty-quartile, I measure the total number of observations in the panel associated with
each quartile. Enrollment in Q1 schools is 26.8% of total observations, Q2 schools enroll 24.6% of total
observations, Q3 schools enroll 22.8% of total observations, and Q4 schools enroll 25.7% of total
observations. These are relatively balanced proportions of observations in each school poverty-quartile, so
the proportionality in the graphs represents legitimate differences in volume (number of arrivers in a given
week) across quartiles.
schools must work with each year is larger than what is experienced in the other poverty-quartile
schools. Compared to the average of 27 late-enrolling students in Q1 schools each year, the average Q4
school welcomes 57 late enrolling students each year. In the next section, I investigate whether
the types of students who move to the different poverty-quartile schools differ in systematic
ways, given that the timing of student moves is generally similar across school contexts.
Research Question 3
The third question of this study asks: How do the types of mobile students high-poverty
schools work with differ from the mobile students switching to or between lower-poverty
schools? In Table 2.4, I provide the results of the multinomial logistic regression analysis. These
results are presented as relative risk ratios. A coefficient less than one suggests lower poverty
quartile schools have lower odds of receiving mobile students with a given characteristic,
relative to high-poverty schools. A coefficient greater than one suggests lower poverty quartile
schools have higher odds of serving switchers of a given demographic characteristic, relative to
high-poverty schools. The reference group of these results is White, non-low income,
kindergarten students from a suburban Q4 sending school in eastern Washington.
I provide the results of three separate specifications of this model. Specification 1
includes students in all grades for the years 2011-2014. I run this model because the descriptive
characteristics of students who switch schools in WA state show that students in kindergarten
and first grade are more frequently identified as switchers than their peers in later grades.
Specification 1, then, does not control for prior achievement at the student level because testing does
not begin until third grade. Specification 2 reduces the sample to students who have prior
achievement data (grades 4 and 5) but does not yet include prior achievement as a control.
Specification 3 adds a measure of student prior achievement. Results from all specifications of
the multinomial logistic regression suggest consistent relationships between a student’s income
status and the poverty level of the student’s sending school as they relate to the odds of moving
to a Q4 school relative to moving to a Q1, Q2, or Q3 school. As such, I predominantly focus on
the findings from Specification 3 but I first highlight one key piece of information identified in
Specification 1.
In the model that includes students in all grades (Specification 1), I look specifically at
the relationship between the grade level of switchers and the relative odds of moving to higher or
lower poverty schools. High-poverty schools (Q4) always have higher odds of working with
mid-year kindergarten switchers than students moving in any other grade. For example, a Q1
school has 17.0% higher odds, a Q2 school has 28.8% higher odds, and a Q3 school has
15.7% higher odds of working with fifth-grade switchers than kindergarten switchers, relative to
Q4 schools.[37]
There is no guidance in prior literature as to why high-poverty elementary schools
are more likely to receive kindergarten switchers over students in any of the later grades. One
potential hypothesis is that families with more time in or connection to the local education
system have more insight on area schools prior to making a school change than do families with
kindergarten students. What research does tell us is that mobility in the earliest years of
elementary school (i.e., Kindergarten through second grade) has been linked with sustained,
negative effects on a student’s long-term achievement especially in reading (Voight et al., 2012).
Future research exploring why mobility is occurring in these early years may help in the
[37] I ran a check to identify whether students in tested grades (3-5) moved in different ways than students in
non-tested grades (K-2). Lower-poverty schools had higher odds of working with students in tested grades
than non-tested grades, relative to high-poverty (Q4) schools. I also ran this analysis using first grade or
second grade as the reference group. Results suggest that all differences between tested and non-tested
grade students appear to be driven by kindergarten switchers, as the results of Specification 1 suggest.
development of policy and practice to reduce both the prevalence of early mobility and the
detrimental costs associated with it.
Moving to Specification 3 of this analysis, I discuss the focal variables of interest first and
then conclude with some key findings that further demonstrate the differences in the types of
mobile students high-poverty and lower-quartile poverty schools serve. The reference group in
this analysis is non-low income students from Q4 sending schools.[38] I focus the presentation of
results on the comparison between low-poverty (Q1) and high-poverty (Q4) schools. All else
equal, Q1 schools have higher odds of working with students coming from Q1 (β=4.333,
p<0.001), Q2 (β=2.748, p<0.001), or Q3 sending schools (β=1.586, p<0.010) than Q4 sending
schools, relative to high-poverty schools. The coefficient on low-income suggests that Q1
schools have 57.4% lower odds of working with low-income mobile students coming from Q4
schools than non-low income students coming from Q4 schools, all relative to high-poverty
schools.
I find the interaction between student-income and sending school poverty levels to be
significant in the Q1/Q4 comparison. These interaction terms suggest that low-income mobile
students enter high- and low-poverty schools in ways that differ based on their sending school’s
poverty level. Low-poverty schools have 333.3% higher (β=4.333, p<0.001) odds of working
with non-low income students coming from Q1 sending schools and 174.8% higher (β=2.748,
p<0.001) odds of working with non-low income students from Q2 sending schools, relative to
high-poverty schools. By contrast, the interaction terms also suggest that low-poverty schools
have 19.7% lower odds of working with a low-income student coming from a Q1 school and
32.5% lower odds of working with low-income students from Q2 sending schools, both
relative to high-poverty schools.[39] The interaction term for Q3 sending is not significant.

[38] The coefficients in Table 2.4 are relative risk ratios. The effects are multiplicative rather than additive and,
thus, numbers discussed in text may not be immediately identifiable in the table.

Essentially what the results of this model suggest is that high-poverty schools have higher odds
of working with low-income students than low-poverty schools, especially when those students
are moving from sending schools with higher levels of poverty.
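Because the relative risk ratios are multiplicative (note 38), the 19.7% and 32.5% figures can be recovered directly from the Specification 3 coefficients in Table 2.4; the following is a worked check of the calculation described in note 39:

$$0.426 \times 4.333 \times 0.435 \approx 0.803, \qquad 1 - 0.803 \approx 0.197$$

that is, roughly 19.7% lower odds for low-income movers from Q1 sending schools. The Q2 figure follows the same way: 0.426 × 2.748 × 0.577 ≈ 0.675, or about 32.5% lower odds.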
Beyond differences in the mobile students high-poverty and lower-poverty schools serve
in terms of income or sending school poverty level, I want to highlight other key differences in
the demographics of switchers high-poverty schools work with compared to schools in the lower
three poverty quartiles. Lower-quartile poverty (Q1-Q3) schools relative to high-poverty schools
always have lower odds of working with switchers who are Black, Latino, American Indian,
Pacific Islander, or students identifying as being Two or more races. For example, a Q1 school
has 47.0% lower odds (β=0.530, p<0.001) of working with a mobile student who is Black than a
high-poverty school does. The odds of a Q3 school receiving a mid-year switcher who is Black
are 37.3% lower relative to a Q4 school (β=0.627, p<0.001). I find similar patterns of differences
between high-poverty schools and lower-quartile poverty schools as they relate to working with
mid-year switchers who are English language learners and migrant students. High-poverty
schools are no different than Q1 or Q2 schools in the odds of working with students with
disabilities (SWDs) who switch schools mid-year but have slightly lower odds of working with
SWDs relative to Q3 poverty schools (β=1.120, p<0.010). In other words, the types of mobile
students high-poverty and lower-poverty schools work with each year differ on key demographic
characteristics.
High-poverty schools also work with students who are coming from sending schools that
differ from the sending schools of mobile students moving to lower quartile poverty schools. For
example, for every one sd increase in a sending school's lagged achievement, low-poverty (Q1)
schools have 37% higher odds of receiving that mobile student relative to high-poverty schools
(Q4) (β=1.370, p<0.010). As a sending school's proportions of Black, Latino, and Pacific Islander
students increase relative to the White student body, the odds of lower-quartile (Q1-Q3)
poverty schools receiving these mobile students decrease relative to high-poverty schools.

[39] The 19.7% calculation comes from the product of the relative risk ratio on low-income (β=.426), Q1
sending (β=4.333), and the interaction term between low-income and Q1 sending (β=.435).
Take the sending school's proportion of Black student enrollment. For every percentage point
increase in that sending school's Black student enrollment, a Q1 school has 3.1% lower odds of
receiving a mobile student relative to a Q4 school. Finally, low-poverty schools (Q1) have
123.7% higher odds, Q2 schools have 141.8% higher odds, and Q3 schools have 115.1% higher
odds of receiving students from western WA than an eastern WA school, relative to high-poverty
schools (p<0.001). To be fair, eastern WA has a higher concentration of high-poverty schools
and 90.1% of students who are identified as switchers from an eastern WA school remain in
eastern WA after the switch. Thus, by simple availability of local offerings, eastern WA is going
to have more switching across high-poverty schools.
Overall, the results of my third research question suggest that the types of mobile students
schools work with differ across the poverty-quartiles. High-poverty schools regularly receive
low-income students from other high-poverty schools. Low-poverty schools are more likely to
receive mobile students who are not identified as low-income from lower poverty sending
schools. In sum, high-poverty schools deal with higher rates of overall mobility but also serve
mobile students with different needs and past schooling experiences when compared to mobile
students in lower poverty schools. Policies intended to support schools serving mobile students
should consider not only the overall rates of mobility but also the differences in the types of mobile
students served across school contexts.
Discussion and Implications
The extant research on mobility places a heavy focus on student-level moves and
outcomes without connecting that mobility to the types of schools students are leaving and
entering (Burkam et al., 2009; Grigg, 2012; Mehana & Reynolds, 2004; Parke & Kanyongo,
2012; Rumberger, 2003; Voight et al., 2012; Xu et al., 2009). Prior research has also focused
predominantly on urban locales (Heywood, Thomas, & White, 1997; Nakagawa et al., 2002;
Parke & Kanyongo, 2012; Temple & Reynolds, 1999; Voight et al., 2012) or a handful of
districts within or across states (Carson et al., 2013; Yin et al., 2012). Such evidence does not
provide the patterns of mobility a state department of education must consider in developing
policy and support structures for all schools within the system. As such, I exploited detailed
enrollment data from Washington to explore student mobility across all K-5 serving schools in
the state and connected that mobility to school-level poverty.
Several key findings from this work expand the school mobility literature and identify
ways in which high-poverty schools differ from schools serving fewer low-income students with
regard to mobility. High-poverty schools deal with significantly higher rates of mobility than
schools serving fewer low-income students. These results are not surprising, as the extant mobility
literature documents the frequent movement of low-income students across schools (Alexander
et al., 1996; de la Torre & Gwynne, 2009; US Government Accountability Office, 2010; Voight
et al., 2012; Xu et al., 2009). However, even after accounting for students sorting in non-random
ways across school contexts, the difference in school mobility rates between low- and high-
poverty schools remains larger than nine percentage points. This research is not establishing a
causal relationship and should not be taken to suggest that high-poverty schools cause the
mobility. Rather, there is something inherent to the communities served by high-poverty schools
that creates the unaccounted-for variation in mobility.
Unlike the differences in mobility rate, I find that schools in all poverty-quartiles
experience the arrival of new students at similar times in each calendar month and throughout the
academic year. Moreover, schools in all poverty-quartiles experience the enrollment of new
students each week. At the start of each calendar month (e.g., November 1st) and especially at the
start of the new calendar year, the rate of new students arriving is higher than at other times in the
month. The exception to this trend is April when mobility spikes in the middle of the calendar
month, approximately the time of school spring breaks and just before annual state testing. This
particular exception may be specific to Washington and might differ in states with alternative
testing calendars or in schools without spring breaks.
Prior research has not plotted when mobility occurs in the academic year or how this
differs across contexts or states. More work is needed in this area. However, this first attempt to
consider timing does suggest that administrators may want to consider the timing of spring break
and its relative proximity to testing periods. Families appear to take advantage of a student's out-of-
school time to make school switches. This means schools must handle an influx of students
during a time in the academic calendar that is already high in administrative demand. While all
schools see a peak of enrollment in mid-April, the volume of new arrivals is more than twice as
large in high-poverty schools, often the exact schools that face the greatest external pressure for
annual testing performance.
Beyond differences in the volume of late arriving students high-poverty schools work
with in a given year, the types of mobile students these schools work with also differ from the
types of mobile students arriving in lower poverty schools. High-poverty schools are more likely
to work with mobile students who move from higher poverty-quartile schools while low-poverty
schools are more likely to work with mobile students who are arriving from lower poverty-
quartile schools. Similarly, high-poverty schools experience higher odds of serving low-income
students whereas low-poverty schools are less likely to work with mobile students who are low-
income.
Essentially, mobility is another mechanism that sorts students in non-random ways across
schools. Some of this sorting may be due to the availability of other school options for students
and it may also be a reflection of different types of families making reactive versus proactive
moves. A next step in this research on statewide mobility and mobility across poverty-contexts is
to focus on the concentration of available schools by poverty-quartile in the geographic region
from which or to which students move. Such analysis will further the research on proactive and
reactive moves as well as inform school choice literatures to show who does or does not have the
relative ease of access to the highest-quality schools within the public school system.
Importantly, the mobility patterns identified in Washington may differ in states that have
stronger cultures of school choice and a more established charter school presence. Repeating this
analysis in a strong choice state would help identify whether these patterns are unique to
Washington or are similar in states with more schooling options, particularly for elementary-
aged students.
The overarching story in this research is that mobility occurs in all types of schools and
constantly throughout the year but that the disruptions are concentrated in schools that already
tend to face numerous challenges and resource constraints. As previously mentioned, the
availability and retention of high-quality teachers is a challenge for high-poverty schools and the
turnover of educators, occurring more frequently in high-poverty communities, reduces the
consistency of curriculum delivery and instructional quality in these schools (Hanushek et al.,
2004b; Lankford et al., 2002; Shen, 1997; Shields et al., 2001). Compiling student mobility onto
teacher instability further exacerbates the learning opportunities for both stable and mobile
students in high-poverty communities. Education policies that do not account for or address the
existence and prevalence of mobility overlook an aspect of the educational environment that has
the potential to disrupt or hinder school improvement efforts (Bryk et al., 2010; Desimone,
2002).
Mobility factors into all aspects of the current education policy and practice landscape.
Instability in school enrollments has implications for school accountability policy, school finance
reform, the implementation of standards and curriculum, and even regulations on data reporting
and data systems. Schools can utilize a number of federal programs to support key groups of
mobile students (e.g., The McKinney-Vento Act for students experiencing homelessness or the
Interstate Compact on Educational Opportunity for Military Children) but the broader landscape
of education policy has not identified ways to support the mobility of students who do not belong
to these key groups. Moreover, the student-by-student supports offered through the McKinney-
Vento Act or the Interstate Compact do not address the structural and systemic mobility
schools deal with each year.[40]
This study builds upon the collective evidence from extant research that mobility is
pervasive and experienced every year in schools across the country. Finding ways to support
schools working with mobile students may go a long way in supporting the educational goals of
reducing inequities and increasing opportunities for all students, especially those from low-
income families. But this research suggests that simply recognizing and supporting mobility
[40] Schools serving predominantly military-connected students may benefit from the Interstate Compact in key
ways, but this does not mean that all schools in the US education system are aware of or could utilize such a
policy to help address challenges associated with mobility.
based on proportions of students moving overlooks the ways in which mobility differs between
high-poverty and low-poverty communities. In this case, equal treatment of mobility across
schools will not lend itself to equitable support of schools to deal with the challenges associated
with mobility.
Limitations
This study is the first to explicitly connect student mobility with school-level poverty.
Moreover, it has added to the literature on mobility partially due to the extensive enrollment data
provided by Washington’s OSPI. For example, the day-by-day exchange of student across
schools has not been previously examined. However, this research still has limitations that must
be acknowledged. First, this study relies on a dichotomous indicator of student income status.
Recent research from Michelmore and Dynarski (2017) challenges the usefulness and
appropriateness of a temporary versus persistent indicator of poverty in explaining educational
outcomes. Essentially, within students who qualify for Free- or Reduced-Price lunch there is
variation in income and poverty levels (Clotfelter, Ladd, Vigdor, & Wheeler, 2007). More
nuance in the patterns and relationships of mobility may be identified when using additional or
alternative measures of student income-status.
Second, the Washington context may not be generalizable to other states.
California, for example, serves a significantly greater proportion of Latino students, a group
shown to be more highly mobile than White students in extant research. As such, these results
may be suggestive but not representative of what occurs in other locales. Further analysis is
needed to determine whether other states have similar patterns of mobility as it relates to school-
level poverty. Third, there is concern for omitted variable bias. I do not control for a student’s
status of experiencing homelessness or living in the foster care system as the data provided
cannot fully account for these groups of students. The state either no longer reports these
indicators or the data is missing not at random. Additionally, the ability to identify military-
connected students is in its infancy at the federal and state level. Fourth, I only focus on mid-year
mobility. Another step for this research to further understand differences in mobility by school-
level poverty is to analyze differences in between-year non-structural mobility. Likely, the rates
and types of students moving in the summer months also differ across poverty quartiles and
would have implications for both teacher planning and administrative demand experienced at the
start of an academic year. Finally, I do not use a causal framework in this analysis. I do not
attempt to claim that high-poverty schools or low-poverty schools cause different types of
students to move or for them to move in different ways. The results of this study show the
associations between school poverty and mobility and should not be interpreted otherwise.
The limitations of this study, however, do not diminish the usefulness of the research
presented in this paper. This study provides a starting point for policymakers, researchers, and
school administrators from which to enhance discussion around mobility in specific school
contexts and to further push for policies that address systematic mobility differences in order to
best support students and schools. Future research in this area should continue to explore how the
timing of mobility differs by students and across years. Such research may lead to the
identification of policy or practice that can support mobile students and the schools in which they
enroll.
Table 2.1
Student Characteristics, student-year observations

                             Statewide Average    Stable Students      Mobile Students
                             (n=2,371,676)        (n=2,086,868)        (n=284,808)
                             mean      sd         mean      sd         mean      sd
White                        57.2      49.5       58.4      49.3       47.9      50.0
Black                         4.8      21.3        4.3      20.3        8.0      27.2
Latino                       21.7      41.2       21.1      40.8       26.0      43.9
Asian                         7.3      26.0        7.4      26.2        6.5      24.6
American Indian               1.6      12.5        1.5      12.1        2.2      14.6
Pacific Islander              1.0      10.1        0.9       9.4        2.1      14.4
Two or More Races             6.5      24.6        6.4      24.4        7.2      25.9
Low-Income                   51.9      50.0       49.8      50.0       66.7      47.1
ELL                          14.3      35.0       13.8      34.5       17.8      38.3
SWD                          13.8      34.4       13.7      34.4       14.3      35.0
Migrant                       1.8      13.3        1.7      12.8        3.0      16.9
Student Prior Achievement     0.019     0.906      0.046     0.902     -0.318     0.886
Grade K                      16.9      37.5       16.2      36.8       22.1      41.5
Grade 1                      17.0      37.5       16.8      37.4       18.2      38.5
Grade 2                      16.6      37.2       16.7      37.3       16.2      36.8
Grade 3                      16.5      37.1       16.7      37.3       15.3      36.0
Grade 4                      16.5      37.2       16.8      37.4       14.5      35.2
Grade 5                      16.5      37.1       16.9      37.5       13.7      34.3
Note. Lagged performance variables are based on 586,916 students statewide: 543,482 stable students, and 43,434 mobile students.
Table 2.2
School Enrollment Demographics

                          Statewide       Low-Poverty     Mid-Range       Mid-Range       High-Poverty
                          Average         (Q1) Schools    Poverty (Q2)    Poverty (Q3)    (Q4) Schools
                          (n=5874)        (n=1471)        (n=1467)        (n=1469)        (n=1467)
                          mean    sd      mean    sd      mean    sd      mean    sd      mean    sd
% Low-Income              52.4   24.9     19.6   10.4     44.3    5.6     61.7    5.7     84.0    8.6
% White                   60.1   23.9     72.3   13.7     69.4   15.7     63.3   19.7     35.5   24.8
% Black                    4.4    7.6      2.3    2.7      3.4    4.3      4.6    7.7      7.4   11.5
% Latino                  19.9   20.2      8.4    6.8     13.1    8.2     18.1   12.2     40.2   28.0
% Asian                    6.3    8.5      9.5   10.1      5.4ᵃ    6.9      4.8ᵃ    7.5      5.2    8.3
% American Indian          2.2    7.7      0.7    2.5      1.3    3.1      2.2    5.1      4.7   13.6
% Pacific Islander         0.9    1.5      0.4    0.7      0.7    1.0      0.9    1.4      1.6    2.3
% Two or More Races        6.2    4.6      6.4    4.0      6.7    4.5      6.2    4.6      5.4    5.0
% ELL                     12.2   14.9      4.3    4.8      7.0    7.5     10.6   10.1     27.1   19.9
% SWD                     14.2    4.4     12.5    4.2     14.4ᵃ    3.9     15.3    4.5     14.6    4.6
% Migrant                  1.6    5.2      0.2    2.1      0.3    1.7      0.9    2.7      5.0    8.9
% Urban                   27.5   44.7     23.5   42.4     18.8   39.1     26.5   44.1     41.3   49.3
% Suburban                36.7   48.2     53.2   49.9     43.5   49.6     27.8   44.8     22.4   41.7
% Rural                   24.0   42.7     20.4   40.3     26.2   44.0     28.7   45.3     20.4   40.3
% Town                    11.8   32.3      3.0   17.0     11.5   31.9     17.0ᵃ   37.6     15.9   36.6
Mobility Rate             18.4    8.5     11.8    6.3     17.0    6.1     20.2    7.5     24.8    8.2
School Prior Achievement   0.001  0.340    0.351  0.257    0.064  0.204   -0.082  0.217   -0.332  0.251
Starting Enrollment        386    162      430    170      384ᵃ    155      348    157      383    156
Western WA                72.0   44.9     85.6   35.1     79.1   40.7     68.4   46.5     54.9   49.8
Note. Lagged performance variables are based on 4596 schools statewide: 1152 Q1 schools, 1157 Q2 schools, 1136 Q3 schools, and 1151 Q4 schools.
ᵃ Difference from Q4 schools not significant. All other differences are significant (p<0.001).
Table 2.3
School Mobility Rates

                               Specification 1   Specification 2   Specification 3   Specification 4
                               Mobility Rate     Mobility Rate     Mobility Rate     Mobility Rate
                               2010-2014         2010-2014         2011-2014         2011-2014
Q1 School                      -13.028***        -10.775***        -11.498***        -9.166***
                               (0.483)           (0.801)           (0.906)           (0.981)
Q2 School                      -7.799***         -6.610***         -7.052***         -5.894***
                               (0.478)           (0.649)           (0.730)           (0.752)
Q3 School                      -4.553***         -3.698***         -4.057***         -3.406***
                               (0.508)           (0.575)           (0.646)           (0.645)
School Prior Achievement       --                --                --                -5.672***
                                                                                     (0.748)
% Black                        --                0.150***          0.120***          0.076**
                                                 (0.030)           (0.030)           (0.029)
% Latino                       --                0.016             -0.002            -0.029
                                                 (0.016)           (0.018)           (0.018)
% Asian                        --                -0.126***         -0.134***         -0.097***
                                                 (0.026)           (0.028)           (0.027)
% American Indian              --                -0.002            -0.028            -0.077**
                                                 (0.021)           (0.025)           (0.024)
% Pacific Islander             --                0.921***          0.937***          0.901***
                                                 (0.127)           (0.139)           (0.137)
% Two or More Races            --                0.207***          0.185***          0.210***
                                                 (0.036)           (0.039)           (0.038)
% Students with disabilities   --                0.041             0.046             0.024
                                                 (0.042)           (0.047)           (0.045)
% Migrant                      --                -0.057            -0.057            -0.054
                                                 (0.049)           (0.057)           (0.056)
Urban                          --                0.825*            0.813*            0.949*
                                                 (0.388)           (0.404)           (0.401)
Rural                          --                -0.479            -0.523            -0.661
                                                 (0.453)           (0.484)           (0.478)
Town                           --                -0.714            -0.956†           -0.746
                                                 (0.488)           (0.512)           (0.506)
Starting Enrollment            --                -0.384            -0.240            -0.064
                                                 (0.262)           (0.281)           (0.271)
Western WA                     --                1.511***          1.437**           1.370**
                                                 (0.421)           (0.443)           (0.432)
Constant                       24.754***         20.253***         21.365***         21.192***
                               (0.405)           (1.091)           (1.258)           (1.229)
Year fixed-effect              yes               yes               yes               yes
Clustered errors (school)      yes               yes               yes               yes
R²                             0.312             0.421             0.426             0.445
N                              5872              5872              4594              4594
Note. † p<0.100, * p<0.050, ** p<0.010, *** p<0.001
Table 2.4
Relative Risk Ratio of Moving to Q1, Q2, or Q3 Poverty School (Relative to a Q4 Poverty School)

                              Specification 1                 Specification 2                 Specification 3
                              All Grades, 2011-2014           Grades 4-5, 2011-2014           Grades 4-5, 2011-2014
                              No Student Prior Achievement    No Student Prior Achievement    Student Prior Achievement
                              To Q1     To Q2     To Q3       To Q1     To Q2     To Q3       To Q1     To Q2     To Q3
                              (each column vs. Q4 School)
Low-Income                    0.402***  0.588***  0.722***    0.408***  0.534***  0.672***    0.426***  0.547***  0.678***
                              (0.022)   (0.027)   (0.030)     (0.049)   (0.052)   (0.061)     (0.051)   (0.053)   (0.062)
Q1 Sending                    4.136***  1.976***  0.944       4.203***  1.943***  1.034       4.333***  1.970***  1.037
                              (0.330)   (0.145)   (0.070)     (0.739)   (0.308)   (0.166)     (0.761)   (0.313)   (0.167)
Q2 Sending                    2.414***  1.993***  1.229***    2.696***  2.479***  1.611***    2.748***  2.501***  1.616***
                              (0.169)   (0.125)   (0.075)     (0.419)   (0.339)   (0.216)     (0.427)   (0.342)   (0.217)
Q3 Sending                    1.524***  1.592***  1.211**     1.573**   1.555***  1.215       1.586**   1.560***  1.216
                              (0.104)   (0.096)   (0.071)     (0.238)   (0.201)   (0.152)     (0.239)   (0.202)   (0.152)
Low-Income × Q1 Sending       0.596***  0.857*    1.023       0.426***  0.825     0.861       0.435***  0.834     0.866
                              (0.047)   (0.063)   (0.077)     (0.074)   (0.132)   (0.142)     (0.076)   (0.134)   (0.143)
Low-Income × Q2 Sending       0.717***  0.886†    0.929       0.568***  0.681**   0.695**     0.577***  0.686**   0.697**
                              (0.051)   (0.057)   (0.057)     (0.090)   (0.095)   (0.094)     (0.091)   (0.096)   (0.095)
Low-Income × Q3 Sending       0.776***  0.830**   0.959       0.763†    0.880     0.942       0.773     0.887     0.944
                              (0.057)   (0.053)   (0.058)     (0.122)   (0.118)   (0.122)     (0.124)   (0.119)   (0.122)
Prior Achievement             --        --        --          --        --        --          1.314***  1.163***  1.055*
                                                                                              (0.036)   (0.027)   (0.023)
First Grade                   1.145***  1.171***  1.053*      --        --        --          --        --        --
                              (0.038)   (0.033)   (0.027)
Second Grade                  1.168***  1.168***  1.065*      --        --        --          --        --        --
                              (0.040)   (0.034)   (0.029)
Third Grade                   1.155***  1.149***  1.049†      --        --        --          --        --        --
                              (0.041)   (0.034)   (0.029)
Fourth Grade                  1.204***  1.192***  1.141***    --        --        --          --        --        --
                              (0.043)   (0.036)   (0.031)
Fifth Grade                   1.171***  1.288***  1.157***    0.961     1.086*    1.019       0.967     1.091*    1.021
                              (0.043)   (0.039)   (0.033)     (0.040)   (0.037)   (0.033)     (0.041)   (0.038)   (0.033)
Student Characteristics
Black                         0.437***  0.452***  0.606***    0.488***  0.503***  0.618***    0.530***  0.525***  0.627***
                              (0.020)   (0.017)   (0.020)     (0.043)   (0.037)   (0.040)     (0.047)   (0.039)   (0.041)
Latino                        0.470***  0.510***  0.597***    0.491***  0.520***  0.626***    0.506***  0.528***  0.629***
                              (0.014)   (0.013)   (0.014)     (0.030)   (0.025)   (0.028)     (0.031)   (0.025)   (0.028)
Asian                         1.115*    0.667***  0.760***    1.056     0.676***  0.840†      1.004     0.662***  0.836†
                              (0.053)   (0.034)   (0.037)     (0.107)   (0.068)   (0.080)     (0.102)   (0.067)   (0.080)
American Indian               0.344***  0.415***  0.558***    0.463***  0.462***  0.659***    0.491***  0.477***  0.668***
                              (0.031)   (0.028)   (0.031)     (0.071)   (0.056)   (0.067)     (0.075)   (0.057)   (0.068)
Pacific Islander              0.274***  0.410***  0.593***    0.306***  0.425***  0.587***    0.324***  0.439***  0.594***
                              (0.026)   (0.025)   (0.030)     (0.058)   (0.052)   (0.062)     (0.061)   (0.054)   (0.063)
Two or More Races             0.623***  0.592***  0.693***    0.651***  0.652***  0.768***    0.673***  0.664***  0.773***
                              (0.024)   (0.019)   (0.022)     (0.052)   (0.044)   (0.049)     (0.054)   (0.045)   (0.049)
English language learner      0.639***  0.680***  0.812***    0.583***  0.637***  0.783***    0.696***  0.698***  0.808***
                              (0.022)   (0.019)   (0.020)     (0.048)   (0.039)   (0.041)     (0.059)   (0.044)   (0.044)
Student with Disability       0.993     0.999     1.047*      0.911†    0.944     1.092*      1.056     1.021     1.120**
                              (0.029)   (0.024)   (0.023)     (0.050)   (0.042)   (0.045)     (0.060)   (0.047)   (0.048)
Migrant                       0.347***  0.609***  0.589***    0.328**   0.626**   0.633***    0.319**   0.617**   0.630***
                              (0.060)   (0.049)   (0.038)     (0.122)   (0.096)   (0.080)     (0.119)   (0.095)   (0.080)
Sending School Characteristics
Prior School Achievement      1.612***  1.095*    1.115**     1.591***  0.981     1.055       1.370**   0.905     1.026
                              (0.084)   (0.048)   (0.045)     (0.169)   (0.087)   (0.085)     (0.148)   (0.081)   (0.084)
% Black                       0.976***  0.980***  0.982***    0.970***  0.977***  0.979***    0.969***  0.976***  0.978***
                              (0.002)   (0.001)   (0.001)     (0.004)   (0.003)   (0.003)     (0.004)   (0.003)   (0.003)
% Latino                      0.996***  0.993***  0.991***    0.988***  0.992***  0.989***    0.988***  0.992***  0.989***
                              (0.001)   (0.001)   (0.001)     (0.002)   (0.002)   (0.001)     (0.002)   (0.002)   (0.001)
% Asian                       1.025***  0.996*    0.989***    1.025***  0.998     0.987***    1.025***  0.998     0.987***
                              (0.002)   (0.002)   (0.002)     (0.003)   (0.003)   (0.003)     (0.003)   (0.003)   (0.003)
% American Indian             0.996     0.990***  0.993***    1.000     0.986**   0.991**     0.999     0.986**   0.991**
                              (0.004)   (0.002)   (0.002)     (0.006)   (0.005)   (0.003)     (0.006)   (0.005)   (0.003)
% Pacific Islander            0.892***  0.943***  0.969***    0.886***  0.930***  0.973**     0.886***  0.930***  0.973**
                              (0.008)   (0.006)   (0.005)     (0.016)   (0.011)   (0.010)     (0.016)   (0.011)   (0.010)
% Two or More Races           0.995†    1.005*    0.998       1.002     1.002     0.997       1.002     1.002     0.997
                              (0.003)   (0.002)   (0.002)     (0.006)   (0.005)   (0.004)     (0.006)   (0.005)   (0.004)
% Students with disabilities  1.019***  1.011***  1.001       1.015*    1.017**   0.999       1.014*    1.016**   0.998
                              (0.003)   (0.003)   (0.002)     (0.006)   (0.005)   (0.005)     (0.006)   (0.005)   (0.005)
% Migrant                     0.986**   1.007*    1.007**     0.996     1.010†    1.005       0.996     1.010†    1.005
                              (0.004)   (0.003)   (0.002)     (0.009)   (0.005)   (0.005)     (0.009)   (0.005)   (0.005)
Urban                         0.928**   0.839***  0.971       0.861**   0.806***  0.995       0.859**   0.804***  0.994
                              (0.024)   (0.018)   (0.020)     (0.045)   (0.035)   (0.041)     (0.045)   (0.035)   (0.041)
Rural                         0.683***  0.903**   1.083*      0.678***  0.907     1.108†      0.675***  0.905     1.107
                              (0.028)   (0.030)   (0.034)     (0.055)   (0.058)   (0.069)     (0.055)   (0.058)   (0.069)
Town                          0.559***  0.795***  1.002       0.561***  0.728***  0.987       0.558***  0.726***  0.987
                              (0.026)   (0.027)   (0.031)     (0.050)   (0.049)   (0.061)     (0.050)   (0.049)   (0.061)
Starting Enrollment           1.085***  1.146***  1.075***    1.130***  1.175***  1.078**     1.121***  1.171***  1.078**
                              (0.017)   (0.015)   (0.014)     (0.035)   (0.030)   (0.026)     (0.034)   (0.030)   (0.026)
Western WA                    2.158***  2.351***  2.009***    2.227***  2.420***  2.152***    2.226***  2.416***  2.149***
                              (0.070)   (0.060)   (0.047)     (0.143)   (0.120)   (0.100)     (0.143)   (0.120)   (0.099)
Constant                      0.462***  0.683***  1.075       0.735     0.883     1.313†      0.728     0.887     1.318†
                              (0.045)   (0.054)   (0.078)     (0.147)   (0.141)   (0.194)     (0.146)   (0.141)   (0.195)
Year fixed-effect             yes                             yes                             yes
Clustered Errors (Student)    yes                             yes                             yes
N                             106351                          26951                           26951
χ²                            21057.610***                    5245.824***                     5327.712***
BIC                           260000                          66865                           66781
Note. † p<0.100, * p<0.050, ** p<0.010, *** p<0.001
Figure 2.1.
Unadjusted Distributions of School Mobility Rates, by Poverty-Quartiles
[Figure: overlaid histograms of school mobility rates (x-axis: school mobility rate, 0 to 0.8; y-axis: percent of schools) for Low-Poverty (Q1), Mid-Range Poverty (Q2), Mid-Range Poverty (Q3), and High-Poverty (Q4) schools. Labeled mobility rates: Q1 11.54%, Q2 16.63%, Q3 19.53%, Q4 24.52%.]
Figure 2.2.
Arrival of New Students, by Week after September 1st (Proportion)
[Figure: four panels (Low-Poverty Schools, Q1; Mid-Range Poverty Schools, Q2; Mid-Range Poverty Schools, Q3; High-Poverty Schools, Q4), each plotting the proportion of arrivers by week from day 0 (September 1) through day 300, with the first of each month (October through June) marked on the x-axis.]
Figure 2.3.
Arrival of New Students, by Week after September 1st (Count)
[Figure: four panels (Low-Poverty Schools, Q1; Mid-Range Poverty Schools, Q2; Mid-Range Poverty Schools, Q3; High-Poverty Schools, Q4), each plotting the number of arrivers by week (0 to 4,000) from day 0 (September 1) through day 300, with the first of each month (October through June) marked on the x-axis.]
References
Alexander, K. L., Entwisle, D. R., & Dauber, S. L. (1996). Children in motion: School transfers
and elementary school performance. Journal of Educational Research, 90(1), 3–12.
Allensworth, E., Ponisciak, S., & Mazzeo, C. (2009). The schools teachers leave: Teacher
mobility in Chicago Public Schools (pp. 1–52). Chicago, IL: Consortium on Chicago
School Research. Retrieved from
https://consortium.uchicago.edu/sites/default/files/publications/CCSR_Teacher_Mobility.pdf
Astone, N. M., & McLanahan, S. S. (1994). Family structure, residential mobility, and school
dropout: A research note. Demography, 31(4), 575–584.
Baker, B., Sciarra, D. G., & Farrie, D. (2010). Is school funding fair? A national report card.
Newark, NJ: Education Law Center.
Balfanz, R., Legters, N., West, T. C., & Weber, L. M. (2007). Are NCLB’s measures, incentives,
and improvement strategies the right ones for the nation’s low-performing high schools?
American Educational Research Journal, 44(3), 559–593.
Berliner, D. (2009). Poverty and Potential: Out-of-School Factors and School Success. Tempe,
AZ: Education and the Public Interest Center & Education Policy Research Unit.
Retrieved from http://epicpolicy.org/publication/poverty-and-potential
Blagojevich, R. R., Ruiz, J. H., & Dunn, R. J. (2005). Illinois State Board of Education request
for a change in the definition of full academic year. Retrieved from
http://www.isbe.net/nclb/pdfs/QA_May1_fullacademicyear.pdf
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability
system. American Educational Research Journal, 42(1), 231–268.
Borman, G. D. (2005). National efforts to bring reform to scale in high-poverty schools:
Outcomes and implications. Review of Research in Education, 29, 1–27.
Boyd, D., Lankford, H., Loeb, S., Rockoff, J., & Wyckoff, J. (2008). The narrowing gap in New
York City teacher qualifications and its implications for student achievement in high-
poverty schools. Journal of Policy Analysis and Management, 27(4), 793–818.
Bradshaw, C. P., Sudhinaraset, M., Mmari, K., & Blum, R. W. (2010). School transitions among
military adolescents: A qualitative study of stress and coping. School Psychology Review,
39(1), 84–105.
Branch, G. F., Hanushek, E. A., & Rivkin, S. G. (2013). School leaders matter. Education Next,
13(1).
Branz-Spall, A. M., Rosenthal, R., & Wright, A. (2003). Children of the road: Migrant students,
our nation’s most mobile population. Journal of Negro Education, 72(1), 55–62.
Bryk, A. S., Sebring, P. A., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Trust, Size,
and Stability: Key Enablers. In Organizing Schools for Improvement (pp. 137–157).
Chicago, IL: The University of Chicago Press.
Burkam, D. T., Lee, V. E., & Dwyer, J. (2009). School mobility in the early elementary grades:
Frequency and impact from nationally-representative data. Presented at the Workshop on
the Impact of Mobility and Change on the Lives of Young Children, Schools, and
Neighborhoods, Washington, DC.
Carson, D. C., Esbensen, F., & Taylor, T. J. (2013). A longitudinal analysis of the relationship
between school victimization and student mobility. Youth Violence and Juvenile Justice,
11(4), 275–295.
Chingos, M. (2015, June 18). Who opts out of state tests? Retrieved from
http://www.brookings.edu/research/papers/2015/06/18-chalkboard-who-opts-out-chingos
Clotfelter, C. T., Ladd, H. F., Vigdor, J. L., & Wheeler, J. (2007). High-poverty schools and the
distribution of teachers and principals. North Carolina Law Review, 85, 1345–1379.
Community Research Partners. (2012). Student nomads: Mobility in Ohio’s schools. Columbus,
OH: Community Research Partners.
Conger, D., & Reback, A. (2001). How children’s foster care experiences affect their education.
New York, NY: Vera Institute of Justice.
CPRE Policy Briefs. (1991). Putting the pieces together: Systemic school reform. New Brunswick, NJ:
Consortium for Policy Research in Education.
Cullen, J. B., Jacob, B. A., & Levitt, S. D. (2005). The impact of school choice on student
outcomes: An analysis of the Chicago Public Schools. Journal of Public Economics, 89,
729–760.
Cullen, J. B., & Reback, R. (2006). Tinkering toward accolades: School gaming under a
performance accountability system. Advances in Microeconomics, 15, 1–34.
Cutuli, J. J., Desjardins, C. D., Herbers, J. E., Long, J. D., Heistad, D., Chan, C., … Masten, A.
S. (2013). Academic achievement trajectories of homeless and highly mobile students:
Resilience in the context of chronic and acute risk. Child Development, 84(3), 841–857.
Darling-Hammond, L. (2007). The flat Earth and education: How America’s commitment to
equity will determine our future. Educational Researcher, 36(6), 318–334.
Davidson, E., Reback, R., Rockoff, J. E., & Schwartz, H. L. (2015). Fifty ways to leave a child
behind: Idiosyncrasies and discrepancies in states’ implementation of NCLB.
Educational Researcher, 44(6), 347–358.
de la Torre, M., & Gwynne, J. (2009). Changing schools: A look at student mobility trends in
Chicago public schools since 1995. Chicago, IL: Consortium on Chicago School
Research.
Dee, T. S., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement.
Journal of Policy Analysis and Management, 30(3), 418–446.
Delisle, D. S. (2014, February 12). WA Field Testing Flexibility Approval Letter. Retrieved
from http://www.k12.wa.us/ESEA/Waivers/WAFieldTestingFlexibilityapprovalletter.pdf
Desimone, L. M. (2002). How can Comprehensive School Reform models be successfully
implemented? Review of Educational Research, 72(3), 433–479.
Dorn, R. I. (2013, September 19). Memorandum No. 042-13M Assessment and Student
Information: Spring 2014 Smarter Balanced field tests. Retrieved from
http://www.k12.wa.us/bulletinsmemos/Memos2013/M042-13.doc
Eadie, S., Eisner, R., Miller, B., & Wolf, L. (2013). Student mobility patterns and achievement in
Wisconsin: Report prepared for the Wisconsin Department of Public Instruction.
Madison, WI: University of Wisconsin-Madison Robert M. La Follette School of Public
Affairs.
Ehlert, M., Koedel, C., Parsons, E., & Podgursky, M. (2014a). Selecting growth measures for
school and teacher evaluations: Should proportionality matter? Educational Policy, 30(3),
465–500.
Ehlert, M., Koedel, C., Parsons, E., & Podgursky, M. (2014b). The sensitivity of Value-Added
Estimates to specification adjustments: Evidence from school- and teacher-level models
in Missouri. Statistics and Public Policy, 1(1), 19–27.
Elementary and Secondary Education Act of 1965, As Amended by the Every Student Succeeds
Act-Accountability and State Plans, 81 FR 34539 § 34 CFR 200 (2016). Retrieved from
https://federalregister.gov/a/2016-12451
Engec, N. (2006). Relationship between mobility and student performance and behavior. Journal
of Educational Research, 99(3), 167–178.
Every Student Succeeds Act, Pub. L. No. 114–95 (2015).
Fantuzzo, J. W., LeBoeuf, W. A., Chen, C., Rouse, H. L., & Culhane, D. P. (2012). The unique
and combined effects of homelessness and school mobility on the educational outcomes
of young children. Educational Researcher, 41(9), 393–402.
Figlio, D. N., & Getzler, L. S. (2006). Accountability, ability, and disability: Gaming the system?
Advances in Applied Microeconomics, 14, 35–49.
Fong, A. B., Bae, S., & Huang, M. (2010). Patterns of student mobility among English language
learner students in Arizona public schools (Issues & Answers Report, REL 2010–No. 093). Washington, DC: US Department of Education, Institute of Education Sciences,
National Center for Education Evaluation and Regional Assistance, Regional Education
Laboratory West.
Forte, E. (2010). Examining the assumptions underlying the NCLB federal accountability policy
on school improvement. Educational Psychologist, 45(2), 76–88.
Fuhrman, S. H. (2004). Introduction. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning
accountability systems for education (pp. 3–14). New York, NY: Teachers College Press.
Furgol, K., & Helms, L. (2012). Lessons in leveraging implementation: Rulemaking, growth
models, and policy dynamics under NCLB. Educational Policy, 26(6), 777–812.
Gasper, J., DeLuca, S., & Estacion, A. (2010). Coming and going: Explaining the effects of
residential and school mobility on adolescent delinquency. Social Science Research, 39,
459–476.
Grigg, J. (2012). School enrollment changes and student achievement growth: A case study in
educational disruption and continuity. Sociology of Education, 85(4), 388–404.
Gruman, D. H., Harachi, T. W., Abbott, R. D., Catalano, R. F., & Fleming, C. B. (2008).
Longitudinal effects of student mobility on three dimensions of elementary school
engagement. Child Development, 79(6), 1833–1852.
Hagan, J., MacMillan, R., & Wheaton, B. (1996). New kid in town: Social capital and the life
course effects of family migration on children. American Sociological Review, 61(3),
368–385.
Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (2004a). Disruption versus Tiebout improvement: The costs and benefits of switching schools. Journal of Public Economics, 88, 1721–1746.
Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (2004b). Why public schools lose teachers. Journal
of Human Resources, 39, 326–354.
Harris, D. N. (2010). Value-added measures of education performance: Clearing away the
smoke and mirrors (No. PACE Policy Brief 10-4). Stanford, CA: Policy Analysis for
California Education (PACE). Retrieved from
http://edpolicyinca.org/sites/default/files/PACE_BRIEF_OCT_2010.pdf
Haynie, D. L., & South, S. J. (2005). Residential mobility and adolescent violence. Social
Forces, 84(1), 361–374.
Heilig, J. V., & Darling-Hammond, L. (2008). Accountability Texas-style: The progress and
learning of urban minority students in a high-stakes testing context. Educational
Evaluation and Policy Analysis, 30(2), 75–110.
Heinlein, L. M., & Shinn, M. (2000). School mobility and student achievement in an urban
setting. Psychology in the Schools, 37(4), 349–357.
Heywood, J. S., Thomas, M., & White, S. B. (1997). Does classroom mobility hurt stable students? An examination of achievement in urban schools. Urban Education, 32(3), 354–372.
Holmstrom, B. (1999). Managerial incentive problems: A dynamic perspective. The Review of
Economic Studies, 66(1), 169–182.
Holmstrom, B., & Milgrom, P. (1987). Aggregation and linearity in the provision of
intertemporal incentives. Econometrica, 55, 303–328.
Holmstrom, B., & Milgrom, P. (1991). Multitask principal-agent analyses: Incentive contracts,
asset ownership, and job design. Journal of Law, Economics, & Organization, 7(Special Issue), 24–52.
Jacob, B. A. (2005). Accountability, incentives and behavior: The impact of high-stakes testing in
the Chicago Public Schools. Journal of Public Economics, 89(5–6), 761–796.
Jennings, J. L., & Beveridge, A. A. (2009). How does test exemption affect schools’ and
students’ academic performance? Educational Evaluation and Policy Analysis, 31(2),
153–175.
Johnson, S. M., Kraft, M. A., & Papay, J. P. (2012). How context matters in high-need schools:
The effects of teachers’ working conditions on their professional satisfaction and their
students’ achievement. Teachers College Record, 114(10), 1–39.
Kerbow, D. (1996). Patterns of urban student mobility and local school reform. Journal of
Education for Students Placed at Risk, 1(2), 147–169.
Kim, J. S., & Sunderman, G. L. (2005). Measuring academic proficiency under the No Child
Left Behind Act: Implications for educational equity. Educational Researcher, 34(8), 3–
13.
Kirshner, B., Gaertner, M., & Pozzoboni, K. (2010). Tracing transitions: The effect of high
school closure on displaced students. Educational Evaluation and Policy Analysis, 32(3),
407–429.
Klein, A. (2015, December 22). Ed. Dept. to states: Even under ESSA, you need a plan for high
opt-out rates. Retrieved from http://blogs.edweek.org/edweek/campaign-k-
12/2015/12/ed_dept_to_states_under_essa_need_plan_for_opt-
Outs.html?r=848597750&preview=1
Krieg, J. M. (2008). Are students left behind? The distributional effects of the No Child Left
Behind Act. Education Finance and Policy, 3(2), 250–281.
Krieg, J. M., & Storer, P. (2006). How much do students matter? Applying the Oaxaca
decomposition to explain determinants of Adequate Yearly Progress. Contemporary
Economic Policy, 24(4), 563–581.
Ladd, H. F. (2011). Teachers’ perceptions of their working conditions: How predictive of
planned and actual teacher movement? Educational Evaluation and Policy Analysis, 33,
235–261.
Ladd, H. F. (2012). Education and poverty: Confronting the evidence. Journal of Policy Analysis
and Management, 31(2), 203–227.
Ladd, H. F., & Zelli, A. (2002). School-based accountability in North Carolina: The responses of
school principals. Educational Administration Quarterly, 38(4), 494–529.
Lankford, H., Loeb, S., & Wyckoff, J. (2002). Teacher sorting and the plight of urban schools: A
descriptive analysis. Educational Evaluation and Policy Analysis, 24(1), 37–62.
Linn, R. L. (2003). Accountability: Responsibility and reasonable expectations. Educational
Researcher, 32(7), 3–13.
Lipscomb, S., Gill, B., Booker, K., & Johnson, M. (2010). Estimating teacher and school
effectiveness in Pittsburgh: Value-Added Modeling and results (No. 06723.300) (pp. 1–
60). Cambridge, MA: Mathematica Policy Research, Inc.
Loeb, S., Darling-Hammond, L., & Luczak, J. (2005). How teaching conditions predict teacher
turnover in California schools. Peabody Journal of Education, 80(3), 44–70.
Lyle, D. S. (2006). Using military deployments and job assignments to estimate the effect of
parental absences and household relocations on children’s academic achievement.
Journal of Labor Economics, 24(2), 319–350.
Making adequate yearly progress, 34 C.F.R. § 200.20(e) (2003).
Maxwell, L. A. (2013, April 4). Atlanta cheating scandal reverberates. EdWeek. Retrieved from
http://www.edweek.org/ew/articles/2013/04/04/28atlanta.h32.html?r=1806723235
McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The intertemporal
variability of teacher effect estimates. Education Finance and Policy, 4(4), 572–606.
Mehana, M., & Reynolds, A. J. (2004). School mobility and achievement: A meta-analysis.
Children and Youth Services Review, 26, 93–119.
Michelmore, K., & Dynarski, S. M. (2017). The gap within the gap: Using longitudinal data to
understand income differences in educational outcomes. AERA Open, 3(1), 1–18.
Military Child Education Coalition. (2009). Ten years of excellence serving military children:
Performance report 1998-2008. Harker Heights, TX: Military Child Education Coalition.
Nakagawa, K., Stafford, M. E., Fisher, T. A., & Matthews, L. (2002). The “city migrant”
dilemma: Building community at high-mobility urban schools. Urban Education, 37, 96–
125.
Neal, D., & Schanzenbach, D. W. (2010). Left behind by design: Proficiency counts and test-
based accountability. Review of Economics and Statistics, 92, 263–283.
No Child Left Behind Act of 2001, Pub. L. No. 107–110 (2001).
O’Donnell, R., & Gazos, A. (2010). Student mobility in Massachusetts. Malden, MA:
Massachusetts Department of Elementary and Secondary Education.
Offenberg, R. (2004). Inferring Adequate Yearly Progress of schools from student achievement
in highly mobile communities. Journal of Education for Students Placed at Risk, 9(4),
337–355.
Office of the State Superintendent of Education (OSSE). (2013). A statewide analysis of student
mobility in the District of Columbia executive overview. Washington, DC: OSSE.
Orfield, G., & Lee, C. (2004). Brown at 50: King’s dream or Plessy’s nightmare? Cambridge,
MA: The Civil Rights Project at Harvard University.
Orfield, G., Losen, D., Wald, J., & Swanson, C. B. (2004). Losing our future: How minority
youth are being left behind by the graduation rate crisis. Cambridge, MA: The Civil
Rights Project at Harvard University.
Özek, U. (2012). One day too late? Mobile students in an era of accountability. National Center
for Analysis of Longitudinal Data in Education Research (CALDER), Working Paper No.
82.
Parke, C. S., & Kanyongo, G. Y. (2012). Student attendance, mobility, and mathematics
achievement. Journal of Educational Research, 105(3), 161–175.
Polikoff, M. S., McEachin, A., Wrabel, S. L., & Duque, M. (2014). The waive of the future?
School accountability in the waiver era. Educational Researcher, 43(1), 45–54.
https://doi.org/10.3102/0013189X13517137
Polikoff, M. S., & Wrabel, S. L. (2013). When is 100% not 100%? The use of safe harbor to
make Adequate Yearly Progress. Education Finance and Policy, 8(2), 251–270.
Porter, A. C., & Chester, M. (2002). Building a high-quality assessment and accountability
program: The Philadelphia example. In D. Ravitch (Ed.), Brookings papers on education
policy 2002 (pp. 285–337). Washington, DC: Brookings Institution Press.
Porter, A. C., Linn, R. L., & Trimble, C. S. (2005). The effects of state decisions about NCLB
adequate yearly progress targets. Educational Measurement: Issues and Practice, 24(4),
32–39.
Prendergast, C. (1999). The provision of incentives in firms. Journal of Economic Literature, 37,
7–63.
Raudenbush, S. W. (2004). Schooling, statistics, and poverty: Can we measure school improvement? Princeton, NJ: Educational Testing Service, Policy Evaluation and Research Center, Policy Information Center.
Raudenbush, S. W., Jean, M., & Art, E. (2011). Year-by-year and cumulative impacts of attending a high-mobility elementary school on children’s mathematics achievement in Chicago, 1995 to 2005. In G. J. Duncan & R. J. Murnane (Eds.), Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 359–375). New York, NY: Russell Sage Foundation.
Reardon, S. F. (2011). The widening academic achievement gap between the rich and the poor:
New evidence and possible explanations. In G. J. Duncan & R. J. Murnane (Eds.),
Whither opportunity? Rising inequality, schools, and children’s life chances (pp. 91–116).
New York, NY: Russell Sage Foundation.
Rennie Center for Education Research & Policy. (2011). A revolving door: Challenges and
solutions to educating mobile students. Cambridge, MA: Rennie Center for Education
Research & Policy.
Romero, M., & Lee, Y.-S. (2007). A national portrait of chronic absenteeism in the early grades.
New York, NY: National Center for Children in Poverty.
Ronfeldt, M., Loeb, S., & Wyckoff, J. (2013). How teacher turnover harms student achievement.
American Educational Research Journal, 50(1), 4–36.
Rumberger, R. W. (2003). Causes and consequences of student mobility. Journal of Negro
Education, 72(1), 6–21.
Rumberger, R. W., Larson, K. A., Palardy, G. J., Ream, R. K., & Schleicher, N. C. (1998). The
hazards of changing schools for California Latino adolescents. Berkeley, CA:
Chicano/Latino Policy Project.
SAS EVAAS. (2015). Misconceptions about value-added reporting in Tennessee. SAS EVAAS.
Retrieved from
https://www.tn.gov/assets/entities/education/attachments/tvaas_common_misconceptions
.pdf
Scherrer, J. (2013). The negative effects of student mobility: Mobility as predictor, mobility as a
mediator. International Journal of Education Policy and Leadership, 8(1). Retrieved
from www.ijepl.org
Segal, D. R., & Segal, M. W. (2004). America’s military population. Washington, DC:
Population Reference Bureau.
Shen, J. (1997). Teacher retention and attrition in public schools: Evidence from SASS ’91.
Journal of Educational Research, 91, 81–88.
Shields, P. M., Humphrey, D. C., Wechsler, M. E., Riel, L. M., Tiffany-Morales, J., Woodworth,
K., … Price, T. (2001). The status of the teaching profession 2001. Santa Cruz, CA: The
Center for the Future of Teaching and Learning.
Siegel-Hawley, G. (2013). Educational gerrymandering? Race and attendance boundaries in
demographically changing suburbs. Harvard Educational Review, 83(4), 580–612.
Simpson, G. A., & Fowler, M. G. (1994). Geographic mobility and children’s
emotional/behavioral adjustment and school functioning. Pediatrics, 93(2), 303–309.
Sims, D. P. (2013). Can failure succeed? Using racial subgroup rules to analyze the effect of
school accountability failure on student performance. Economics of Education Review,
32, 262–274.
Stiefel, L., Schwartz, A. E., & Whitesell, E. R. (2013). The spillover effects of student mobility
on stable classmates. Presented at the Association for Public Policy Analysis &
Management Fall Research Conference.
Temple, J. A., & Reynolds, A. J. (1999). School mobility and achievement: Longitudinal
findings from an urban cohort. Journal of School Psychology, 37(4), 355–377.
Thompson, S. M., Meyers, J., & Oshima, T. C. (2011). Student mobility and its implications for
schools’ Adequate Yearly Progress. Journal of Negro Education, 80(1), 12–21.
Titus, D. N. (2007). Strategies and resources for enhancing the achievement of mobile students.
NASSP Bulletin, 91, 81–97.
US Government Accountability Office. (2010). K-12 Education: Many challenges arise in
educating students who change schools frequently. Washington, DC: US Government
Accountability Office.
US Government Accountability Office. (2011). Education of military dependent students: Better
information needed to assess student performance. Washington, DC: US Government
Accountability Office.
Vinovskis, M. A. (2009). From A Nation at Risk to No Child Left Behind: National education goals and the creation of federal education policy. New York, NY: Teachers College
Press.
Voight, A., Shinn, M., & Nation, M. (2012). The longitudinal effects of residential mobility on
the academic achievement of urban elementary and middle school students. Educational
Researcher, 41(9), 385–392.
Waterman, R. W., & Meier, K. J. (1998). Principal-agent models: An expansion? Journal of
Public Administration Research and Theory, 8(2), 173–202.
Whalen, A. (2015, December 22). ESEA DCL Part Rate CSSO letter. Retrieved from
https://www2.ed.gov/policy/elsec/guid/stateletters/eseadclpartrate12222015.pdf
Wong, M., Cook, T. D., & Steiner, P. M. (2015). Adding design elements to improve time series
designs: No Child Left Behind as an example of causal pattern-matching. Journal of Research on Educational Effectiveness, 8(2), 245–279.
https://doi.org/10.1080/19345747.2013.878011
Wrabel, S. L., Saultz, A., Polikoff, M. S., McEachin, A., & Duque, M. (2016). The politics of
Elementary and Secondary Education Act Waivers. Educational Policy.
https://doi.org/10.1177/0895904816633048
Xu, Z., Hannaway, J., & D’Souza, S. (2009). Student transience in North Carolina: The effect of
school mobility on student outcomes using longitudinal data. National Center for
Analysis of Longitudinal Data in Education Research (CALDER), Working Paper No.
82.
Yin, L. M., Kitmitto, S., & Shkolnik, J. (2012). Military connection and student achievement: A
look at the performance of military-connected students in eight public school districts.
Washington, DC: American Institutes for Research.
Abstract
The two essays in this dissertation are written as standalone studies, each focusing on a separate facet of student mobility.

In Essay 1, I explore a lesser-known aspect of the Elementary and Secondary Education Act (ESEA) known as the Full Academic Year (FAY) regulation. This regulation allows schools to exclude mobile students from school performance measures each year. Drawing on principal-agent theory, I demonstrate how this regulation can incentivize unintended, negative behaviors. I then propose five alternatives to the FAY practices states currently use in accountability systems, designed to reduce the potential for these unintended behaviors, and test them to determine how school performance changes when mobile students are included in annual school accountability measures. The ostensible purposes of the FAY cutoff are to (1) ensure schools are not held responsible for the portion of a student's performance attributable to time spent enrolled in a different school, and (2) provide relief from potential sanctions for schools serving large numbers of late-arriving students. The preponderance of evidence presented in this study suggests that the FAY regulation provides little, if any, benefit to schools serving large proportions of mobile students. Prior research also provides some evidence that the exclusionary FAY practice is harmful to the students who are excluded each year (Özek, 2012). As such, I recommend that states move toward inclusive practices that incorporate the largest number of students, hold all schools accountable for all students they serve in a year, and reduce the potential for unintended behaviors incentivized by current FAY regulations. Moreover, formalizing such regulations in a reauthorization of ESEA would provide more equitable treatment of mobile students across all states.

In Essay 2, I explore how mobility differs between high-poverty and lower-poverty schools along three dimensions: the volume of moves, the timing of moves, and the demographics of the mobile students served. High-poverty schools are the target of intense school reforms and the focus of many education policies intended to address disparities in opportunities and resources for students from low-income families. Relative to more economically advantaged schools, high-poverty schools contend with numerous challenges that affect daily operations and school effectiveness; student mobility is yet another. The results of this study show that even after accounting for the non-random sorting of students across school contexts, a large difference in mobility rates between high-poverty and low-poverty schools remains. Unlike mobility rates, however, the timing of arrivals is similar: high-poverty and lower-poverty schools receive new students at similar points within each calendar month and throughout the academic year, with new students arriving every week. The results also demonstrate that mobile students arriving in high-poverty schools have different needs and past schooling experiences than those arriving in lower-poverty schools. Together, these results suggest that policies that recognize and support schools based only on a general rate of annual mobility overlook the ways mobility differs between high-poverty and lower-poverty communities. This study provides a starting point from which policymakers, researchers, and school administrators can enhance discussions around mobility in specific school contexts and push for policies that address systematic mobility differences to best support students and schools.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
The role of the timing of school changes and school quality in the impact of student mobility: evidence from Clark County, Nevada
High-achieving yet underprepared: first generation youth and the challenge of college readiness
Exploring threats to causal inference in empirical education research
The role of school climate in the mental health and victimization of students in military-connected schools
Loaded questions: the prevalence, causes, and consequences of teacher salary schedule frontloading
Three essays on the high school to community college STEM pathway
That's not what I asked for: three essays on the (un)intended consequences of California's dual-accountability system
English language development materials in Texas: a study of effectiveness and selection
An application of value added models: School and teacher evaluation in Chinese middle schools
No place like home: a three paper dissertation on K-12 student homelessness & housing affordability
The perceptions of cross cultural student violence in an urban school setting
More than sanctions: California's use of intensive technical assistance in a high stakes accountability context to close achievement gaps
Uneven development of perspectives and practice: Preservice teachers' literacy learning in an era of high-stakes accountability
A comparison of value-added, ordinary least squares regression, and the California STAR accountability indicators
The writing studio as co-requisite remediation: a relational ethnography of academic discourse and social capital
The impact of leadership on student achievement in high poverty schools
Investigating the association of student choices of major on college student loan default: a propensity-scored hierarchical linear model
Building networks for change: how ed-tech coaches broker information to lead instructional reform
High performing schools in high risk environments: a study on leadership, school safety, and student achievement at two urban middle schools in Los Angeles County
The logic of mining student data: corporate education reform, stakeholder activism, and the fate of inBloom
Asset Metadata
Creator
Wrabel, Stephani Lynn
(author)
Core Title
Student mobility in policy and poverty context: two essays from Washington
School
Rossier School of Education
Degree
Doctor of Philosophy
Degree Program
Urban Education Policy
Publication Date
05/09/2017
Defense Date
03/07/2017
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
accountability,education,full academic year regulation,OAI-PMH Harvest,Poverty,student mobility
Language
English
Contributor
Electronically uploaded by the author
(provenance)
Advisor
Polikoff, Morgan S. (committee chair), Astor, Ron Avi (committee member), Strunk, Katharine O. (committee member)
Creator Email
swrabel@rand.org,swrabel@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c40-372635
Unique identifier
UC11256201
Identifier
etd-WrabelStep-5331.pdf (filename),usctheses-c40-372635 (legacy record id)
Legacy Identifier
etd-WrabelStep-5331.pdf
Dmrecord
372635
Document Type
Dissertation
Rights
Wrabel, Stephani Lynn
Type
texts
Source
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA