THE PURSUIT OF EQUITY:
A COMPARATIVE CASE STUDY OF NINE SCHOOLS
AND THEIR USE OF DATA
by
Christine Hiromi Sanders
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2009
Copyright 2009 Christine Hiromi Sanders
DEDICATION
This dissertation is dedicated first and foremost to my parents, Ed and Shigemi
Sanders. Through your continuous support, unconditional love, and patience, I have
become the individual I am today. Thank you for the endless sacrifices you have made to
help me accomplish my personal and professional goals.
This is also dedicated to the memory of my younger brother, Brian Toshio Sanders,
who has continuously whispered words of encouragement. Your spirit, perseverance, and
selflessness are characteristics I will always strive for.
ACKNOWLEDGEMENTS
I would like to personally acknowledge the following individuals for their
continuous support of my study and guidance in completing my degree. To my
Dissertation Committee Chair: Dr. Amanda Datnow for her unyielding faith and
optimism, her timely and reflective feedback, and her superior knowledge of education
and data-driven decision making. To my Dissertation Committee Members: Dr. Kathy
Stowe, for her expertise in both practice and theory and her bright, warm personality and
to Dr. Courtney Malloy, for her thoughtful feedback and who by far is the best at
explaining any type of qualitative and quantitative methodology.
To my Best Aunt and Editor: Setsuko Enomoto; Dear Friends: Molly Barnes, Dr.
Virginia Ward-Roberts, Phaidra Crayton, LaVell Ospina, Cheryl Caddick, Allen
Overfield, Dr. Kaivan Yuen, Reverend Yoshitaro Araki, and Dr. Ginger Clark. My
Family Members: Yo, Kevin, Jason, Saori, and Jalen Enomoto; and my Flournoy
Elementary Family: Catherine Andrews, Sharon Robinson, Craig Slattery, Grace Nsor,
Charles Cho, Paola Torres, Erin Stafford, Ariana Cardenas, and Leticia Villanueva. To
the children of Flournoy Elementary School: your resilience to rise above the challenges
in your community is truly inspiring.
And finally, to all the participants of the New Horizons and Summerfield Unified
School Districts for their honesty, hard work, and continuous passion for educating all of
our children.
Without all of your support, none of this would have been possible.
TABLE OF CONTENTS
DEDICATION
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF FIGURES
ABSTRACT
CHAPTER ONE: Overview of the Study
Introduction
Background of the Problem
Accountability Influences on Race/Ethnicity and Socioeconomic Status
Use of Data
Research Questions
Significance of the Study
Limitations/Assumptions
CHAPTER TWO: Literature Review
Introduction
Foundations of Educational Reform
Educational Institutions and Inequality
Historical and Current Movements Toward Educational Reform
Consequences of Test-Based Accountability Systems
Incentives and Sanctions
Impact of Accountability Pressures on Curriculum and Instruction
Data-Driven Decision Making
The Successes of Data-Driven Decision Making
Closing the Achievement Gap
The Challenges of Data-Driven Decision Making
Data Use in High and Low Performing Schools
Educators' Beliefs and Expectations on the Race and Class of Students
Outlook and Implications for Minority Students
Summary of the Literature Review
CHAPTER THREE: Research Methodology
Introduction
Background of the Problem
Research Methodology Overview
Research Design
Sample and Population
Selection Criteria
Sampling Procedure
Table 1: Sample of Nine Schools
Participants
Overview of the Districts
Overview of the Schools
Instrumentation
Data Collection Procedures
Semi-Structured Interviews
Researcher's Journal
Potential Bias in Data Collection Phase
Data Analysis Procedures
Ethical Considerations
Limitations
Summary of Research Methodology
CHAPTER FOUR: Findings
Introduction
Variation of Data Use
Types of Student Assessment Data
Focus on Certain Types of Students
Focus on Broader Content Clusters and Departmentalizing
The Pressures of Accountability
Different Motivations to Use Data
Foundation for Data Use
Beliefs about Data Use
The Challenges
Educators' Expectations of School Populations
High Expectations of Students
Special Education Students
Is it fair?
Conclusion
CHAPTER FIVE: Summary and Conclusions
Introduction
Background of the Problem
Purpose of the Study
Research Findings and Connections to Prior Research
Foundations for Educational Reform
Implications for Policy and Practice
Recommendations for Future Research
Conclusion
REFERENCES
APPENDIX A: Administrator Interview Protocol
APPENDIX B: Teacher Interview Protocol
APPENDIX C: Coordinator/Coach Interview Protocol
APPENDIX D: Over-Arching Themes
APPENDIX E: Refined Codes/Sub-Themes
LIST OF FIGURES
Figure 1: Four Types of Schools with Various Performance and Poverty Levels
Figure 2: Matrix of Research Questions and Instrumentation
Figure 3: Visual Representation of Themes, Codes, and Research Questions
Figure 4: Continuum of Responses from High/Low Performing Schools in Low to High Poverty Areas on How Data is Focused on Certain Types of Students
Figure 5: Matrix Depicting the Variation of Data Use Across All Types of Schools
Figure 6: Comparison of Thoughts between High Poverty and Low Poverty Schools on How the Perceived Pressures of Accountability Affect Data Use
Figure 7: Presence of Challenges at High-Low Performing/High Poverty and High-Low Performing/Low Poverty Schools
ABSTRACT
Since the passage of No Child Left Behind (NCLB) and related state mandates, schools have witnessed a major transformation in accountability policies. As a result of these stringent laws, the use of data has become a prominent reform strategy.
Prior research has focused on the successes of high performing districts and schools, as
well as the significant barriers that have affected the implementation of data-driven
decision making (DDDM). However, little is known as to whether schools with varied
performance levels (i.e., high vs. low performing) and student populations (i.e., high vs.
low poverty levels) implement DDDM differently.
This multiple case study examined the contrasting uses of data in eight urban
elementary schools in the New Horizons Unified School District and one in the Summerfield Unified School District that vary in performance and poverty levels. Data for this study were gathered through semi-structured interviews with administrators, coordinators, coaches, and teachers.
Findings from this study indicate that high poverty schools with both high and low performance levels used data to make short-term changes by targeting certain types of students and curricular areas. Low poverty schools with high and low Academic Performance Index (API) scores were more likely to use data across all students and subject areas. Administrators who led high performing schools, regardless of poverty level, were more apt to use data to build the capacity of the staff through effective teaching practices. The motivations to use data and the accompanying challenges differed across all nine schools.
On a final note, as minority students are overrepresented in high poverty schools, this study suggests that these students may be most susceptible to the negative consequences of accountability policies and targeted data use.
Implications for policy and practice are also discussed.
CHAPTER ONE
Overview of the Study
"Education then, beyond all other devices of human origin, is the great equalizer of the conditions of men, the balance-wheel of the social machinery."
Horace Mann
Introduction
Background of the Problem
Public education has long been touted as the great American equalizer of
opportunity (Duke, 2000). However, throughout the history of this nation, the academic
achievement levels of students from various socioeconomic and ethnic groups have not
been equal. In fact, scholars have argued for decades that public schools reproduce social
inequality (Bowles & Gintis, 1976; Bourdieu & Passeron, 1977; Collins, 1979; Diamond
& Spillane, 2004). Studies have shown that students who come from disadvantaged
backgrounds are more likely to drop out than students from more economically affluent
homes (Garcia, 2002). Rather than provide equal opportunities for all students, public
education has done little to level the playing field both economically and socially (Duke,
2000; Lee & Wong, 2004). This bleak prognosis for the future of America has been the
catalyst for many large-scale reforms and federal policies that have occurred throughout
the history of public education.
Since its inception in 2001, the implementation of No Child Left Behind (NCLB)
has had far-reaching influences throughout public schooling (Hamilton, 2003). The
current wave of large-scale reform and federal policies focuses on various measures of
accountability. Test-based accountability systems and the increasing use of data to make decisions are common components of public education's age of accountability (Earl & Fullan, 2003).
Specifically in California, a test-based accountability system has propelled leaders
and educators to find ways to increase student test scores and overall school performance.
This test-based accountability system is characterized by four major elements: goals in
the form of standards, measures of performance such as assessments, targets for
performance (i.e., adequate yearly progress), and consequences attached to the success or
failure of meeting those targets (Hamilton, 2003). Schools that fail to meet yearly targets
are placed on probationary status. This status, known as Program Improvement (PI), can eventually lead to sanctions such as reconstitution of the staff, mandatory
intervention for students, and student retention (Madaus, 1988).
In addition, California's accountability system provides schools with
disaggregated data on subgroups (i.e., minority students, English Learners, students
living in poverty, and students with disabilities) with the assumption that this data will be
used to shed light on the discrepancies between these groups of students. Optimistically,
the increased use of data obtained from accountability systems would lead to a
heightened awareness of underperforming students and therefore, raise the achievement
level of all students (Woody, Buttles, Kafka, Park, & Russell, 2004; Darling-Hammond,
2004). By establishing this accountability system, it is hoped that educational equality
can be reached. For example, in a study of how districts are using accountability policies
to close the achievement gap, Woody, Bae, Park, and Russell (2006) highlight the efforts
of three districts that are addressing the issues of inequities in achievement and are
making considerable gains.
Yet, others argue that federal accountability policies will only exacerbate the
inequalities that exist amongst various minority students (McNeil, 2000; Diamond &
Spillane, 2004) especially in the lowest performing schools (Diamond, 2007). Narrowing
of the curriculum due to increased test preparation, drill worksheets, and workbooks
further marginalizes these students (McCracken & McCracken, 2001; McNeil, 2000; Yeh,
2001; Diamond & Spillane, 2004). With instructional time allocated away from the core
curriculum, student learning could be significantly limited (Madaus & Clarke, 2001;
Woody et al., 2004).
NCLB holds all schools accountable to the same standards and testing targets
regardless of students' backgrounds. Schools in urban, low-income neighborhoods face a
plethora of challenges. Violence, poverty, crime, and inadequate housing are just some
of these challenges (McLoyd & Wilson, 1991). Children who come from families below the poverty level are nearly twice as likely to be retained as their more advantaged peers (Garcia, 2002). Brooks-Gunn and Duncan (1997) studied the effects of income on children's physical health, cognitive ability, school achievement, and emotional and behavioral well-being. Children in poverty experience low birth weight, learning disabilities, limited school achievement, and internalizing and externalizing behavior problems. The effects of an impoverished community are detrimental to these children's academic performance, as many of them perform below grade-level standards. Proponents of NCLB argue that it can counteract some of these effects, in theory, by holding all students to high academic standards (Prince, 2004).
Accountability Influences on Race/Ethnicity and Socioeconomic Status
As data use has only recently come to exert unprecedented influence on educational policy and accountability, much of the past research has focused on how high-stakes testing and accountability policies have impacted minority students. Lomax, West, Harmon, Viator, and Madaus (1995) discovered that mathematics and science instruction in high-minority classes differed significantly from that in low-minority classrooms. Minority students received lower-quality instruction and more instruction in test preparation. The incentives, or rather perverse incentives (Koretz, 2002), to meet testing targets play a crucial role in how teachers instruct certain students in their classrooms. Likewise, Madaus and Clarke's (2001) study examined how high-stakes testing can negatively impact minority students.
Furthermore, past research has identified socioeconomic status as consistently
influencing student achievement (Roscigno, 1998; Garcia, 2002). Although it is hard to measure this connection, Duncan and Magnuson (2005) surmise that the factors involved in low socioeconomic status, such as poor child health care and unstable parental mental health, can lower a child's academic achievement and reduce a family's resources.
Likewise, Smith, Brooks-Gunn, and Klebanov (1997) found that poverty accounted for a
substantial portion of the achievement gap. Children from families with extreme poverty
fared the worst with the greatest achievement gap. In contrast, children of wealthy
parents have secure access to good prenatal health care and nutrition and numerous
opportunities for educational experiences. Although these connections are not causal in
nature, it is still important to note that race/ethnicity and socioeconomic status do have
some link to the achievement gap and should not be discounted.
Equally important are the expectations that teachers hold of their students' abilities. Diamond, Randolph, and Spillane (2000) posit that the race and socioeconomic status of schools can impact teacher beliefs about student potential. In a review of past studies, Cotton (1989) gives evidence in support of the power of expectations to affect student outcomes, beginning with the original Pygmalion study conducted by Rosenthal and Jacobson (1968). Other studies on self-fulfilling prophecies emphasize how teachers' low expectations can reduce the self-images of students, leading them to put forth less effort in school. In response to low effort, teachers gave these students less challenging work (Farkas, Grobe, Sheehan, & Shuan, 1990). Therefore, as African-American and low-income students are perceived as having educational deficits, teachers' behaviors toward students and students' reactions to this behavior can have drastic effects on their academic achievement. This, of course, would have serious implications for how data on these students are utilized, or underutilized, in classroom pedagogy.
Use of Data
In response to the movement towards accountability, the use of data by all
stakeholders in the educational process has increased tremendously in the past decade
(Lee & Wong, 2004; Marsh, Pane, & Hamilton, 2006). School leaders are propelled to
improve the achievement of students and make sense of the plethora of data that inundate
their lives (Earl & Fullan, 2003). Used appropriately, data can significantly improve
decision making and lead to overall school reform (Datnow, Park, & Wohlstetter, 2007).
Numerous studies have shown that data are an invaluable tool for school planning (Earl
& Fullan, 2003; Datnow et al., 2007). Factors that have contributed to effective data use
are developing a school culture that promotes data use, strong leadership and support
systems, and efficient accessibility to multiple forms of data (Feldman & Tung, 2001;
Bernhardt, 2003; Love, 2004; Lachat & Smith, 2005; Ikemoto & Marsh, 2007).
In addition to the successes of data use, studies also indicate the presence of
barriers that hinder effective implementation of data-driven decision making. These
challenges include teacher buy-in, the schools capacity for data use, data literacy, and
lack of time (Feldman & Tung, 2001; Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006;
Mandinach, Honey, & Light, 2006; Ikemoto & Marsh, 2007; Vaughan & Kelly, 2008).
Furthermore, there is a growing sense that the accountability movement, and data
use in particular, may play out differently depending on school context. For example, in
their studies of Chicago Public Schools, Diamond and Cooper (2007) found that
responses to data differed based on the school's accountability status (probationary or
high performing). How administrators and educators used data to make decisions and
implement these reforms differed in low performing and high performing schools. Low
performing schools used data to develop quick-fix strategies to avoid the sanctions of a
test-based accountability system. Focus was placed on isolated groups of students who
were at the cusp of meeting certain targets. On the other hand, high performing schools
used data for school-wide systemic change. Data was used to analyze the effectiveness
of classroom instruction and monitor the needs of students as a whole. In these schools,
time and resources were dispersed widely to all students at all grade levels. Therefore,
the school's accountability status was a contributing factor in how these differing schools
used data to implement changes.
Although studies have been conducted on the differences between data use in low and high performing schools, there is only a small body of research documenting whether the poverty levels of school communities also affect data use (Diamond, Randolph, & Spillane, 2000). Therefore, the goal of this case study is to add to this body of knowledge by focusing on the accountability status and poverty levels of schools and how these contexts influence the way data are used and the subsequent decisions made from those data.
Research Questions
This qualitative case study will focus on the use of data in schools that vary in
accountability status and poverty levels. The study will highlight eight schools in an
urban, unified school district and one school in a suburban school district that vary in
these two contexts. Specifically, the study will address the following overarching
question:
How does the use of data vary in schools that differ in accountability status and
poverty levels?
In order to gain more detailed knowledge about data use, the following subquestions will
also be addressed:
a. How do the perceived pressures of accountability influence data use in each
type of school?
b. To what extent are there different motivations to use data in differing contexts?
c. How do educators' expectations of the school population influence data use in
each type of school?
Significance of the Study
Because the use of data is a fairly new phenomenon given the recent
implementation of NCLB and performance-based accountability systems, it is crucial that
research is conducted to add to this growing body of knowledge. Past research has
focused on the use of data in high performing, effective schools to make decisions, but
there has been little research conducted on the comparison of data use in schools that
vary in accountability status and poverty levels. By noting the differences and
similarities between these types of schools, it is hoped that this study will provide district
and site administrators with the knowledge of why data use varies in different kinds of
schools. Perhaps this knowledge can propel leaders to create more optimal environments
for data use as well as determine what barriers exist at the site level that prevent the use
of data. Further, this case study will also highlight how test-based accountability systems may affect student subgroups by widening the achievement gaps between disadvantaged and advantaged students. Ultimately, the decisions made from data have serious implications for race and class.
Limitations/Assumptions
Limitations of this study include a small sample size and the brief time frame for conducting interviews. There will be only two schools per type of context: high and low performing, and high and low poverty. An assumption of this study is that data use by administrators and educators will vary in low and high performing schools. Another is that educators in low performing schools will use data to implement short-term solutions that do little to build the capacity for school improvement. In contrast, there is a possibility that data use in high performing schools will focus on longer-term, more powerful solutions that lead
to overall school change. Furthermore, the accountability status of each school could
play a more influential role in determining data use than the poverty level of the school's community. Administrators and educators in low performing schools face tremendous
pressures due to accountability sanctions. Therefore, it is possible that data use in these
schools will be significantly higher than in schools that do not face the possible sanctions
of probation. In addition, the role of incentives, or rather perverse incentives, in these
low performing schools will be a constant reminder to educators of their need to stay
focused on meeting yearly targets. In conclusion, the research from this study will
illuminate how various types of schools manage their data use so that educators can best
meet the needs of all children, regardless of socioeconomic and cultural backgrounds.
CHAPTER TWO
Literature Review
"For the first time ever, we are looking ourselves in the mirror and holding ourselves accountable for educating every child. That means all children, no matter their race or income level or zip code."
U.S. Secretary of Education, Margaret Spellings
Introduction
In response to concerns that public schools were failing to meet the needs of all students, the recent focus of education has turned to accountability as a vehicle for overall school reform (Woody, Buttles, Kafka, Park, & Russell, 2004). As a component of accountability, the use of data in schools is argued to enhance the quality of instruction and school-based decision making (Diamond & Cooper, 2007). Such practices of teachers using data to make better-informed decisions would not only improve the instructional quality in the nation's lowest performing schools, but would ultimately begin to close the achievement gap (Diamond & Cooper, 2007).
As the power behind accountability escalates and the pressures of yearly targets
mount, there is a tremendous need to understand how data-driven decision making is
impacting America's schools and their students. The following literature review will focus
on three major areas.
1. A review of how educational inequities have set the foundation for the current
wave of accountability policies and educational reform.
2. An examination of the successes and challenges of data-driven decision
making, as well as how the use of data varies in different types of schools.
3. An analysis between the use of accountability policies and its impact on
minority students and the achievement gap.
Foundations of Educational Reform
Educational Institutions and Inequality
With respect to inequality, are educational institutions part of the problem or part
of the answer (Downey, von Hippel, & Broh, 2004)? Because public education is free,
there is the argument that the United States school system is a leveling institution and
provides equal access and opportunity for all students. Yet, others dispute that schools
actually reproduce social inequality (Bowles & Gintis, 1976; Bourdieu & Passeron, 1977;
Collins, 1979). The central theme of reproduction theory is that schools serve the interests of those students at the top rather than those at the bottom.
Many studies have shown that those students at the bottom, predominantly low-socioeconomic status (SES) and minority students, are continuously given unequal access and opportunity in schools (Oakes, 1985; Gamoran, 1986; Condron & Roscigno, 2003; Lee,
2004; Darling-Hammond, 2005). Oakes (1985) found that low-SES and minority
students were overrepresented in the lower tracks of schools, which often provide different, inferior educational opportunities compared with the higher tracks. Oakes (1985)
contends that this type of ability-grouping leads to very different educational experiences
for various types of students. Condron and Roscigno (2003) found that schools with the
highest proportions of poor students are particularly disadvantaged with the lowest
allocations of local dollars. Furthermore, their study discovered that higher spending of
school resources promoted achievement through the school's physical condition and the
degree of consistency in the learning environment.
In addition, Lee (2004) found that minority students do not receive equal
educational opportunities, do not achieve minimal levels of competency, and do not learn
in racially integrated schools. Similarly, students of different races and social-class
backgrounds have historically had different levels of access to high-quality instruction
(Darling-Hammond, 2005). Studies have also shown that students of high to middle
socioeconomic backgrounds receive more challenging instruction that promotes critical
thinking, problem solving, and active participation in learning (Gamoran, 1986).
Furthermore, Stanton-Salazar (1997) has asserted that the accumulation of social capital, or relationships with institutional agents and networks, is problematic for low-status children and youth.
Although Schooling in Capitalist America: Educational Reform and the Contradictions of Economic Life was written in 1976, Bowles and Gintis's assertion remains credible: schools that serve low-SES students more often stress rote memorization, obedience, and punctuality, preparing those students for low-wage jobs. Similarly today, low performing schools with high proportions of minority students
have teachers who teach only the material covered on standardized tests as they scramble
to meet the demands of a test-based accountability system (Diamond & Spillane, 2004).
These examples illustrate how schools have historically played an active role in
perpetuating inequalities between students (Roscigno, 1998; Downey et al., 2004;
Diamond, 2007).
Historical and Current Movements Toward Educational Reform
At many times during the nation's history, concern about student achievement has led to educational reform initiatives. For example, in 1983, the landmark report A Nation at Risk (National Commission on Excellence in Education, 1983) alerted the American people that schools had lost sight of the basic purposes of schooling. It raised serious concerns about the educational foundations of American society by citing "a rising tide of mediocrity" as threatening the future of this nation (p. 1). This significant
report marked the beginning of an educational movement that has given rise to standards-
based reform, test-based accountability systems, and unprecedented data use (Lee &
Wong, 2004; Coburn & Talbert, 2006).
California's standards-based reform movement began as growing consensus emerged among educators about the importance of providing a uniform and focused vision of what students should know and be able to do. There was much discussion about the high quality of the standards and how they would lead students to academic success in the face of a changing, global economy (California Department of Education, 2007). In the decade since the adoption of California's content standards, their significance has only grown in the current era of greater accountability.
Ogawa, Sandholtz, Martinez-Flores, and Scribner (2003) explain that curriculum standards give teachers a common sequence of targets with which to drive classroom instruction. Standards offer students and educators a consistent and coherent guide by detailing what students must demonstrate. The standards articulate what every student at each grade level in the state of California should be able to accomplish by the end of one academic year. With the adoption of the content standards, education throughout the state became better aligned.
In the majority of California's K-12 schools, there are 180 days of instruction, with an average school day lasting six hours. Marzano's (2003) research indicates that covering the standards identified for the thirteen years of instruction from kindergarten through 12th grade would require 15,465 hours, while only 9,042 hours of instructional time are available. With the call for tighter accountability,
federal policy makers and school officials needed a way to ensure that standards were
effectively taught at the schools. In California, as in most states, a test-based
accountability system was born, using the California Standards Test (CST) as the
assessment aligned to the California content standards. Ideally, this assessment made
administrators, teachers, and students all accountable for the voluminous standards K
12
th
grade students must know and be able to do before they transition to the next grade
level.
Furthermore, given the apparent inequalities present in schools, it is not surprising that the recent trend of education reform has focused on increasing student achievement
through accountability policies. As a way to lead to more equitable educational
opportunities and school improvement, federal and state policymakers are encouraged by
increasing test scores and have continued to support accountability policies (Woody et
al., 2004). Therefore, test scores and other forms of data use, which are integral to accountability, have become a promising systemic reform strategy (Earl & Fullan, 2003;
Lachat & Smith, 2005; Datnow, Park, & Wohlstetter, 2007).
In 2001, federal education legislation known as the No Child Left Behind
(NCLB) Act was enacted, emphasizing standards, assessments, and consequences tied to
performance on these assessments (Hamilton, 2003). There are two basic goals that
define NCLB. The first goal is to close the achievement gap between high and low
performing students, especially the achievement gaps between minority and non-minority
students, and between disadvantaged children and their more advantaged peers (NCLB,
2001, Sec 1001). The term "achievement gap" refers to the differences in achievement between White, Asian, and economically advantaged students and their African-American, Latino, Native American, Southeast Asian, and socioeconomically
disadvantaged peers (Symonds, 2004). As achievement gaps appear on every widely
used assessment measuring student achievement, from the National Assessment of
Educational Progress (NAEP) to the Scholastic Assessment Test (SAT), as well as on
other indicators such as high school graduation rates (Symonds, 2004), it was hoped that
NCLB could hold schools accountable for improving student achievement through the
close monitoring of standards (Ikemoto & Marsh, 2007).
The second goal of NCLB is to implement a test-based accountability system that
imposes consequences on those schools, districts, and states that fail to improve the
academic achievement of their students (Abernathy, 2007). The heart of this accountability
system is Adequate Yearly Progress (AYP). Schools must meet AYP targets based on
student performance on standardized assessments given once a year. Therefore, schools
must closely monitor the achievement levels of all their students or face sanctions such as
probation, reconstitution of staff, or government takeover.
Heightened accountability for student achievement and increased sanctions have
also been features at the state level for over a decade, predating NCLB. The 1994 results
of the NAEP found California tied for last place. This led to mounting concern that
policymakers needed to improve the quality of California's public schools (Woody et al.,
2004). Therefore, in 1999 the legislature, led by Governor Gray Davis, approved the
Public Schools Accountability Act (PSAA) which centralized school reform at the state
level. This act included state-wide standardized testing, rewards and sanctions, and the
incorporation of curriculum standards. In addition, PSAA was seen as a crucial first step
toward monitoring and improving student achievement for all schools on the same
standards and was hoped to provide a system that "sunshines" the equity issue (Woody et
al., 2004, p. 15). Therefore, schools were not just responsible for school-wide targets, but
for the first time were required to meet targets for racial/ethnic and socioeconomic
subgroups of students (Woody et al., 2006).
This requirement to monitor racial/ethnic and socioeconomic subgroups of students
came from consistent test results showing significant achievement gaps between African-
American and Latino students and their White and Asian peers (Carroll, Krop, Arkes,
Morrison, & Flanagan, 2005). More specifically, performance of students disaggregated
by race/ethnicity on the mathematics section of the California Achievement Test, Sixth
Edition (CAT/6) in 2003 showed that the percentage of White and Asian students scoring
at or above the 50th national percentile differed from the percentage of African-American
and Latino students scoring at or above the 50th national percentile by approximately 30
to 40 percentage points. In other words, roughly 70 percent of White and Asian students
scored at or above the 50th national percentile compared to less than 40 percent of
African-American and Latino students. This gap between racial/ethnic groups was found
to be consistent across grade levels. Likewise, scores on the NAEP revealed consistently
lower achievement levels for all racial/ethnic groups of California students (Carroll et al.,
2005). Therefore, it was hoped that the successes and failures of schools seen through
increased accountability policies would shed light on how educators and administrators
needed to improve school practices for the benefit of these students.
Furthermore, there is growing evidence that state-level accountability systems
have improved outcomes for all students (Skrla, Scheurich, Johnson, & Koschoreck,
2004). The researchers posit that components of state accountability systems can be used
17
to leverage better schooling outcomes for minority children. Most notably, the Texas
accountability system has made educational equity a priority for all students. Through
the use of disaggregated data based on race and socioeconomic levels, the Texas system
is highly focused on student results. Results on the Texas Assessment of Academic
Skills (TAAS) suggest that from 1994 to 2001, gains in passing rates for all student
groups were impressive. Likewise, Grissmer, Flanagan, Kawata, and Williamson (2000)
confirmed that Texas was one of the two states making the largest overall gains with
minority students achieving top national ranks in comparison with their peers in other
states on the NAEP. These studies suggest that state accountability systems deserve
serious consideration as a means to more equitable schooling outcomes for students
of color (Skrla et al., 2004).
Consequences of Test-Based Accountability Systems
Incentives and sanctions. Although NCLB has many good intentions, there are
many who argue that this law has harmed schools, districts, states, teachers, and most of
all the students (Darling-Hammond, 2004; Woody et al., 2004). Critics contend that the
underfunded policy adds to the burdens of already grossly unequal and inadequately funded
urban schools and requires these schools to meet high test score targets that disproportionately
penalize them for their failures (Darling-Hammond, 2004). For schools that fail to meet
state targets in language arts and mathematics, either school-wide (measured by the
Academic Performance Index, or API) or for any one subgroup (measured by AYP), the
sanctions become increasingly severe each year. In the first year of AYP failure, the school is subject
to public identification of its failing status. In the second year of failure, the school's
accountability status is labeled "Program Improvement." Schools must develop
a school improvement plan, use 10 percent of their Title I funds for professional
development, and allow parents to transfer their children to higher performing schools.
In year three, the same consequences apply as in year two, along with a requirement to provide
intervention services for the students. Years four and five can bring about corrective
actions such as reconstitution of the staff and restructuring as a charter (Abernathy,
2007).
On the other hand, schools in highly affluent communities that traditionally have
very high API scores and meet AYP receive very few rewards for their high achievement
levels. These schools receive very limited NCLB funding and do not have access to the same
support systems as lower-performing schools.
The threat of sanctions and other negative consequences that are tied to low
student test scores leads many teachers to be sidetracked by perverse incentives (Koretz,
2002) to teach to the test (McNeil, 2000; Smyth, 2008). Teachers have incentives to
cheat; at the same time, administrators have incentives to look the other way (Abernathy,
2007). Teachers are compelled to use certain practices that lead to artificially improved
student scores or to behave in particular ways, knowing the positive and negative outcomes
that may transpire as a result of high or low test scores (Koretz, 2002). They have
incentives to revise the priorities of their instruction, as they devote greater amounts of
time to tested areas (Rothstein, 2004). If all schools face a moral dilemma over whether to
teach appropriately under increasing accountability pressures, what is the fate of
low-performing schools confronting seemingly insurmountable challenges?
Impact of accountability pressures on curriculum and instruction. In order to
meet these targets, many schools have replaced instructionally rich classroom practices
with test preparation programs and other watered-down curricula that do not emphasize
higher-order thinking skills (Darling-Hammond, 2004; Abernathy, 2007). In the
years since this test-based accountability system has been in place, many schools have
instilled a test prep culture to combat the pressures of AYP targets. Workbooks and drill
worksheets are common in these schools (Haladyna, Nolen, & Haas, 1991; McCracken &
McCracken, 2001). This type of test-taking program is taught in isolation, separated from
other core curriculum areas such as language arts and mathematics. It is argued that
schools have become test-taking factories in which the curriculum is prescribed, less
meaningful, and often unrelated to students' lives (Yeh, 2001). Furthermore,
drilling students on specific items on standardized tests is ethically inappropriate (Smyth,
2008).
In addition, Stephens, Pearson, Gilrane, Roe, Stallman, Shelton, Weinzierl,
Rodriguez, and Commeyras (1995) found that teachers of low-socioeconomic students
tend to be held more accountable to tests and thus spend more time on teaching to the test
and other test preparation activities. Yet, as more and more schools rely on test
preparation programs to increase scores, can student gains actually reflect what students
have learned (Koretz, 2002)? These gains may be purely inflated by test preparation
and, therefore, bear little relation to actual student achievement.
Finally, without the proper funding, support, and resources, accountability
systems may actually widen achievement gaps by rewarding high performing schools and
punishing less advantaged ones (Lee & Wong, 2004). As demands for accountability
increase, low performing, urban schools grapple with equity concerns arising from the
enormous diversity of their students (Lachat & Smith, 2005). These issues of equity will
be discussed in subsequent pages.
Data-Driven Decision Making
The Successes of Data-Driven Decision Making
In the wake of NCLB, data-driven decision making (DDDM) has become integral
to educational policy, accountability, and practice (Mandinach et al., 2006). This catch
phrase, "data-driven decision making," can be defined as the use of data to inform a
variety of decisions made about curriculum and instruction, professional development,
and student interventions (Woody et al., 2006). Data has become the focal point, as its
use is touted both for stimulating positive change and improvement and for supporting
accountability efforts (Earl & Fullan, 2003). Effective data use has been increasingly
identified as a central tenet in raising test scores (Kennedy, 2003), improving school
cultures and teacher attitudes (Feldman & Tung, 2001), and improving the achievement
of low performing, at-risk students (Armstrong & Anthes, 2001; Protheroe, 2001).
Likewise, numerous studies have indicated that several key factors influence data use:
building a foundation, leadership and school culture, accessibility to data, and technology
and data-system capacity (Armstrong & Anthes, 2001; Sutherland, 2004; Datnow et al.,
2007). The following studies illustrate that data can be a powerful ally in stimulating
effective change (Bernhardt, 2003; Love, 2004; Lachat & Smith, 2005).
Feldman and Tung (2001) observed teams from six public schools in
Massachusetts as they implemented the various aspects of data-driven decision making:
discussions of data, recommendations for school-wide changes based on data, and
implementation of these changes. Through interviews and observations, the researchers
sought answers to what impact the DDDM process had on schools and the school culture,
what kind of support facilitated the process, and what barriers the schools faced when
implementing DDDM. Feldman and Tung (2001) discovered that there were varying
degrees of data-driven decision making implementation, but that overall, schools noted
changes in teachers, school cultures, and students as a result of this process.
Positive impact on staff, school culture, and students. According to Feldman and
Tung (2001), teachers reported having a deeper understanding of using and creating data in
different ways. They had a wider perspective on the definition of data, collected more
data, and were able to differentiate instruction better with more detailed evidence on each
student. Through the process of DDDM, they felt less reactive and placed less blame on
the students. As teachers became more thoughtful and open-minded, they were able to
collect more data and approach students with multiple ways to help them. Teachers also
discovered the powers of reflecting on their practice through the analysis of student work.
This study (Feldman & Tung, 2001) also illuminated how the process of DDDM
changed schools by deprivatizing practice and building a more professional culture.
Through teacher collaboration and inquiry, these teachers were compelled to grow as
individuals and as a cohesive group. In five of the six schools, the teachers were willing to
have deep, professional conversations about difficult issues such as equity of student
access or achievement and discussed ways to help all students improve.
As a result of data-driven decision making at these schools, student achievement
improved significantly. In one school, a high failure rate in math classes prompted the
inquiry process. The school created ways to identify the students and to create support
systems such as intervention programs and focused study time. One year later, the math
achievement levels and self-esteem of these students had significantly increased. In
addition, it was hoped that as teachers modeled the process of inquiry, students would
also engage in similar behavior. Indeed, students at multiple schools collaborated in
groups to solve school-wide issues and concerns (Feldman & Tung,
2001).
Leadership and support systems. Feldman and Tung (2001) also discovered that in
the schools with the most effective implementation of data-driven decision making,
strong leadership and support systems were crucial to their success. In these schools,
teacher leaders emerged to facilitate inquiry groups by challenging the thinking of others
and coordinating the process. Likewise, successful administrators provided a vision for
the school around data use, held high expectations of their staff to participate, and built in
time for collaboration (Lachat & Smith, 2005). Similarly, in their study of high
performing school systems that use data, Datnow et al. (2007) found that it was essential
that explicit expectations for data use were instilled and upheld by all teachers and
administrators. In these school systems (Datnow et al., 2007) and also evident in the
Massachusetts schools (Feldman & Tung, 2001), developing a foundation and school
culture for data use were vital to an effective data-driven decision making process.
Schools that viewed accountability as helpful rather than threatening were able to engage
in solid discussions around data.
Additionally, Love (2004) describes the Using Data Project which has been
integral to providing support to educators to develop better data literacy and analysis
skills. Through data facilitators composed of teachers and administrators, the data teams
dig deeply into data evidence, collaborate, reflect, and learn to improve teaching and
learning. Similar to the schools observed by Feldman and Tung (2001), the Using Data
Project developed a dedicated, collaborative culture through the use of an inquiry
process. In these schools, the aim was to influence the school culture to be one in which
educators use data "continuously, collaboratively, and effectively" (p. 23) to make
school-wide improvements in teaching and learning.
Interestingly, Schmoker (2003) believes that the most crucial school improvement
processes need not require sophisticated data analysis or expert knowledge. He
posits that over-analysis can lead to overload, which can develop into long, convoluted
plans that are rarely read and implemented. Therefore, his recommendation is that data
analysis can be learned by the teachers themselves to develop the goals they believe will
have the most significant impact on student achievement.
Finally, these studies (Feldman & Tung, 2001; Schmoker, 2003; Love, 2004;
Lachat & Smith, 2005; Datnow et al., 2007) illustrate differences in the degree of
involvement of leaders and support systems in the data-driven decision making process,
but these differences do not diminish the importance of leadership and support in the
overall progression toward effective school reform.
Accessibility to data. In their study of educators in ten districts spanning four
states, Ikemoto and Marsh (2007) found a common set of factors that led to more
successful data-driven decision making. They discovered that ease of access to data
greatly affected its use by school personnel. Schools that had on-site
capability of viewing data, disaggregating it, and displaying the results were much more
effective than those schools that had to rely on an outside individual or organization to
run the data. Simultaneously, the authors cite how accessibility to multiple forms of data
and the ability to make comparisons between the various forms of evidence were integral
to more complex discussions about how data could influence school change.
Additionally, Bernhardt (2003) defines four types of student data: demographic
(i.e., gender and ethnicity), student learning (i.e., state tests/district benchmarks, teacher
grades, authentic assessments), perceptions (i.e., questionnaires, interviews,
observations), and school processes (i.e., school programs, instructional strategies,
classroom practices). By intersecting all four data categories, a rich, complex description
of the school emerges that can be used to improve student learning.
"Not until you intersect all data categories at the school level and over time will
you be able to answer questions that allow you to predict whether the actions,
processes, and programs that you are operating will meet the needs of all students"
(p. 28).
These studies (Feldman & Tung, 2001; Bernhardt, 2003; Love, 2004; Lachat &
Smith, 2005; Datnow et al., 2007; Ikemoto & Marsh, 2007) highlight the various factors
that enable successful data-driven decision making at district and school levels. Through
the concerted effort of teacher leaders and administrators, DDDM can create a
professional culture in schools that enhances more critical dialogue and reflection, fosters
collaboration, and provides leaders with evidence to move their schools toward greater
student achievement and overall school improvement.
Closing the Achievement Gap
District level data-driven decision making. In order to address accountability
policy, specific school districts are also using evidence from data to ensure that all
students are achieving. Skrla et al. (2004) examined four high-poverty, diverse districts
in Texas that have achieved district-wide recognition in the state's accountability
system. Increased pressures and demands placed on board members, administrators, and
teachers to raise student achievement have been a driving force in educational equity. Of
the people interviewed, many credited the TAAS with providing this leverage for social justice.
Additionally, in their study of three successful districts that are tackling issues of
equity in achievement, Woody et al. (2006) spotlighted how Lemon Grove, Long Beach,
and Ceres Districts in California used distinctive approaches to reform. Although each
district approached issues of race/ethnicity and socioeconomic status in different ways,
all acknowledged the existence of inequities and put forth a concerted effort to diminish
those gaps.
More specifically, Lemon Grove District's equity program focused on district-wide
training sessions aimed at raising awareness of racism and prejudice, school-level
teams that conduct action research related to equality in the classrooms, and school-level
equity teams that plan school-wide activities. Administrators, teachers, and parents were
committed to having "courageous conversations" (p. 13) about topics such as race and
class, which are often difficult to discuss. An administrator, after going through the
equity professional development, spoke of now viewing certain situations through an
"equity lens" (p. 14). In the classroom, Collaborative Action Research for Equity (CARE)
teachers developed personal relationships with students of color and designed culturally
relevant pedagogy to meet their needs. Equity teams were also focused on improving
relationships with parents so that they were comfortable in approaching the school with
their concerns.
Similarly, Long Beach Unified School District (LBUSD) is another district
working hard to address the discrepancies between minority students and their non-
minority peers. In this district where only 44 percent and 51 percent of African-
American and Latino students, respectively, passed the math section of the California
High School Exit Exam (CAHSEE) compared to 86 percent and 80 percent of Asian and
White students respectively, the achievement gap was an area of concern. Similar
differences were found between low-socioeconomic students and their more advantaged
peers in reading. In order to address these challenges, LBUSD focused their efforts on
the use of data to improve student achievement. Unlike many districts and schools that
focus solely on test scores, schools in LBUSD collected a variety of quantitative (CST,
CAHSEE, CAT/6, and end-of-course exams) and qualitative data (Key-Results
Walk-Throughs) to inform their practice. From these results, data is disseminated and
presented online for teacher access. This Academic Profile on each student is available at
the beginning of the school year. Although the data is disaggregated based on
race/ethnicity, schools are encouraged to focus on proficiency levels rather than looking
at the data through a racial lens. Finally, the district has invested time and energy in
building a capacity for data collection and analysis through professional development
given to the instructional leaders. Although it is still unclear if teachers are using the data
to inform their daily instruction, LBUSD can still be viewed as a model for tackling the
difficult issues of equity in student achievement (Woody et al., 2006).
Finally, Ceres Unified School District attributes increases in student achievement to
its district-wide vision, changes in organizational structure, and a coaching model.
This district-wide vision prioritized literacy, as reading is "the building block on which all
other subjects rest" (p. 35), and provided adequate funding to ensure literacy development
for all the students. Changes also were made to the organization of schools, as six
coaching positions were added as support mechanisms for teachers. These coaches each
have their own specialty: technology, K–2, 3–6, Gifted and Talented Education
(GATE), English Language Development (ELD), and secondary. In this way, the needs
of diverse learners can be better addressed. These research studies suggest that
district-initiated data-based decision making can have a promising impact on minority students
and their achievement. Specifically, through the determined efforts of these districts,
administrators and teachers have become increasingly aware of the gaps in student
achievement and the various ways in which these gaps can be closed. The districts'
efforts to focus on data about diverse students led to overall district-wide improvement.
Therefore, the lessons about data use learned in these districts have broad implications for
practice at school levels.
School-level data-driven decision making. Just as district policies are influential
in addressing the needs of minority children, so too are school-level policies and
strategies that close the achievement gap. Symonds (2004) surveyed thirty-two K–8
schools in the San Francisco Bay Area and compared responses from schools narrowing
achievement gaps with those schools maintaining or widening gaps. In this study,
gap-closing schools were those in which all students made progress, but low-performing
students made more significant progress. In contrast, in non-gap-closing
schools, high performing students made greater improvement than low performing
students. Findings indicated stark differences between these types of schools regarding
the use of data. More specifically, teacher support for data, leadership for equity, and
school focus emerged as the three key themes based on the findings.
Teachers in gap-closing schools shared similar characteristics in regards to their
data use. These teachers used data to understand the skill gaps of their low performing
students, received professional development on how to analyze this data and how to link
this to instructional strategies, and collaborated with their colleagues on strategies.
Leaders in gap-closing schools made closing the achievement gap a priority, set
measurable goals to close these gaps, had people of color in leadership roles, and
provided structured opportunities for faculty to discuss issues of race and ethnicity.
Finally, in gap-closing schools, focus was placed on factors inside the school as
opposed to outside-school factors that affect student achievement. Gap-closing
schools also provided frequent professional development on literacy instruction
(Symonds, 2004).
In summary, the Symonds (2004) report noted four recommendations for using
data to narrow achievement gaps. First, schools need frequent, reliable data to provide
educators with a clear picture of students' strengths and weaknesses, as well as how their
instructional strategies are working. Secondly, so that teachers may use their data
effectively, structured opportunities to reflect, discuss, collaborate, and create
instructional strategies are needed. Teachers require effective professional development
to tailor their instruction to meet the needs of their students. Thirdly, teachers must face
the difficult issue of race/ethnicity, be comfortable in discussing why gaps exist, and
work together to combat bias. Schools need to hire greater numbers of minority people
as teachers and administrators. Finally, schools need to use data to focus on what is most
important and centralize their efforts on these areas. As these two studies indicate
(Symonds, 2004; Woody et al., 2006), data can play an integral role in closing the
achievement gap.
The Challenges of Data-Driven Decision Making
Despite evidence of the power behind data in improving student achievement,
there are challenges that schools face in implementing data-driven decision making.
Following NCLB policy, the implicit assumption is that the availability of data will
inform and produce changes in overall teaching practice. However, provisions for
helping educators turn accountability data into usable information are lacking in NCLB
(Wayman, 2005). Specifically, Feldman and Tung (2001) found three barriers that
most affected the implementation of data use. First, teachers were often hesitant to buy in
to the process of data analysis and were resistant to change. Secondly, schools that
struggled with DDDM did not have enough expertise in interpreting data. In other words,
the schools' capacity for data use had not been adequately developed. Thirdly, a lack of time
contributed to failed data collection and analysis, collaboration of instructional strategies,
and the implementation of these strategies into the classroom. The following sections
will discuss these challenges in depth.
Teacher buy-in. Building a culture for data use is a difficult process as many
teachers and administrators lack formal training in analyzing and interpreting
data. Others may also be hesitant to use data to make decisions because they believe the
data is unreliable or inaccurate (Mandinach et al., 2005; Ikemoto & Marsh, 2007). These
doubts greatly affected teacher buy-in, as teachers were hesitant to use particular types of
data to inform their practice.
In their study of high performing schools that are achieving with data, Datnow et
al. (2007) discovered that teacher buy-in was affected by the many large-scale reforms and
programs being implemented by the district at the same time. As new teachers struggled
with data management systems, they needed time to become comfortable with new
processes, and teacher buy-in remained an ongoing concern for administrators.
Further, in studies conducted by Kerr et al. (2006) and Ikemoto and Marsh (2007),
staff buy-in was affected by the perceived accuracy and validity of data measures.
Principals and teachers were often concerned that state assessment data were not good
measures of students' skills or that district assessments had changed in quality from the
first administration to the second.
Finally, Woody et al. (2004) discovered that teachers' exposure to and use of subgroup
data to influence classroom practices were inconsistent or nonexistent in the
schools they studied. Although NCLB's intention for disaggregated data is to improve
the performance of subgroups, this was not always the case. Many teachers expressed a
sense of futility: they knew the inequalities existed, but how to make the gaps disappear
remained unclear. A large number of teachers were resistant to changing their instructional
strategies based on student performance by categories of race/ethnicity, language, and
class. In conclusion, these studies indicate that a committed school culture that supports
data use is integral to the effectiveness of data to make school-wide decisions.
Schools' capacity for data use. Numerous studies have shown that school
faculties lack the capacity to use data to address problems, formulate questions,
select indicators, interpret results, and monitor progress (Feldman & Tung, 2001;
Protheroe, 2001). Providing data to teachers is not adequate, as they also need support
in analyzing data and in incorporating it into their classrooms (Vaughan & Kelly, 2008).
Datnow et al. (2007) describe strategies for building school capacity for data-driven
decision making:
• invest in professional development on data-informed instruction
• provide time to collaborate and share data within and across schools
• provide ongoing training and support for data use at all levels
Further, as Earl and Katz (2002) note, data use is now not a choice but a must, so it
is incumbent that teachers have adequate data and assessment literacy. Data literacy
is part of a school's capacity for data use, yet computers have been found to be unused
or underutilized in most schools (Zhao & Frank, 2003). This technological barrier is
illustrated by teachers' unwillingness to use computers because of their unfamiliarity
with how to operate them (Wayman, Stringfield, & Yakimowski, 2004; Kerr
et al., 2006). Schools use many forms of student data, each accessed in a different way,
which makes obtaining the data even more difficult for teachers who lack general
expertise with computers. Likewise, Zhao and Frank (2003) posit that unless teachers
hold positive attitudes toward technology, it is not likely that they will use it to aid their
teaching. Therefore, data that is accessible only through the computer will likely not be
accessed and ultimately will not be turned into actionable information.
Lack of time. Feldman and Tung (2001) also cited the lack of time for the staff to
analyze, synthesize, and interpret data as limiting effective data-driven decision making
at school sites. As teachers are already overwhelmed with an inordinate number of
content standards to cover in an academic year (Marzano, 2003) and other
responsibilities, they often do not have the time to engage in data-driven decision
making. Marsh et al. (2006) found that data use required a significant amount of time for
preparation, as well as for analysis and action, time that is often unavailable.
In conclusion, these studies highlight how challenges can hinder the use of data to
make educational decisions. In theory, the data resulting from high-stakes testing can be
a valuable policy lever for improving the educational outcomes for all students (Klein,
2004), but equal attention must be given to reducing resistance to data use in schools.
Data Use in High and Low Performing Schools
As has been presented, the impact of accountability and high-stakes testing falls
along two opposing lines of argument (Diamond & Spillane, 2007). One side suggests
that the use of data will make a positive difference in closing the achievement gap by
improving overall school-based decisions and instructional practice. According to this
argument, more equitable educational opportunities will be available to all students.
However, an alternative perspective suggests that the use of testing data will have
negative outcomes for certain students as testing data will further marginalize low-
achieving students, limit the amount of rich instruction they receive, and ultimately lead
to greater inequality (Booher-Jennings, 2005).
In support of the latter argument, Diamond and Cooper (2007) collected data from
eight urban elementary schools in Chicago Public Schools. The researchers hypothesized
that responses to testing data would vary depending on the school's accountability status,
high or low performing. Of these eight schools, six had demonstrated significant
improvements on the Iowa Test of Basic Skills (ITBS), while two of the schools were
low performing and had been placed on academic probation. Their findings suggest that
the ways in which data are interpreted and analyzed and the instructional strategies that
resulted differed significantly between these types of schools. Specifically, schools that
had historically done very well used testing data to implement school-wide reform to
transform educational practice while probation schools used data to devise quick-fix
strategies to avoid further sanctions.
Furthermore, Diamond and Spillane (2007) found that high performing schools
used data to improve the overall instructional quality of all their students at all grade
levels. Contrastingly, probation schools used testing data to focus on certain students
such as the "bubble kids" (p. 254), those students on the threshold of passing to the next
level. Similarly, Booher-Jennings (2005) found that the majority of school interventions
were offered only to the bubble kids, thereby withholding resources from those who
needed them the most. Likewise, Marsh et al. (2006) agree that the focus on bubble kids has
serious ramifications for those students at both ends of the spectrum. Finally, high-
achieving schools also used data to monitor school-wide student needs, while probation
schools focused on those grades where testing carried the highest stakes (Diamond &
Spillane, 2007).
Therefore, these distinctly different uses of testing data suggest that
accountability policies may in fact affect students' educational outcomes in these types of
schools. Students in high performing schools in Chicago may actually benefit more from
accountability policies than those students in probation schools (Diamond & Spillane,
2004).
As the preceding sections indicate, accountability policies, most notably data use,
are effective in raising student achievement if used in proper ways. However, there is
considerable concern that a school's accountability status can play a significant role in
the outcomes of minority children, as these students may be more susceptible to the
negative consequences of these practices. This is not to argue that accountability policies
are unnecessary; rather, it is important that educators are cognizant of how these
educational policies may in fact widen achievement gaps and reproduce social inequities,
which, in essence, goes against their very promise to overcome these barriers.
Educators' Beliefs and Expectations on the Race and Class of Students
In light of the influence that accountability policies may have on overall student
achievement, it is also important to consider the role that teacher beliefs of students may
have on their educational experiences. In their influential publication, Pygmalion in the
Classroom, Rosenthal and Jacobson (1968) concluded that students' intellectual
development was significantly influenced by what teachers expected of them and how
those expectations were communicated. In the years since their study, additional research
(Thorndike, 1968; Wineburg, 1987) has been conducted in similar areas, yet the results
are mixed. Nonetheless, it is important to consider that self-fulfilling
prophecies and other teacher beliefs and expectations may have some effect on student
outcomes.
More recently, Diamond, Randolph, and Spillane (2000) highlighted how the race
and class composition of schools influenced teachers' beliefs about their
students' capacity to learn. Through interviews and observations conducted at five urban
elementary schools, Diamond et al. (2000) found that teachers and administrators in
schools with a majority of African-American and low-income students held more deficit-
oriented beliefs about them than did those in schools with a majority of White, Asian, or
middle-income students. In one such school with 100 percent African-American students
and 88 percent low-income students, teachers and administrators believed that family
environments and community contexts contributed to the students' inability to focus and
other limitations in class. These educators focused on the students' deficits, whereas
other schools with higher-income students emphasized students' assets. Therefore,
the data suggest that the racial/ethnic and socioeconomic composition of schools relates to
the staff's overall beliefs about students (Diamond et al., 2000).
Similar studies show that faculty in low achieving schools view their students as
quite limited in their ability to learn and do not consider themselves responsible for
raising achievement levels (Cotton, 1989; Epps, 1995; Ferguson, 1998).
Contrastingly, a quantitative study by Goldsmith (2004) involving a random survey of
24,600 students in eighth grade found that African-American and Latino students in
segregated-minority schools with many minority teachers had great optimism about their
future education and tended to have positive attitudes about their teachers and classes.
These findings suggest that segregated-minority schools with high numbers of minority
teachers may be better able to serve minority students. Therefore, as the conflicting
studies show, the research on the power of expectations in affecting student outcomes
remains mixed.
Outlook and Implications for Minority Students
As studies from this literature review indicate, many questions still remain about
how to ensure that accountability policies are adequately addressing gaps in achievement.
Given that accountability policies, most specifically data use, have conflicting findings in
relation to closing the achievement gap, what is the outlook for minority students and
those from less advantaged backgrounds in this era of large-scale reform? The question
remains whether accountability policies and the use of data are in fact exacerbating the
inequalities between minority and non-minority children and between disadvantaged and
advantaged students. The magnitude of NCLB's promise to close the achievement gap is
admirable, but how to fulfill it remains unclear. Therefore, it becomes increasingly
incumbent upon policymakers to explain and clarify accountability policies to
administrators, educators, parents, and students so that data can be used effectively to
help all students succeed.
Summary of the Literature Review
Since the advent of NCLB, the unprecedented use of data has had mixed results in
ensuring that all students, regardless of background, can achieve at high standards. For
many urban schools that grapple with the challenges of poverty, inequalities still exist for
these minority students, leading many to believe that schools and education are not the
great equalizer they are touted to be (Smyth, 2008). In these inner-city schools, a great
number of teachers are not convinced that accountability reforms are the answer to
achievement gaps (Woody et al., 2004). At the same time, other urban districts in
California show promise of brighter futures, thus instilling the belief that all children can
learn, despite the challenges they face in their communities and homes.
As studies have shown (Oakes, 1985; Gamoran, 1986; Condron and Roscigno,
2003; Lee, 2004; Darling-Hammond, 2005), minority students from low socioeconomic
backgrounds receive significantly less opportunity and access to challenging, high-quality
instruction. In these urban schools, low allocations of funds and curricula that are rote
(Bowles & Gintis, 1976) and test-centered have led to increased inequities between
students.
Historically, education has been at the forefront of politics with landmark reports
such as A Nation at Risk suggesting that American society and schools have lost sight of
the basic principles of schooling (NCEE, 1983). In response to the need for educational
reform, a great deal of federal legislation was passed at the turn of the century. Most
notably, NCLB promised increased funding for schools that served the poor, gave
assurance that highly-qualified teachers would teach every child, and held schools
accountable for raising student achievement through disaggregated data (Wood, 2004).
Similarly at the state level, the Public Schools Accountability Act in California
called for reform as bleak results from NAEP showed widening achievement gaps
between minority students and their White and Asian peers. Through state-wide
assessments, standards-based instruction, and the monitoring of student achievement
through rewards and sanctions, it was hoped that improved school practices would lead to
high achievement for all students.
Unfortunately, NCLB's promise to remedy public education has come with a
hefty price (Darling-Hammond, 2004; Woody et al., 2004). Schools that fail to meet
yearly targets are subject to severe sanctions (Abernathy, 2007). High performing
schools often receive few rewards, and some even lose funding. Therefore, the incentives
for raising achievement by teachers and administrators vary greatly in high and low
performing schools.
As a result of accountability pressures, many low performing schools have
become obsessed with test scores and have thus narrowed the curriculum to include
mainly test preparation (Stephens et al., 1995; McNeil, 2001; Smyth, 2008). As tests
alone do little to increase the capacity of schools to deliver better educational services, the
unfortunate consequence is greater inequity between students in high performing schools
and those in low performing schools (Karp, 2004).
With mounting pressures of accountability, the use of data has had far-reaching
influences on school reform (Mandinach et al., 2006). Numerous studies have indicated
the successes of data use in positively impacting student achievement (Kennedy, 2003),
school culture and teacher attitudes (Feldman & Tung, 2001), and closing the
achievement gap (Armstrong & Anthes, 2001; Protheroe, 2001). Successful
characteristics of data-driven decision making include: a school culture devoted to data
use, strong leadership and support systems, and easy accessibility to data. Evidence from
state-, district-, and school-level data-driven decision making has illustrated that gains
in student achievement are increasing (Skrla et al., 2004; Symonds, 2004; Woody et al.,
2006).
However, despite these successes, districts and schools have encountered challenges
in the use of data. Specifically, teacher buy-in, a school's capacity for data use, data
literacy, and lack of time are prominent barriers that can affect how and whether data are
translated into usable action (Feldman & Tung, 2001). Studies on teacher beliefs and
expectations of students have conflicting findings on their influence on student outcomes
(Diamond et al., 2000; Goldsmith, 2004).
Although a plethora of studies have been conducted on the effective uses of data
in driving educational decisions, very few have focused on the contrasting uses of data in
high and low performing schools. One such study by Diamond and Cooper (2007) found
that Chicago schools on probation were more apt to use data to focus on certain students
and to devise other quick-fix solutions. Contrastingly, the researchers noted that high performing
schools used testing data to monitor school-wide student needs and to improve the overall
curriculum at the school. Therefore, in light of the apparent gap in the literature on how
school context and socioeconomic status affect the use of data, this research study will
investigate how nine very different California schools are coping with the pressures of
accountability. This study will seek to uncover the similarities and differences in data
use between high and low performing schools with high and low poverty levels. The
findings will hopefully shed light on how best to utilize data in the midst of escalating
accountability pressures.
In conclusion, what lies ahead for these students living in a test-
based accountability system appears daunting. Although data use holds the promise
to uplift and reform schools, the challenges of race/ethnicity and poverty may be far too
great. How educators deal with the daily pressures of accountability and incentives, and
their expectations of students, will give insight into how to create more optimal
environments for data use. Hopefully, this research will give true meaning to NCLB's
promise of ensuring that all children have "a fair, equal, and significant opportunity to
obtain a high-quality education" (NCLB, 2001, Sec. 1001).
CHAPTER THREE
Research Methodology
Introduction
Background of the Problem
In the era of accountability, the use of data has become a leading strategy for
overall school improvement (Woody, Buttles, Kafka, Park, & Russell, 2004; Datnow,
Park, & Wohlstetter, 2007). The desperate state of American education has been
illustrated through significant educational reports in the 1980s (NCEE, 1983). These
reports laid the foundation for current educational reforms such as standards-based
reform, test-based accountability systems, and data-driven decision making (Lee &
Wong, 2004; Coburn & Talbert, 2006). Since the passing of federal legislation No Child
Left Behind (NCLB), schools and districts have witnessed a thorough transformation in
accountability policies. For the first time, consequences have been attached to students'
performance on statewide assessments (Hamilton, 2003). Proponents of NCLB have
argued that with tighter accountability all of America's children, minority and non-
minority, low and high poverty, and students with disabilities, can succeed at higher
levels. With increased accountability, the use of data has become pivotal in affecting the
achievement levels of students and reducing the achievement gap (Armstrong & Anthes,
2001; Woody, Bae, Park, & Russell, 2006).
Yet, other scholars argue that accountability policies will only exacerbate the
inequalities that exist amongst various minority groups (McNeil, 2001; Diamond &
Spillane, 2004). The pressures of accountability have led to a narrowing of the
curriculum, increased use of test preparation, and instructional time allocated away from
the core curriculum (Darling-Hammond, 1995; Yeh, 2001; Abernathy, 2007). These
consequences can have severe ramifications for urban, minority students (Madaus &
Clarke, 2001).
Research Methodology Overview
This chapter describes the research design, sample and population,
instrumentation, data collection plan, proposed data analysis, ethical considerations, and
limitations. The purpose of this study was to investigate the use of data in schools that
vary in accountability status and poverty levels. The accountability status of a school is
connected to the school's overall Academic Performance Index (API). Therefore,
schools with high API scores were labeled as high performing and schools with low API
scores were labeled as low performing. In addition, the distinction between high and low
poverty schools was determined by the percentage of students that qualify for the
National School Lunch Program. This study examined how differences in school
contexts can affect the use of data among school administrators and teachers. Eight
elementary schools in the New Horizons Unified School District (NHUSD) and one
elementary school in Summerfield Unified School District (SUSD) that vary in these
contexts were studied to answer four established research questions.
How does the use of data vary in schools that differ in accountability status and
poverty levels?
a. How do the perceived pressures of accountability influence data use in
each type of school?
b. To what extent are there different motivations to use data in differing
contexts?
c. How do educators' expectations of the school population influence
data use in each type of school?
Research Design
A qualitative, descriptive-analytic case study research method was used in this
study. Merriam (1998) posits that case studies are intensive, rich descriptions and
analyses of individuals, groups, or interventions. For the purposes of this study, it was
hoped that an "in-depth, holistic understanding" (Merriam, 1998, p. 19) of how data were
used by administrators, coordinators, coaches, and teachers at these sites would be attained.
As Merriam (1998) notes, the case study method can uncover the interaction of
significant factors, specifically accountability status and poverty levels, on the
phenomenon, in this case, data use. As various sites were examined and compared with
each other, this study employed a multiple case study research design.
The decision to investigate nine schools was based on the desire to gain insight on
the differences and similarities between school leaders and teachers and their use of data.
It was my goal to establish positive rapport with the participants to clearly understand
how data use may differ at each school site. As Patton (2002) explains, this study is
most closely aligned with applied research, as it sought to understand why the use of data
varies in different types of schools. The study's findings have policy implications for
how accountability policies have impacted the use of data for certain types of students by
school personnel. Furthermore, the implications also highlight ways to create more
favorable environments for data use to increase the achievement levels of all students.
Sample and Population
I utilized a specific set of selection criteria to identify nine elementary schools in
two unified school districts that vary in performance levels and poverty levels. This
section outlines the selection criteria, the sampling procedures, and participants.
Selection Criteria
Within these two districts' boundaries were a plethora of high and low performing
schools in both high and low poverty communities. Therefore, in order to illuminate
these differences and similarities clearly, Patton (2002) believes that purposeful sampling
is crucial for yielding "insights and in-depth understanding" (p. 230). The following
selection criteria were utilized:
1. The high performing schools had API scores of 700 or above for the years 2006-
2007 and 2007-2008.
2. The low poverty schools had 10 percent or fewer students qualifying for
free and reduced lunch.
3. The low performing schools had API scores of 600 or below for the years 2006-
2007 and 2007-2008.
4. The high poverty schools had 85 percent or more students qualifying
for free and reduced lunch.
5. Schools with a Similar Schools Ranking of 9 or 10 were considered high
performing, and schools with a Similar Schools Ranking of 1 or 2 were
considered low performing.
The following visual representation, Figure 1, illustrates the four different types
of schools that were selected for this study.
High Performing/Low Poverty
Low Performing/Low Poverty
High Performing/High Poverty
Low Performing/High Poverty
Figure 1. Four types of schools with various performance and poverty levels
Sampling Procedure
A variety of resources from the California Department of Education (CDE) and
NHUSD and SUSD websites including 2006-2008 API scores, Similar Schools Rankings,
school demographic characteristics, and Title I rankings were utilized.
The Similar Schools Ranking is a method to compare a school's academic
performance with 100 schools that face similar opportunities and challenges (CDE,
2007). Schools with a Similar Schools Ranking of 9 or 10 are performing well above
average for schools with similar characteristics. Schools with a Similar Schools Ranking
of 1 or 2 are performing well below average for schools with similar characteristics.
Following a comprehensive search, Table 1 illustrates the sample of schools that were
purposefully selected based on the above criteria.
________________________________________________________________________
School    API Points          Percentage of    Similar Schools
Name      (Performance)       Poverty          Ranking
________________________________________________________________________
A1        958 (High)           6 percent       10
A2        942 (High)           3 percent        9
B1        896 (Low)            7 percent        2
B2        890 (Low)            3 percent        2
C1        738 (High)          95 percent       10
C2        730 (High)          94 percent       10
C3        712 (High)          94 percent        9
D1        603 (Low)           99 percent        1
D2        568 (Low)           97 percent        1
________________________________________________________________________
Table 1: Sample of nine schools
Due to the large variation between the nine schools in terms of API scores and free and
reduced lunch percentages, this type of purposeful sampling is known as maximum
variation sampling (Patton, 2002). I purposefully chose a wide range of schools on
opposite sides of the continuum so that differences and/or similarities emerged between
them.
The rationale for choosing these schools that vary widely in these two contexts
was to gain the greatest insight into my four research questions. First, I hypothesized that
high performing schools do not face the pressures of accountability and therefore may
have greater freedom to use data for overall school reform. I also speculated that high
performing schools might not have serious incentives or motivations to use data at all.
On the other hand, low performing schools were hypothesized to be constantly under the
threat of accountability sanctions and thus would use data much more to make short-term,
quick fix solutions. As indicated in the literature, there are also other more common
barriers such as teacher buy-in, a schools capacity for data use, and lack of time that can
limit the uses of data. Furthermore, although students' poverty levels are an important
context for academic achievement, there was a possibility that poverty would not play a
crucial role in the use of data amongst school staff. Therefore, nine schools with maximum
variation were chosen to clearly illuminate how and whether data were used differently
across contexts. Further, it was imperative that purposeful sampling be utilized to increase
the validity of this research study.
Participants
The study involved administrators, coordinators, coaches, and/or teachers at each
school site. The principal and/or assistant principal, coordinator/coach, and three to five
teachers at each site directly informed this study through semi-structured interviews. The
teachers were self-selected through a volunteer recruitment process at a faculty meeting.
Of these teachers, those from various grade levels were interviewed, as they provided the best
span of data use throughout the grade levels. In addition, these teachers represented a
wide range of teaching experiences. Thirty-three individuals participated in the semi-
structured interviews across the nine schools.
Overview of the Districts
One proposed district selected for this study was the New Horizons Unified
School District (NHUSD). NHUSD is the second largest district in the nation, serving
over 700,000 students within eight local districts. With 463 elementary schools, 91
middle schools, and 102 high schools, students come from seven different ethnic
backgrounds: 11.2 percent African-American, 0.3 percent American Indian, 3.8 percent
Asian, 2.2 percent Filipino, 73.3 percent Latino, 0.3 percent Pacific Islander, and 8.9
percent Caucasian. More than ten different languages are represented in the large English Learner
subgroup within NHUSD, with the largest group being Spanish-speaking students. There
are significant differences in socioeconomic backgrounds of the students within NHUSD
as the boundaries stretch over 710 square miles.
The district as a whole is in Program Improvement Year Three because in 2007-
2008 it did not meet its Adequate Yearly Progress (AYP) criteria in English Language
Arts (ELA) for Latino, African-American, and socioeconomically disadvantaged
students, and ELA and mathematics for students with disabilities and English Learners
(CDE, 2008). Currently, the overall API is at 683, a twenty-one point increase from the
prior year.
The second proposed district site was the Summerfield Unified School District
(SUSD), which comprises 25,288 students. With 17 elementary schools, 8 middle
schools, and 4 high schools, SUSD is considered a fairly small, suburban school district.
The district as a whole is performing above the statewide API target at 819. Many
individual schools within the district have high API scores. SUSD as a whole has low
poverty levels, and the main ethnicities represented are: 36 percent White students, 31
percent Asian students, 18 percent Latino students, and 4 percent African-American
students.
Overview of the Schools
School A1 is located in an upper middle class neighborhood in New Horizons.
The ethnic and racial breakdown is 84 percent White, 7 percent Asian, 5 percent African-
American, and 3 percent Latino. In 2006-2007, the school's API was 939. Last year, the
school dropped to an API of 933. The school had a ranking of 9 on the Similar Schools
Ranking in 2007 when compared with schools like Schools B1 and B2.
School A2 is also located in an affluent neighborhood in New Horizons. The
ethnic and racial breakdown is 68 percent White, 20 percent Asian, and 5 percent
African-American. In 2006-2007, the school's API was 965. Last year, the school
dropped to an API of 962. The school had a ranking of 10 on the Similar Schools Ranking
in 2007 when compared with schools like Schools B1 and B2.
School B1 has 7 percent of students on free and reduced lunch. The school
consists of 69 percent White, 12 percent Latino, 11 percent Asian, 4 percent African-
American, 2 percent Filipino, and 1 percent Native American. The school's API two
years ago was 892. This year the API is 896. School B1 had a Similar Schools Ranking of 2
in 2006 when compared with schools like Schools A1 and A2.
School B2 is the only school located in SUSD. It is located in a middle to upper
middle class neighborhood and has only 3 percent of students on free and reduced lunch.
The school consists of 53 percent White, 32 percent Asian, 10 percent Latino, 2 percent
African-American, and 1 percent Filipino. The school's API two years ago was 893.
This year the API is 890. This school is very similar to School B1. School B2 had a
Similar Schools Ranking of 2 in 2007 when compared with schools like Schools A1 and A2.
School C1 is located in a low socioeconomic area where 95 percent of the
students are on free and reduced lunch. The school is currently at an API of 738. Over
the past five years, there have been steady and marked improvements in the API scores.
The school's API score has increased 163 points since 2003. This past year the school
has come out of Program Improvement status and currently has a Similar Schools
Ranking of 10 when compared with schools like School D1 and D2.
School C2 is located in a low socioeconomic community with 83 percent Latino
students and 16 percent African-American students. The school's high API of 777 has
made it a 10 in the Similar Schools Ranking in 2007 when compared with schools like
D1 and D2.
School C3 is a recent recipient of the Riordan Foundation School Principal
Turnaround Award. With an API of 712 and a Similar Schools Ranking of 9, the school
is considered high performing when compared with schools like D1 and D2. There are
74 percent Latino students and 25 percent African-American students at this school.
School D1 is located in the city of Rosewood where 99 percent of the students
qualify for free and reduced lunch. The school currently has an API of 652. Although it
has made considerable growth in the last three to four years, it is still a 1 on Similar
Schools Ranking when compared to schools like Schools C1, C2, and C3. The ethnic
make-up consists of 72 percent Latino students, 27 percent African-American students,
and 1 percent other.
School D2 is located in an impoverished community where 97 percent of the
students qualify for free and reduced lunch. The school currently has an API of 577 and
is in Program Improvement Year 5. For the year 2006-2007, the API was 564. The
ethnic make-up consists of 64 percent Latino students and 35 percent African-American
students. School D2 has a Similar Schools Ranking of 1 when compared to schools like
Schools C1, C2, and C3.
Instrumentation
The instruments that were used for this research study include semi-structured
interview protocols for administrators, coordinators/coaches, and teachers (see Appendix
A, B, and C). The instrumentation was refined through cohort field tests to ensure that
the questions promoted ecological validity (Cicourel, 1982) and yielded answers that
gave insight into the four research questions. Figure 2 illustrates the connection between
the research questions and data collection.
Data sources: Semi-Structured Interviews; Researcher's Journal (reflections of
interviews, participants, and sites). These sources were mapped to the following
research questions:
Research Question #1: How does the use of data vary in schools that differ in
accountability status and socioeconomic levels?
Research Question #2: How do the perceived pressures of accountability influence
data use in each type of school?
Research Question #3: How does the role of incentives for achievement in each
type of school affect data use?
Research Question #4: How do perceived expectations of the school population
influence data use in each type of school?
Figure 2. Matrix of research questions and instrumentation
Data Collection Procedures
The data collection process consisted of semi-structured interviews and a
researcher's journal. Prior to the interviews, I met with each participant to explain the
study, the interview process, and how I would ensure his/her confidentiality. Each
participant signed an informed consent form for participation in the research study. This
section outlines the specific procedures for data collection and the potential bias involved
in this phase.
Semi-Structured Interviews
The process of interviewing can be described as a negotiation of social roles and
frames of reference between strangers (Malloy, 2008). Therefore, it was imperative that
I be well-prepared, attentive, and open-minded during the data collection phase.
At each site, interviews were conducted with the principal and/or assistant principal,
coordinator/coach, and three to five teachers of various grade levels. I used a protocol of
open-ended questions to guide the interviews. The interviews with the participants took
place in private and secured rooms at the school site, usually in the participant's office or
classroom. The rationale for this setting was to provide the most familiar and comfortable
environment for the participants. The interviews at each site were conducted from
October 2008 to December 2008.
The goal of the interviews with administrators was to find out if the accountability
status and socioeconomic contexts of the school affected how data were used in making
school-wide decisions such as in the allocation of resources or overall curricular
decisions. Teacher interviews focused on how data from state and district assessments
were used and for what purpose. Other topics that were discussed with both
administrators and teachers included how accountability pressures, motivations,
challenges, and expectations of students have played a role in the use of data.
Prior to each interview session, I encouraged the participants to be honest and
candid with their answers. I reiterated to the participants that I was not looking for the
"right" responses, but that sincere and direct answers would clearly illuminate how
varying school contexts could affect the use of data. I hoped that these responses could
ultimately help facilitate optimal environments of data use for all schools and highlight if
there were differences between certain types of schools around the use of data.
Each interview lasted approximately 35 to 40 minutes and was digitally recorded.
I transcribed the first 13 of the 33 interviews; the remaining 20 interviews were
transcribed by an outside transcriber. During this data-gathering phase, I tried to remain
as sensitive as possible to the information being gathered, as I was aware that eventually I
would become the voice of my participants (Merriam, 1998; Patton, 2002). In addition,
throughout the data collection phase, I realized that a good communicator is one who
establishes rapport, asks thoughtful questions, and listens intently (Merriam, 1998).
Researcher's Journal
I also utilized a researcher's journal to record observations that I had during the
entire data collection phase. This journal proved to be helpful in recording my reflections
about each interview session, the participants I met, and the school sites that I visited.
Through these notes, I was able to jot down observations of the participants and schools
that may not have been addressed in the actual interview. This journal tremendously
supported my findings and was a beneficial tool in the data collection phase.
Potential Bias in the Data Collection Phase
Implicit in qualitative research is the potential for researcher bias (Merriam,
1998). Therefore, while conducting interviews, I worked to filter out my own
assumptions and biases. It was crucial that I was constantly cognizant of the lens I was
looking through in order to avoid early judgments. Finally, to ensure the accuracy of what
was stated during the interviews, I refrained from making interpretations during the data
collection phase.
Data Analysis Procedures
Data analysis can be described as the process of bringing order to data and
finding patterns or relationships within and across categories that are supported with
evidence (Malloy, 2008).
Specifically, data were analyzed by coding the transcribed interviews by hand.
Initially, I audited the data by reading and scanning through all the transcribed pages of
interviews. Next, I began coding the data, starting with macro-level concepts
and moving to more micro-level concepts. I initially coded all the content according to
the four research questions that guided this research study. Each research question was
given a color, and pertinent comments made by participants were highlighted according
to those colors. From those research questions, I developed eight over-arching themes. I
continued the iterative steps of coding and refining to create sub-themes. Figure 3
represents the process of developing themes, more refined codes, and the correlations to
research questions.
Equally important during this time was making sure that deductions were not
made hastily. The coding and recoding phase occurred over one week. There was no
need to conduct interview follow-ups, but I did email a few participants to clarify certain
segments of their interview. Finally, conclusions from this analysis were generated in the
form of charts following each of the four research questions. These visual
representations helped tremendously in sifting through all the data.
Over-arching Themes, Refined Codes/Sub-themes, and Correlation to Research Questions

Research Question #1 (Variation of Data Use)
Theme 1: Focus on Certain Levels of Students
• Basic Level Students
• Subgroups
• Far Below Basic and Below Basic Level Students
• Individual and All Students
Theme 2: Focus on Certain Curricular Areas
• Broader Content Clusters
• Departmentalizing

Research Question #2 (Pressures of Accountability)
Theme 3: Accountability in Various School Contexts
• High Poverty Schools and Pressures of Accountability
• Low Poverty Schools and Pressures of Accountability
Theme 4: Focus on Data
• Increased Test Prep
• Less Time for the Arts
• Leadership and Accountability

Research Question #3 (Different Motivations to Use Data)
Theme 5: Foundation for Data Use
• Buy-in
• Data As Part of a Greater Whole
Theme 6: Beliefs about Data Use
• Personal Incentives
• Validity
• More Than Just Numbers
• Data Use Is Not the Answer
Theme 7: Challenges
• Time
• Lack of Training and Transfer
• Lack of Technology
• Stability of Staff

Research Question #4 (Educators' Expectations of School Populations)
Theme 8: Teacher Expectations
• High Expectations
• Special Education Students
• Is it Fair?

Figure 3. Visual representation of themes, codes, and research questions
Ethical Considerations
To ensure that data from this study were obtained in an ethical manner, the
following steps were implemented. These ethical considerations included the
Institutional Review Board (IRB), NHUSD's approval from the Committee for External
Research Review, informed consent forms, confidentiality, and the careful protection of
participants' anonymity. First, the IRB for the University of Southern California (USC)
granted approval to the study in July 2008. Secondly, NHUSD's External Research
Review ensured that the district's ethical obligations were met (Guidelines for External
Research Review, 2008). NHUSD granted approval in August 2008. The purposes of
this review included:
• Protecting students and employees from risk of harm, violations of rights, and
losses of privacy
• Protecting the educational process from unwarranted distractions and
interruptions
• Protecting public resources including data from misappropriation for private
or unjustified use
As there was some difficulty in securing site permissions, the study was
enlarged to include another district and more schools in NHUSD. Therefore, in
November 2008, USC's IRB granted approval for additional schools. The Elementary
Director of Summerfield Unified School District was also notified prior to the study's
data collection phase.
Prior to conducting interviews and observations, I obtained informed consent
forms from participants to ensure their voluntary participation in the research study, their
right to withdraw from the study at any time, and a description of the nature and purpose
of the study. As mentioned earlier, participants in interviews could have decided at any
time to comment "off the record." These requests were regarded with utmost integrity, and
none of those comments were included in the findings.
With a limited number of participants and schools, it was understood that
confidentiality might be compromised. Therefore, I minimized the risks to anonymity by
guarding the names of all participants from other participants. All data obtained from the
study has been kept strictly confidential and no names have been used in the final
dissertation.
Limitations
Even though there has been a considerable amount of effort taken to ensure the
validity of the data gathered from the interviews, there are still limitations to this study. One
limitation was the generalizability of this study. With a small sample size of nine schools
in two districts, it was acknowledged that the results may not be representative of what
is happening in the majority of California's schools. Only two schools
represented one of the four school contexts. In addition, as with case study research, there
was the possibility of researcher bias. As an educator in NHUSD for nine years, I was
cognizant of my own judgments and beliefs about data use, accountability, and
educational equity. I continuously gave careful consideration to my integrity as a
researcher to ensure maximum validity and reliability.
Summary of Research Methodology
In this chapter, I have described the research design, sample and population,
instrumentation, data collection plan, proposed data analysis, ethical considerations, and
limitations that were used to conduct my research study. In the subsequent chapters, I
will describe my research findings and present my data analysis.
CHAPTER FOUR
Findings
Introduction
The purpose of this study was to investigate the similarities and differences of
data use in schools of various performance and poverty levels. The following sections
are devoted to the findings of this qualitative study and are organized around the
overarching research question: How does the use of data vary in schools that differ in
accountability status and socioeconomic levels? In addition, three sub-questions also
guided the focus of the study.
a. How do the perceived pressures of accountability influence data use in each
type of school?
b. To what extent are there different motivations to use data in differing
contexts?
c. How do educators' expectations of the school population influence data use in
each type of school?
Through the coding and analysis of data garnered through semi-structured
interviews with administrators, coordinators, coaches, and teachers, major themes and sub-
themes were extracted according to the research questions. This chapter will present an
analysis of each research question and the themes that emerged. The major themes that
will be examined are variation of data use, pressures of accountability, motivations to use
data, and expectations of the school population. Again, the aim of the study was to
illuminate how these themes vary across four different types of schools, high and low
performing, with high and low poverty levels.
It is important to note that schools characteristic of the high poverty context
varied in performance levels to a greater degree than those schools in the low poverty
context. Specifically, there was approximately a 200-point difference in Academic
Performance Index (API) scores between high and low performing schools in high
poverty areas. However, between high and low performing schools in low poverty
communities, the API point difference was just 60. As mentioned before in Chapter
Three, the contexts of high and low performing were determined by Similar Schools
Ranking. For example, schools with API scores of 890 with a Similar Schools Ranking
of 2 were considered low performing for the purposes of this dissertation. In all other
instances, an 890 API score would be considered high performing. In the same way, a
school with a 730 API and a Similar Schools Ranking of 10 was considered high
performing, though many would view a score less than 800 as low performing.
Variation of Data Use
With nearly eight years of momentum behind it, the federal law, No Child Left
Behind (NCLB), has forever altered the state of education. By tightening the reins on
accountability, it was hoped that NCLB would place greater focus on Americas poorest,
low performing schools through the increased use of test-based accountability systems
and data driven decision-making. As a result, some districts and schools have begun to
witness immense pressures to meet testing requirements and to examine testing data with
a critical eye. But with each passing year, more schools have fallen into Program
Improvement (PI) status as testing requirements have become more rigorous and
stringent. Many educators have questioned the intention of NCLB as schools wrought
with challenges are held to the same standards as schools in affluent communities. There
62
is no question to the benefits of examining data as a reform strategy, as schools can get a
clear picture of students needs and monitor those needs throughout the school year. But
are certain types of schools forced to use data in particular ways to meet the requirements
of NCLB? This question will hopefully be illuminated in the forthcoming sections.
Types of Student Assessment Data
When looking at the types of student assessment data used by participants, there
was consistency between all nine schools regardless of context. Within New Horizons
Unified School District (NHUSD), there was no surprise that across all eight schools data
analysis focused primarily on student assessment data that were most readily available to
all participants: the California Standards Test (CST), California Achievement Test, Sixth
Edition (CAT-6), California English Language Development Test (CELDT), district
benchmarks in language arts, mathematics, and science, end of story or end of chapter
assessments, and teacher-generated assessments. The CST is a state-mandated
standardized assessment that is given once at the end of the school year in the spring.
Schools typically receive CST data within three months, usually in mid-August. District
benchmarks in Open Court (language arts program), mathematics, and science are
mandated by the district and are given 3 to 4 times a year following district pacing plans.
Open Court assessment data are inputted by teachers or Literacy Coaches within a
window and are accessible at any time. Mathematics and science assessments are usually
sent down to the district and are accessible two days later. Although schools in
Summerfield Unified School District (SUSD) currently do not utilize district-wide
benchmark assessments, these participants similarly focused most of their data use on
CSTs, chapter tests offered by their literacy and mathematics programs, and/or teacher-
generated assessments.
More specifically, teachers in NHUSD had access to formative assessment data
(Open Court, math, science, and teacher-generated assessments) and usually received
summative data (CST, CAT-6, and CELDT) from their administrators or coordinators.
One teacher described the kinds of data related to classroom practices that she has access
to:
We get the Open Court tests that we give every 6 to 8 weeks, and we have those
printed out. We get the math quarterly tests. We get the standardized tests back
that they take at the end of the year. Plus, whatever we do in our own classroom,
you know, we keep records.
Likewise, administrators stated that they use both formative and summative data, and
could access information for every student at the school through student data systems.
Literacy coaches have access to all Open Court data through NHUSD's Student Online
and Reporting System (SOAR). Mathematics coaches obtain individual as well as grade-
level and school-wide math scores through Princeton Review. English Learner (EL) and
Title I Coordinators have access to CELDT, CST, and CAT-6 data through Student
Information Systems (SIS). As this math coach explained:
I have access to CST scores, CAT-6 scores, basically all the STAR tests that the
students do. I have access to quarterly assessment data for mathematics that we
do three or four times a year, and I have access to it for not only our school but for
our local district and our district as a whole. I mean those are the really hard
pieces of data we have. We also base a lot of decisions on classroom
observations, conversations with teachers, and conversations with administrators
and things like that. A little more qualitative, I would say.
It is apparent from his explanation that many administrators, coordinators, and coaches
also used classroom observations as ways to assess student progress and teacher
effectiveness. For the most part, these out-of-classroom participants reported having
greater access to various forms of data within their school, across schools, and throughout
their local district.
Although there were consistencies in the types of assessments accessible to
participants across the district, those similarities dissipated when looking into how those
data were used specifically to target certain types of students or certain areas of
underachievement in certain schools. The greatest differences in variation of data use
were gleaned from schools of different poverty levels. Schools with both high
and low API scores in high poverty areas focused data use on certain types of students.
On the other hand, these students looked very different in low poverty schools of high
and low performance levels, as discussed below.
Focus on Certain Types of Students
Basic level students. One of the most prominent themes that emerged from the
interviews was the variation of data use on certain types of students between schools of
different performance and poverty levels. It became abundantly clear that high poverty
schools, both high and low performing, placed an emphasis on those students
who in essence would give the school the largest API increase on the CST or, as some
participants clearly stated, "the biggest bang for the buck." As discussed earlier, these
students, also known as "bubble" students, are those middle band students who are on the
verge of becoming Proficient. Therefore, in some schools, all monies for intervention
and after-school tutoring went solely to the students scoring at the Basic level. Even
more eye opening was the blatant identification of these students as described by one
coach in a high performing, high poverty school:
What we do is...we have bands, in other words, basic goes from this number to
this number, so any student that lands within those numbers, we want to know
who they are, by name...the mentality is if you focus on the Basic, if you were to
move, say within the class theres about six or seven students that are Basic, if
you are able to move at least five of those to Proficient that you know in regards
to percentages and numbers thats going to make a big difference.
Through this specific targeting, Basic level students were tracked, their
progress noted, as well as their influence on the school's API. And as one administrator
lamented to me, this particular focus on API points is crucial because it is what the public
sees, what the parents know, and what is always reported in the newspapers.
With so much time, monies, and intervention going to the Basic students, these
high poverty schools often did not address the needs of the other sides of the spectrum:
the Far Below Basic/Below Basic and the Advanced/Proficient students. In some cases,
teachers were told by their administrators to "forget about them [Far Below Basic
students]" and that "they didn't get it last time, they're not going to get it now, they're not
going to get it any time soon." These types of sentiments seemed to leave many teachers
feeling hopeless and angry, but despite the school's focus on Basic students, they knew in
their hearts that meeting the needs of their most struggling students was of dire
importance. A teacher at a low performing, high poverty school mirrored these feelings.
I taught first grade for nine years and I taught every student, and the lowest
achieving students. Those were the ones who spent the most time with me
because I wanted everyone ready for second grade. So that's a big problem for
me when I hear we are going to focus on six children, and which doesn't
necessarily mean that we are going to disregard the other children, but I am afraid
it is happening at least in some classrooms.
Similarly, in low performing, high poverty schools, the students who were the
most successful on the CST and needed the most challenge in their classrooms were often
subjected to the same curriculum as their lower performing peers. A frustrated teacher at
a low performing, high poverty school stated:
You know the playing field is always unequal most for the kids at the top. The
poor kids who come to this school that are really smart, that really want to learn,
and they get to middle and high schools, and some of them to ivy league schools,
and they realize that they don't know anything. Because they are not getting an
equal education, because it's dumbed down because there is no focus on them.
Therefore, it became clear that the focus on particular students in order to attain API
targets was a defining characteristic in schools with Free and Reduced Lunch percentages
of 94 percent and above, with both high and low performance levels.
Subgroups. In addition to focusing on the Basic students, high poverty schools
that were both high and low performing also placed an emphasis on certain subgroups
more often than schools in more affluent communities where these subgroups were not
significant in number. As mentioned before, one of the many changes of NCLB was the
use of disaggregated data based on subgroup or ethnicity to determine the performance
level of a school. It was hoped that data from subgroups such as English Learners,
African-American students, students living in poverty, or students with disabilities would
highlight the achievement gap between these groups of students and their White and
Asian peers. In
order for schools to meet Adequate Yearly Progress (AYP), subgroups must reach their
yearly growth targets in English Language Arts and Mathematics on the CST. As high
poverty schools began to notice that certain subgroups were not making AYP, there was a
concerted effort to target those groups of students and give them as much attention
needed to help them improve. A Title I Coordinator at a low performing, high poverty
school highlighted this issue through his statement:
I'll give you an example. The year before this year, we met our math objectives
for AYP except for one subgroup, African-American....and when we didn't hit
AYP because of that subgroup, we were very disappointed. So to ensure that that
never happened again, you bet we targeted African-Americans. And we targeted
them through intervention and whatever we felt that needed to take place in order
for that group to meet AYP in math.
In all cases, intervention was not withheld from other subgroups, such as English Learners
or students with disabilities, for the benefit of one group. But there was again the use of
CST data to focus attention and monies on a targeted group of students in these lower
performing schools.
Far Below Basic and Below Basic Students. Conversely, in low poverty schools,
data use varied significantly in relation to which students received the most attention in
their schools. According to CST data, these schools have historically had large
percentages of Advanced and Proficient students. Therefore, the curricular focus was
placed on the few Far Below Basic and Below Basic students in each class. These
administrators and coordinators were cognizant of the fact that moving Far Below Basic
students to Basic or Proficient resulted in much higher API points than moving Basic
students to Proficient, as illustrated by the comment made by one administrator at a very
high performing, very low poverty school.
And since our students are achieving, the question is how to kick it up. So again,
nobody is falling through the cracks...your score is rated by the low students,
because the low students are the ones that will determine the API. Although
most people think of it as how many you can get Advanced and Proficient. If you
can't move your lowest students, your API will never go anywhere...It's the low
students that have to move. So instead of the focus on the struggling-to-get-there
group, you have to focus on the lowest group and move them up.
With high percentages of students already achieving at elevated levels, 80 to 90 percent
in some cases, and insignificant numbers of middle band students, these schools realized
that their biggest bang for the buck came from moving the most struggling students.
Therefore, these children were pulled out for extra resource time in the afternoon through
intervention programs like the Learning Center and were monitored by the
administrative or instructional team after each assessment period.
Individual students and all students. In addition, high and low performing
schools, especially those in low poverty areas, placed much of their focus on individual
students. As discussed in the previous section, when these schools looked at their
aggregate CST data, they saw that a high percentage of students were performing at the
level of Proficient or higher in one area, or in essence that a great number of children
were doing very well. In order to paint a more detailed picture of the 20 percent not
achieving, it became incumbent upon these schools to look deeper into the data and to look
at individual students and their needs. One administrator at a lower performing, low
poverty school echoed these sentiments.
One of the things that with a high performing school is to look at...you have to
look more individually. Because if you just look at the large aggregate data, it's
not going to tell you much. Again, some of the grade levels, 70 percent of the
students are proficient or advanced, so that's not going to tell you anything. So
you really have to look at individual students and look at where their needs are
and focus on that.... it [the data] can be used for is to really narrow in on those
students, more on an individual level to look at specific students and say, "Here's
this student's need," as opposed to "at third grade, we missed the boat on whatever,
writing applications," because you're not going to see that broad pattern in a
grade level. You're going to see it more individually. And in that respect, you're
going to see different students have different needs.
Finally, low poverty schools were more likely than high poverty schools to place greater
emphasis on using data for all students, as opposed to one band of students or to one
subgroup. In many cases, there was little distinction in using data for certain groups of
students in low poverty schools, with the exception of Far Below Basics. In these
schools, data were used primarily for all students.
The one exception came from a coach at a very low performing, high poverty
school who stated that although he was aware of the connection between moving certain
groups of students and API growth, his school has always focused on giving all they can
for each and every student.
I think the message here has pretty much always been we need to teach every
student. That every student deserves 100 percent from you, to do everything you
can...for each and every student. I will say that we are aware of the fact that
moving a Far Below Basic student to a Below Basic student results in greater API
gain than moving a Proficient student to Advanced or whatever...So I see that
people are aware of that, but I don't believe that has driven what has happened
around here, at least from what I have seen.
Perhaps this untargeted approach has contributed to the school's Similar Schools Ranking
of 1 and an API score that has yet to reach 600. Or perhaps this has also led to the lack of
differentiation across all levels of students and somewhat ineffective teaching practices.
Regardless of the circumstances that have led the school to this point, it makes for an
interesting comparison between different schools and the various uses of data for certain
types of students or even no students at all. In the end, this use of data in lower
performing, high poverty schools does not seem to fit the accountability goals of
policymakers to ensure that all students are meeting proficiency.
Another salient characteristic of data use in high performing schools, regardless of
poverty levels, was the way in which administrators went above and beyond in
utilizing data. These administrators used data religiously to ensure that all children were
achieving and making progress. Perhaps this relentless focus on using data contributed
to their high Similar Schools Rankings and their high API scores.
In summary, it is clear that there are significant differences in the
way student assessment data were used to target certain groups of students depending on
the poverty levels of the school. Figure 4 represents a continuum of responses from
participants in all types of schools. As indicated, high and low performing schools with
low poverty levels placed greater focus on those students at the bottom of the spectrum,
the Far Below Basics and Below Basics, whereas high poverty schools, regardless of API
scores, tended to focus on Basic students. Schools that were low to high performing and
located in low poverty areas also focused much of their attention on all students.
Figure 4. Continuum of responses from high/low performing schools in low to high
poverty areas on how data is focused on certain types of students. [The figure arrays
representative participant quotations along a continuum: high/low performing, low
poverty schools described focusing on all students and on their lowest performing (Far
Below Basic and Below Basic) students, while high/low performing, high poverty
schools described concentrating attention and intervention on Basic level students.]
As high poverty schools witnessed immense pressures of accountability,
they turned their focus to targeting certain students to meet those testing requirements, as
Figure 4 indicates. Additionally, as the upcoming section describes, these types of
schools also believed that focusing on certain subject areas could help to raise
overall test scores.
Focus on Broader Content Clusters and Departmentalizing
As many students in lower performing, high poverty schools struggle with certain
skills, such as reading comprehension and writing strategies, these types of schools were
more apt to shore up these areas in different ways. When asked questions about how data
were used in their schools, lower performing schools in high poverty areas tended to use
data to focus on broad content clusters on the CST that students did not perform well in.
In many cases, the analysis of student assessment data from the CST or district
benchmarks resulted in grade-level foci in certain skills, such as drawing conclusions,
making inferences, or combining sentences. With greater numbers of lower performing
students, broad trends across grade-level content areas were evident, and many
administrators focused their professional development on these areas. Likewise, it was
expected that explicit instruction in these content clusters would occur in every
classroom. One administrator who has experiences at both low and high performing
schools summed up this finding perfectly.
The data is used, I think, in a broader range at a low performing school, because
you're seeing broad trends. At a high performing school, you don't use the data in
such a broad way, because again, when you look at a lot of that aggregate data, all
it tells you is that you're doing wonderful...again, even when you look at the
content clusters to focus on the third grade, here was our area of need. Our
average score was 30 percent correct; the statewide average was 70 percent
correct; that's really where our biggest need is and to focus that across a grade
level.
In these lower performing schools, participants talked about realizing that all they had to accomplish was to have their students answer four or five more questions correctly on a particular section of the CST. As a coordinator in a low performing, high poverty school expressed:
What does that mean in real terms? That means four or five questions. Can you
do four or five questions? Can you help your students? And these areas they
need help in, do you think in one year you can teach them these skills in four or
five questions? So you break down like that and now the teacher starts to see that
as a roadmap on how to get to AYP. I can do this.
Although this school understood that more questions needed to be answered correctly in specific sections, knowledge of best practices and how to get there still seemed to be lacking.
Not only were higher performing schools in high poverty areas extremely well-informed about which content areas carried the greatest weight in CST scores, but they also made connections to teaching strategies. These participants candidly shared that sections such as the Literary Response and Analysis section of the CST typically have very few questions; therefore, students need to answer more than half of those questions correctly to pass the section. To meet those targets, much time was spent on that strand through test preparation, explicit classroom instruction, and intervention. Similarly, these schools delved into the district curriculum to discover which teaching strategies would give students the best foundation in language arts or math. A salient comment illustrating this data-based strategy was as follows:
I constantly looked at the standards. We broke it down to sub-strands, like Literary Response and Analysis, Writing Strategies, Reading Comp. And with the principal, she and I really said, when I looked at the part of OCR that could give us the most bang for our buck as far as standards, we were like, "Oh my God, it's Comprehension Skills." And so we really devoted ourselves to Comprehension Skills.
Perhaps this concentrated focus on standards-based instruction, teaching strategies, and explicit classroom instruction is what set these higher performing schools apart from lower performing schools of similar poverty levels. Also inherent in these schools was differentiated instruction, as one teacher revealed:
So we go standard by standard, then see which kids are not meeting those
standards, then we group those. What we used to do, we used to group each area,
then we would...what do you call it, mix. So I was really good at vocabulary or
comprehension skills, and then the teacher was really good at vocabulary, and
then someone would be good at grammar, structure, and conventions. And then
so we would each group the kids. Today, I have all the Far Below Basics and the
Below Basics, and mine is comprehension skills, that wouldn't be the same lesson
I would do for the Proficient and Advanced two days later. So we would each
mix those kids around.
Considering the other context, when asked about his school's next steps regarding data-driven decision making, this principal from a low performing, high poverty school commented that more standards-based instruction was needed.
I think we are good in the data-driven decision making...Our next steps overall are to start work more on the standards-based instruction to show them [teachers] that the data says that's where we need to be now. So let's follow that. And see what happens.
He seemed to understand that this type of instruction could be a reason why his school was performing below higher performing schools with similar poverty levels.
Furthermore, as schools and grade levels began to focus on under-performing content clusters of the CST, administrators in high poverty areas also paired teachers who taught these content areas successfully with groups of students who had scored low in the same areas. Likewise, in other schools, in the months prior to testing, teachers were asked to departmentalize in various strands of language arts or math: the areas in which they had the most expertise and were most comfortable. Using these strategies, administrators felt more at ease knowing that their most talented teachers were instructing the students who would give them the best growth on the CST.
In summary, high poverty schools used much more of their student assessment data than low poverty schools did to spotlight broad trends and patterns of student underachievement on the CST and district benchmarks. Professional development, test preparation, intervention, and classroom instruction emphasized these areas, and teachers were expected to follow through in ameliorating them. There seemed to be more explicit teaching strategies and standards-based instruction occurring in high performing schools than in low performing schools of similar poverty levels. Throughout all high poverty schools, matching teachers with students and departmentalizing were other methods of ensuring that patterns and trends of low achievement in content clusters were adequately addressed. Administrators at high poverty schools used data to pair teachers with strong foundations in certain areas with students who needed help in those same areas.
As accountability pressures differed across schools, it became apparent that the use of data was equally varied. Each type of school tailored its data use to fit the needs of its students and subgroups in order to best meet yearly targets on the CST. As Figure 5 illustrates, parallel uses of data were found in schools with similar levels of poverty. High poverty schools used student assessment data for Basic students and certain subgroups and to highlight broad content clusters. In contrast, low poverty schools were more apt to use data for Far Below Basic and Below Basic students as well as for all students. In addition, high performing schools, independent of poverty levels, showed a stronger understanding of the connection between student underachievement and effective teaching strategies. The pressures of accountability and their influence on data use are discussed in more detail in the next section.
Low Poverty, Low Performing:
* Used data to inform interventions for Far Below and Below Basic students
* Used data for all students

Low Poverty, High Performing:
* Used data to inform interventions for Far Below and Below Basic students
* Used data for all students
* Presence of explicit teaching strategies and standards-based instruction

High Poverty, Low Performing:
* Used data to inform interventions for Basic level students
* Used data to focus on subgroups
* Used data to spotlight broad content clusters that were weak
* Matching teachers with students, departmentalizing

High Poverty, High Performing:
* Used data to inform interventions for Basic level students
* Used data to focus on subgroups
* Used data to spotlight broad content clusters that were weak
* Matching teachers with students, departmentalizing
* Presence of explicit teaching strategies and standards-based instruction

Figure 5. Matrix depicting the variation of data use across all types of schools
The Pressures of Accountability
With mounting pressure from test-based accountability systems and meeting yearly growth targets, all participants acknowledged the increased use of data, both district-wide and school-wide, though their feelings about these pressures and the use of data spanned the entire continuum. When asked how the pressures of accountability have influenced data use, the following comments from participants across all types of schools, from high and low performing to high and low poverty, sum up the dichotomy of these pressures.
I think when I first started teaching, there was so much less testing, that I felt like the pressure level wasn't as high as it is today...I feel that we push the students so much harder than we had to 20 years ago. I know when I was in third grade, I was not doing what I'm now teaching third graders to do. [Teacher, Low Performing/Low Poverty School]

It [NCLB] has created accountability. More students are succeeding every day, and we are not leaving as many children behind as we used to. At the same time, the pressures create an environment where people are very stressed which creates not a paranoid, but less productive place. [Coordinator, High Performing/High Poverty School]

I am not really worried about AYP and API to tell you the truth. I just want to see all the children making progress. [English Learner Coordinator, Low Performing/High Poverty School]

I think data is mandatory...again, we push it, we really push it down everybody's throat. [Title I Coordinator, Low Performing/High Poverty School]

The expectation is that my school needs to achieve, and that's something that I grapple with and I am just constantly thinking of how I can raise test scores. It kind of consumes me, it's important. Everything kind of revolves around that. [Principal, Low Performing/Low Poverty School]

Well, the pressure honestly is to make sure that students achieve and can be successful in society. I am more pressured by that than any federal anything. I see the faces of those little children every day, I see them, and they are my children and I just feel pressured if we don't do what we need to do to set a good foundation for them. That's my biggest pressure. [Principal, High Performing/High Poverty School]
As these comments indicate, participants' personal feelings about accountability and testing often affected the ways in which they used data. However, throughout all schools, it was apparent that educators believed data should be used to guide the curriculum as a necessary part of teaching. Catch phrases such as "data informs the instruction" or "data drives the curriculum" were echoed in every school regardless of API scores or poverty levels. Although the uses of data were understood, whether and how data were used differed across schools. Furthermore, many participants also voiced their concern that NHUSD as a whole had become too data-driven and that too much emphasis was placed on testing. As one upper grade teacher lamented, in a single year her class would take four Open Court Assessments, four math quarterly assessments, three science periodic assessments, and the CST, plus any other tests she generated herself. This teacher's concern was that there was simply not enough time to teach all the standards required for fifth grade, let alone the material on each of the tests.
High poverty schools. Although all schools reported some increased pressure due to NCLB, the most striking comments came from participants working in challenging, high poverty schools. In these schools, there seemed to be an overwhelming sense of desperation due to the heightened pressures of accountability, NCLB, and their schools' yearly progress. Administrators, coordinators, coaches, and teachers often expressed outright anger and frustration at the ways the district used student assessment data to pressure schools to meet growth targets. One administrator in a low performing, high poverty school captured these feelings poignantly:
It's [data] being used to beat us up, beat us down, tell us what we're not doing right. Not usually used in a positive way... None of us feel safe, secure, none of us feel like we want to take any chances.
In both high and low performing schools in high poverty communities, with API scores ranging from 570 to 780, participants expressed their understanding of the district's and/or administration's expectation that student assessment data, whether from the CST, district benchmarks, or teacher-generated assessments, would be used to guide and monitor instruction. All 17 interviewees from these schools felt some type of pressure, most citing federal and state pressures and others citing personal pressures to help all children be successful.
Low poverty schools. Conversely, participants from both high and low performing schools (API scores in the high 800s and 900s) in low poverty areas usually felt less pressure to focus on testing and data use, although there was one exception from a newly appointed principal. For the most part, the sentiments were similar in that having high performing students year in and year out lessened the pressures of accountability. These feelings are best summarized by the following statements made by participants at schools with API scores of 890 or above.
I don't feel the pressure at all. Because we don't have that, because we are a high performing school, and because we go above and beyond. I really don't. [Principal, High Performing/Low Poverty School]

I know we're high performing. I know each group of kids I get is going to be, for the most part, very high performing. They do just fine on the tests...so do I feel the pressure? No. I would probably rephrase it that I feel the nuisance and the distraction of it, but I don't feel pressure from it, personally. [Teacher, Low Performing/Low Poverty School]

No, I don't feel any [pressures] and we don't talk about it here because we don't need to...no one is concerned about us. [Teacher, Low Performing/Low Poverty School]
Also, many participants from very high performing schools in low poverty areas were quick to credit the parents of their students and the families' access to outside resources as the main reasons for their students' high achievement. Many attributed the students' ability to read already when they began kindergarten to very involved parents.

I also feel like being at a higher socioeconomic area the students have a little more support whether it be from their parents or an outside resource like a tutor that their parents can afford to send them to...it's not just us, I don't think it's just us, they come with a lot of knowledge.
Therefore, with less pressure to meet yearly targets and greater parental support, it was clear that the majority of these participants did not share the feelings of desperation that participants in high poverty schools expressed.
The Focus on Data Means a Focus on Testing
As high poverty schools tended to spend more time reviewing and analyzing student assessment data, an inevitable consequence was a greater focus on testing. Many teachers talked about the overabundance of testing and the consequences for students who learn more about bubbling in tests than about real learning.

I think we test way too much in New Horizons Unified...I feel that sometimes all we're creating is a standardized test taker instead of a critical thinker, a person that can think on their own and solve problems.
Nearly all schools used some sort of test preparation, but the duration and extent of test prep varied among schools. Some schools in high poverty communities
reported using test prep materials throughout the whole year with an extensive review
period prior to testing. This coordinator from a high performing, high poverty school
described the test preparation program at her school.
It's a two and half month plan; it's a 10 week plan. It goes standard by standard by standard. Every single day. Friday's all test prep day where we build stamina by testing them for two hours straight. For 10 weeks.
Similarly, a math coach from a high performing, high poverty school described the
culture of test prep at his school.
I mean...three weeks before the test, she [the principal] stopped almost all
instruction to have teachers just concentrate on nothing but test prep. So because
the culture of the school was the test, teachers, months before the test, abandoned
actual instruction and concentrated on...not necessarily abandoned, but again,
made the test prepping more of an emphasis as opposed to actual instruction in the
classroom.
But a test prep culture was not confined to high poverty schools. As this third grade teacher from a low performing, low poverty school explained:
We have test prep books that we still give. We usually don't start that until like March-ish. So I can honestly say test prep for standardized test taking starts about March and I will spend about half an hour on it, two to three days a week.
Moreover, there was evidence of test preparation at one of the highest performing
schools, with the lowest levels of poverty. This first grade teacher revealed:
There is a lot of test prep. I mean we're encouraged to do it. Not teaching to the
test so much, as just making sure that when those tests come around, that
everyone is reviewing and that we've taught all that we need to teach. Very high
expectations are at this school, both from parents, teachers, and the
administrators.
As illustrated, the use of test preparation was common in most, if not all, of the nine schools, as the pressures of accountability have seemingly infiltrated all types of schools regardless of API scores and poverty levels.
The Focus on Testing Means Less Time for the Arts
Consequently, as lower performing schools in high poverty communities focused their time on testing, other subjects such as art, music, physical education, science, and social studies were often ignored or glossed over quickly. There was an obvious discrepancy in the room environments of the classrooms the researcher visited. In both high and low performing schools in low poverty areas, classrooms and hallways were beautifully adorned with student artwork and illustrations. By contrast, only one of the five high poverty schools that participated in the study was highly decorated with student art pieces. This observation likely reflected classroom instruction as well: teachers in these schools spent more time teaching language arts and mathematics, and less time teaching the arts, to the detriment of the students. An administrator voiced this concern:
Part of me says that we've become so focused on that big aggregate data that we
lose the individual in there; that we forget that students need other things besides
reading, writing and arithmetic...people are not seeing how other curricular areas
fit into the picture. And really, those are the areas that students, those are
preferred student activities: science, art, social studies, for some students, all
those are high interest and areas that really build critical thinking and I think that
gets lost in it.
It became abundantly clear that the students in lower performing schools with high poverty levels performed on tests rather than in music concerts or art shows.
Figure 6 shows a Venn diagram comparing the responses of schools at various poverty levels. Once again, the poverty levels of schools revealed greater differences in accountability pressures than did the schools' performance levels. In understanding how the perceived pressures of accountability influenced data use in various types of schools, the main difference was the greater focus on data in high poverty schools of both high and low performance levels. The threats of sanctions and the stigma of Program Improvement caused many administrators in these schools to focus on data as a tool to boost and monitor their students' scores. Accordingly, as these schools focused more time on testing, subjects such as music and art were often neglected. In addition, a theme common to all schools was the use of test preparation; this shared finding is illustrated in the middle, overlapping section of the Venn diagram. Even at high performing schools, participants reported that testing and test preparation were "all part of the bigger game," as one third grade teacher lamented. With the pressure to excel at all levels, perhaps this is why so many schools felt the need to have all students acquire test-taking strategies. Furthermore, teachers also said that the increased pressure of testing had negative ramifications for all students, as schooling today has "forgotten the little kid inside."
High Poverty Schools (Both High and Low Performing):
• Sense of desperation due to NCLB's possible sanctions: "The pressure to improve is much, much greater for teachers and the school site."
• Greater need to focus on data to monitor and increase scores of certain students
• More focus on language arts and math, absence of the arts

Shared by both:
• Use of test prep
• Belief that accountability is important, but students are over-assessed

Low Poverty Schools (Both High and Low Performing):
• "We are high-performing. No one is concerned about us."
• Less need to focus on data as a whole
• Teaching all areas of the curriculum including music, art, social studies, science, and physical education

Figure 6. Comparison of thoughts between high poverty and low poverty schools on how the perceived pressures of accountability affect data use
Different Motivations to Use Data
As the previous themes indicate, a school's level of poverty had a tremendous impact on how data were used to target certain students and subgroups to meet the pressures of NCLB and state mandates. Additionally, the extent to which motivations to use data differed across various types of schools was also significant. However, particular motivations to use data could not be attributed to certain types of schools.
As the participants' feelings and opinions regarding data varied considerably at each school, it became apparent that three factors influenced their reasons to use data: the district's expectations, the site administration's expectations, and their own personal opinions about data. As mentioned before, every participant understood that NHUSD firmly expected all educators to use data to drive the curriculum. Although some participants chuckled at their own responses of "the district's expectation is that data informs the instruction," it became very clear that NHUSD as a whole has reiterated the data slogan quite a few times with its employees. Conversely, as SUSD does not utilize district-wide assessments, there was little push to use data to make school- and class-wide decisions. However, the district's expectations alone did not fully explain why individuals used data at their school sites or in their classrooms to make decisions.
Another important factor was site administrators' expectations of how their school site personnel should use data. Every site administrator who participated in the study recognized the value and importance of data, yet some felt that data was not the sole answer to all problems; data is not the "end all, be all." How site administrators passed along their expectations about data to their staff, and how they held their teachers accountable for the data, made a difference in the staff's motivation to use it. In other words, how data use was implemented fluctuated tremendously from administrator to administrator. Nevertheless, administrators who held high expectations of their staff and held them accountable for using data in complete ways saw far more of their teachers analyzing and using data. In most cases, these administrators led high performing schools. Finally, motivations to use data varied the most among individual coordinators, coaches, and teachers. The emerging themes can be best described in three main sections: foundation for data use, beliefs on data use, and challenges.
Foundation for Data Use
Leadership and accountability. Another way to understand the motivations to use data was through close examination of the leaders who participated in the study. It became apparent that leaders from high performing schools, regardless of poverty levels, who had established a strong foundation in effective teaching, stressed the importance of collaboration among teachers, and had staff buy-in on a school-wide vision to help students achieve were the most successful in using data on a persistent basis. These leaders required that grade levels collaborate in developing standards-based lessons, that action plans be followed through with quality evidence, and that data be used consistently as a vehicle for monitoring students' progress. They held their staff accountable for the progress of their students and gave them the necessary tools to
be successful. For teachers, the biggest tool was the allocation of time for grade levels to work together to analyze the data and develop standards-based lessons and action plans. In addition, these leaders provided all the necessary information for each student in every subject area and were extremely knowledgeable about data. This teacher from a high performing, high poverty school spoke of her principal's support in the use of data.

She gives us the data; she has it too. She's very knowledgeable about how to use the data. If you need help, she knows it all. She knows how to use it. I think our whole school runs on the data of our children.
Additionally, teachers at low performing schools who believed that their school lacked a foundation for data use cited administrative accountability to follow through with the school's vision and the need for greater teacher professionalism as ways to improve the use of data.
If you are going to set a plan or vision, you got to make sure that you follow
through and we got to make sure that your teachers are accountable as well.
Teachers, myself included, we just got to be more professional. And understand
that is part of the game that we are playing now in terms of targeting students and
doing things like that.
More specifically, only a few administrators were actually successful in conveying the importance of data to their entire staff and in gaining their complete buy-in. When asked about resistors to using data, one principal from a high performing, low poverty school shared:

I've turned over more than 50 percent of the school staff. And when you've hired a bulk of school and you kind of know which things you're going to weed out first. There are people that have brought in more, there are people that have brought in less, but somebody that's not on board would not last here very long.
Furthermore, principals who have been at their school sites long enough to make cultural shifts in their staff, and who possess the leadership qualities to make those changes happen, may have far fewer ineffective teachers and resistors to using data.
As discussed before, successful administrators provided time for collaboration, had the buy-in of their staff, monitored and tracked their students, and used data to make thoughtful, school-wide decisions. These high performing schools in both high and low poverty areas witnessed greater overall data use by their administrators and teachers. For example, this principal from the highest performing school, with very low poverty levels, detailed her school's use of data.
To begin with, in every assessment period, the teachers all do action plans, which
I guess is kind of standard for New Horizons Unified. And they plan for the
intensive need students, the students that are below, the students that are
benchmark and the students that excel. I have my own charting system for how
they do in English and math. And I color code it...When I look at the children
that remain, these children are being tracked for a significant period of time. And,
of course, there's sometimes children that just did poorly on the test. We monitor
this and there's a constant movement of children out. So, for instance, in these
classes where you see a high density of purple because there are a lot of English
Learners in third grade, those students, to begin with, because you're talking
about how to use data. I prioritize more TA time, because those are where the
students are that need the language support. So there's more TA time, more
volunteers in those classrooms, there are more people scaffolding and teaching
and pre-teaching, so that those students get additional skills so that they don't fall
behind.
Through her detailed explanation there was no question that data for various students was
tracked, and more importantly, this information was used to provide greater primary
language support through the use of paraprofessionals in the classroom.
Additionally, a coordinator in a high performing, high poverty school explained the thorough process of analyzing CST data and district benchmark assessments that her principal went through yearly.

She goes through those test scores. Over and over and over...she spends like weekends on them...She reviews every classroom's data. She notices a trend. For example, only 3 out of 20 students are answering literary response and analysis questions correctly. So she will have the literacy coach provide professional development where the literacy coach gives examples of direct phases of instruction on ways students can master those standards. Or, she will have the coaches go directly to the room and provide demo lessons. Also, she sits in during grade level planning and asks leading questions.
For a teacher at this same school, this is how data use was described:
Data drives all of the instruction and the decision-making. We use data to decide
what to teach, what to put aside for a later time, and ranking the important
standards. We strategically plan the important standards and plan how to
explicitly teach it.
These comments illustrate that these leaders, in both high and low poverty areas, were not only extremely knowledgeable about data, but used this information to make decisions that positively impacted classroom instruction. Perhaps it is the type of leader, more than the type of school context, that affects the use of data.
Buy-in. Administrators who were the most successful in using data to make school and classroom decisions had buy-in from all stakeholders: their teachers, their parents, and their students. As discussed before, staff buy-in for a culture of data-driven decision making is crucial, but carrying that buy-in further to parents and students was an unforeseen finding. Schools with the highest API scores and lowest amounts of Title I funding had the financial and physical support of the parents and equipped them with
information from data so that they could make decisions about how the school would use their monies. For example, at one very high performing school, collaboration among the faculty, parents, and administration resulted in the funding of two after-school programs for kindergarten and first grade students who were struggling with phonemic awareness and oral language skills. (NHUSD does not typically fund intervention for the primary grades.) The principal's vision to not let any students fall through the cracks, which was supported by the data, led to the buy-in of all stakeholders, and this buy-in played a pivotal role in creating the classes for the after-school program.

And those eight or nine students who would fall through the cracks or would have to get private tutoring and may not be able to afford it, are able to benefit from being in this customized class...And that's how the faculty, the administration, and the parents all come together to have an intervention class.
Furthermore, this principal also spoke of how her students had buy-in regarding their own
progress on CST and district benchmark assessments. She spoke of empowering the
students with strategies to help them improve their learning and track their progress.
Through these self-reflecting strategies, she hoped the students would understand their
need to improve, make the necessary changes, and be proactive and committed to their
achievement. Having worked at other low performing schools in the district, she
acknowledged that a lack of student buy-in about their own test scores could be a
contributing factor in a school's underachievement.
Again, the student has to be more proactive as a buy-in for this. I'm not sure, or
I've actually again worked in low performing schools. There isn't always a buy-
in that the student either knows what's wrong or knows how to fix it or has any
ideas on how to fix it, or is even concerned with how to fix it.
These insights about the importance of stakeholder buy-in regarding data, and the
decisions made from it, from the principal of a very high performing school made a
lasting impression on the researcher.
Data as part of a greater whole. Schools in which data were used as a vehicle to
achieve a school-wide goal, such as exiting PI status or improving best practices,
seemed to have greater motivations to use data. As mentioned before, many lower
performing schools, where a sense of desperation existed, utilized data as a way to
increase student achievement. It is also important to recognize that some schools used
data to reach long-term goals of best practices and collaboration. One particular low
performing school has instituted the Instructional Learning Team (Pearson Achievement
Solutions) process as a way to deepen collaboration and the discussion of best practices
through the detailed development of a lesson. Grade-level teams begin the process by
analyzing data and focusing their lesson on an identified need. At this school, time is
always allocated for Instructional Learning Team meetings, and participation is mandatory.
Through this process, the use of data has become an important and necessary part of the
school's culture.
Similarly, in an attempt to improve the overall instruction at her school site, a
principal described her Differentiated Learning Committee as a way for various
members of the school staff to come together to analyze data. Seeing a need for grade
levels to connect data, lesson plans, and action plans, she created this committee on her
own.
Yesterday, our Differentiated Instructional Committee met. That committee
actually works on aligning student work and aligning all of our plans and
resources, our data. We particularly looked at third grade. We looked at third
grade data, we looked at third grade action plans, and we looked at lesson plans.
There was a huge disconnect...for many it was not a match, so that is conversation
that the whole staff needs to hear. And it is what our committee is going to
present back to staff. And our committee consists of, we have four teachers, and
then all of the support staff.
Both of these examples convey a deeper motivation to use data as part of a greater vision
to align the school's needs with classroom instruction, lesson plans, and action plans.
Beliefs About Data Use
When asked about their reasons for using data to make school and
classroom decisions, participants' responses varied widely. No clear theme emerged
linking particular types of schools to certain beliefs about data. In some cases, despite the
value the district and administrations placed on data use, many participants outwardly
resented using certain types of data, mainly CST assessment data, whereas others took
their administrators' attitudes about data to heart and used it very effectively. The following
comments summarize the various opinions about data.
I mean I don't know, I could show you my black data book and it is empty. It has
been empty for 5 years. I am not going to do it. I am not going to sit there and
write data down that we get in 16 different reports, rewrite it into another book. It
is a waste of time and nobody has ever said anything to me. [Teacher, Low
Performing/High Poverty School]
I don't like data, as a matter of fact...I think that data is just a bunch of numbers...
Data [CST data] is -- it gives the school a sense of achievement -- maybe false
achievement. [Teacher, High Performing/High Poverty School]
For a while there, we were getting so data driven, everything was data, data, data.
It was frustrating because it's like this is not the only tool -- these formal
assessments are not the only tool that we use for instruction. [Teacher, High
Performing/High Poverty School]
Honestly, it's [data] just a blueprint as to what we're doing, and I always like to
know the truth. It doesn't lie to me. It tells me exactly what it is that we're
doing. [Title I Coordinator, Low Performing/High Poverty School]
I do. I don't think it's [data] the only thing that's available, but yes, I think it
absolutely has value. [Principal, Low Performing/Low Poverty School]
I feel that incorrect conclusions are drawn from data sometimes when you look at
large samples...I feel like there are a lot of variables that they don't always
consider. And I don't think there is a good understanding of just random
statistical fluctuations as being a normal thing you are going to see in data. [Math
Coach, Low Performing/High Poverty School]
Despite the negativity in many of these comments, there was no
clear indication that certain types of individuals were more or less oriented toward using
data. The age of the participants and the stability of the staff, however, did seem to
characterize those participants who were more or less inclined to use technology and,
hence, data. This challenge to data use will be discussed in forthcoming sections. As
discussed before, because high poverty schools used data in specific ways to target
students, subgroups, and curricular areas, participants from these types of schools
were also more inclined to use data.
Personal incentives. Despite the district's and administrations' push to use data in
particular ways and the somewhat negative feelings associated with the overuse of data,
many teachers were still very cognizant of the advantages of using teacher-generated
assessments and district benchmark data in their own classrooms. Those participants
who spoke very highly of these benefits were motivated to use data for two main reasons:
student achievement and success, and becoming better teachers. These
teachers believed that as their own teaching improved, their students would also progress.
I want them to succeed, but I also use it for myself. I want to improve as a
teacher, so that the next time I teach the same lesson, I can improve, and my
students can improve. So I use this to analyze myself, strengths, weaknesses and
come up with recommendations for myself as a teacher.
The responses of these participants spanned all types of schools, both high and low
performing, in high and low poverty areas. One teacher in a low performing, low
poverty school explained her incentives for using data.
Just wanting to be a better teacher, wanting to see your students' success. And for
me, it's wanting to have that information right in front of me, not what I think is
going on or the feelings I have. It's having something that's really concrete.
Through this teacher's comments, it is clear that she did not want to rely solely on
feelings about her students' progress. Likewise, many participants understood the
importance of concrete information in helping their children be more successful and in
giving them assistance in particular areas. One administrator remarked about how data
could help his teachers:
It helps teachers to focus on what needs to be taught and what they need to focus
more in the classroom. I think, if the teacher knows what they are doing and they
are using the data to support it, it kind of justifies their teaching and why they're
doing it.
Additionally, teachers saw value in using data to focus on their own instruction.
This second grade teacher summed up the importance of using data to reflect on his
practice for the betterment of his students:
You use it to reflect on best practice to be able to target our places where we most
need improvement, formulate our strategies, and our lesson planning based on
what our areas of weaknesses and strengths are and to be able to see along the
way if what we are doing is working before we spend a whole year doing it and
then find out it's not working.
Overwhelmingly, teachers understood that using classroom data was pivotal to
understanding their own teaching practice, which ultimately led to improving the
achievement of their students.
Validity. The most vehement negative responses about data use came from
participants who felt that certain types of data, such as CST data, were not a true, fair
indication of a student's learning. In most cases, these participants taught in low
performing, high poverty schools. Much of the anger stemmed from the fact that CST
data only provided a snapshot in time and was not a valid indicator of student success,
despite the district's reliance on that data to rank schools and monitor their progress. As
one teacher expressively stated:
I don't think it [CST data] tells the truth. In a sense, it gives you a snapshot of one
day, in one space in one time. You don't know which kids got up with a stomach
ache, which kid had to change 6 diapers of his baby brother and sister before he
got here, which kid was up all night with the helicopters, like in this building here,
which kid took the test in 110 degree heat with no air. I mean you don't know
that. I still think there is a lot of bias even in some of the questions, you know.
They use certain vocabulary; they reference certain things like country clubs,
camps, certain words that they use. You know, so I think it's more so just the
number 2 pencil I blame more than anything else...it is one tool but it would be
the last one that I would look at to diagnose a child's progress or needs.
In all, most teachers were reluctant to treat CST data as the most important tool,
yet knew that some information could be gleaned from it. The following comment
illustrates the evolution of a current administrator's views about CST data.
I think it's important, especially as an administrator. I know as a teacher I
used to kind of overlook it because there's this mindset that CST scores aren't
really important. They don't really give you the entire picture of what a child
can do or what teachers can do in the classroom. But it does provide
important information, a snapshot in time of what the students are doing. I
think as an administrator, that's one of the few common assessments that we
have that we can, that everyone, the entire staff can relate to and look into
and see some trends. And because of NCLB, it's even more imperative to
meet those benchmarks.
As this administrator illuminates, this type of data did provide a starting point from a
common, state-wide assessment when perhaps little information was known about a
specific student and his/her needs at the beginning of the year. But again, many took it
with a grain of salt, as CST data was often untimely and, in some cases, lacked validity.
For the most part, teachers in NHUSD saw district benchmark assessments as carrying
more validity, despite the fact that they are not standardized. Although there were no
blatant revelations of cheating at particular school sites, some participants questioned
the validity of test results given the rampant test preparation, or what they
referred to as "teaching to the test" strategies.
We use the Measuring Up program, so that is kind of like clone questions [of the
CST]. Basically again...validity. Is it really...are we really improving or are we
teaching them how to answer a set of questions? And that goes back to the
question of validity again, I mean, essentially that is teaching to the test, and you
know, personally I don't like it...but again, there is pressure.
Because of their discomfort with standardized test data, many teachers focused on
other forms of data that they felt were more reliable. Many teachers reported that they
placed more value on district and teacher-generated assessments, as these assessments
tested what the children were exposed to on a daily basis. This teacher spoke about the
district's Open Court Reading (OCR) assessments:
The OCR benchmark's a little different because they are more directly related
to what you are actually teaching, you know. Am I teaching inferential stuff?
Am I teaching this and that? Vocabulary that I taught. Because they have been
totally exposed to it, as opposed to they are going to open the CST, and it is a cold
read. So yes, I like the Open Court better, but I also...it's not a, what do you call
it, consistent or fair because the tests are not standardized.
Although many teachers expressed a preference for tests taken within the context of
their daily curriculum, they also understood that these forms of data were not
standardized and that the skills on each benchmark assessment were different each time.
Compared with teachers, administrators and instructional team members, such as
coaches and coordinators, used both CST and district assessment data more frequently
because a significant part of their job responsibilities included the constant monitoring of
all students through the use of data. Again, due to their out-of-classroom positions, these
participants did not have easy access to teacher-generated tests and, therefore, focused
much of their time on data collected from student data systems.
More than just numbers. Another theme that emerged from participants'
motivations to use data began with the idea of the whole child. Many participants'
dissatisfaction with the district's pitch to become so data-driven stemmed from the fact
that data lacked the humanistic side of education. To these individuals, data could never
tell the story of the particular girl sitting in that third grade class whose dad was
incarcerated, or the little boy in fifth grade whose mother did not pass the second grade in
her home country, or even the girl whose mother is a TV star and does not have the time
to help her with homework. These comments were compelling, and it became crystal
clear that schools and their staffs had already begun to realize that data are not only
numbers; data have faces and stories. One teacher mirrored these feelings
precisely:
You know what kind of conflicts were going around among them? What issues
did a kid have? A lot of things have to go into consideration when we're taking
data and we're driving our instruction through it. I can't go against what this child
is going through, he's got family problems and all these things and he's bright,
but I don't know how well he'll perform...So what is the district doing to consider
all the other things that affect the outcome? That's one of the things that makes
me not depend on data.
Because the district has done little to understand the ramifications of these bigger societal
problems that inundate children's lives in all types of communities, in both high and
low poverty and minority and majority neighborhoods, a sense of cynicism exists in
education and in using data.
Data use is not the answer. Another unexpected finding, from a coach in a low
performing, high poverty school, concerned his reluctance to use data: he believed that
using it would not solve his school's problems. With the school at Program Improvement
Year 5 status and an API score that had not reached 600, he firmly believed that data use
was not the answer to his school's issues. Although he understood the value and
importance of data and used it readily throughout his job as a math coach, his insight into
the next steps for his school was invaluable.
I mean using it [data] to make decisions is obviously necessary on a lot of
different levels, but I don't know if that is where we are at as a school. I don't
know that we are at the point where we are going to solve our problems by
looking at a piece of paper and seeing what we did well at and we didn't. I mean
when our scores are where they are, the system is broken. We need to fix the
system rather than fixating on the differences we are going to see in various
pieces of data. So...I mean I think that data is important but I don't think it will
solve all your problems in a school like this just through looking at data.
Similarly, many participants alluded to the idea that data merely puts a band-aid on the
wound and does little to heal it. They believed that the short-term repairs that
result from focusing on data, such as targeting students and test preparation, would do
little to ameliorate bigger problems such as ineffective teaching, poor leadership, and
lack of accountability. In low performing, high poverty schools where, as this participant
described, the system is often broken, the use of data may not be an effective or
efficient way to improve a school's academic achievement.
The Challenges
When asked about the challenges faced in using data, the themes that emerged did
not always align with particular types of schools. For the most part, schools with
strong foundations for data use faced the fewest challenges. The following sections
describe the most salient challenges in using data across all nine schools.
Time. The lack of time to compile data, analyze it, make effective decisions based
on the data, and implement those decisions was cited as a major challenge by nearly all
the schools that participated in the study. As one participant remarked:
It takes a long time to compile these reports. That's something that I think I
personally struggle with because there's not enough time on top of all the duties
you have to do to get that report, especially if it's school-wide. And because I
think I understand that and believe in that, it takes me that much longer to be able
to give teachers data with that information -- real data, not just artificial data that
gives you a score. So that's very challenging and time consuming.
Schools with the most successful data analysis methods had administrations that allocated
time beyond the weekly faculty and grade-level meetings to discuss data. These
administrators went above and beyond to provide additional time for teachers and
grade levels to collaborate and plan their lessons.
Another component of time that proved challenging to all participants, regardless
of school context, was the untimely return of certain forms of assessment data. Many
teachers emphasized that CST data was invariably old data, as it was received
anywhere from three to six months after the students took the test. In many
cases, those students had moved on to other classrooms or had already matriculated to
middle school.
I get their scores usually a couple months after I already have the kids...Now, the
data from last year, it's relevant because it's interesting and maybe there's
something in there, but for the most part it's old. It tells me where they were, it
tells me what they were doing, a snapshot of where they were a long time ago.
And so, really that becomes far less significant to me than two months of
information that I have with them in the classroom.
Although teachers could use that information in broad ways to see how they could
improve their own teaching practices, many viewed these forms of data as less useful
than more current information.
More specifically, one teacher expressed her outrage at being forced to review old
data. As a fifth grade teacher, she did not receive her students' scores until after they had
already left for middle school.
First we focus on how our kids from the prior year [did], which I think is useless, our
kids are gone, you know we already taught them, they didn't get it, let the other
teachers deal with that, let them help those students. I want to know how my kids
are now -- what do they bring into my classroom? How can I help them? Because
if I analyze what I did last year, you know, I'm not gonna help them, because
their weaknesses are not going to be the same.
This challenge of untimely data left many teachers feeling frustrated and less motivated
to engage in data analysis.
Lack of training and transfer to practice. Even though the district had highly
touted the use of data to inform classroom instruction, many participants openly
revealed that they lacked training and a basic understanding of how to read and
interpret data effectively. This challenge was reflected not only in the candid
responses of participants but also in the vast differences in the knowledge about data
that participants possessed. Again, for the most part, teachers with strong
administrative leaders who held them accountable for using data had a much deeper
knowledge of data and of how to transfer decisions made from data into their
classrooms. Likewise, teachers who understood the benefits of data were more motivated
to use it, and some had even independently researched effective strategies for data-driven
decision making.
In all, administrators, coordinators, and coaches were better trained in using data,
and many felt more confident using it. This finding could most likely be attributed to
their regular monthly district meetings. Nevertheless, the district's overall lack of training
for all levels of the school site was apparent at all schools.
As one coordinator revealed, this lack of understanding and comfort with data could have
serious implications for the important decisions teachers make.
I think that they need to be trained on how to use it effectively. I don't think they
are making good use of it. And if they don't understand it they might not be very
comfortable...I don't really see them using data, it is more like this is what we
think they need. Coincidentally, they decided to work on reading comprehension,
which is what the students need. But, I don't think they really had that data in
front of them to make those decisions.
Nearly all participants had access to various forms of data, and most could read
it; the major difference between schools that used data effectively and those that did
not was how that data was transferred into classroom practice. Teachers often found it
challenging to differentiate learning for the two ends of the spectrum, their very high and
their very low students. When asked about the challenges they faced in using data, these
comments best sum up their thoughts.
Sometimes [the challenge is] understanding how to use and create the strategies to
make the improvement. Sometimes -- I try to think of ways to differentiate and
that's difficult. But for the children that need that extra -- you know trying to
think of different ways that I can do that for them. [Teacher, Low
Performing/Low Poverty School]
Just sometimes [the challenge is to] try to come up with the ideas of how are we
going to bring up this low end of the bell curve...So sometimes it's just really
brainstorming and trying to come up with new ideas. [Teacher, Low
Performing/Low Poverty School]
Lack of technology. The lack of new and working equipment contributed to
challenges in data use for at least two of the schools that participated in the
study. At one school located in a high poverty area, the administrator commented that the
district's refusal to service computers that were then eight years old had made data use
difficult. This school had received a grant the previous year to replace the computer lab,
but many teachers were still unable to access data with their own laptops due to various
computer glitches. Similarly, another school's data scanner had malfunctioned and been
broken for over a year. Although NHUSD did not mandate the use of district benchmarks
that year, participants at this school cited the malfunctioning equipment as a
major challenge.
Stable staff. Because participants were not quick to admit their discomfort with data or
technology, this information was usually garnered from the administrators. Although
many administrators were hesitant to talk about how the stability of the teaching staff
affected the use of data, many acknowledged that this correlation did exist, though it
was not absolute. For the most part, stable staffs were found in higher performing
schools in low poverty communities. In many cases, these teachers' own children had
attended the school, which was also their own neighborhood school. As one
administrator pointed out, the average length of service for some of his teachers in the
district was 16 years, with 13 of those years at their school. With an incredibly stable staff
and high API scores, there had been much less of a need to focus on data and, in this
sense, as he revealed, a greater resistance to using it.
One of the differences in my experience is that some of it has to do with the
number of years a teacher has been at a high performing school or at a school that --
I think newer teachers coming into the field had much greater training and
exposure to using data. In other schools, it wouldn't even be a question about
even broaching it. Whereas, I think in a school that's been historically high
performing and where you have a very stable teaching staff, it hasn't had that
impact on them. So it [data] hasn't been that daily push for them.
Similarly, administrators from high poverty schools observed that a few of their
veteran teachers were less willing to try new things compared to newer teachers,
especially if it involved the use of technology. Overall, low poverty schools at both high
and low performance levels showed greater resistance to using data, which could be
attributed to a variety of reasons: fear of technology, less need to focus on data, or the
time elapsed since attending graduate courses. Figure 7 summarizes the various challenges
that participants experienced across different types of schools.
[Figure 7, a matrix indicating which of the challenges (time to analyze data, untimely
data, lack of training and transfer, lack of technology, and stable staff) were present at
each of the four school types, is not reproduced here.]
Figure 7. Presence of challenges at high-low performing/high poverty and high-low
performing/low poverty schools
In summary, the motivations to use data were less varied across different types of
schools when compared to the preceding themes of variation in data use and pressures of
accountability. The findings on motivations to use data show greater similarities across
all nine schools. However, the examination of the main themes (foundations for
data use, beliefs about data use, and challenges) and their subthemes makes clear that
district, administrative, and personal expectations influenced the motivations to use data
in certain schools.
The teachers' comments made it apparent that administrators from
high performing schools, regardless of poverty levels, who had built a strong
foundation for data use through the buy-in of their staff, students, and parents were more
successful in using data. These administrators held their
staff accountable for using data on a consistent basis to monitor the progress of their
students. How these administrators valued data is evidenced in their acute understanding
of data and their persistent allocation of time to analyze and use it. Other schools that used
data as part of a greater whole also had higher levels of data use; however, the contexts of
these schools were less salient.
Beliefs about data use spanned the entire continuum across all nine schools. In
other words, certain types of schools did not seem to report particular opinions on data.
Across all nine schools, participants reported the positive uses of data to increase student
achievement and their preference for assessments that were more connected to their
daily teaching. The lack of validity in CST data and other forms of standardized
assessment data, due to their snapshot nature, made many participants outwardly resentful
toward using them. In addition, many participants expressed their frustration that data is
more than just numbers. Participants from all types of schools echoed the
sentiment that data needs a more humanistic side and may not always be the answer to
solving a school's problems. As one Title I coordinator expressed, "You've got to really
dig deep into it and find out what the data means, really, versus just the numbers."
In the same way, the majority of challenges in using data were encountered at all
types of schools, independent of performance and poverty levels. Specifically, time, lack
of training, transfer to practice, and lack of technology were challenges faced by nearly
all participants. The age of participants and the stability of staff, and their effect on data
use, seemed to be more defining characteristics of schools in low poverty areas.
Educators' Expectations of School Populations
High Expectations of Students
All schools. When asked about the expectations of the school population, findings
seemed to show that nearly all participants working in the nine schools held high
expectations for their students and believed that they all have the potential to succeed. As
one administrator in a high performing, high poverty school acknowledged:
I do believe they [the students] can. I believe it is up to us to make sure they have
the opportunities that build background knowledge to help them understand
whatever subject matter we [are] bringing to them. I believe they all have great
potential, but I do believe that it is up to the teaching staff, the support staff, to
bring to them all of the experiences that sometimes create the gaps in education.
Although participants held high expectations for their students, those from high poverty
schools were also realistic and understood that there were often differences, such as
language barriers, parents' education levels, and access to outside experiences, between the
students they taught and those from more affluent communities. Similar to participants
who were less inclined to speak of their discomfort in using data, very few participants
openly expressed having low expectations of their students. When asked if his students
from a high poverty community could achieve as well as those in more affluent areas, one
teacher very poignantly described the challenge.
I think a very small majority of them, the kids who have the support at home, can
[achieve]. But the kids who are just as smart and they don't have role models at
home and they don't have people asking them what they are doing, because I can't
compete no matter what I do with the kids for 6 ½ to 8 hours a day. He goes
home and nobody is talking to him about school. He is not involved in any
intellectual activity, how can I -- the kid can't compete with that.
Honest conversations about the real challenges of high poverty schools, and the reality
that many students would not achieve at the same levels as students in more affluent
communities, were rare, as most participants in these types of schools were altruistically
optimistic.
The thing about the [high income neighborhood schools] is the teachers don't
teach, it's the parents that provide the support. Whereas we, on our side, do all --
the six hours of the day is all about us. We know the instruction, we know the
curriculum, parents don't. They just help the kids with the homework and like,
make sure they have the right tutors, you know. I think we would actually be
more successful than these schools. [Coordinator, High Performing/High
Poverty School]
Yes, because I think if you teach students in a way that is...it has been shown to
be useful, and that if you are present in the classroom. It is not a mystery to teach
kids that are coming in and struggling or how to teach kids that come from
poverty or how to teach kids that look different than you are. So yes, I think we
can achieve as well as anybody. [Math Coach, Low Performing/High Poverty
School]
As these comments from participants in high poverty schools show, despite the
challenges present in high poverty communities, school personnel held tight to high
expectations of all their students. Perhaps the fear of harboring deficit thoughts about
students, or the need to sound "correct," may have inhibited real conversations about
how data use could be affected by teacher expectations.
Special Education Students
Low poverty schools. Another finding about the expectations of certain students
came primarily from low poverty schools that had high percentages of special education
students. In many of these schools, the special education students were mainstreamed
into the regular classroom. In one lower performing, low poverty school with large
numbers of special education students, teachers seemed to outwardly disregard the data
from the special education children in their classroom.
And I'm looking over all the questions and I have a few children that are special
needs and they have trouble. So, you know, you kind of discount that -- if you
take that out...I do have children with special needs, so those are the ones that
you kind of flag out, like if I were doing the Open Court test now -- and part of it
is fluency testing and they have to be at a certain point in order to pass.
One participant also expressed her concern for the other children in the class whose
education was disrupted due to the special needs children. She expressively asked,
"What about the other children that are being left behind because of those two or three
children? That's tough to watch." Although these sentiments were not common in all
nine schools, these teachers' expectations of special education students and the disregard
of their data was an unexpected finding.
Is it fair?
In order to get participants to speak candidly about their expectations of students,
they were asked a question about the fairness of using data for certain school
populations to raise test scores. Responses to this question, from all four types of schools,
seemed to fall into two main camps. One side strongly believed that targeting certain
student populations was definitely unfair because it inevitably left out other students.
The following comments from participants in various school contexts sum up these
beliefs.
The Far Below Basics just kind of...I don't know what happens to them. I have a
couple this year that I'm hoping go up higher. But that's not really fair. We put
so much focus on the Proficient and Advanced and we're focused on bringing
them up higher. I just feel like the Far Below Basics, there's no hope. [Teacher,
High Performing/High Poverty School]
There's only one pie, to give a bigger piece of that pie to a Basic group so you can
raise your test scores, to me, in and of itself, is unfair, because then that's saying
implicitly that you're denying those resources to lower achieving kids that may be
more in need, depending on what your definition of need is. So I'm going to say
that, in that way, to strictly dedicate resources to a specific group, specifically to
raise test scores is very unfair. [Teacher, Low Performing/Low Poverty School]
I feel that when the focus of your school becomes meeting an arbitrarily imposed
testing target rather than educating all of your kids to the best of your ability,
providing everyone with a quality education that they deserve, you are off-track.
And it doesn't surprise me, and those two things are not exclusive. I mean you
can target your Basic kids or whatever and you are still increasing student
achievement, so it is not like they are opposed to each other. But it is not a wise
top priority. [Math Coach, Low Performing/High Poverty School]
On the contrary, there were just as many participants who believed that when a particular
school population has specific needs, it is absolutely fair to use data to assist them.
Yes...I totally agree...especially with kids that are Far Below Basic. Or it's the
English Learners, or the African-American population. Why is it that they are all
failing? Yes, we definitely have to target them to figure out what is going on, you
know. [Coordinator, High Performing/High Poverty School]
I think that it's fair if it is an assessed need. When you talk about fairness and
you talk about differentiating, of course if there is a need for that group, then yes,
that is what all of the English Learner laws are about. That is targeting a group of
students based upon their need. I think it has to be done based upon need.
[Principal, High Performing/High Poverty School]
Therefore, it was not necessarily educators' low or high expectations of student
populations that influenced data use, but rather the understanding that different groups of
children may have different needs. Clearly, it was this expectation of a possible need
that influenced the use of data for certain school populations.
As this study indicates, teacher expectations of students did not influence how and
if data were used. Comments show that teachers did not openly reveal deficit
expectations based on a student's ethnicity or class. Rather, actual student test
scores and underachievement, as described in preceding sections, were more of an
indicator of how and if data-driven decision making was implemented. It would be
interesting to pursue future research involving more targeted questions on how
teacher expectations tied to a student's ethnicity or socioeconomic class could affect how
data were used.
Conclusion
This chapter represents the analysis and interpretations of the data collected in
nine different elementary schools in two California school districts on the varying uses of
data. The main research question focused on how data use varied in schools of different
accountability status and poverty levels. Also, this study examined sub-questions that
investigated how perceived pressures of accountability, motivations, and educators'
expectations influenced the use of data in schools of differing contexts. After collecting
and analyzing data, major themes were extracted and linked to the four guiding research
questions.
Overall, data were used very differently for certain types of students amongst low
and high performing schools. In lower performing schools, Basic students and certain
subgroups were often targeted to ensure that growth targets for the CST were achieved.
Likewise, grade-levels in lower performing schools also used data to focus on broad
content clusters in certain academic areas that showed student underachievement.
Conversely, higher performing schools used data to target different students. With high
percentages of Advanced and Proficient students, these schools focused data use on their
most struggling students, their Far Below Basic and Below Basic students. In addition,
the focus on data seemed to center on all students at higher performing schools.
The pressures of accountability also varied drastically between low and high
performing schools. In some cases, there was an overwhelming sense of desperation felt
at schools with low API scores and Program Improvement status. Consequently, the
focus on data was much greater at these schools. With an increased focus on data, the
majority of classroom instruction centered on the tested areas of language arts and
mathematics. Contrastingly, higher performing schools showed evidence that their
students were learning art, music, social studies, and science. In all schools regardless of
accountability status and poverty levels, test preparation was used, though its frequency
and duration differed. It was abundantly clear that the pressures to focus on testing and
the use of testing data were much less significant at higher performing schools.
The motivations to use data were influenced by the expectations of the district,
administration, and individual. Although nearly all participants could readily attest to the
district's rationale behind the use of data, schools with strong administrators who held
their staff accountable for using data had the most effective data use. Likewise, those
participants who held tight to their personal beliefs in the positive uses of data were the
most successful in using data. Individual beliefs concerning data's validity and lack of
humanistic qualities made some participants wary of using it. In addition, challenges
such as time, lack of training and transfer to practice, lack of technology, and stability of
staff were also prevalent across nearly all schools and affected the utilization of data.
Furthermore, it was evident that nearly all participants from all types of schools
held high expectations of their students. However, teachers from low poverty schools
with high numbers of special education students did mention that data from these
children were often overlooked because the children were graded on different standards.
The fairness of targeting certain students to raise test scores fell on two sides of the fence.
Whether participants believed it was unfair or fair to target certain student populations,
initial expectations of certain types of students had less influence on data use than the
other three overarching themes. This study's findings do not point to a connection
between teacher expectations and data use. In sum, it was evident through the comments
and stories of these participants that the use of data does vary significantly across schools of
different API scores and poverty levels. The final chapter will provide greater synthesis
of these findings and the connections to prior research.
CHAPTER FIVE
Summary and Conclusions
Introduction
Background of the Problem
Since the 1950s, public education has sought to provide greater equity in
educational opportunity for all Americans (Duke, 2000). However, multiple studies have
revealed that students from more affluent communities fare much better in classrooms
than those from disadvantaged backgrounds (Garcia, 2002; Lee & Wong, 2004).
Throughout the years, scholars have argued that public schools may actually reproduce
social inequality (Bowles & Gintis, 1976; Bourdieu & Passeron, 1977; Collins, 1979;
Diamond & Spillane, 2004).
In an effort to bring more equity to education, the implementation of No Child
Left Behind (NCLB) in 2001 was intended to strengthen accountability by requiring states
to implement statewide accountability systems. Through rigorous content standards,
annual testing, and assessment data disaggregated by subgroups, NCLB mandated that all
students reach proficiency by 2014 (NCLB, 2001), thereby leveling the playing field for
all students. Despite NCLB's altruistic motivations, many still believe that accountability
policies will only worsen the inequalities amongst minority students and the schools they
attend (McNeil, 2000; Diamond & Spillane, 2004).
With a heightened focus on accountability and high stakes testing, the use of data
has become a major reform strategy and a very useful tool for school planning (Earl &
Fullan, 2003; Datnow, Park, & Wohlstetter, 2007). Countless studies have been
conducted on the positive uses of data in improving schools' decision-making and the
achievement of students (Earl & Fullan, 2003; Lachat & Smith, 2005; Datnow et al.,
2007). In addition, studies also indicate the presence of barriers that affect the successful
implementation of data driven decision-making in schools (Feldman & Tung, 2001; Kerr,
March, Darilek, & Barney, 2006; Mandinach, Honey, & Light, 2006; Ikemoto & Marsh,
2007). Moreover, emerging sentiments question whether data use plays out differently in
particular types of schools (Diamond & Cooper, 2007).
Purpose of the Study
The objective of this study was to compare the uses of data in schools that vary in
accountability status and poverty levels. Thirty-three semi-structured interviews were
conducted with administrators, coordinators, coaches, and teachers in nine elementary
schools. Eight of these schools were located throughout the New Horizons Unified
School District (NHUSD) and one school was located in the Summerfield Unified School
District (SUSD). As mentioned before, all nine schools were comparable in terms of Similar
Schools Ranking and fit one of four contexts: high performing/low poverty, high
performing/high poverty, low performing/low poverty, or low performing/high poverty.
The overarching research question was the following: How does the use of data vary in
schools that differ in accountability status and poverty levels?
Three guiding questions also helped to focus the study:
a. How do the perceived pressures of accountability influence data use in each
type of school?
b. To what extent are there different motivations to use data in differing
contexts?
c. How do educators' expectations of the school population influence data use in
each type of school?
Through an analysis of these interviews, a major finding drawn from this study
showed that a school's accountability status and poverty levels did influence the ways in
which data were used by administrators, coordinators, coaches, and teachers. Of these
two contexts, the poverty levels of schools more often determined which students,
subgroups, and curricular areas were targeted to meet the pressures of accountability
policies. Although the motivations to use data fluctuated across all types of schools,
higher performing schools with strong foundations in data use were more apt to use it
effectively. Common challenges such as lack of time, transfer of data, issues with
technology, and the stability of staff in using data were faced in nearly all schools.
Finally, as educators' expectations of students are an important component of student
success, it was found that most held high expectations of their students. Although there
was little evidence pointing to the connection between expectations and data use, certain
types of schools seemed to disregard the data of special needs children. Overall, it was
abundantly clear that data use varied in schools with different accountability statuses
and poverty levels, and this, in turn, had a profound impact on teaching, learning, and
student achievement.
Research Findings and Connections to Prior Research
The connections between the findings of this study and prior research on the use of
data make for an interesting comparison. In the forthcoming sections, the literature from
Chapter Two and the findings from this study will be discussed and compared.
Foundations for Educational Reform
Relevant research regarding educational reform set the foundation for a new era
of accountability including high stakes testing and the use of data to make school and
class-wide decisions. Many studies have indicated that poor, minority students are
persistently given unequal access and opportunity in schools (Oakes, 1985; Gamoran,
1986; Darling-Hammond, 2005). Through increased accountability policies and the
enforcement of strict yearly targets, student achievement could be monitored and tracked.
Thus, as prior research and the findings of this study both indicate, the far-reaching
influences of NCLB have undoubtedly penetrated the walls of all of America's public
schools.
How does the use of data vary in schools that differ in accountability status and
poverty levels? Through the analysis of interviews, it became apparent that data use
varied significantly more between high and low poverty schools than between high
and low performing schools in terms of which students and curricular areas received the
most support and focus. Figures 4 through 6 in Chapter Four provide a visual
representation of these differences.
Among the high poverty schools of high and low performance levels in this
sampling, student assessment data were used to target Basic students, those students who
were on the verge of moving up a performance level. By examining California Standards
Test (CST) data, high poverty schools identified these students and placed
considerable time, monies, and efforts on raising their performance levels. In their study of
eight urban elementary schools that varied in accountability status, Diamond and Cooper
(2007) discovered the same targeted approach to these "bubble kids" (p. 254).
However, inconsistent with the findings of this study, Diamond and Cooper
(2007) found that responses to student assessment data varied due to the schools'
accountability status, not poverty levels. With significant focus on those students who
could boost test scores the most, participants in these high and low performing schools of
high poverty levels acknowledged that students on both ends of the spectrum were often
ignored. Similarly, Booher-Jennings (2005) and Marsh, Pane, and Hamilton (2006) found
the same targeted approach to these "bubble kids" through after-school tutoring, small
group/individual instruction, and Saturday intervention. As all three studies indicate, the
focus on a certain group of students at the expense of others has severe ramifications for
educational equity: "They [the school] leave behind the lowest performing students"
(Diamond & Cooper, 2007, p. 252). Furthermore, "below the bubble" kids (Far Below
Basics) were considered beyond hope and were not the focus of extra resources
(Diamond & Cooper, 2007).
On the other hand, schools in low poverty areas that were both high and low
performing used testing data in very different ways. These schools were determined to
place much of their attention on students at the low end of the continuum, individual
students, and all students. Diamond and Cooper (2007) also discovered similar findings
in high performing schools that modified instruction for all students to improve the
overall instructional quality. Moreover, consistent with this studys findings, Diamond
and Cooper (2007) also reported that higher performing schools took personalized
approaches in using data to promote learning for all students across all grade-levels.
Just as high poverty schools focused on particular students, there was also a push
to concentrate on particular curricular areas that needed improvement. Therefore, many
participants in these types of schools stated that they used testing data to modify and
improve their instruction in broad content areas. Administrators also planned
professional development around these curricular subjects. Likewise, Marsh et al.
(2006) also found that data were used to make decisions around instruction, curriculum,
and professional development. In their study, teachers reported using testing data to
identify areas that students needed to strengthen.
Summing up, there were parallel findings between this study and those conducted
by other researchers on data use. However, the most glaring difference came from the
type of school context (i.e., accountability status or poverty level) that affected which
students and curricular areas were the schools main foci. Prior research has studied
schools of various performance levels, and scholars have found that this context was the
most salient characteristic for differences in data use (Diamond & Cooper, 2007). As this
current study employed two contexts and four types of schools, findings showed that
schools grouped by poverty level, rather than by performance level, exhibited greater
similarities in data use.
How do the perceived pressures of accountability influence data use in each type
of school? The pressures of NCLB were another major theme identified in this study, as
all high and low performing schools in both high and low poverty areas reported
increased data use due to heightened accountability policies (Figure 3). Similarly, other
studies have corroborated that school personnel across high and low performing schools
paid attention to accountability messages about students test scores (Diamond &
Spillane, 2004; Diamond & Cooper, 2007).
In the age of accountability, student assessment scores forcefully shape the
public's perception of schools. Therefore, in schools with Program Improvement status
and where other sanctions had already been imposed (i.e., reconstitution of staff), there
was a pronounced focus on the use of testing data. Specifically in this study, high
poverty schools that were lower performing had tremendous pressures to meet yearly
Academic Performance Index (API) and Adequate Yearly Progress (AYP) targets.
Therefore, there was a push to use data much more than in low poverty, high performing
schools.
As Diamond and Cooper (2007) conclude, with insurmountable pressures to excel
on standardized tests, both probation and high achieving schools in their study focused on
certain subject areas: language arts and math. Findings from this sampling reflect that
high poverty schools also concentrated on the tested areas on the CST, however, low
poverty schools were more likely to teach all areas of the curriculum (language arts,
math, science, music, art, social studies, and physical education). Contrary to Diamond
and Cooper's (2007) study, there was no prioritization of language arts and math over
other subjects in these low and high performing, low poverty schools.
As the findings from this study indicate, all nine schools used test preparation to
some degree. With high poverty schools leading the way in time spent in test prep mode,
certain subjects were often ignored or glossed over. Similarly, multiple studies also show
that increased pressures of accountability have led many schools to replace higher-order
curricula with watered-down test preparation programs (Haladyna, Nolen, & Haas, 1991;
McCracken & McCracken, 2001; Darling-Hammond, 2004; Abernathy, 2007). Moreover,
Stephens, Pearson, Gilrane, Roe, Stallman, Shelton, Weinzierl, Rodriguez, and
Commeyras (1995) found that teachers of low socioeconomic students spent greater time
in test preparation activities due to increased pressures of high-stakes testing. True to this
study's findings, as more schools implemented test-prep cultures, the reliability of these test
scores became suspect for many participants. Similarly, Koretz (2002) also questioned
whether student gains can actually be a true indicator of what students have learned.
In summary, the conclusions drawn from this study are comparable to those found
in previous research studies. With heightened accountability policies and the public's
consternation over test scores, there is no question that all types of schools face pressures
to excel on standardized tests. More specifically, as high poverty schools combat these
pressures with test-prep cultures and other watered-down curricula, the consequences for
these poor, minority students are of paramount concern.
To what extent are there different motivations to use data in differing contexts?
The motivations to use data varied across all nine schools in this study with the greatest
contributing factors being leadership and accountability, buy-in, and challenges.
High performing schools in both low and high poverty communities reported
strong foundations in data use as leaders held their school staffs accountable for
implementing school and class-wide decisions based on data. Feldman and Tung (2001)
also support this finding, as they discovered that the most effective implementation of
data-driven decision making (DDDM) was evident in schools with strong leadership
and support systems. Similar to the findings in this study, the conclusions drawn by
Lachat and Smith (2005) and Datnow et al. (2007) also illustrate that administrators who
held high expectations for data use, provided a vision for the school around the
importance of data, and built in time for teacher collaboration were the most successful in
DDDM.
As many participants revealed in this study, their motivation to use certain forms
of data was clouded by doubts about its validity and reliability. Specifically, many participants
expressed that the CST was only one piece of information that provided a snapshot in
time, and yet many important school-wide decisions were based on these data. Mandinach
et al. (2006) and Ikemoto and Marsh (2007) would support these findings, as doubts about
inaccurate data greatly affected teacher buy-in in their research studies. Interestingly,
findings from this current study showed that buy-in for the use of data went far beyond
just the staff. An example of this came from an administrator from a high performing
school with low poverty levels who insisted that both parent and student buy-in were
crucial for overall student achievement and progress.
The greatest challenges in using data reported by participants in this study were
the lack of time, untimely data, lack of training, lack of technology, and stability of staff.
The challenges faced by nearly all thirty-three participants have striking similarities to
those studies presented in Chapter Two. Feldman and Tung (2001) and Marsh et al.
(2006) cited lack of time for the school personnel to analyze, synthesize, and interpret
data as a major reason in limiting effective data use. Furthermore, Earl and Katz (2002)
and Vaughn and Kelly (2008) posit that teachers need support and training in data
analysis and how to transfer this information to the classroom. Although the extant
literature did not point to a lack of available technology, a staff's unwillingness to use
technology was a critical barrier to data use (Zhao & Frank, 2003; Wayman, Stringfield,
& Yakimowski, 2004; Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006).
Overall, the similarities in motivations to use data were clear between this study's
findings and those found in the literature review. However, the sentiment expressed by
some participants that data need a more humanistic side and are often not the answer
to all problems was unique to this study.
How do educators' expectations of the school population influence data use in
each type of school? Another major theme in this study addressed how educators'
expectations could influence the way in which data were used. Educators from high
poverty schools of both high and low performance levels were asked if they believed
their students could achieve as well as those students in other, more affluent parts of the
district.
The findings from this study overwhelmingly highlighted the high expectations of
nearly all participants. The participants in high poverty schools with a large number of
poor, minority students were realistic in understanding the challenges that plagued their
communities. However, many of them did not exhibit deficit-oriented beliefs as previous
studies have noted (Diamond, Randolph, & Spillane, 2000). These expectations were not
necessarily the catalyst for increased data use, but the understanding that their children
were not performing as well as other students and needed intervention was integral in
focusing attention on them.
Unique to this study was the finding illustrated in higher performing, low poverty
schools about teacher expectations of special education students and the influence on
data. Some participants expressed their frustration with mainstreaming certain students at
the expense of the other students in class. In addition, these teachers were more apt to
disregard the data of these children. Whether connections can be drawn between teachers'
expectations of special needs or poor, minority students and their use of data would be an
interesting avenue to pursue in the future.
Implications for Policy and Practice
As a result of this study and prior research conducted on the use of data, four
major implications can be drawn in reference to future policy and practice. In an era of
accountability, districts and schools should consider the following:
* Districts and schools should be cognizant of how accountability policies have led many
high poverty, low performing schools to focus on quick-fix, short-term changes to meet
the pressures of yearly targets. As inner-city schools fraught with challenges are
forced to focus on meeting yearly testing requirements, administrators and teachers are
under pressure to do whatever they can to meet these targets. Often, short-term changes
such as focusing on students who will give the best outcomes on standardized tests and
narrowing the curriculum (Lomax, West, Harmon, Viator, & Madaus, 1995) are the only
viable options for these schools. But these solutions will only exacerbate the inequalities
that exist amongst poor, minority students (McNeil, 2000; Diamond & Spillane, 2004;
Diamond, 2007).
* High-stakes testing, the pressures of accountability, and their influence on data use have
potentially negative ramifications for students in high poverty, urban communities. As
previous research has indicated, the effects of poverty are detrimental to the academic
performance of students (Brooks-Gunn & Duncan, 1997; Garcia, 2002). With the challenges
these students face in the community, plus the adverse effects of high-stakes testing and
accountability pressures, the outlook is bleak for many of these students. As Figures 4
through 6 outline in Chapter Four, the focus on only certain types of students and specific
curricular areas in schools heavily populated by poor, minority students limits their
access to equal educational opportunities.
* Districts and schools must be held accountable for building capacity of school
personnel for data analysis, interpretation, and transfer to practice through training and
on-going, effective professional development. Within this study, numerous participants
voiced the need for training in how to use data effectively. Although the district's mantra
over the years has been "data drives the instruction," few individuals were actually using
data to produce long-term, systemic changes to teaching and learning. It is imperative
that districts and schools invest the time, efforts, and monies into building a foundation
for data use so that challenges can be overcome.
* Multiple types of data should be used in making crucial district and school-wide
decisions. In principle, the district believes that all types of data are important, but CST data
and other forms of student assessment data were the most readily used by administrators,
coordinators, coaches, and teachers. Prior research by Bernhardt (2003) also states that a
complex description of schools can only be attained through the analysis and intersection
of all four types of student data: demographic, student learning, perceptions, and school
processes.
Recommendations for Future Research
As the sample size for this study was very small and there were obvious
limitations, future research should involve a much larger sample size, including districts
and schools across California. For example, one might expect that schools in rural areas
(i.e., those that are very small) might face different issues and challenges with respect to
accountability and data use.
In addition, district-level personnel should be interviewed to get
a more balanced picture of how data use varies across districts and schools. The
following research questions could be investigated:
* How does the use of data vary in districts that differ in accountability status and
poverty levels?
* What is the role of district personnel in the use of data in districts of varying
contexts?
* How does data use vary in urban, suburban, and rural districts and schools?
Furthermore, as the findings from this study indicate that school-site
administrators have a powerful effect on a staff's buy-in of data use, future research
could also entail an investigation of these leaders.
* What are the defining characteristics of leaders who are able to build school
capacity for data use?
* How do leaders' and teachers' attitudes and beliefs about data use affect data
use in school sites?
* How do leaders encourage the use of data amongst resisters or those not
attempting to engage in DDDM?
Clearly, districts support the use of data at varying levels, but give different
messages to schools in terms of where they should place their focus and resources.
Another recommendation would be to explore how other student variables such as race,
gender, and special needs affect the use of data in certain types of schools. A final
recommendation would be to investigate how other forms of data may be used differently
in schools of various contexts such as school processes (i.e., school programs,
instructional strategies, and classroom practices) and teacher/student perceptions (i.e.,
questionnaires, interviews, observations).
All of these studies could be undertaken using qualitative methods. However, it
may also be beneficial to implement survey methods to gather data across a larger sample
of schools. A mixed methods study combining qualitative and quantitative methods
would be ideal.
Conclusion
It is the hope of policymakers that accountability policies and high stakes testing
will direct educators to make well-informed instructional decisions that positively impact
teaching and learning for all students. This study of schools with varied accountability
and poverty levels, however, suggests that these anticipated benefits must be viewed with
caution.
The evidence from this study clearly shows that different types of schools use
data in different ways. High poverty schools, whether high or low performing, that
served predominantly minority children used data in ways designed to raise test scores
as quickly as possible. Data was not used to encourage systemic change, but rather to
pinpoint which students and content areas would yield the fastest growth on
standardized tests. In contrast, low poverty schools of both high and low performance
levels used data more broadly across all students and all curricular areas. Although the
most struggling students were targeted, administrators in these schools used data
to improve the instructional practices of their teachers and to provide evidence to
parents in support of creating educational programs. Furthermore, administrators
from high performing schools, regardless of poverty level, who had a strong
understanding of data use were more capable of building capacity within their schools.
As we proceed toward 2014 and the goal of 100 percent student proficiency, there
will undoubtedly be greater pressure on all schools to achieve. More importantly, as the
evidence from this study indicates, educators and policymakers need to be cognizant of
the particular ways in which accountability policies are carried out at certain types of
schools. Despite their intention to pursue educational equity, accountability systems may
actually be undermining equality for all students. I argue that the consequences for poor,
minority students will leave an indelible blemish that will undeniably impact the future of
our nation.
REFERENCES
Abernathy, S.F. (2007). No Child Left Behind: And the public schools. Ann Arbor, MI:
The University of Michigan Press.
Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to
raise student achievement. American School Board Journal, November, 2001, 1-4.
Bernhardt, V. L. (2003). No schools left behind. Educational Leadership, 60(5), 26-30.
Booher-Jennings, J. (2005). Below the bubble: "Educational triage" and the Texas
accountability system. American Educational Research Journal, 42(2), 231-268.
Bourdieu, P., & Passeron, J.C. (1977). Reproduction: In education, society, and culture.
Beverly Hills, CA: Sage Publications, Inc.
Bowles, S., & Gintis, H. (1976). Schooling in capitalistic America: Educational reform
and the contradictions of economic life. New York: Basic Books.
Brooks-Gunn, J., & Duncan, G.J. (1997). The effects of poverty on children. The Future
of Children, 7, 55-71.
California Department of Education (2007). Retrieved September 20, 2007 from
http://api.cde.ca.gov/AcntRpt2007/2007GrowthSch.aspx?allcds=19647336018485
Carroll, S.J., Krop, C., Arkes, J., Morrison, P.A., & Flanagan, A. (2005). California's K-
12 public schools: How are they doing? Santa Monica, CA: RAND.
Cicourel, A.V. (1982). Interviews, surveys, and the problem of ecological validity.
American Sociologist, 17, 11-20.
Coburn, C. E., & Talbert, J.E. (2006). Conceptions of evidence use in school districts:
Mapping the terrain. American Journal of Education, 112(4), 469-495.
Collins, R. (1979). The credential society: An historical sociology of education and
stratification. New York, NY: Academic Press, Inc.
Condron, D.J., & Roscigno, V.J. (2003). Disparities within: Unequal spending and
achievement in an urban school district. Sociology of Education, 76, 18-36.
Cotton, K. (1989). Expectations and student outcomes. NW Archives: School
Improvement Research Series. Retrieved October 8, 2007 from
http://www.nwrel.org/sepd/sirs/4/cu7.html
Darling-Hammond, L. (2004). From separate but equal to No Child Left Behind: The
collision of new standards and old inequalities. In D. Meier & G. Wood (Eds.),
Many children left behind: How the No Child Left Behind Act is damaging our
children and our schools (pp. 3-32). Boston, MA: Beacon Press.
Darling-Hammond, L. (2005). New standards and old inequities: School reform and the
education of African American students. In J.E. King (Ed.), Black education: A
transformative research and action agenda for the new century. Washington,
D.C.: American Educational Research Association.
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high
performing school systems use data to improve instruction for elementary
students. Los Angeles: Center on Educational Governance.
Diamond, J.B. (2007). Where the rubber meets the road: Rethinking the connection
between high-stakes testing policy and classroom instruction. Sociology of
Education, 80, 285-313.
Diamond, J.B., & Cooper, K. (2007). The uses of testing data in urban elementary
schools: Some lessons from Chicago. In P.A. Moss (Ed.), Evidence and decision
making (National Society for the Study of Education Yearbook, Vol. 106, Issue 1,
pp. 241-263). Chicago, IL: National Society for the Study of Education.
Diamond, J., Randolph, A., & Spillane, J.P. (2000, August). Race, class, and beliefs
about students in urban elementary schools. Paper presented at the meeting of
American Sociological Association, Washington, D.C.
Diamond, J., & Spillane, J.P. (2004). High stakes accountability in urban elementary
schools: Challenging or reproducing inequality? Teachers College Record,
106(6), 1140-1171.
Downey, D.B., von Hippel, P.T., & Broh, B.A. (2004). Are schools the great equalizer?
Cognitive inequality during the summer months and the school year. American
Sociological Review, 69(5), 613-635.
Duke, N.K. (2000). For the rich it's richer: Print experiences and environment offered to
children in very low- and very high-socioeconomic status first-grade classrooms.
American Educational Research Journal, 37(2), 441-478.
Duncan, G.J., & Magnuson, K.A. (2005). Can family socioeconomic resources account
for racial and ethnic test score gaps? The Future of Children, 15(1), 35-54.
Earl, L., & Fullan, M. (2003). Using data in leadership for learning. Cambridge Journal
of Education, 33(3), 383-392.
Earl, L., & Katz, S. (2002). Leading schools in a data-rich world. In K. Leithwood, P.
Hallinger, G. Furman, P. Gronn, J. MacBeath, B. Mulford, and K. Riley (Eds.),
The second international handbook of educational leadership and administration.
Dordrecht, Netherlands: Kluwer.
Epps, E.G. (1995). Race, class, and educational opportunity: Trends in the sociology of
education. Sociological Forum, 10(4), 593-608.
Farkas, G., Grobe, R.P., Sheehan, D., & Shuan, Y. (1990). Cultural resources and school
success: Gender, ethnicity, and poverty groups within an urban school district.
American Sociological Review, 55, 127-142.
Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision making to
improve instruction. ERS Spectrum, 19(3), 10-19.
Ferguson, R. (1998). Teachers' perceptions and expectations and the Black-White test
score gap. In C. Jencks and M. Phillips (Eds.), The Black-White test score gap.
Washington, D.C.: Brookings Institution.
Gamoran, A. (1986). Instructional and institutional effects of ability grouping. Sociology
of Education, 59, 185-198.
Garcia, G.E. (2002). Chapter 1 (Introduction). In Student cultural diversity:
Understanding and meeting the challenge (3rd ed., pp. 3-39). Boston, MA:
Houghton-Mifflin.
Goldsmith, P.A. (2004). Schools' racial mix, students' optimism, and the Black-White
and Latino-White achievement gaps. Sociology of Education, 77, 121-147.
Grissmer, D., Flanagan, A., Kawata, J., & Williamson, S. (2000). Improving student
achievement: What NAEP state scores tell us. Santa Monica, CA: RAND.
Guidelines for External Research Review (2008). Retrieved April 12, 2008 from
http://notebook.lausd.net/portal/page?_pageid=33,16510&_dad=ptl&schema=PTL_EP
Haladyna, T.M., Nolen, S.B., & Haas, N.S. (1991). Raising standardized achievement
test scores and the origins of test score pollution. Educational Researcher, 20(5),
2-7.
Hamilton, L. (2003). Assessment as a policy tool. Review of Research in Education, 27,
25-68.
Ikemoto, G.S., & Marsh, J.A. (2007). Cutting through the data driven mantra: Different
conceptions of data-driven decision making. In P.A. Moss (Ed.), Evidence and
decision making (National Society for the Study of Education Yearbook, Vol.
106, Issue 1, pp. 105-131). Chicago: National Society for the Study of
Education. Distributed by Blackwell Publishing.
Karp, S. (2004). NCLB's selective vision of equality: Some gaps count more than others.
In D. Meier & G. Wood (Eds.), Many children left behind: How the No Child Left
Behind Act is damaging our children and our schools (pp. 53-65). Boston, MA:
Beacon Press.
Kennedy, E. (2003). Raising test scores for all students. Thousand Oaks, CA: Corwin
Press, Inc.
Kerr, K.A., Marsh, J.A., Ikemoto, G.S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons
from three urban districts. American Journal of Education, 112(4), 496-520.
Koretz, D.M. (2002). Limitations in the use of achievement tests as measures of
educators' productivity. The Journal of Human Resources, 37(4), 752-777.
Lachat, M.A., & Smith, S. (2005). Practices that support data use in urban high schools.
Journal of Education for Students Placed at Risk, 10(3), 333-349.
Lee, J. (2004). Multiple facets of inequity in racial and ethnic achievement gaps. Peabody
Journal of Education, 79(2), 51-73.
Lee, J., & Wong, K.K. (2004). The impact of accountability on racial and socioeconomic
equity: Considering both school resources and achievement outcomes. American
Educational Research Journal, 41(4), 797-832.
Lomax, R.G., West, M.M., Harmon, M.C., Viator, K.A., & Madaus, G.F. (1995). The
impact of mandated standardized testing on minority students. Journal of Negro
Education, 64(2), 171-185.
Love, N. (2004). Taking data to new depths. Journal of Professional Development,
25(4), 22-26.
Madaus, G.F. (1988). The distortion of teaching and testing: High-stakes testing and
instruction. Peabody Journal of Education, 65(3), 29-46.
Madaus, G.F., & Clarke, M. (2001). The adverse impact of high-stakes testing on
minority students. In G. Orfield & M.L. Kornhaber (Eds.), Raising standards or
raising barriers? Inequality and high-stakes testing in public education (pp. 1-
49). New York: Century Foundation Press.
Mandinach, E.B., Honey, M., & Light, D. (2006). A theoretical framework for data-
driven decision making. Paper presented at the annual meeting of the American
Educational Research Association, San Francisco, CA. Retrieved January 28,
2008 from
http://cct.edc.org/admin/publications/speeches/DataFrame_AERA06.pdf.
Malloy, C. (2008, April). Data collection: Interviewing others. PowerPoint presentation
in Inquiry II class at the University of Southern California, Los Angeles, CA.
Marsh, J.A., Pane, J.F., & Hamilton, L.S. (2006). Making sense of data-driven decision
making in education. Paper presented at the annual meeting of the American
Educational Research Association, San Francisco, CA. Retrieved January 26,
2008 from http://www.rand.org/pubs/occasional_papers/2006/RAND_OP170.pdf.
Marzano, R.J. (2003). What works in schools: Translating research into action.
Alexandria, VA: Association for Supervision and Curriculum Development.
McCracken, N.M., & McCracken, H.T. (2001). Teaching in the time of testing: What have
you lost? The English Journal, 30-35.
McLoyd, V.C., & Wilson, L. (1991). The strain of living poor: Parenting, social support,
and child mental health. In A.C. Huston (Ed.), Children in poverty (pp. 105-135).
Cambridge, MA: Cambridge University Press.
McNeil, L.M. (2000). Contradictions of school reform: Educational costs of standardized
testing. New York, NY: Routledge.
Merriam, S.B. (1998). Qualitative research and case study applications in education. San
Francisco, CA: Jossey-Bass.
National Commission on Excellence in Education. (1983). A nation at risk: The
imperative for educational reform. Retrieved March 17, 2008, from
http://www.ed.gov/pubs/NatAtRisk/index.html
No Child Left Behind. (2001). Retrieved February 24, 2008 from
http://www.ed.gov/nclb/landing.jhtml
Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven, CT: Yale
University Press.
Ogawa, R.T., Sandholtz, J.H., Martinez-Flores, M., & Scribner. S. P. (2003). The
substantive and symbolic consequences of a district's standards-based curriculum.
American Educational Research Journal, 40(1), 147-176.
Patton, M.Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA:
Sage Publications, Inc.
Prince, C.D. (2004). Changing policies to close the achievement gap. Lanham, MD:
ScarecrowEducation.
Protheroe, N. (2001). Improving teaching and learning with data-based decisions: Asking
the right questions and acting on the answers. Educational Research Service
Spectrum. Retrieved April 7, 2008 from
http://www.ers.org/spectrum/sum01a.htm
Roscigno, V.J. (1998). Race and reproduction of educational disadvantage. Social
Forces, 76(3), 1033-1060.
Rosenthal, R. & Jacobson, L. (1968). Pygmalion in the classroom: Teacher expectation
and pupils intellectual development. New York: Holt, Rinehart, & Winston.
Rothstein, R. (2004). Class and schools. Washington, D.C.: Economic Policy Institute.
Schmoker, M. (2003). First things first: Demystifying data analysis. Educational
Leadership, 60(5), 22-24.
Skrla, L., Scheurich, J.J., Johnson, J.F., & Koschoreck, J.W. (2004). Accountability for
equity: Can state policy leverage social justice? In L. Skrla and J.J. Scheurich
(Eds.), Educational equity and accountability (pp. 51-78). New York, NY:
RoutledgeFalmer.
Smith, J., Brooks-Gunn, J., & Klebanov, P. (1997). The consequences of living in
poverty on young children's cognitive development. In G. Duncan and J. Brooks-
Gunn (Eds.), Consequences of growing up poor (pp. 132-189). New York:
Russell Sage.
Smyth, T. S. (2008). Who is No Child Left Behind leaving behind? Clearing House: A
Journal of Educational Strategies, Issues and Ideas, 81(3), 133-137.
Stanton-Salazar, R.D. (1997). A social capital framework for understanding the
socialization of racial minority children and youths. Harvard Educational Review,
67(1), 307-346.
Stephens, D., Pearson, P.D., Gilrane, C., Roe, M., Stallman, A.C., Shelton, J., Weinzierl,
J., Rodriguez, A., & Commeyras, M. (1995). Assessment and decision making in
schools: A cross-site analysis. Reading Research Quarterly, 30(5), 478-499.
Sutherland, S. (2004). Creating a culture of data use for continuous improvement: A case
study of an Edison Project School. American Journal of Evaluation, 25(3), 277-
293.
Symonds, K.W. (2003). After the test: How schools are using data to close the
achievement gap. San Francisco: Bay Area School Reform Collaborative.
Thorndike, R.L. (1968). Review of Pygmalion in the classroom. American Educational
Research Journal, 5, 708-711.
Vaughan, D., & Kelly, K. (2008). Building a data culture: A district-foundation
partnership. Using Data for Decisions. Providence, RI: Annenberg Institute for
School Reform.
Wayman, J.C. (2005). Involving teachers in data-driven decision-making: Using
computer data systems to support teacher inquiry and reflection. Journal of
Education for Students Placed at Risk, 10(3), 295-308.
Wayman, J.C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school
improvement through analysis of student data. Baltimore: Center for Research of
Students Placed at Risk, Johns Hopkins University.
Wineburg, S.S. (1987). The self-fulfillment of the self-fulfilling prophecy. Educational
Researcher, 16(9), 28-37.
Wood, G. (2004). A view from the field: NCLBs effects on classrooms and schools. In
D. Meier & G. Wood (Eds.), Many children left behind: How the No Child Left
Behind Act is damaging our children and our schools (pp. 33-50). Boston, MA:
Beacon Press.
Woody, E.L., Bae, S., Park, S., & Russell, J. (2006). Snapshots of reform: District efforts
to raise student achievement across diverse communities in California. Berkeley,
CA: Policy Analysis for California Education.
Woody, E.L., Buttles, M., Kafka, J., Park, S., & Russell, J. (2004). Voices from the field:
Educators respond to accountability. Berkeley, CA: Policy Analysis for
California Education.
Yeh, S.S. (2001). Tests worth teaching to: Constructing state-mandated tests that
emphasize critical thinking. Educational Researcher, 30(9), 12-17.
Zhao, Y., & Frank, K.A. (2003). Factors affecting technology uses in schools: An
ecological perspective. American Educational Research Journal, 40(4), 807-840.
APPENDIX A
Administrator Interview Protocol
Participant's Name _______________________________ Date _________________
Position _____________________________
[Introduction: My name is Christine Sanders and I am working on my dissertation on
how data use varies in different types of schools. The purpose of the study is to gain a
greater understanding of how school contexts can affect the use of data by school
personnel. This interview will be integral to my study and will help me tremendously in
adding new knowledge to data-driven decision making.
During the interview, I will be recording your responses which will be kept strictly
confidential. If there is something you would like to say off the record, I will stop the
recorder and those comments will not be included in my write-up. The interview will
take approximately 45 minutes. Do you have any specific questions before we begin?]
I. Background: Laying the Foundation
1. Please tell me briefly about your administrative experience.
2. How long have you been at this school?
3. What is your prior experience and training?
II. Knowledge of School Background
1. Tell me briefly about the students, parents, and community of your school.
[Question 4]
2. What is your school's current API score? [Question 1]
3. Did you meet your AYP last year? [Question 1]
4. Have you heard of Similar Schools Ranking? In 2006-2007, _______
Elementary was ranked _____. What do you think has contributed to your
school's academic performance?
5. Would you say your school is high or low performing? Why? [Question 1]
6. How do you feel about your school's overall academic performance?
7. What do you think has contributed to your school's academic performance?
III. Student Population and Diverse Learners
1. Briefly describe your school's demographic make-up. [Question 1]
2. Are all your students achieving?
3. Do you think your students can achieve as well as any other subgroup, let's
say, students on the West Side? Why or why not? (Use if urban school)
4. If yes, what do you think is contributing to their success?
5. If not, what do you think is contributing to their struggles?
6. What do you think will help all your students have success?
7. How has your school used data to address the needs of your diverse students?
[Question 4]
IV. Data
1. What kinds of data related to classroom practices do the teachers have access
to and use? (Probe: state/district assessments) [Question 1]
2. What are the expectations of your district about how schools should use data?
[Question 2]
3. What are your expectations of the teachers for using data? [Question 2]
4. Do you use data for certain types of students? If so, please describe these
students (Probe: High/Low Kids, long term, short term solutions) [Question 1]
5. Why do you use data? (Probe: consequences, incentives) [Question 3]
6. Do you allocate time for the collaboration of teachers in order to analyze data
and develop action plans based on this data? If so, when? (Probe: Faculty
meetings, grade-level meetings) [Question 1]
7. How are you taught to use the data effectively? (Probe: PD) [Question 1]
8. What kind of support do you provide the teachers so that they use the data
effectively? [Question 1]
9. Do you have access to a database of student assessment results? If so, how
often do you access this system? [Question 1]
10. How do you encourage teachers to use data? [Question 3]
11. What incentives are there to use data? [Question 3]
V. Beliefs on Data Use
1. How do you feel about using data? (Probe: relevancy, effectiveness) [Question
1].
2. Have you noticed a shift in your own beliefs about data or students as a result
of focusing on data? [Question 4]
3. Do you think this school has a culture of data-driven decision making?
Explain why or why not. [Question 1]
VI. Pressures of Accountability
1. Do you feel the pressures of federal and state accountability policies daily?
[Question 2]
a. If yes, how have these pressures facilitated or hindered your work in the
area of data use? [Question 2]
b. If no, why not? [Question 2]
2. Have you ever experienced a situation where student assessment data was
used in a way that you felt was unfair or unjust simply to meet the pressures
of accountability? [Question 2]
3. Do you think that targeting a certain group of students to raise test scores
is fair? (Probe: certain level of students, subgroups)
VII. Future Outlook
1. What do you think are your school's next steps in regard to data-driven
decision making?
[Concluding Remarks/Questions: Is there anything else I should know? Thank you for
your time and cooperation. I can share this report with you once it is done and I may
need to contact you for follow-ups.]
APPENDIX B
Teacher Interview Protocol
Participant's Name ________________________________ Date _________________
Position _____________________________
[Introduction: My name is Christine Sanders and I am working on my dissertation on
how data use varies in different types of schools. The purpose of the study is to gain a
greater understanding of how school contexts can affect the use of data by school
personnel. This interview will be integral to my study and will help me tremendously in
adding new knowledge to data-driven decision making.
During the interview, I will be recording your responses which will be kept strictly
confidential. If there is something you would like to say off the record, I will stop the
recorder and those comments will not be included in my write-up. The interview will
take approximately 45 minutes. Do you have any specific questions before we begin?]
I. Background: Laying the Foundation
1. What grade do you teach?
2. How many years have you taught?
3. How long have you been at this school?
4. What is your prior experience and training?
II. Knowledge of School Background
1. Tell me briefly about the students, parents, and community of your school.
[Question 4]
2. What is your school's current API score? [Question 1]
3. Did you meet your AYP last year? [Question 1]
4. Have you heard of Similar Schools Ranking? In 2006-2007, _______
Elementary was ranked _____. What do you think has contributed to your
school's academic performance?
5. Would you say your school is high or low performing? Why? [Question
1]
6. How do you feel about your school's overall academic performance?
[Question 4]
7. What do you think has contributed to your school's academic
performance?
III. Student Population and Diverse Learners
1. Briefly describe your school's demographic make-up. [Question 1]
2. Do you think your students can achieve as well as any other subgroup,
let's say, students on the West Side? Why or why not? (Use if urban
school)
3. Are all your students achieving?
a. If yes, what do you think is contributing to their success?
b. If not, what do you think is contributing to their struggles?
c. What do you think will help all your students have success?
4. How has your school used data to address the needs of your diverse
students? [Question 4]
IV. Data
1. What kinds of data related to classroom practices do teachers have access
to and use? (Probe: state/district assessments) [Question 1]
2. What are the expectations of the district about how schools should use
data? [Question 2]
3. What are your principal's expectations about how teachers should use
data? [Question 2]
4. Do you use data for certain types of students? If so, please describe these
students. (Probe: High/Low Kids, long term, short term solutions)
[Question 1]
5. Why do you use data? (Probe: consequences, incentives) [Question 3]
6. Is time allocated for the collaboration of teachers in order to analyze data
and develop action plans based on this data? If so, when? (Probe: Faculty
meetings, grade-level meetings) [Question 1]
7. How are you taught to use the data effectively? (Probe: PD) [Question 1]
8. Do you have access to a database of student assessment results? If so,
how often do you access this system? [Question 1]
9. What incentives are there to use data? [Question 3]
10. How does your principal support data use amongst the teachers?
[Question 1,2,3]
V. Beliefs on Data Use
1. How do you feel about using data? (Probe: relevancy, effectiveness)
2. Have you noticed a shift in your own beliefs about data or students as a
result of focusing on data? [Question 4]
3. Do you think this school has a culture of data-driven decision making?
[Question 1] Explain why or why not.
VI. Pressures of Accountability
1. Do you feel the pressures of federal and state accountability policies daily?
[Question 2]
a. If yes, how have these pressures facilitated or hindered your work in the
area of data use? [Question 2]
b. If no, why not? [Question 2]
2. Have you ever experienced a situation where student assessment data was
used in a way that you felt was unfair or unjust simply to meet the pressures
of accountability? [Question 2]
3. Do you think that targeting a certain group of students to raise test scores
is fair? (Probe: certain level of students, subgroups)
VII. Future Outlook
1. What do you think are your school's next steps in regard to data-driven
decision making?
[Concluding Remarks/Questions: Is there anything else I should know? Thank you for
your time and cooperation. I can share this report with you once it is done and I may
need to contact you for follow-ups.]
APPENDIX C
Coordinator/Coach Interview Protocol
Participant's Name _________________________________ Date _________________
Position _____________________________
[Introduction: My name is Christine Sanders and I am working on my dissertation on
how data use varies in different types of schools. The purpose of the study is to gain a
greater understanding of how school contexts can affect the use of data by school
personnel. This interview will be integral to my study and will help me tremendously in
adding new knowledge to data-driven decision making.
During the interview, I will be recording your responses which will be kept strictly
confidential. If there is something you would like to say off the record, I will stop the
recorder and those comments will not be included in my write-up. The interview will
take approximately 45 minutes. Do you have any specific questions before we begin?]
I. Background: Laying the Foundation
1. What is your current position?
2. How long have you been at this school?
3. What is your prior experience and training?
II. Knowledge of School Background
1. What is your school's current API score?
2. Did you meet your AYP last year?
3. Would you say your school is high or low performing? Why? [Question
1]
4. How do you feel about your school's overall academic performance?
[Question 4]
5. Have you heard of Similar Schools Ranking? In 2006-2007, _______
Elementary was ranked _____. What do you think has contributed to your
school's academic performance?
III. Student Population and Diverse Learners
1. Are all your students achieving?
a. If yes, what do you think is contributing to their success? [Question 1,4]
b. If not, what do you think is contributing to their struggles?
2. What do you think will help all your students have success?
3. Do you think your students can achieve as well as any other subgroup, let's
say, students on the West Side? Why or why not?
IV. Data
1. What kinds of data related to classroom practices do you have access to and
use? (Probe: state/district assessments) [Question 1]
2. Why do you use data? (Probe: consequences, incentives) [Question 3]
3. What are the expectations of your local district and LAUSD about how schools
should use data? [Question 2]
4. What are your principal's expectations about how you should use data?
5. Do you use data differently to address the needs of your diverse students?
How so?
6. Do you use data for certain types of students? For example, your high or low
kids?
7. In regards to your use of data, do you think it is making long or short-term
changes? Explain.
8. Is time allocated for the collaboration of teachers and/or administrative staff in
order to analyze data and develop action plans based on this data? If so, when?
(Probe: Faculty meetings, grade-level meetings) [Question 1]
9. How are you taught to use the data effectively? (Probe: PD) [Question 1]
10. Do you have access to a database of student assessment results? If so, how
often do you access this system? [Question 1]
11. What incentives are there to use data? [Question 3]
12. How does your principal support data use for you and other staff members?
V. Beliefs on Data Use
1. How do you feel about using data? Do you think it is important? (Probe:
relevancy, effectiveness) [Question 1].
2. What challenges do you face in using data?
3. How do you think your teachers feel about data?
4. (For math coach) How do you encourage them to use data?
5. Have you noticed a shift in your own beliefs about data or students as a result
of focusing on data? [Question 4]
6. Do you think this school has a culture of data-driven decision making?
[Question 1] Explain why or why not.
VI. Pressures of Accountability
1. Do you feel the pressures of federal and state accountability policies daily?
[Question 2]
a. If yes, how have these pressures facilitated or hindered your work in the
area of data use? [Question 2]
b. If no, why not? [Question 2]
2. Have you ever experienced a situation where student assessment data was
used in a way that you felt was unfair or unjust simply to meet the pressures of
accountability?
3. Do you think that targeting a certain group of students to raise test scores is
fair? (Probe: certain levels of students, subgroups)
VII. Future Outlook
1. What do you think are your school's next steps in regard to data-driven
decision making?
[Concluding Remarks/Questions: Is there anything else I should know? Thank you for
your time and cooperation. I can share this report with you once it is done and I may
need to contact you for follow-ups.]
APPENDIX D
Over-Arching Themes
1. Focus on Certain Levels of Students
2. Focus on Certain Curricular Areas
3. Accountability in Various School Contexts
4. Focus on Data
5. Foundations for Data Use
6. Beliefs about Data Use
7. Challenges
8. Teacher Expectations
APPENDIX E
Refined Codes/Sub-Themes
1. Basic Level Students
2. Subgroups
3. Far Below Basic and Below Basic Level Students
4. Individual Students and All Students
5. Broader Content Clusters
6. Departmentalizing
7. High Poverty Schools and Pressures of Accountability
8. Low Poverty Schools and Pressures of Accountability
9. Increased Test Prep
10. Less Time for the Arts
11. Leadership and Accountability
12. Buy-in
13. Data as Part of a Greater Whole
14. Personal Incentives
15. Validity
16. More Than Just Numbers
17. Data Use Is Not the Answer
18. Time
19. Lack of Training and Transfer to Practice
20. Lack of Technology
21. Stability of Staff
22. High Expectations
23. Special Education Students
24. Is it Fair?
Abstract
Since the passage of No Child Left Behind (NCLB) and related state mandates, schools have witnessed a major transformation in accountability policies. As a result of these stringent laws, the use of data has become a prominent reform strategy. Prior research has focused on the successes of high-performing districts and schools, as well as the significant barriers that have affected the implementation of data-driven decision making (DDDM). However, little is known about whether schools with varied performance levels (i.e., high vs. low performing) and student populations (i.e., high vs. low poverty levels) implement DDDM differently.
Asset Metadata
Creator: Sanders, Christine Hiromi (author)
Core Title: The pursuit of equity: a comparative case study of nine schools and their use of data
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 04/10/2009
Defense Date: 03/06/2009
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: accountability policies, barriers to data use, data-driven decision making, equity in education, minority students, No Child Left Behind, OAI-PMH Harvest, performance levels of schools, poverty levels of schools, targeted data use
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Datnow, Amanda (committee chair), Malloy, Courtney (committee member), Stowe, Kathy Huisong (committee member)
Creator Email: chs9365@lausd.net, csanders19@hotmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m2076
Unique Identifier: UC1199375
Identifier: etd-Sanders-2828 (filename), usctheses-m40 (legacy collection record id), usctheses-c127-208761 (legacy record id), usctheses-m2076 (legacy record id)
Legacy Identifier: etd-Sanders-2828.pdf
Dmrecord: 208761
Document Type: Dissertation
Rights: Sanders, Christine Hiromi
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name: Libraries, University of Southern California
Repository Location: Los Angeles, California
Repository Email: cisadmin@lib.usc.edu