CROSS-CASE ANALYSIS OF THE USE OF STUDENT PERFORMANCE DATA
TO INCREASE STUDENT ACHIEVEMENT IN CALIFORNIA PUBLIC
SCHOOLS ON THE ELEMENTARY LEVEL
by
Lucy Katherine Hunt
________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(EDUCATION)
December 2006
Copyright 2006 Lucy Katherine Hunt
ACKNOWLEDGMENTS
The dissertation process is a testament to the power of persistence. Great
thanks goes out to all who guided, nurtured and supported me on this journey. Each
day as I worked on this project, I read a phrase taped above my computer:
For of all sad words of tongue or pen,
The saddest are these:
“It might have been!”
And I am the one
Who decided it would not be.
-John Greenleaf Whittier and James Bugental
I decided it would be and it is done.
TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

ABSTRACT

CHAPTER 1: OVERVIEW OF THE STUDY
Introduction
Statement of the Problem
Purpose of the Study
Importance of the Study
Limitations
Delimitations
Assumptions
Definitions of Terms
Organization of the Study

CHAPTER 2: LITERATURE REVIEW
Status of Student Performance in the United States
The Nature of Comprehensive School Reform
Role of Data in School Reform

CHAPTER 3: RESEARCH METHODOLOGY
Introduction
Sample and Population
Instrumentation
Data Collection
Data Analysis
Summary

CHAPTER 4: ANALYSIS AND INTERPRETATION OF DATA AND FINDINGS
Introduction
Research Question One: Description of Data Use Policies and Strategies: The Design
Research Question Two: Degree of Design Implementation
Research Question Three: Adequacy of the Design
Discussion

CHAPTER 5: SUMMARY, CONCLUSION, IMPLICATIONS
Overview and Purpose of the Study
Summary of Findings
Conclusions and Recommendations for Improved Practice
Recommendations for Future Research

REFERENCES

APPENDICES
A Conceptual Framework A – Description of Data Use Policies and Strategies: The Design
B Conceptual Framework B – Implementation of Data Use Policy and Strategy in Practice
C Teacher Questionnaire
D Stages of Concern (Teachers)
E Situated Interviews
F Researcher Observation and Rating Form, Question #3
LIST OF TABLES

3.1 Sample and Population With Key Features of Each District and School for Each Case
3.2 Personnel Resources Available at the Schools in Each of These Cases
3.3 Conceptual Framework A: Description of Data Use Policies and Strategies: The Design
3.4 Conceptual Framework B: Implementation of Data Use Policy and Strategy in Practice
3.5 Current vs. Emerging Data Practices
3.6 The Relationship of Data Collection Instrumentation to Research Questions
3.7 Characteristics of the Leadership and Teachers Interviewed
4.1 Comparison of National Percentile Rank (NPR) for Stanford-9 Scores, 1988 and 2002
4.2 Comparison Between 1999 and 2002 API Scores
4.3 Average Responses to Implementation of Current Data Practices to Improve Achievement on Current State Assessments
4.4 Average Responses to the Degree of Implementation of District Designs for Data Use with Respect to Emerging State Assessments
4.5 Average Responses to the Degree Accountability is Linked to the Use of Data Strategies
4.6 Perceptions Regarding Whether Improvement in Student Achievement is Due to the Implementation of Data Use Strategies
4.7 Case 1: Distribution of Strongest Teacher Concerns
4.8 Case 2: Distribution of Strongest Teacher Concerns
4.9 Case 3: Distribution of Strongest Teacher Concerns
4.10 Adequacy Rating of District Support for Standards-Based Instruction and Assessment
4.11 District and School Accountability to Standards-Based Curriculum
4.12 Degree to Which High School Performance is Aligned to Standards and Communicated to Teachers, Students, and Parents
4.13 Relationship Between Design/Implementation of Data Plan Use
5.1 Summary of Findings from Research Question One: Design
5.2 Summary of Key Findings for Research Question Two: Implementation
5.3 Summary of Findings for Adequacy of District Plans
5.4 Comparison Between 1999 and 2005 API Scores
5.5 Comparison Between 2004 and 2006 for AYP
ABSTRACT
There has been much written about the educational challenges facing the
United States of America today. These challenges include, but are not limited to,
educational inequity, low test scores, teacher competency and training, and
achievement gaps. An era of standards-based reform and comprehensive school
reform evolved as a reaction to these challenges. These reform measures have also
brought an increase in the assessment of student progress and in accountability for
it. Key elements of school reform include effectively collecting and utilizing
data. This includes using data to monitor student achievement and hold
districts, schools, and educators accountable for student progress. This also includes
making the teacher an integral part of the reform process.
The purpose of this study is to evaluate the design, implementation, and
adequacy of data usage to improve student academic achievement in three cases at
the district, school, and elementary classroom levels. Three research questions will
be used to investigate the “data use” policies and strategies in these three cases:
1. What is the district design for using data regarding student performance,
and how is that design linked to the current and the emerging state
context for assessing student performance?
2. To what extent has the district design actually been implemented at the
district, school and individual teacher levels?
3. To what extent is the district design a good one?
The three cases had varying degrees of adequacy and implementation in their data
use practices. The cross-case analysis brought to light three distinct responses to
reform with varying degrees of effectiveness. Each case was at a different stage of
an evolutionary data reform process. Illuminating effective practices for utilizing
data to improve the delivery of instruction and student performance will be
valuable to multiple audiences: researchers, policymakers, administrators, teachers,
and community members.
CHAPTER 1
OVERVIEW OF THE STUDY
Introduction
The United States is in the throes of yet another wave of educational reforms
in response to low student achievement, especially amongst poor and minority
students. The purpose of these reforms is to improve public education and enable all
students to reach proficient levels of academic achievement. These reforms center
on standards-based reform and comprehensive school reform. In the State of
California, they are spearheaded by No Child Left Behind and the California
Public Schools Accountability Act of 1999. These are data-driven reforms that call
for a shift in design, implementation, and adequacy on the district, school, and
classroom levels. Teachers are being asked to use data as a key element in
instructional planning and practice.
There are a number of national and international tests that help paint a picture
of current student performance in the United States. The National Assessment of
Educational Progress (NAEP) results over the past 33 years show the following
trends: 1) More progress is being made in mathematics than in reading: math scores
were higher than in any previous assessment year for 9- and 13-year-olds, but the
difference was not statistically significant for 17-year-olds. 2) Nine-year-olds’
average reading scores in 2004 were higher than in any previous assessment year,
but 13- and 17-year-olds did not show a statistically significant difference in scores
from the 1999 assessment. 3) Some achievement gaps between Whites, Blacks, and
Hispanics are narrowing in reading and math, but a large gap still exists. The gap
narrowed more between 1971 and 1999 and has stayed flat from 1999 to 2004
(Nation’s Report Card, 2004).
International studies also shed light on the current performance of American
youth. The Trends in International Mathematics and Science Study (TIMSS) was
conducted in 2003 and found that the United States ranked slightly above average
with many nations clearly outperforming the United States; however, when the
TIMSS data are broken down into socioeconomic levels, middle and high-income
families were competitive with the highest achieving nations in the world. The 1995
TIMSS, 1999 TIMSS-R, and 2003 TIMSS demonstrated that race and
socioeconomic status greatly affected how a child performed on the assessment.
Indeed, there are many factors contributing to the achievement gap in the
United States. The United States has one of the highest rates of childhood poverty
among all the industrialized nations: 21.5%. Reeves (2000) also presents the strong
correlation between teacher quality and poverty; the more likely a student was to
participate in free and reduced lunch programs (a strong indicator of poverty), the
less likely a student was to have a teacher with an advanced degree. In addition, in
high poverty schools, only 67% of teachers of mathematics had a major or minor in
math on the secondary level compared to the national average of 80% (Berliner,
2001b). Darling-Hammond (2000) found a significant positive correlation between
state student scores on grade-eight NAEP mathematics and the percentage of
teachers with a degree in mathematics or math education.
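Findings like Darling-Hammond’s are, mechanically, state-level correlations between two measures. A minimal sketch of the computation in Python follows; the percentages and NAEP means below are invented for illustration and are not Darling-Hammond’s data:

    import math

    # Hypothetical state-level data (invented for illustration):
    # percent of teachers with a math or math-education degree, and
    # the state's mean grade-8 NAEP mathematics scale score.
    pct_math_degree = [55, 62, 70, 48, 81, 66]
    naep_math_mean = [268, 272, 279, 261, 287, 275]

    def pearson_r(xs, ys):
        """Pearson product-moment correlation between two equal-length lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    print(f"r = {pearson_r(pct_math_degree, naep_math_mean):.2f}")  # positive here

A positive r, as in Darling-Hammond’s analysis, means states with more subject-qualified teachers tended to post higher scores; it does not by itself establish causation.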
There has been a pervasive achievement gap between African American and
White students. The general consensus amongst researchers on graduation rates is
that approximately two-thirds of teenagers graduate from high school, though only 50% of
African Americans and Hispanics graduate (Thornburgh, 2006, p. 32). Different
researchers have different ideas as to exactly what is causing this achievement gap.
Berliner (2001) and Goodlad (1994) center their discussion around the effects of
poverty. Jencks and Phillips (1998), authors of The Black-White Test Score Gap,
think it goes beyond poverty and found that predominantly Black schools do spend
about as much per pupil as predominantly White schools; however, Black children
are more likely to be in larger classes, get less attention, and have less academically
skilled teachers than similar White children.
School reformers have long been interested in improving student
performance. Current school reform includes standards-based reform which has its
roots in Benjamin Bloom’s 1956 work, Taxonomy of Educational Objectives. This
reform movement gained momentum through a series of national educational reform
programs including A Nation at Risk (Reagan, 1983), Goals 2000 (former Bush and
Clinton), and now No Child Left Behind. The concept behind the standards-based
reform movement is setting clear standards that all students are supposed to meet.
Dovetailing with setting clear standards is the concept of accountability on the part
of all parties involved in the educational experience of a student. Critics of the rapid
waves of school reform feel that “high stakes testing narrows the whole enterprise of
education and could halt the development of truly significant improvements in
teaching and learning” (Lewis, 2002b, p. 179).
California has been involved in standards-based reform for a number of
years. Content standards were adopted across curricular areas and the STAR
(Standardized Testing and Reporting) Program was established in 1997. The
California Public Schools Accountability Act of 1999 created a system of rewards and
interventions for improving student performance. The Academic Performance Index
(API) is the cornerstone of this act, and its purpose is to measure the academic
performance and growth of schools. California is also instituting the California High
School Exit Exam, which is aligned to reading and math standards. The exam began
counting as a graduation requirement in the 2005-2006 school year.
The Education Commission of the States (2000a) reports that standards,
by themselves, are not likely to yield gains in student
achievement or any of the other improvements states are relying
on them to produce. Content standards are only one piece in a
puzzle that also encompasses performance standards,
assessment, accountability, professional development, teacher
education, resource allocation, and intervention and support for
struggling students and schools (p. 1).
These elements form the basis of comprehensive school reform in addition to a
schoolwide vision or integrated school plan. Comprehensive school reform
programs provide a systematic way to help schools improve with research-based
replicable strategies for whole school change (NCCSR).
Sherry P. King (1999) characterizes schools of the 21st century as systems of
continuous improvement; in essence, data-driven organizations. A good data-driven
organization uses multiple indicators to flesh out an adequate portrait of how a
student is achieving, using both summative and formative data. The teacher
becomes a very important player in the process of continual data analysis.
Unfortunately, often in the context of current standards-based reform movements,
teachers are being asked to use data at very sophisticated levels when they have often
not been sufficiently trained to do so. Goertz (2001) and Winkler (2002) show
evidence that teacher implementation of standards-based reform and data-driven
practices is mixed, especially among veteran teachers. Nevertheless, the pressure to
be comfortable using data to drive instructional practice is mounting for teachers.
Indeed, teacher accountability is driving a shift in the teaching
community; Caldwell (1999) describes it as a “new professionalism.” Fullan, Hill,
and Crévola (2006) are touting the reappearance of teachers in the educational
reform process as a “breakthrough” or “tipping point” that will make the final
difference for learners. Hill and Crévola (1999) and Behuniak (2002) describe the
importance of effective professional development for teachers to become
comfortable using data-driven teaching practices. This may include restructuring
schools and reorganizing time.
Blankenstein (2004) writes that “Teachers in high-performing schools don’t
view ‘data’ as abstract, out-of-context information that shows whether they’re
meeting their goals; they interact with data in a much more personal way, using data
of various kinds to make daily decisions about teaching” (p. 156). This interactive
relationship with data can be extremely effective at raising student achievement and
meeting the needs of all learners.
A landmark review by Black and Wiliam (1998) found that
focused efforts to improve formative assessment produced
learning gains greater than one-half standard deviation, which
would be equivalent to raising the score of an average student
from the 50th percentile to the 85th percentile. In other words,
formative assessment, effectively implemented, can do as much
or more to improve student achievement than any of the most
powerful instructional interventions, intensive reading
instruction, one-on-one tutoring, and the like (Shepard et al.,
2005, p. 277).
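As a point of reference for percentile claims of this kind: under the normal model usually assumed in such conversions, a gain of d standard deviations moves a student who starts at the mean to the Φ(d) quantile of the original distribution, where Φ is the standard normal cumulative distribution function:

\[
P = \Phi(d), \qquad \Phi(0.5) \approx 0.69, \qquad \Phi(1.0) \approx 0.84 .
\]

On this model, a move from the 50th to the 85th percentile corresponds to an effect closer to one full standard deviation than to one-half.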
In order for teachers to become adept at “expert teaching,” including using
effective data practices, much new learning must occur. “Assessment of student
learning is an integral part of the learning process. A generation ago, it was
considered sufficient if teachers knew how to give tests that matched learning
objectives.”
To be effective, teachers must be skillful in using various
assessment strategies and tools such as observation, student
conferences, portfolios, performance tasks, prior knowledge
assessments, rubrics, feedback, and student self-assessment.
More importantly, they must be able to use insights from
assessment to plan and revise instruction and to provide
feedback that explicitly helps students see how to improve
(Shepard et al., 2005, p. 276).
For school improvement to occur, professional development, collaboration,
pedagogical improvement, and student learning need to interact (Fullan, 2002).
“Collaboration that allows teachers to create a common ground for discussing
students’ needs becomes part of a school culture that supports an effective
intervention system” (Baker & Brown, 2006, p. 36). Research shows that data use
can facilitate identification and implementation of strategies that can lead to
improved academic progress (Schmoker, 1999; Bernhardt, 2000; Johnson, 2002;
Blankenstein, 2004; Fullan, Hill, & Crévola, 2006). There are many pathways to
reform, and the teacher is becoming an ever more important part of systemic school
reform since the teacher has the closest connection to the student and has the most
power and responsibility for reforming instructional practice. Where the
teacher falls in relation to schoolwide data reform, including its design, implementation,
and adequacy, is open to investigation.
Statement of the Problem
It has been established that there are challenges facing the
American educational system. Comprehensive and standards-based reform
movements were implemented in hopes of improving student achievement. Various
accountability measures through No Child Left Behind and the California Public
Schools Accountability Act demand advanced data usage. These data are used to
calculate district, school, and student progress, and the results are publicly displayed.
Publicized test scores and Academic Performance Index (API) rankings are used to judge the
effectiveness of districts, schools, and grade levels. Districts and school sites must
develop effective data use plans that meet the expected outcomes of the
accountability benchmarks.
Indeed, districts and schools are inundated with data. There is testing data,
criterion-referenced and norm-referenced, summative and formative data,
performance-based data, portfolios, presentations, etc. Yet, research reports that
very few districts, schools, and teachers really know how to effectively use this data.
This inability to use data effectively may stem from a lack of familiarity, training, or
emphasis on data use within a school culture. Researchers and practitioners are not
in agreement as to what is the most advantageous way to use data to improve the
delivery of instruction and student performance. It is difficult to create data use
policies without clear ideas about “best practices” in the area of data use. Additional
information is needed regarding what key elements belong in an effective data
use plan that is responsive to student, teacher, administrator, and district needs within
the context of both current and emerging accountability measures. The decision-
making processes that district and school sites utilize when designing a district data
use plan are also important pieces of the puzzle.
There appear to be two main types of data usage that warrant closer
inspection. There is standardized testing data that is primarily used to evaluate
student progress at the end of a school year (summative assessment), and there is in-
house data from periodic or ongoing assessments (formative) that can be more
responsive to the immediate instructional needs of students.
Purpose of the Study
The purpose of the study is to examine how districts, schools, and classrooms
design and implement data use at the elementary level. Three cases will help to evaluate
the adequacy and impact of these designs within the context of current and emerging
data use. This study will explore the effect district data use policies have on the local
school site, and in what ways the data improves instructional practices in the
classroom and raises student achievement. Three research questions guided the
cross-case analysis:
1. What is the district design for using data regarding student performance,
and how is that design linked to the current and the emerging state
context for assessing student performance?
2. To what extent has the district design actually been implemented at the
district, school and individual teacher level?
3. To what extent is the district design a good one?
Importance of the Study
This study will present patterns that can shed light on effective
and ineffective data practices across three elementary cases in California. This
information will have relevance to community members, classroom teachers, school
and non-school based administrators, policymakers, and educational researchers
especially in the area of accountability and reform. This study will be of particular
interest to the above listed parties in California, and may be of interest across the
United States because many of the same data use issues are applicable. Each
audience member will view the results through their own “lens,” or educational perspective.
The study results will center around four thematic areas that build upon the
current literature of data use. The four thematic areas will be: 1) policy
implementation issues, 2) implementation practices, 3) instrumental leadership: roles
and communication, and 4) instructional responsiveness to data: evolving reform.
Within the context of policy implementation issues, this study will share how three
groups of districts, schools, and teachers responded to state reform efforts in regard
to data usage. The study will examine what elements of a responsive data plan are
working and whether they were implemented as the legislation intended. In addition, it will
identify whether the districts and schools are tackling emerging data trends in state
reform legislation or whether they are staying with more current trends. Within
implementation practices, the study will explore training, technology, funding, and
time issues.
Findings will show how successful schools implement district designs for
data use. The study will highlight who is the most instrumental leader in the change
process to implement data reform measures and at what level this change occurred
(district, school, or classroom) in the three cases. Within instrumental leadership, the
study will explore change practices and conducive environmental/emotional
considerations needed for change to occur. Likewise, the study will present findings
on how data use impacts instructional practices and how important the role of the
teacher is in this reform process.
These four themes will add new information to the body of research
literature on data use that could impact future policy decisions, the conceptualization
of data change processes, perspectives on how to effectively implement a strong data
use plan, and the responsiveness to varied student needs thereby helping to close the
achievement gap.
Limitations
A cross-case analysis was conducted based on data collected from three
districts and three elementary schools located within these districts in Southern
California. This small sample may limit generalizability for other schools and
districts. The cross-case researcher was restricted to cases selected by primary
researchers and restricted to the data and information reported in their dissertations.
Case 2 did not report eight of 34 questions in the Teacher Questionnaire, which
created missing data for the cross-case analysis. The findings may also reflect
biases from the researchers (primary and secondary) through subjective
interpretation.
Delimitations
This is a descriptive cross-case study. The data collected is delimited to three
districts and three elementary schools within these districts in Southern California.
The sample was purposive and included schools that were selected based on four
criteria: 1) the district has a design in place for collecting and using data regarding
student performance; 2) the district must use multiple measures (including norm and
criterion-referenced testing) to gauge student achievement that are linked to the
current and emerging state context; 3) the district and school were identified as
engaging in “best practices” for data use and showed student achievement gains as a
result of these practices; and 4) the district must have a diverse student population with a
mid-range socioeconomic status. Interviews at each site included a district
administrator, site administrator, and six teacher participants. The small sample size
and purposive sampling limit generalizability to other districts and elementary
schools; however, a multi-case analysis does promote more generalizability than a
single case.
Assumptions
The basis for the selection of the particular schools and districts is the belief
that they are utilizing student performance data to make data-based decisions to increase
student achievement. It is also assumed that the data collected will be sufficient for
the purposes of this study, and that all participants will respond honestly in
interviews and on questionnaires.
Definitions of Terms
For the purpose of this study, the following terms are defined as follows:
Academic Performance Index (API): A statewide ranking of schools based on
their SAT-9 scores (in 2002); it ranges from 200 to 1000. Most schools have an API, a
state ranking (by elementary, middle, or high school), a ranking in comparison to
100 similar schools, and growth targets for the following year (Ed-data) (Schoolwise
Press).
Accountability: The idea that students, teachers or schools should be held
responsible for improving student achievement and should be rewarded or
punished based on whether or not they are successful (APS).
Alternative Assessments: Ways other than standardized tests to get information
about what students know and where they need help, such as oral reports, projects,
performances, experiments, and class participation (Ed Source) (Schoolwise Press).
Assessment: An instrument, tool, process or exhibition composed of a
systematic sampling of behavior for measuring a student’s competence, knowledge,
skills, or behavior.
Authentic Assessment: See performance assessment.
Benchmark: A standard or reference by which performance can be measured or
judged (APS).
California High School Exit Exam (CAHSEE): A state exam that California
public high school students, beginning with the class of 2006, must pass in order to
graduate. Its purpose is to test whether students have mastered the academic skills
necessary to succeed in the adult world. It is a pass-fail exam divided into two
sections: English/language arts (reading and writing) and mathematics. Sophomores,
juniors, and seniors can take the test. Once students pass a section of the test, they do
not have to take that section again (Ed Source) (Schoolwise Press).
California Standards Tests (CST): Tests in English/language arts, mathematics,
science, history/social science, and other topics comprising items that were
developed specifically to assess students' performance on California's content
standards. The CST is part of the STAR testing program. Students at different grade
levels take different tests, depending on the courses they are taking (Schoolwise
Press).
California Achievement Test (CAT/6): A standardized, nationally normed test
of basic skills that replaced the SAT-9 in 2003 as a state-required test for grades 2
through 11. Results are used to compare the scores of individual students and schools
with others in the area, across the state, and throughout the United States. The API is
calculated using this exam instead of the SAT-9 (Ed-data) (Schoolwise Press).
Comprehensive School Reform: An approach to school improvement that
involves adopting a design for organizing an entire school rather than using
numerous unrelated instructional programs. New American Schools, an organization
that promotes comprehensive school reform, sponsors several different designs, each
featuring challenging academic standards, strong professional development
programs, meaningful parental and community involvement, and a supportive school
environment (ASCD).
Conceptual Framework: Representation of the context of the study which
includes assumptions, key factors, variables and beliefs that support the study
(Maxwell, 1996). A conceptual framework is used in research to outline possible
courses of action or to present a preferred approach to a system analysis project. The
framework is built from a set of concepts linked to a planned or existing system of
methods, behaviors, functions, relationships, and objects. A conceptual framework
might, in computing terms, be thought of as a relational model (Wikipedia).
Content Standards: Academic standards that describe what students should
know and be able to do in core academic subjects at each grade level (APS).
Criterion-referenced test: An approach to testing in which an individual’s score
on a test is interpreted by comparing it to a prespecified standard of performance
(Gall, Borg, & Gall, 1996). These are tests that are written to reflect a state or local
school district curriculum. Student results are compared to a benchmark, rather than
to other students’ scores. Criterion-referenced tests may include essay questions
and/or require students to perform experiments (APS).
Curriculum: The courses of study offered by a school or district (APS).
Data: Factual information, especially information organized for analysis or
used to reason or make decisions
(http://education.yahoo.com/reference/dictionary/entry/data).
Data-based decision-making: Analyzing existing sources of information (class
and school attendance, grades, test scores) and other data (portfolios, surveys,
interviews) to make decisions about the school. The process involves organizing and
interpreting the data and creating action plans (ASCD).
Data use: This term will be used throughout this study to describe the use of
data or data usage.
Differentiated Instruction: This is also referred to as "individualized" or
"customized" instruction. The curriculum offers several different learning
experiences within one lesson to meet students' varied needs or learning styles. For
example, different teaching methods for students with learning disabilities
(Schoolwise Press).
Disaggregated Data: The presentation of data broken into segments of the
student population instead of the entire enrollment. Typical segments include
students who are economically disadvantaged, from racial or ethnic minority groups,
have disabilities, or have limited English fluency. Disaggregated data allows parents
and teachers to see how each student group is performing in a school (Ed Source)
(Schoolwise Press).
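As a minimal sketch of what disaggregation involves computationally (the subgroup labels and scores below are invented for illustration):

    from collections import defaultdict

    # Hypothetical student records: (subgroup, scale score).
    records = [
        ("econ_disadvantaged", 312), ("econ_disadvantaged", 328),
        ("english_learner", 301), ("english_learner", 335),
        ("all_other", 356), ("all_other", 349),
    ]

    totals = defaultdict(lambda: [0, 0])  # subgroup -> [sum of scores, count]
    for subgroup, score in records:
        totals[subgroup][0] += score
        totals[subgroup][1] += 1

    for subgroup, (total, count) in sorted(totals.items()):
        print(f"{subgroup}: mean score {total / count:.1f} (n={count})")

A schoolwide average computed over all six records would hide the gaps that the per-subgroup means expose.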
Formative assessment: An assessment teachers use to see what students know
and understand about particular subjects. Once they can see students’ weaknesses in
particular areas, teachers can adjust their teaching practices to improve student
achievement (APS). By contrast, an examination used primarily to document
students' achievement at the end of a unit or course is considered a summative test
(ASCD).
High-stakes test: A test that results in some kind of consequence for those who
score low, some kind of reward for those who score high, or both. For example,
students who pass a high school exit exam might receive a diploma, while students
who fail do not (APS).
II/USP (Immediate intervention/underperforming schools program): The
Immediate Intervention/Underperforming Schools Program was designed to
encourage a schoolwide improvement program in schools with very low test scores
and to provide assistance and intervention. Schools in the lowest five deciles of API
scores were eligible if they did not meet their API targets. It was replaced in 2002
with HPSGP, a similar program (Ed-data) (Schoolwise Press).
Likert scale: A type of psychometric scale often used in questionnaires; it is
the most widely used scale in survey research. It asks respondents to specify their
level of agreement with each of a list of statements (Wikipedia).
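The “average responses” reported in the tables of Chapter 4 are typically produced by coding each Likert response numerically (e.g., 1 = strongly disagree through 5 = strongly agree) and averaging by item. A minimal sketch with invented responses:

    # Invented questionnaire data: each inner list holds one respondent's
    # 1-5 Likert codes for three items (1 = strongly disagree ... 5 = strongly agree).
    responses = [
        [4, 5, 2],
        [3, 4, 2],
        [5, 5, 3],
        [4, 3, 1],
    ]

    n_items = len(responses[0])
    item_means = [sum(r[i] for r in responses) / len(responses) for i in range(n_items)]
    for i, mean in enumerate(item_means, start=1):
        print(f"Item {i}: average response {mean:.2f}")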
Multiple measures: Relying on more than one indicator to measure a student’s
academic strengths and weaknesses.
NCLB (No Child Left Behind): Signed into law by President Bush in 2002,
No Child Left Behind sets performance guidelines for all schools and also stipulates
what must be included in accountability reports to parents. It mandates annual
student testing, includes guidelines for underperforming schools, and requires states
to train all teachers and assistants to be “highly qualified” (Schoolwise Press).
Norm-referenced assessment: An approach to testing in which an individual’s
score on a test is interpreted by comparing it to the scores earned by a norming group
(Gall, Borg, & Gall, 1996). To set the norm, a representative sample of students
(e.g., fourth graders) takes the test. Their scores are then used as a benchmark for
fourth-grade students who take the test later. Norm-referenced assessments are
generally multiple-choice tests, and they are designed so that student scores will fit
on a bell-shaped curve. This means that half of the students who take the test will fall
below the 50% mark (APS).
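Computationally, norm-referencing amounts to locating a new score within the distribution of scores from the norming sample. A minimal sketch, using an invented norming sample:

    # Invented norming sample of raw scores from the representative norming group.
    norming_sample = [12, 15, 18, 18, 21, 23, 25, 27, 30, 33]

    def percentile_rank(score, norm):
        """Percent of the norming sample scoring strictly below the given score."""
        below = sum(1 for s in norm if s < score)
        return 100.0 * below / len(norm)

    print(percentile_rank(24, norming_sample))  # -> 60.0: above 6 of the 10 norm scores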
Performance assessment: Performance assessments are sometimes referred to
as “alternative” or “authentic” assessments. A performance assessment is the
opposite of a multiple-choice test. It requires students to give their own responses to
a question rather than choose an answer from a set of possible answers provided for them.
Examples of performance assessments include essay questions, portfolios and
demonstrations (APS).
Performance standards: This type of standard describes how well or at what
level students should be expected to master content standards. For example, while
content standards may say that all eighth graders should learn Algebra I,
performance standards would say what level of mastery of Algebra I is necessary for
promotion to the next grade (or for achievement with honors) (APS).
Portfolio assessment: A student portfolio is a collection of various samples of
that student’s work. It can include writing samples, examples of how the student
solved mathematical problems, results of scientific experiments, etc. Teachers
evaluate the portfolio based on established content and performance standards
(APS).
Professional development: Also known as staff development, this term refers
to experiences, such as attending conferences and workshops, that help teachers and
administrators build knowledge and skills (ASCD).
SAT-9 (Stanford-9): A basic-skills, nationally normed, multiple-choice test
given to virtually all California school students in grades 2 through 11.
Standard: A degree or level of achievement. The “standards movement” began
as an informal effort that grew out of a concern that American students were not
learning enough. The U.S. Congress adopted this concept more formally with its
1994 reauthorization of the federal Title I program (APS).
Standards-based reform: An effort to reach consensus on and establish
standards for what students need to know and be able to do at each grade or
developmental level. Once standards are established, reform efforts focus on making
necessary changes in curricula, methods of instruction and assessment (APS).
Standardized test: A test for which procedures have been developed to ensure
consistency in administration and scoring across all testing situations (Gall, Borg, &
Gall, 1996). This is a test that is in the same format for all takers. It often relies
heavily or exclusively on multiple-choice questions. The testing conditions –
including instructions, time limits and scoring methods – are the same for all
students, though sometimes accommodations on time limits and instructions are
made for disabled students (APS).
Summative assessment: A test given to evaluate and document what students
have learned. The term is used to distinguish such tests from formative tests, which
are used primarily to diagnose what students have learned in order to plan further
instruction (ASCD).
Organization of the Study
This study will be organized into five chapters. Chapter 1 is the overview of
the study and includes an introduction, statement of the problem, purpose of the
study, importance of the study, limitations, delimitations, assumptions, and
definitions. Chapter 2 will be a review of pertinent literature associated with the
topic of data use to increase student achievement. Chapter 3 is the methodology
section and includes sample and population, instrumentation, data collection, and
data analysis. Chapter 4 is the analysis and discussion of each research question and
findings. Chapter 5 is a summary of the findings, conclusions and implications of
the study and any recommendations for future research.
CHAPTER 2
LITERATURE REVIEW
Status of Student Performance in the United States
Elmore (2002) writes:
The pathology of American schools is that they know how to
change. They know how to change promiscuously and at the
drop of a hat. What schools do not know how to do is to
improve, to engage in sustained and continuous progress toward
a performance goal over time. So the task is to develop practice
around the notion of improvement (p. 2).
Elmore goes on to characterize American education as functioning “like a
profession without a practice.” He would like to see education create a community
of practice, which becomes a profession (Video Lecture, 2006). Newman et al.
(2001) echo his sentiment and write:
In comparison to other professions where research has
produced highly reliable methods of diagnosis and intervention,
such as medicine or engineering, educators face substantial
uncertainty about how to proceed…. Incoherence is aggravated
by unaligned district and state policies and by rapid changes
among them (p. 41).
The reason American education frequently changes is the desire to improve
academic performance for all learners. Various statistics and studies make a case for
much-needed improvement. The federal government and states have implemented
many reform efforts, including standards-based reform, comprehensive reform, No
Child Left Behind, closing the achievement gap, etc. At the heart of the current
wave of reform movements is data and its many permutations and uses in the reform
process. How data reform is communicated from the State to the district and finally
to the teachers in the trenches in the classroom is an important question, and one that will be
explored in this review of literature.
Assessment Results
NAEP
The National Assessment of Educational Progress (NAEP) tests, called
“America’s report card,” are administered by the U.S. Department of Education and are
mandated by the U.S. Congress. NAEP has been conducting assessments with a
representative sample of the nation’s students at grades 4, 8, and 12 in core
academic subjects including mathematics, reading, science, history, and other subject
areas since 1969. The NAEP state level assessments began in 1990 and use a
representative sample of 10,000 students per state in grades four and eight (NCES,
2002). NAEP data in reading and mathematics identify state trends in the student
achievement gap between Whites and minorities (Black and Hispanic) and top and
bottom quartiles. Looking at the NAEP long-term trend assessment (conducted from
1971-2004), one can see the performance of a representative sample of America’s
students over a period of 33 years in reading and math at ages 9, 13, and 17.
The following trends were found: 1) More progress is being made in mathematics than
in reading: math scores were higher than in any previous assessment year for 9- and
13-year-olds, but the difference was not statistically significant for 17-year-olds. 2)
Nine-year-olds’ average reading scores in 2004 were higher than in any previous
assessment year, but 13- and 17-year-olds did not show a statistically significant
difference in scores from the 1999 assessment. 3) Some achievement gaps between
Whites, Blacks, and Hispanics are narrowing in reading and math, but a large gap
still exists. The gap narrowed more between 1971 and 1999 and has stayed flat from
1999 to 2004 (Nation’s Report Card, 2004).
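When NAEP reports that a difference “was not statistically significant,” it is because the gap between the two estimates is small relative to their combined sampling error. A minimal sketch of the usual z-test, with invented means and standard errors (not actual NAEP values):

    import math

    # Invented example: mean scale scores and standard errors for two assessment years.
    mean_2004, se_2004 = 285.0, 1.2
    mean_1999, se_1999 = 283.2, 1.1

    z = (mean_2004 - mean_1999) / math.sqrt(se_2004**2 + se_1999**2)
    print(f"z = {z:.2f}")  # here z ~ 1.11; |z| < 1.96 -> not significant at the 5% level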
In the State of California, results from the 2003 National Assessment of
Educational Progress (NAEP) indicated that only 22% of California fourth graders and
21% of eighth graders scored at or above the proficient level in reading; comparable
figures for mathematics were 28% for fourth graders and 22% for eighth graders
(National Center for Education Statistics, NCES, 2005). This was, however, an improvement from
the 1988 and 1994 reading assessments. Hispanic, Black, Asian, and economically
disadvantaged students showed the largest score gains at the fourth-grade level. The
eighth-grade results were not as encouraging. The scores of White students and English
learners improved less than these groups improved nationally. Hispanic student scores
at the eighth-grade level remained unchanged from the 1998 assessment. Asian, Black,
and economically disadvantaged students did show improvements slightly greater than
the nation as a whole (NCES, 2005).
TIMSS
In 1995, the Third International Mathematics and Science Study (TIMSS)
was sponsored by the International Association for the Evaluation of Educational
Achievement. The purpose of this study was to measure student learning on a
specific set of objectives in mathematics and science in the 41 participating
countries, and to determine the key factors in explaining differences in student
performance. A representative national sample of schools and classrooms was
selected at each of the three grade levels of the study. The study, first released to the
public in 1996, was a 41-nation study of fourth, eighth, and twelfth graders. In the
study, the United States ranked about average; Singapore and other Asian nations
clearly outscored the United States. Fourth graders performed well in both
mathematics and science in comparison to students in other nations, eighth graders
performed near the international average in both math and science, and twelfth
graders scored below the international average in mathematics and science general
knowledge as well as in physics and advanced math (National Center for Educational
Statistics, 2000, p. 1).
Most of the press reports released to the general public divided the results
into “three statistically homogenous groups of nations: those ahead of us, those tied
with us, and those behind us” (Berliner & Biddle, 1998, p. 4). In numerous accounts,
the United States was portrayed as failing to meet the needs of its students. It is
true that, overall, the U.S. did not do well, but the data must be disaggregated to find the
real problem areas. “Averages mask the truth” found in the TIMSS information
(Berliner, 2001b, p. 2). What was not widely publicized was that the United States
was in good company with equally average nations including Thailand, Israel,
Germany, New Zealand, England, Norway, Denmark, Scotland, Spain, and Iceland
in mathematics, and England, Flemish Belgium, Australia, Sweden, Germany,
Canada, Norway, New Zealand, Thailand, Israel, Hong Kong, Switzerland, the
Russian Federation, and Scotland in science (Berliner & Biddle, 1998, p. 5).
Disaggregating TIMSS
When the TIMSS data are broken down into socioeconomic levels, students from
middle- and high-income families were competitive with the highest-achieving nations in the
world. This can especially be seen in science where there is no significant difference
between U.S. and Japanese students at the 95th percentile (Congressional Digest,
1997a, p. 263). Berliner and Biddle (1998) add another point to this discussion:
Moreover, and a bit amusing, was that Asian American public
school students scored above the average of Asian students in
the Asian nations that participated in the study…So we know
that our public system of education, as diverse and incoherent as
it is, can turn out world-class young mathematicians if you are
raised in certain states, of a certain income level, and of a
certain ethnicity (p. 7).
Wealth and social conditions were the largest determinants of below-average U.S.
performance on the TIMSS. To better illustrate the great effects
wealth and poverty have on TIMSS results, one can look to the First in the World
Consortium. This consortium was composed of a group of 20 public school districts,
made up of 32 elementary schools, 17 middle schools, and six high schools from the
North Shore of Chicago. They entered the TIMSS study as a separate nation. These
schools were predominantly White, middle- and upper-middle-class schools, and they
scored very highly on the TIMSS: fifth in the world in math and second in the
world in science (Berliner, 1993). The TIMSS data appear to describe sectors of
America’s educational system as functioning at the top of international comparisons
and other sectors sorely in need of fixing.
TIMSS-R
In 2000, the results were released from the TIMSS-R (Third International
Mathematics and Science Study-Repeat). This was a repeat assessment from four years
earlier. Eighth graders in the United States “exceeded the international average” among
the 38 nations who took the tests; they ranked 19th in mathematics and 18th in science
(National Center for Educational Statistics, 2000, p. 2). The TIMSS-R provided an
opportunity to compare the achievement of U.S. eighth graders in the original TIMSS
with that of eighth graders four years later. This comparison showed remarkably stable
performance. Between 1995 and 1999, there was no change in eighth-grade
mathematics or science achievement in the U.S. (National Center for Educational
Statistics, 2000). During this assessment, a difference in gender performance appeared
in science, where eighth-grade boys outperformed eighth-grade girls (National Center for
Educational Statistics, 2000). The original TIMSS results found the United States to be
one of 33 countries where there was no statistically significant difference between the
performance of eighth-grade boys and girls in mathematics; in science, the U.S. was one
of 11 nations with no statistically significant difference (Congressional Digest, 1997a).
Disaggregating TIMSS-R
Once again, when the TIMSS-R science data are disaggregated by race, White
eighth graders scored fourth in the world. In comparison, African Americans and
Hispanics are “doing the equivalent of third world country scores” (Berliner, 2001a, p.
2). Berliner and others are quick to point out that this is not a race issue, but a poverty
issue. “[C]hildren from poor schools are doing very poorly, and children from the
wealthier schools are very competitive in the international marketplace…particularly in
science” (p. 2). A disparity in student achievement can also be seen when each
state is looked at as a separate country. Berliner (2001b) says that certain states would
be considered top countries, such as Iowa and Nebraska in mathematics. In fact, only
one nation, Singapore, outscored 14 U.S. states (including Connecticut and Iowa) in
mathematics. On the flip side, educational systems like Louisiana, Mississippi,
and the District of Columbia are scoring near the bottom on the TIMSS tests in
mathematics. Berliner (2001a) writes:
When we break out their data, when we disaggregate and pull
out their data and compare it to the international data we realize
that the United States is running separate educational systems.
It’s running one system for basically White kids, particularly
wealthy White kids. And they’re getting very fine educations,
highly competitive. And we’re running another system for poor
kids and kids in poor states. They are not getting the
educational programs they need to be highly competitive (p. 3).
Bracey (2002) echoes this sentiment and defines the “dual system”: “One for
poor and minority students, and the other is for the rest of us” (p. 30). The United States
has the highest rate of childhood poverty among all the industrialized nations, at 21.5%;
this statistic cannot be left out of the student performance discussions.
Trends in International Mathematics and Science Study, 2003
In 2003, TIMSS (with a new name) offered an international trend comparison in
math and science at grades 4 and 8. Twenty-five nations collected data on 4th graders
and 45 nations collected data on 8th graders. In 2003, U.S. 4th graders scored 518 in
math, which exceeded the international average of 495; the U.S. was 12th out of 25
countries. U.S. eighth graders scored 504 in math, also exceeding the international
average of 467; the U.S. was 10th out of 45 countries.
Disaggregating the data from NAEP, TIMSS, and TIMSS-R has brought
forth examples of disparity in student achievement based on socioeconomic levels,
race, and state of residency. Looking in greater detail at these areas, one finds that
state policies and context measures are related to scores. Blank and Wilson (2001)
analyzed three state context measures that are commonly cited as reasons for
differences in achievement by state: amount of money spent on education, students
living in poverty, and adult education level. They found that states with higher
NAEP mathematics scores in 1996 had better conditions for education in their
schools. “In comparison to low-performing states, the highest-scoring states
(Minnesota, North Dakota, Montana, Connecticut) had 10% to 20% fewer children
living in poverty, 10% to 15% more adults with high school diplomas, and $1,000 to
$1,500 higher expenditures per pupil” (p. 28). These same inequities were also
found in the 2000 NAEP.
Equity Issues in Student Performance
There are many profound equity issues facing the United States education system
that NAEP and TIMSS have only begun to illuminate. These issues cannot be left out of
the discussion regarding any school reform measure. These issues include drop-out
rates, poverty, teacher quality, and the achievement gap between White and minority
students.
Drop-out Rates
The general consensus amongst researchers on graduation rates is
that approximately two-thirds of teenagers graduate from high school, though only 50% of
African Americans and Hispanics graduate (Thornburgh, 2006, p. 32). Kozol (1995)
describes the drop-out rate at a “hypersegregated” high school in New York City: the
graduating class consists of 200 students in a school of approximately 3,200.
“Almost a thousand students out of these 3,200 are officially ‘discharged’ for poor
attendance or a number of other reasons, including violent behavior, every year” (p.
2). A school official Kozol interviews asks what would happen “‘if almost a third’ of
a middle-class White school ‘were to disappear between September and June?’” (p.
3). A 1993 Harvard University study finds New York’s public schools to be the most
segregated in the nation for Hispanic children. Similar patterns of segregation can be
found around the country where there “are simply no White children, or not more
than token numbers” (Kozol, 1995, p. 3). Professor Gary Orfield, one of the authors
of the Harvard study said, “The civil rights momentum of the 1960s is dead in the
water and the ship is floating backward” (Kozol, 1995, p. 3).
Indeed, Goodlad (1994) writes about the need for school reform to occur
simultaneously with social reform movements. Berliner (1993) echoes this
and adds, “schools may be failing, but the causes for that are usually outside the
school building. Those causes are embedded in the social inequities prevalent in our
society” (p. 23). The issue of equity is one of great importance when discussing the
status of student achievement, and only when all students truly have equal access to
knowledge will “schools provide for the future of the democracy by fulfilling the
obligation, articulated by Thomas Jefferson, of each generation to render its youths
fit to govern the next” (The College Board, 1994, p. 2).
Poverty
Since 1968, the Panel Study of Income Dynamics (PSID), sponsored by the
University of Michigan’s Institute for Social Research, has been tracking the annual
income conditions and other circumstances of about 6,000 nationally representative
families. Martin Orland (1994) analyzed the PSID data and came to the following
conclusions: 1) Students experiencing long-term poverty or who attend schools with
high poverty concentrations are much more likely to have educational difficulties
than students from families whose duration in poverty is short or who attend schools
with low poverty rates; 2) Students who experience long-term poverty also attend
schools with high poverty conditions. Black students, students from rural areas, and
those who live in the South would appear from the data to be particularly likely to
experience these multiple disadvantaged conditions; 3) There are independent school
factors that depress the academic achievement levels of all students attending high-
poverty schools. Thus, initiatives aimed at enhancing the quality of these schools
hold promise for enhancing student achievement (pp. 54-55).
Reeves (2000) agrees that there is a clear cause and effect relationship
between poverty and academic achievement in students. He wonders, however,
whether other variables besides demographics might be affecting student achievement. He
finds that when researchers consider other variables besides demographics, such as
whether students have teachers who are certified or not, the power of demographic
variables as predictors of academic success is dramatically reduced. The link
between high poverty and low achievement may not be so direct. Poverty may in
fact be a confounding variable. There is a strong correlation between teacher quality
and poverty; the more likely a student was to participate in free and reduced lunch
programs (a strong indicator of poverty), the less likely a student was to have a
teacher with an advanced degree (Reeves, 2000). The 2005 Status of the Teaching
Profession report from the State of California indicated that during the 2004-2005
school year, one out of every five teachers (21%) in the lowest-achieving schools
was “underprepared” and/or a novice, compared to about one in ten teachers (11%) in the
highest-achieving schools (Esch et al., 2005, p. xi).
Credentialed Teachers
Another striking issue in American education is the lack of credentialed
teachers in some of the most needy areas; in high poverty schools, only 67% of
teachers of mathematics had a major or minor in math on the secondary level,
compared to the national average of 80% (Berliner, 2001b, p. 3). Weiss (1994),
Blank and Wilson (2001), and Berliner (2001b) all raise concern regarding unequal
dispersion of subject-matter qualified and credentialed teachers between high
poverty and low poverty schools. Darling-Hammond (2000) conducted an analysis
of NAEP mathematics assessment results from 1996 to determine the effects of
teacher differences. She found a significant positive correlation between state
student scores on grade-eight NAEP mathematics and the percentage of teachers
with a degree in mathematics or math education. Attracting and retaining teachers
with an area of expertise in math and science can be very difficult due to low pay and
the status of teachers in the United States. Lucas (1997) closely examined the
history of teachers and found that teaching has never been a particularly attractive
job; it has always had “low status” and low pay associated with it. One of the only
perks of the job is an easier teaching assignment. Darling-Hammond (1994) writes:
The result of an incentive structure that makes easier
assignments virtually the only reward for seniority and skill is
that teachers pay their dues teaching the most challenging
students in the tougher schools, then transfer out to schools
(and, eventually, districts) where the working conditions are
better and the students are easier to teach because they have
been taught at home (p. 244).
Berliner (2001a) suggests moving the best teachers to the neediest students. In
Australia, the teachers work for the state rather than the district. In contrast, teacher
seniority in the United States usually allows teachers some flexibility in choosing where
they teach, and they usually gravitate to the easiest kids to teach. This leaves the children
most in need of skillful and sensitive teaching the least likely to
receive it. Poor Black and Hispanic students, for example, are
concentrated in central cities, where teacher shortages tend to be
at least three times greater than in rural areas or suburbs
(Darling-Hammond, 1994, p. 237).
Darling-Hammond (1994) uses the Rochester, New York, School District as a model for
increasing teacher professionalization and dramatically increasing salaries to retain and
draw teachers to inner-city schools. In this case, the higher salaries attracted teachers
from suburban districts as well as professors from the
University of Rochester.
Teacher Salaries
Berliner (1993) writes: “Manski (1987) found that higher salaries attract
teaching candidates with higher academic ability, and Murnane and Olsen (1989)
found that teachers’ salaries affect the accumulation of experience in the profession”
(p. 12). America does spend huge amounts of money on education. The bulk of the
money, however, is spent on higher education. On average the U.S. has two to three
times more people per 100,000 of the population enrolled in higher education than
most other countries (Berliner, 1993). The amount of money spent on K-12
education is markedly less than the amount spent on higher education: “We find that
out of sixteen industrialized nations, thirteen of them spent a greater percent of per-
capita income on K-12 education than we do” (p. 14). Berliner feels that “we are
among the most cost-efficient nations in the world, with an amazingly high level of
productivity for the comparatively low level of investment that our society makes in
K-12 education” (p. 15). The United States has on average 180 school days over the
course of a year, based on an old agrarian model. Canada has 190, Korea 225, and
Japan 240. More school days, like higher pay for teachers, cost a great deal more
money, but if we are truly interested in competing internationally, increasing the
number of school days is a logical solution. Berliner (2001a) suggests 200 school
days for students and 210 for teachers for professional development time.
Achievement Gap
The National Assessment of Educational Progress consistently reports that
the average 8th grade minority student performs at about the level of the average 4th
grade White student (NCES, 2003, as reported in Barton, 2006). What is creating this
achievement gap between White students and students of color? Barton (2006) did a
meta-analysis of the literature on the achievement gap. He identified 14 factors that
correlated with achievement. The factors that correlate with student achievement
before and beyond school are: birth weight, lead poisoning, hunger and nutrition,
reading to young children, television watching, parent availability, student mobility,
and parent participation. The factors that correlate with student achievement in
school are: rigor of curriculum, teacher experience and
attendance, teacher preparation, class size, technology-assisted instruction, and
school safety.
Other researchers have similar ideas about what is creating the achievement
gap. For Berliner (2001a) and Goodlad (1994) it boils down to an issue of poverty.
For Jencks and Phillips (1998), the achievement gap cannot simply be explained
away by poverty. Jencks and Phillips (1998) make the assertion that test scores of
Black children lag behind those of White children from equally affluent families.
They also assert:
Despite glaring economic inequalities between a few rich
suburbs and nearby central cities, the average Black child and
the average White child now live in school districts that spend
almost exactly the same amount per pupil. Black and White
schools also have the same average number of teachers per
pupil, the same pay scales, and teachers with almost the same
amount of formal education and teaching experience (p. 4).
They conclude that the most important resource difference between Black and
White schools seems to be that both Black and White teachers in Black schools have
lower test scores than their counterparts in White schools; this brings us back full
circle to the argument of teacher quality.
Cultural Arguments for the Achievement Gap
There are many cultural arguments that attempt to explain the achievement gap
between Black and White students for which John Ogbu (1994), a Nigerian
anthropologist, is probably the best known. He suggests that caste-like minorities
throughout the world tend to do poorly in school, even when they are visually
indistinguishable from the majority. Ogbu goes on to make the argument that because of
their caste-like status, Blacks developed an “oppositional” culture that equated academic
success with “acting White.” Immigrants of color, often from low-socioeconomic
backgrounds, are often academically successful because they have not grown up being
stigmatized by a caste-like status (Ogbu, 1994). Claude Steele (1999a) argues that
people of all races avoid situations in which they expect others to have negative
stereotypes about them, even when they know that the stereotype does not apply.
According to Steele, many Black students “disidentify” with school because constructing
a personal identity based on academic competence entails a commitment to dealing with
such stereotypes on a daily basis. Steele (1999a) has shown, in a series of experiments
with Stanford students, that merely asking test-takers to report their race, or telling them
that a test measures intellectual ability, lowers Black students’ scores.
Closing the Achievement Gap
Whatever the factors are that lead up to the achievement gap, they cannot be
ignored in the current and emerging context of school reform. Jencks and Phillips (1998)
point out that not a great deal of research has been conducted on the achievement
gap among U.S. students, including the factors that may be causing it: schools’
racial mix, class size, teachers’ test scores, ability grouping, and other policies. They are
hesitant to suggest a single, definitive reason for the achievement gap, though they focus on
equalizing expenditures between predominantly White and Black schools, selecting
teachers with comparable qualifications and test scores at high-risk schools, and
focusing on early childhood cognitive development. Whatever the
reason behind this gap, current educational reform is working to ameliorate some of
its possible causes. Manning and Kovach (2003) summarize various
research on what students need to close the achievement gap: 1) Access to challenging
curriculum and instruction, 2) extra supports, 3) high quality teachers, and 4) high
expectations.
The Nature of Comprehensive School Reform
Setting the Stage for School Reform
One often hears the term “crisis” in public education. A number of important
reform movements have arisen in response to this “crisis” including standards-based
reform and comprehensive school reform. Perhaps Pedro Noguera more aptly renamed
the “crisis” in public education as a “chronic condition” because there seem to be
permanent problems in public education that have not surfaced recently and show no
indicators of leaving quickly (P. A. Noguera, lecture at University of Southern
California, January 24, 2001). The current reform movements have stiff competition to
fix this chronic condition in American education. There have been many proposed
vehicles for educational reform on both the national and local levels including charter
schools, school choice, standards-based instruction, accountability, and state takeovers
of entire districts, just to name a few. The next sections explore the federal influence
on reform, including No Child Left Behind, the standards-based reform movement,
and comprehensive reform, and end with an exploration of reform measures in the State of
California.
Preparing Students for the 21st Century
Educational reformers are not only concerned with school equity, but with
effectively educating all the nation’s students to be prepared to meet the ever-growing
demands of a 21st century job market. There is great concern about preparing
students for the future. Repeated many times throughout the press is the charge that
our schools are not supplying us with enough mathematicians and scientists to
maintain our competitiveness in the world markets. To be competitive in a world
market, we want students to experience academic success. Larson, Guidera, and
Smith (1998) define the basic skills needed in this competitive world:
Individuals need the academic knowledge and skills to equip
them to succeed in today’s ever-changing economy. It should
come as no surprise then, that almost 90 percent of new jobs
require more than a high school level of literacy and math skills.
Students who drop out of school are often unable to find employment.
Only 4 in 10 dropouts aged 16-19 are employed, as are fewer than 6 in 10 dropouts
aged 20-24 (Barton, 2006, p. 16). “Close to half of all 17-year-olds cannot read or
do math at the level needed to get a job in a modern automobile plant…they lack the
skills to earn a middle-class paycheck in today’s economy.” In practical terms, about
half the nation’s children are being educated for jobs that pay $8.00 an hour or less
(Murnane & Levy, 1996, p. 35). Tucker and Codding (1999) add that more than a
million youngsters graduate each year from U.S. schools with only an 8th grade level of
literacy, which is not sufficient to do college-level work.
Federal Influence on School Reform
In 1983, a report entitled A Nation at Risk, from the National Commission
on Excellence in Education, proclaimed, “the educational foundations of our society
are being eroded by a rising tide of mediocrity that threatens our very future as a
Nation” (Congressional Digest, 1997b, p. 257). This report signaled a rush of school
reform beginning at the federal level of government and working its way down to the
states and in some cases vice-versa: top-down and bottom-up reform efforts. In
1989, President George H. W. Bush and the Nation’s governors endorsed a set of
national goals to provide a framework for improving and restructuring education.
Though the America 2000 Excellence in Education Act did not pass, President Bill
Clinton proposed the Goals 2000: Educate America Act, which was signed into law; it
codified the national education goals adopted in 1989, established national
performance standards, and provided financial assistance for state reforms. Even
before Goals 2000 was signed into law, national and state standards in the core
subjects were already under development. The National Council of Teachers of
Mathematics (NCTM) published national mathematics standards in 1989
(Congressional Digest, 1997c, p. 259). Goals 2000 legislation and state and local
implementation “concentrate on comprehensive change, school improvement, and
achievement for all children” (Goals 2000, 1998, p. 1).
No Child Left Behind Act
Berliner (1993), Education Commission of the States (1996), Thomas and
Bainbridge (2001), Reeves (2000), and many others suggest focusing resources on the
neediest areas in America. George W. Bush’s latest plan picks up on this vein of
thinking. In this age of accountability and reform, the federal government passed an act
called No Child Left Behind (NCLB). The act is composed of three main components:
money, accountability and standards. Schools are held accountable for standards and for
monetary expenditures, thus ensuring the government gets the most from its
investment. This matters, since there has been a 41% increase in federal funding
since fiscal year 2000, bringing the total amount set forth by the U.S. government to
$15 billion per year, up from $10.4 billion (U.S. Department of Education, 2002). The NCLB
act aims to narrow the achievement gap between the rich and the poor. Title I
serves 14 million disadvantaged children. The key components of NCLB include
school districts measuring student performance annually and making “adequate yearly
progress” (AYP) towards state standards, both overall and for subgroups of students.
Title I schools that are failing to make strides toward AYP receive new resources
and extra help. Parents are also given the option to transfer their child to a better-
performing public school or to receive extra support, such as tutoring, for their child. The
President has also initiated the new $900 million Reading First Program and the Good
Start, Grow Smart early childhood initiative. The Reading First Program provides
funding in the form of federal grants for five components of a reading program:
phonemic awareness, phonics, fluency, vocabulary, and text comprehension. These are
based on the conclusions of the National Reading Panel (Lewis, 2002).
NCLB Accountability
At the heart of No Child Left Behind are four basic education reform principles:
stronger accountability for results, increased flexibility and local control, expanded
options for parents, and an emphasis on teaching methods that have been proven to work
(U.S. Department of Education, 2002). At the heart of NCLB legislation is increased
accountability in education:
• States create their own standards for what a child should know and learn for all
grades. Math and reading standards will be created immediately, and standards
must also be developed for science by the 2005-2006 school year.
• Once standards are in place, states must test every student’s progress toward
those standards by using tests that are aligned with the standards. Beginning in
the 2002-2003 school year, schools must administer tests in each of three grade
spans: grades 3-5, grades 6-9, and grades 10-12; beginning in the 2005-2006
school year, tests must be administered every year in grades 3 through 8 in
math and reading. Science achievement will also be added in the 2007-2008
school year.
• Each state, school district, and school will be expected to make adequate
yearly progress toward meeting state standards. This progress will be
measured for all students and sorted by race, socio-economic status,
disabilities, and English proficiency.
• School and district performance will be publicly reported in district and state
report cards.
• If the district or school continually fails to make adequate progress toward
the standards, then it will be held accountable (U.S. Department of
Education, 2002).
Qualified Teachers under NCLB
The federal No Child Left Behind Act requires schools to have qualified teachers
in every classroom receiving Title I funding starting this year. By 2005-2006, every
classroom must have a qualified teacher. In the State of California, the California
Commission on Teacher Credentialing is moving to meet new federal guidelines on
having “qualified” teachers in every classroom. The federal Department of Education
has defined “qualified” as a credentialed teacher. More than 20,000 out of California’s
300,000 teachers are working on emergency permits as of 2005. The commission now
plans on reclassifying thousands of teachers who hold emergency permits and
intensifying training efforts to get everyone a credential. Esch et al. (2005) point out that
proportionally, the number of uncredentialed math, science, and special education
teachers is even greater than that of uncredentialed teachers in general in the State of
California; nearly 49% of first-year special education teachers are underprepared (p. x). In
addition, instructional aides hired before the start of the 2002-2003 school year must
complete two years of higher education, obtain an associate’s degree, or pass a rigorous
test assessing their abilities within the next four years (Wormser, 2003). Many theorists
support the professionalization of the teaching profession, which includes standards for
the profession and minimum competency (a certification). Linda Darling-Hammond
(1994) writes:
The public education system ought to be able to guarantee that
every child who is forced to go to school by public law---no
matter how poor that child’s parents are, or where that child
lives, or how little or how much he or she has learned at home--
-is taught by someone who is prepared, knowledgeable, and
competent. That is real accountability. When it comes to
equalizing opportunities for students to learn, that is the bottom
line (p. 257).
Choice Options under NCLB
NCLB legislation also provides more choice options for parents. If a school is
chronically identified as in need of improvement, parents are able to transfer their child to
a better-performing public school or public charter school. There is also $200 million in
federal funds available in state and local communities to help establish and fund charter
schools. How has implementation of this transfer policy played out in real life? Los
Angeles Unified School District (LAUSD) serves as an example; the district has 150
schools that are deemed failing by NCLB legislation. Schools are defined as failing if
overall test scores and those of significant subgroups fall short of growth targets for two
years in a row. In July of 2002, Los Angeles Unified School District sent out more than
700,000 brochures to parents regarding their rights to transfer. Response has been slow
according to Theodore Alexander, associate superintendent of LAUSD, due to a shortage
of classroom space on successful campuses as well as transportation and distance issues
(Gao, 2003, p. 1).
High-Stakes Testing within NCLB
Lewis (2002) writes that NCLB has good goals, but feels that “high stakes
testing narrows the whole enterprise of education and could halt the development of
truly significant improvements in teaching and learning” (p. 179). States are
rushing to meet NCLB testing requirements. “Not even half the states came close to
meeting the NCLB mandate of testing every year at grades 3-8 when the legislation
was passed….states are grabbing standardized tests off the shelf---no matter whether
they meet their learning standards or not” (Lewis, 2002, p. 179). Test manufacturers
are also rushing to produce or recycle tests rather than take the time needed to
produce reliable instruments (Lewis, 2002, pp. 179-180). The questions on the
instruments they are producing do not tap higher order thinking skills; Shepard et al.
(2005) cite Fleming and Chambers’ (1983) study, which analyzed 8,800 test questions and
found that nearly 80% were at the lowest level of Bloom’s taxonomy (p. 302). All
states will be required to participate in the reading and math assessments given by
NAEP. Edward Haertel of Stanford University estimates that it would take 100
years for all students in the nation to attain the proficient level on NAEP at the
current rate of improvement (Lewis, 2002, p. 180). Other researchers, including
Damian Betebenner, Eva Baker, and Robert Linn have suggested that the goal be for
all students to reach the “basic” level of NAEP rather than the proficient level (Lewis,
2002, p. 180).
AYP and NCLB
Lewis (2002) writes about the “paranoid sentiment” of many who feel NCLB
is setting up the public schools for failure: that it is all a plot to discredit public
education to the point where privatization and choice are seen as the only answers.
This sentiment is echoed by Elmore (2005), who writes:
In the next two or three years, a very large number of schools,
most of them in urban areas, most of them with largely poor,
minority student populations, will be classified as failing under
the newly authorized accountability provisions of No Child Left
Behind….Under the most generous interpretation of No Child
Left Behind, the law is designed to galvanize and focus the
attention of educators around the problem of failing schools.
Under a less generous interpretation, it is designed to remove
large numbers of children and schools from direct public
management and to move public education funds from public
schools to privately managed schools and supplementary
service providers operated with public funds (p. 229).
Other researchers in addition to Elmore are concerned about large numbers of
students not meeting the Adequate Yearly Progress (AYP) expectations. Fullan
(2003), Franklin (2006), and Popham (2006) all characterize the current AYP
expectations as “altogether unrealistic.” Joel Packer of the National Education
Association, predicts that “75 to 90 percent of all schools will eventually receive a
failure rating” under NCLB due to AYP requirements (as cited in
Franklin, 2006, p. 7).
Rigorous Standards under NCLB
Carol Ann Tomlinson (2002) lauds No Child Left Behind legislation as an
advance toward equity that ensures all children achieve proficiency in foundational
skills of learning. Tomlinson, primarily a gifted education researcher, is concerned
that the legislation may not provide enough rigor for students; “there is no incentive
for schools to attend to the growth of students once they attain proficiency, or to spur
students who are already proficient to greater achievement, and certainly not to
inspire those who far exceed proficiency” (Tomlinson, 2002). In fact, Clark (2005)
writes that “…data show that more than 20% of high-performing schools fall into
the bottom quartile in terms of student growth…we realize we need to challenge
them, just as we challenge their low- and average-performing peers” (p. 56).
Tomlinson finds it ironic that legislation that sets out to raise educational standards
does it through a remediation-focused initiative. She is also concerned with the
legislation repeating a pattern of approaching “a group of largely poor and minority
students with minimal expectations for achievement” (p. 5).
This resonates with some of the essential factors Lisa Delpit (1999) feels are
important to success in urban classrooms: 1) Do not teach less content to poor, urban
children, but understand their brilliance and teach more; 2) Whatever methodology
or instructional program is used, demand critical thinking; 3) Assure that all children
gain access to “basic skills,” the conventions and strategies that are essential to
success in American education (p. 1). Tucker and Codding (1999) also advocate
conceptual mastery, learning how to learn, and problem solving, with standards set
so that students “understand the concepts underlying the disciplines” (p. 32). Williams (1999)
writes, "Ensuring success for all the student populations---with success defined as
access to information and the ability to process it---emerges as the central challenge
for education institutions” (p. 94). What is clear from these findings is that reform
proposals must be introduced to increase the learning success of a diverse student
population (Williams, 1999). One can see a discrepancy between rigorous standards
for all students and the reality of helping all students meet those standards.
Standards-Based Reform
History of Standards-Based Reform
The history of standards-based reform goes back to the educational philosophies
of Benjamin Bloom in his 1956 work, Taxonomy of Educational Objectives. In this
work, Bloom talks about having students develop “higher order thinking skills” versus
rote memorization. Bloom’s philosophies were a driving force behind the beginning of
standards-based reform, then called “Outcome-Based Reform” (STEP, 2000). The roots
of the standards-based reform movement grew out of public dissatisfaction with the low
level of achievement demonstrated by American students, especially in comparison with
the performance of students in other industrialized nations (NWREL, 1997). As discussed
earlier, the standards-based reform movement gained momentum in 1983 with A Nation
at Risk (Reagan), followed by Goals 2000 (former Presidents Bush and Clinton), and now No Child
Left Behind (current President Bush).
What do we mean when we talk about “standards?” Tucker and Codding (1999)
define standards as not only what the student should know and be able to do, but also
how well the student needs to know it and be able to do it. Setting clear standards that all
students are supposed to meet and exceed is a radical change from “the implicit goal of
most of the current century” (p. 29). The goal of education before standards-based
reform was to sort students by “ability” and to educate each student accordingly for his
or her station in life. Naturally, Tucker and Codding (1999) go on to add that suddenly
popping a new set of rules into place without designing a new system to accommodate
the new set of standards is incredibly unfair to students:
Simply putting the students who have not succeeded into the
programs in place for those who have succeeded will not work;
they will not be able to do the work. Simply telling students that
they will not be promoted to the next grade when they have
failed the last will not by itself work; they will not know how to
do better in the future what they could not do in the past.
Simply threatening educators in schools that are failing will not
work; it will just demoralize them and persuade capable
educators to avoid problem schools. It is crucial to remember
that getting all students to high standards is a task for which
the current system was never designed. (p. 29).
Tucker and Codding (1999) feel that every youngster in America should be provided a
comprehensive education that would “qualify” them for a college education, without
remediation, at least on the local community college level (p. 26).
Hill and Crévola (1999) identify elements of standards-based reform that can be
characterized as “zero tolerance policies” (p. 119). The policies include setting
standards that most students are expected to achieve, meeting targets for the standards
over a finite period of time, and refocusing and redesigning schools to meet the
standards as the number-one priority. These zero-tolerance policies can best be seen in the
accountability movement.
Training and Professional Development within Standards-Based Reform
Fuhrman and Odden (2001) stress the importance of instructional practice for
increasing student performance. They feel there are significant instructional changes
that must take place for standards-based reform to be effective; for these changes to
occur, there must be large investments in professional development and curricula.
Without these changes, they suggest we will continue to see only “modest
improvements” in student learning even with the “significant energies and dollars
invested in education reform over the past decade” (p. 61). “If you build the standards,
you have to spend five years investing in the people so they understand how to meet
those standards. The test should come later, not earlier. Most states have missed that
link” (Berliner, 2002, p. 6). Standards are incredibly important to guide the curriculum,
but “investing in teachers’ time to master the techniques to do that” is equally
important (pp. 6-7).
Data is an important component of current school reforms and is at the heart
of the accountability movement. The Education Commission of the States (1996)
defines the relationship between data and standards through assessment:
There is widespread agreement that conventional standardized
tests fail to measure many valued student outcomes, but it has
been difficult and expensive to develop credible alternatives to
capture those results. Standards-driven reform is helping to
meet this challenge head-on by supporting the development of
assessments aligned with standards (p. 22).
Accountability is currently based on state-developed standardized tests. For this
reason, Sanders and Horn (1995) “strongly advise the use of multiple indicators of
student learning, including those provided by standardized tests” (p. 1). Popham
(2006) writes about the distinction between assessment of and assessment for
learning. Assessment for learning involves the frequent, continual use of both
formal and informal classroom assessments while assessment of learning is used for
the purposes of giving grades or evaluating schools. Popham writes, “As this
pressure continues to mount, I fear that the big tests will drive out the little tests that
can demonstrably help students learn. States’ NCLB assessments of learning will
swamp teachers’ classroom assessments for learning” (p. 83).
Accountability within Standards-Based Reform
Teaching to the Test
Robert Marzano and John Kendall of the McREL education institute figured
out that covering all the standards on the average state test would take nine more
years of schooling (Meier, 2002). Meier (2002) writes:
Of course, no one is planning to add nine years to the schooling
of every child. And in real life, good sense takes over, and
schools actually prepare students only for a sufficient amount of
the material that they discover, over time, is actually likely to be
on the tests and is necessary to achieve a passing score and so
ensure that the school looks good compared with the
competition (p. 193).
This is called teaching to the test, an unfortunate byproduct of high-stakes testing.
Meier also points out that “unless tests are devised for all subject areas, everything
not being tested---music, dance, the visual arts---is driven out of the curriculum” (p.
196). Meier (2002) quotes Robert Linn, a leader in the field of testing, as saying, “I
am led to conclude that the unintended negative effects of high-stakes accountability
uses often outweigh the intended positive effects” (p. 195). Meier calls for
“standards in terms of both means and ends not standardization…we may find that
old-fashioned standardized tests are one tool among many that will prove useful” (p.
198). What will become important is training teachers to use standardized tests,
along with other assessments, effectively as tools.
What becomes apparent from the research on testing effects is
that teachers are the mediators who determine how much
external tests will reshape the curriculum. Thus teachers have a
responsibility to understand that raising test scores is not always
the same thing as improving learning, and they should
recognize how instructional choices make the difference in
fostering real or spurious gains (Shepard et al., 2005, p. 311).
Authentic Standards Movement and Evil Twin
Thompson (2001) writes about the “authentic standards movement and its evil
twin” (p. 1). Thompson defines the evil twin as “high-stakes, standardized, test-based
reform” (p. 1). Thompson is concerned that high-stakes testing focuses too much on
the outcome of one test, thereby overshadowing the true intention of standards-based
reform to “provide high levels of support for all students, teachers, and educational
leaders” (p.1). His argument is not unique, and many researchers go back and forth
between the pros and cons of assessment testing.
Pros and Cons for Assessment Testing
Critics of testing programs argue that standardized tests in use today:
• Interfere with good teaching and learning by narrowing the curriculum and
emphasizing rote memory.
• Discriminate against minority students.
• Increase the dropout rate.
• Cost states and districts precious dollars more effectively spent in other ways.
• Do not offer accurate measures of student performance and potential.
• Create rewards and penalties not conducive to comprehensive, long-term
school improvement.
• Are not yet fully integrated with curriculum, standards, teacher and
administrator preparation and professional development, and additional
resources for school improvement.
Advocates, on the other hand, say testing:
• Motivates students to excel.
• Focuses teachers and students on state education goals.
• Offers quality assurances to the public.
• Provides uniform data for comparing student, teacher, and school
performance, and targeting rewards and resources for improvement.
• Establishes or raises standards for student performance (The Education
Commission of the States, 2000b).
Probably the most reasonable way to approach the assessment piece of standards-
based reform is a balanced one. Sanders and Horn (1995) write:
No reasonable person claims that any form of assessment can
appraise the totality of a student’s school experience or even the
entirety of the learning that is a part of that experience.
However, it is possible to develop indicators to measure
learning along important dimensions, closely related to the
curriculum, both in standardized assessment instruments and in
alternative forms of assessment. The real issue is not whether
standardized assessment or alternative assessment is the better
model in every case for the evaluation of educational outcomes.
Rather, the issue is choosing the most appropriate indicator
variables for the specific purpose at hand, whatever that may be
(p. 2).
In conclusion, school reform, accountability, and assessment need a mixture of
traditional and alternative forms of assessment to get a complete picture of student,
school, and district performance. Using only a standardized test as the sole
accountability measure is not fair to the person or entity being held accountable; Johnson
(2002) calls this “shortsighted” (p. 3). It goes against the true purpose of standards-
based reform and comprehensive school reform where multiple data indicators can give
a clearer picture of progress.
Effects of Standards-Based Reform on Teachers’ Practices
Standards Implementation
Margaret Goertz (2001) discusses data from a multi-state (including California),
multi-district study of education reform conducted by the Consortium for Policy
Research in Education (CPRE). This study found:
For some teachers standards-based reform was just another in a
long line of reform initiatives. For others, standards that were
explicit in the state frameworks or embedded in state
assessments provided a catalyst and a language for thinking
about their practice and student work. A few teachers felt that
state policies impinged on the more innovative curriculum and
assessment systems designed by their schools and districts (p.
66).
Across the states, teaching techniques remained traditional. Teachers “wove selected
innovative strategies, such as writers’ workshops and manipulatives, into relatively stable
practice. This approach reflected teachers’ attempts to balance the pressure of state and
local standards and assessments with what they saw as their students’ particular needs”
(p. 66). In the change process, Hill and Crévola (1999) point out how “fragile” reforms
can be, and how important it is to “incorporate continuous improvement processes into
daily operations so that the gains achieved during the implementation phase are not only
sustained but progressively extended” (p. 138). Indeed, teacher accountability is creating
the need for a shift in the teaching community; Caldwell (1999) describes it as a
“new professionalism.”
Role of the Teacher in School Reform
The role teachers are to play in the reform process is a very important one. “The
strategies that prove most effective at increasing student achievement are those that
strengthen the teaching and learning process” (Education Commission of the States,
1996, p. vi). Standards-based reform includes key elements that lie completely in
teachers’ hands. Clear and rigorous standards mean nothing without a prepared teacher
and valuable instructional delivery. “Standards-based reform and reform networks can
help focus policy on these areas and provide educators with the resources and knowledge
they need to improve their practices” (p. vi). Flexibility during the school reform
process is also very important. The Education Commission of the States (1996) writes:
Schools must function more like communities and less like
government agencies. It also rests on a belief that the individual
school---as opposed to a state agency---has the best information
and expertise with which to make education decisions about the
children it serves. The one-size fits all approach to education is
increasingly suspect, as more is discovered about the different
ways children learn (p. 10).
Standards are a wonderful framework to help define the high quality
instruction that would flesh out the frame. Often states create the framework, add on
accountability, and spend no time fleshing it out through training and professional
development for teachers to create true comprehensive school reform. An
educational expert can write beautiful, enriching classroom standards in hopes of
raising student achievement, but it is the teacher(s) who is actually implementing the
standards. A principal also has an important role to play as an instructional leader
and provider of support and guidance during standards implementation. Research
suggests that
real gains in student achievement result from the restructuring
of the teaching and learning process itself. The 1995 ECS
report concluded that the most promising reforms are those
aimed at what goes on in the classroom: strengthening the
integration between students and teachers (Education
Commission of the States, 1996, p. 15).
This integration between students and teachers can be guided by the assessment
pieces paralleling the standards-based reform movement. Darling-Hammond and
Baratz-Snowden (2005) write:
Assessment allows teachers to figure out how to pursue their
curriculum goals in ways that will work for the students they
teach. Assessments, and the feedback they can provide, are
actually another source of learning, not just an evaluation of it.
Teachers need to know how to construct, select, and use formal
assessment tools to show them how students are learning and
what they know, so that they can give constructive feedback
that guides further learning and informs instruction (p. 9).
Finance and School Reform
Berliner (1993) believes that school funding should be equalized “so that schools
in one part of the state, or even within a district, cannot spend twice or three times more
per child per year than other schools in the state” (p. 25). Many researchers are calling for
truly innovative school finance strategies instead of “old fashioned formulas”
(Education Commission of the States, 1996). These might include school based
budgeting and restructuring teacher compensation to reward those who develop new
skills. Odden (1999) defines the fiscal goal of teaching students to high standards as
determining a spending level that is adequate to fiscally support an education program that can
teach the average student to those standards. An adequate fiscal base must first be
determined by looking at four factors: 1) identify inputs and cost them out, 2) link
spending to a level of student performance, 3) identify the costs of a high-performance
school model, and 4) adjust for special needs. Odden (1999) distills a suggested
state-to-district school finance structure that “aligns the finance system with the policy
system of teaching students to ambitious proficiency standards” to five main elements:
1. A base spending level that would be considered “adequate” for the average child.
2. An extra amount of money for each child from a low-income background---
approximately $1,000 in a combination of federal Title I and state compensatory
education dollars.
3. An extra 130% for each disabled student.
4. An extra amount for each student who needs to learn English.
5. A price adjustment for all dollar figures to ensure comparable spending power (p.
148).
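To make the arithmetic of these elements concrete, consider a purely hypothetical
illustration (the $6,000 base is an assumed figure for the example, not one drawn from
Odden): a district with an “adequate” base of $6,000 would spend $6,000 + $1,000 =
$7,000 on a child from a low-income background, and $6,000 + (1.30 × $6,000) =
$13,800 on a disabled student, with all figures then subject to the price adjustment to
ensure comparable spending power.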
New Fiscal Adequacy
Odden (1999) writes,
The traditional focus on fiscal equity did not improve equity all
that much and now needs to give way to the issue of adequacy;
education program and finance systems need to be reengineered
to 21st century strategies that allow and provide incentive to
schools to teach students to high standards (p. 149).
He recommends the following performance enhancement elements:
1. Provide school sites substantial control over their resources, so they can
reallocate funds to the needs of more effective, higher performance school
strategies.
2. Change teacher compensation to provide salary increases for the knowledge,
skills, and expertise teachers need to teach a more rigorous curriculum and to
engage in the required actions related to school restructuring and resource
reallocation.
3. Create school-based performance incentives that provide monetary rewards
for schools that consistently improve student achievement (p. 149).
Within the concept of comprehensive school reform, Odden (1999) suggests
reallocating resources usually used for specialist positions to expanding teacher jobs to
include instructional as well as specialized tasks. “Additional ingredients” in high-
performing school designs include a schoolwide instructional facilitator and $75,000 for
professional development (in a school of 500 students). Overall, Odden (1999) suggests
empowering site professionals (teachers and principals) to use site dollars more
effectively and creatively. Schryver (1998) echoes this sentiment, asserting that
America is spending more but educating less: the percentage of the public school
budget devoted to instruction declined from 61% to 46% between 1960 and 1990.
He believes school districts need to carefully analyze how funds are being
spent. Picus (2000) reiterates this by writing, “One of the problems is that districts often
seek to implement new educational programs but rarely try to eliminate programs that
don’t appear to be working or are not as cost-effective as other programs” (pp. 1-2).
Standards-Based Reform and School Equity
All Children Can Learn
At the core of standards-based reform is the presupposition that “all children can
learn.” This is a truly noble and correct sentiment, though probably too simplified for
the complex and ever-changing world of true school reform. Thomas and Bainbridge
(2001) discuss a number of fallacies associated with this phrase and standards-based
reform; two that are important to highlight in relation to the recent No Child Left Behind
Act and California’s treatment of standards-based reform are: 1) the fallacy that all
children can learn---at the same level and in the same amount of time, and 2) the fallacy
of uniform standards for all children. Children do not all learn at the same level and in
the same amount of time for many reasons. Children mature and grow physically and
cognitively at different rates. In addition to different rates of development, children are
also exposed to different environmental conditions. Cognitive brain development tells
us that synaptic connections and the formation of the cerebral cortex occur between
birth and age 10 (Thomas & Bainbridge, 2001). Children not receiving good nutrition
and stimulating experiences in their early formative years generally achieve at lower
levels than children from more enriching environments. All children can learn when
proper public policy is enacted “to provide economic opportunity for families, health
care for children, and parenting education for young mothers” (p. 661).
Children Are Not Uniform
Continuing with this theme, the second fallacy that there should be uniform
standards for all children is complicated by the fact that children are not uniform!
Setting a rigid bar for everyone to achieve makes sense from the cold, removed offices
of Washington D.C. or Sacramento, but not in earthy and ever-changing classrooms. It
has been discussed that many learners in the State of California will not be able to meet
the criteria set forth by NCLB, but the standards bar continues to be set where it is. To
evaluate that bar, standardized tests will be used. The key question is whether we have
provided enough support for students to have any real hope of attaining it.
Early Intervention
Early intervention for “at-risk” students is incredibly important if we want
students to truly achieve the high standards we have set forth. Enormous amounts of
money are spent trying to remediate students when an investment in early childhood
education could fix many of these later problems (Thomas & Bainbridge, 2001).
Perhaps a more seductive goal is to have all students ready to learn when they begin
school. Early childhood education stressing cognitive development is incredibly
important for helping all students reach the high bar that federal and state legislators
have set. Jencks and Phillips (1999) see Head Start as a viable way to raise three- and
four-year-olds’ cognitive skills because changing the preschool experience is much easier
than changing a home experience. Head Start provides grants to local public and private
agencies to provide comprehensive child development services to children and families.
Currently, Head Start promotes school readiness for approximately 915,000
preschoolers from low-income families; many more students would benefit
from a similar opportunity.
Comprehensive School Reform
Education Commission of the States (2000a) reports that standards
by themselves, are not likely to yield gains in student
achievement or any of the other improvements states are relying
on them to produce. Content standards are only one piece in a
puzzle that also encompasses performance standards,
assessment, accountability, professional development, teacher
education, resource allocation, and intervention and support for
struggling students and schools (p. 1).
The states that have shown improvement in student achievement (North Carolina, Texas,
Maryland, Connecticut and Kentucky) have been the states that have “shown sustained
commitment to aligning other components of their education systems with standards” (p.
1). This is because standards-based reform must evolve into comprehensive school
reform.
Comprehensive School Reform is synonymous with “whole-school” and
“schoolwide” reform. Standards-based reform is a key component of comprehensive
school reform, but whereas SBR does not work effectively in isolation, CSR is an
approach to school improvement that focuses on revitalizing the entire school; it
provides a systematic model for change to accompany standards-based reform.
This is how CSR fits into the reform mix:
Rather than implementing isolated programs that may or may
not improve the academic performance of all students, CSR
schools seek to revitalize themselves by implementing
scientifically based models for comprehensive school
improvement that focus on all aspects of the school’s
operations. These models are based on challenging academic
standards, strong professional development components, and
meaningful outreach to both parents and communities. Expert
trainers and facilitators work with schools at every stage of
implementation (North Central Regional Educational
Laboratory, p. 1).
CSR is based on the idea that there is a systematic way to help schools improve, and this
does not simply mean handing schools high standards and saying, “OK, meet them.”
“After carefully reflecting on their existing programs, schools engaged in CSR coalesce
around a design for change and carefully implement that design to improve students’
education. CSR gives educators research-based replicable strategies for whole school,
rather than piecemeal change” (NCCSR, p. 1). Although they vary in approach, all
comprehensive school reform models push teachers, principals and parents to focus on
redesigning curriculum, student assessment, professional development, governance,
management and other key functions around one school wide vision (Education
Commission of the States, 1999, p. 1).
History of Comprehensive School Reform
The idea for comprehensive school reform came into play in the early 1990s to
target schools with students at-risk of academic failure. Before the early 1990s, the
focus of Title I legislation, federal assistance to schools for educationally disadvantaged
youth, was compensatory educational programs that usually manifested in the form of
pull-out programs for remediation. In 1997, Congress created the Comprehensive
School Reform Demonstration Program (CSRD) which encouraged the development of
comprehensive reform programs. The 2001 Elementary and Secondary Education Act (ESEA),
better known as No Child Left Behind (NCLB), continues to support the Comprehensive
School Reform (CSR) Program, which is regulated by the federal government under Title
I (NCCSR, p. 2). States provide competitive grants for which school districts apply on
behalf of specific schools. The schools have usually “indicated a readiness to adopt
comprehensive reforms to help students reach high standards” (Department of
Education, 2002, p. 1). Each participating school receives at least $50,000 of CSR funds
renewable for up to three years. Congress has appropriated $235 million to support
comprehensive reforms in schools eligible for Title I funds.
CSR Components
“CSR is neither a particular design model nor a federal grant---it is a change
strategy and a framework that helps schools plan and implement reform to improve the
achievement levels of all students” (NCCSR, p. 1). During the application process,
schools must align with eleven comprehensive reform program components:
• Employs proven methods and strategies based on scientifically based research
• Integrates a comprehensive design with aligned components
• Provides ongoing, high-quality professional development for teachers and
staff
• Includes measurable goals and benchmarks for student achievement
• Is supported within the school by teachers, administrators and staff
• Provides support for teachers, administrators and staff
• Provides for meaningful parent and community involvement in planning,
implementing and evaluating school improvement activities
• Uses high-quality external technical support and assistance from an external
partner with experience and expertise in schoolwide reform and improvement
• Plans for the evaluation of strategies for the implementation of school reforms
and for student results achieved, annually
• Identifies resources to support and sustain the school's comprehensive reform
effort
• Has been found to significantly improve the academic achievement of
students or demonstrates strong evidence that it will improve the academic
achievement of students (U. S. Department of Education, 2002, p. 2).
Schools adopt models or designs that meet their specific needs. These
“replicable, research-based, externally designed models can be effective tools for
schools implementing CSR. There are numerous commercially available models, each
with its own philosophical grounding and evidence of effectiveness from which to
choose” (NCCSR, p. 4). These commercial models provide a wide range of services and
now many programs have been around long enough for the completion of evaluation
studies regarding their effectiveness. “The advantages of adopting these ‘off the shelf’
instructional models are clear. School staffs need not reinvent the wheel. Organizations
behind each of the schoolwide reform models provide professional development,
materials, and networks of fellow users” (Fashola & Slavin, 1998, p. 2).
Effective Programs of Comprehensive School Reform
Thomas and Bainbridge (2001) write about the effective schools movement
attributed to the late Ronald Edmonds. Edmonds defined five characteristics evident in
effective schools: strong leadership, clear emphasis on learning, positive school climate,
regular and appropriate monitoring of student progress, and high expectations for
students and staff. The five elements create a strong framework by which one could
evaluate school programs and their reform efforts. Schools that are implementing
reform effectively have the resources, qualified personnel, and innovative instruction
needed for the reforms to be successful. Education Commission of the States (1996) and
the North Carolina Board of Education found programs, schools, and instructional
strategies that stretch across the United States and have different philosophical roots; some
target specific groups (urban students, English Language Learners, etc.), and some are
affiliated with universities, but the one common denominator they all share is innovation
in regard to comprehensive school reform. Each takes the comprehensive reform
movement and adapts it to a particular belief system that the program feels will best
address the needs of learners.
Example of Comprehensive School Reform
Many of the comprehensive reform programs have encouraging results. Take
ATLAS Communities as an example. ATLAS Communities is a coalition of education
reform projects initiated by Ted Sizer of Brown University, James Comer of Yale
University, Howard Gardner of Harvard University and Janet Whitla of the Education
Development Center in Boston. The design revolves around pathways---feeder patterns
of schools from kindergarten to grade 12. Teams of teachers from across each pathway
work together to design curriculum and assessment strategies based on locally defined
standards and, in collaboration with parents and administrators, to implement sound
policies and management structures that support improved teaching and learning.
Multiple forms of assessment are used to assess student work including portfolios and
student-led conferencing. The design is in 100 schools in 13 states. The results from
this program are quite impressive, and students have made gains in many of the pathways
(Education Commission of the States, 1996, and the ATLAS Communities website).
Reform in California
California has been on a journey of standards-based reform over the past few
years, and currently is struggling to implement new policies from NCLB. The California
Department of Education defines standards-based reform as “adopting state content
standards and establishing effective procedures for assessment, instructional
improvement, and accountability designed to ensure that every student achieves to the
standards” (p. 1). The California State Board of Education (SBE) has adopted content
standards for English-language arts, mathematics, history-social science, science, and
visual and performing arts. The STAR (Standardized Testing and Reporting) Program
was established in 1997; currently, there are three components to the STAR program:
the California Standards Tests; the California Achievement Test-6 (CAT/6), which
replaced the Stanford 9; and the Spanish Assessment of Basic Education, Second Edition
(SABE/2), itself replaced by the Aprenda 3 in 2006. Students with a primary language
other than English are given the CELDT (California English Language Development
Test) to monitor their progress towards full proficiency in English. In 1998, the
SBE initiated the development of performance standards by adopting three performance
levels and descriptions used by NAEP: basic, proficient, and advanced.
California’s Public Schools Accountability Act
In 1999, California’s Public Schools Accountability Act created rewards and
interventions for improving student performance. The Academic Performance Index
(API) is the cornerstone of California’s Public Schools Accountability Act (PSAA). The
purpose of the API is to measure the academic performance and growth of schools. It is
a numeric index that ranges from a low of 200 to a high of 1000. A school’s score
placement on the API is an indicator of a school’s performance level. The interim
statewide API performance target for all schools is 800. A school’s growth is measured
by how well it is moving toward or past that goal (California Department of Education,
2001). The 2002 Base API score (the year the study was conducted) incorporates results
from four types of assessments that are part of the STAR (Standardized Testing and
Reporting) Program: 1) the Stanford-9 (standardized test), all content areas; 2) CST ELA
(California Standards Test in English Language Arts); 3) the California Standards Test
in Mathematics (CST MATH); and 4) the California Standards Test in Social Science (CST
SS) for grades 10-11. In addition, the 2002 Base API also includes the California
High School Exit Examination (CAHSEE) grades 10-12; the law requires that CAHSEE
test results make up at least 60% of the API (California Department of Education, 2001).
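As a purely arithmetical illustration of such weighting (the component values here are
hypothetical, and the actual API formula converts student performance bands into index
values in ways this sketch omits), suppose a high school’s CAHSEE results translated to
an index value of 650 and its remaining STAR components to 720. A 60/40 weighting
would then yield a composite of (0.60 × 650) + (0.40 × 720) = 390 + 288 = 678, below
the interim statewide target of 800.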
API growth was supposed to have been accompanied by a rewards program for schools
that met their targets, but due to state budgetary issues the awards have been
inconsistent. Low-performing schools that do not meet their API growth target receive
special assistance from California’s Immediate Intervention/ Underperforming Schools
Program (II/USP) and High Priority Schools Grant Program.
The 2005-2006 API is composed of slightly different assessments and
weightings: California Standards Test (CST): English Language (grades 2-11) including
a writing assessment at grades 4 and 7, Mathematics (grades 2-11), History-social
science (grades 8, 10, and 11), Science (grades 5, 9-11); California Alternate
Performance Assessment (CAPA; for special education students); and the California
Achievement Test (CAT-6), a norm-referenced test (grades 3 and 7); California High
School Exit Exam (CAHSEE), English-language arts and mathematics, grade 10 (and
grade 11 if the student has not yet passed the test). In four years, the CAT-6 has
nearly been phased out, just as it once replaced the Stanford-9; performance writing
assessments have been added; more science and history assessments have appeared; and
the CAHSEE now actually determines whether a student can graduate. Many of the
elements referred to in the study as "emerging" had come to fruition by 2006.
California High School Exit Exam
Beginning with the 2005-2006 school year, each student completing grade 12
is required to pass the High School Exit Exam to earn a high school diploma. The
2005-2006 school year was the first year this requirement was in place, although the
exam had been given for four previous years. The exam is aligned to reading and
mathematics standards, and the purpose of the CAHSEE is to improve student
achievement in high school and help ensure that students who graduate from high
school can demonstrate basic competencies in reading, writing, and mathematics.
Results from the exam, which was administered to last year’s sophomores, juniors,
and seniors, show steady improvement in the number of students in the class of 2007
who have met the CAHSEE requirement.
Since the class of 2007 initially took the CAHSEE as 10th graders in 2004-05,
an estimated 89% have passed the English-language arts (ELA) portion of the
CAHSEE, while an estimated 88% have passed the mathematics portion of the exam
(CDE News Release, 2006). There continues to be a disparity in passing rates among
White, Black, and Hispanic students.
Quality Index Standards in California
Lewis (2002) suggests that California may need to lower its quality index
standards because it is facing the possibility that virtually all its schools will not meet the
proficiency standards required by NCLB due to the mandate to have all subgroups of
students reach proficiency; Ohio and Louisiana have already lowered their standards.
On January 9, 2003, however, the Los Angeles Times published an article reporting that
the California Department of Education would not be lowering standards to more easily
comply with NCLB legislation.
But projections indicate that sticking to the more rigorous
measure of proficiency means that as many as 98% of schools
will fall short by the 2014 deadline. And 100% of schools
serving mostly low-income children---4,943 campuses---will
fail to meet the mark (Helfand, 2003, B1).
The state’s education secretary, Kerry Mazzoni, said:
We felt from the very beginning that [having 100% of students
at the proficient level] was an unrealistic expectation….But we
are keeping our eyes on the prize for what we want in
California---keeping student achievement central to all our
decisions. We would rather set the bar high and not have
everyone reach it than set it low and have everyone reach it
(Helfand, B1).
California State Expectations of Schools for Data Use
Winfield (1990) writes that
a combination of top-down and bottom-up approaches in
implementing school improvement efforts appears to be
necessary for success.…The framework of the ‘nested layers’ in
which schools operate further suggests that actions at higher
layers (e.g., district and state) influence conditions occurring at
the school and the classroom levels (as cited in Purkey & Smith,
1983, p. 59).
California has high expectations for schools. The state is exerting its influence by
creating a data-driven accountability system for all districts and schools. California
established the Public Schools Accountability Act (PSAA) in 1999. Included in this act
was the creation of rankings for each school based on an Academic Performance Index
(API). The API is evolving over a six-year planned period from 2001-2006. Every year,
the calculation for the API changes slightly as new assessments that are more aligned to
state standards are added and old assessments are phased out. Each year’s calculations
are based on very different measures. Each year, a school is supposed to meet a growth
target set at 5% of the difference between the school’s base API and the statewide
performance target of 800. In addition, each numerically significant subgroup at the
school must achieve at least 80% of the schoolwide growth target. Under NCLB's
separate Adequate Yearly Progress (AYP) measure, schools must meet annual targets;
over time, 100% of students must score at the proficient level or above so that by
2013-14, 100% of schools meet their AYP. For 2005, 56% of California's schools met
their AYP targets, a decline from 65% in 2004.
California API scores have continued to rise even if their AYP scores have gone down.
From 1999-2005, California’s median elementary school API rose 122 points from 629
to 751 (CDE News Release, 2005).
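To make the growth-target arithmetic concrete, here is a minimal sketch in Python;
the school figures are hypothetical, and the actual PSAA formula's rounding and
minimum-target rules are omitted:

STATEWIDE_TARGET = 800  # interim statewide API performance target

def api_growth_targets(base_api):
    """Return (schoolwide target, subgroup target) for one growth cycle."""
    # Schoolwide growth target: 5% of the gap between the base API and 800.
    schoolwide = 0.05 * (STATEWIDE_TARGET - base_api)
    # Each numerically significant subgroup must achieve at least 80%
    # of the schoolwide growth target.
    subgroup = 0.80 * schoolwide
    return schoolwide, subgroup

# Example: a hypothetical school with a base API of 620 faces a 9-point
# schoolwide target; each subgroup must gain at least 7.2 points.
print(api_growth_targets(620))  # -> (9.0, 7.2)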
California has an elaborate website that provides a wealth of statistical
information. On a service aptly called DataQuest, one can find information about
counties, school districts, and individual schools on most of the following topics: API
scores, course enrollments, drop-out rates, English Language Learners, enrollment,
expulsion rates, graduation rates, CAHSEE (California High School Exit Exam), SAT,
ACT, AP Scores, Physical Fitness Results, Staffing, Special Education, and
Standardized Testing Results (CAT-6, Aprenda 3, California Standards Test). California
also provides copies of state standards and frameworks in PDF form for English,
Mathematics, Social Studies, Science, Visual and Performing Arts, and Challenge
Standards. It is implied that school districts and schools will align their curriculum to
state standards and that they will be held accountable through their students'
demonstrated proficiency on these standards. There are performance-based rewards and
sanctions based on API scores. The year 2002 marked the first time that the API would
be used to identify
schools for potential state-imposed sanctions. “A school that entered the Immediate
Intervention/Underperforming Schools Program (II/USP) in 1999 and failed to
demonstrate significant growth during the last two years now is subject to a range of
potential state sanctions” (CDE, p. 2). In addition to sanctions for lack of growth, the
PSAA provides cash awards based on API growth; however, funding for the API-based
70
awards was not included in the state budget for 2002-2003 and has not been since
(CDE).
The new state accountability system has had a strong influence at the school and
classroom levels. The top-down influence, though, has not necessarily created buy-in at
the school and classroom levels. Nor has California provided explicit instructions for
incorporating data analysis into efforts to improve student achievement or for aligning
curriculum to state standards. Carol Jago, an English teacher at Santa Monica High School,
directs the California Reading Literature Project at UCLA and edits California English,
an educational journal. Jago (2000) writes:
Anyone in his or her right mind would want children in
California to know and be able to do the things the standards
describe. In an ideal world, they could provide a model rallying
point. The problem is achieving these standards in substandard
conditions. Compared with other states, California is at the
bottom in terms of per pupil spending and at the top in terms of
English language learners and children living in poverty. Thirty
percent of our kindergartners come to school speaking no
English. With burgeoning enrollment, California is going to
need 300,000 new teachers by the year 2005. In 1998, the state
was forced to grant 21,000 emergency credentials in order to
staff classrooms. School buildings are in abysmal physical
condition….For a “golden” state, California schools are looking
more than a bit tarnished (p. 64).
Jago is expressing frustration that the state standards are a top-down mandate with no
anchor in what a typical California educator is experiencing at a school.
California has set up a system of standardization, but that is not necessarily going to be a
catalyst for change at the local school and classroom level. This is where policy needs to
be connected to whole school reform efforts so there can also be the bottom-up reform
that Winfield (1990) discussed.
Role of Data in School Reform
The stage has been set regarding student performance in the United States,
and the main educational reform movements have been reviewed. Now the focus
shifts to the role of data in the school reform process: beginning on the district level
and trickling down to the classroom level.
Data Driven Organizations
Sherry P. King (1999) characterizes schools of the 21st century as systems of
continuous improvement. She writes, “School systems will have to be much more
dynamic, data-driven organizations that can be immediately responsive and that allow
for learning at all levels. In the past, we boasted about our five-year strategic plans. In
the context of uncertainty and rapid change that defines our era, those plans may be
outdated before they are complete” (p. 165). Technology will also be a very important
tool to provide the information for continuous improvement. King (1999) sees the
process of using data to impact student performance as a cycle of gathering data, making
plans, implementing them, and analyzing the effects for modification. She uses Costa
and Kallick’s (1995) feedback spiral model to help create a learning organization. King
(1999) writes, “Actual student achievement should be the focus of all the work of the
school, whether examining data on student achievement, developing standards, making
instructional decisions, or involving all stakeholders in the issues surrounding school
life” (p. 167).
Data Management Systems
Data management is an incredibly important part of standards-based reform.
Christie (2000) writes:
The data management system must allow immediate and
flexible access to data to enable screening, diagnosis,
intervention, instructional fine-tuning, and more informed
decision making at the school district and state levels. Given
the breadth of information that is needed to support these
purposes, the system must accept and link multiple categories of
longitudinal data. It must also be designed so that the time and
effort to enter and access data do not take significant time away
from instruction (p. 8).
The Annenberg Institute for School Reform completed a 2005 study called From
Data to Decisions. They write about schools becoming "Smart Schools" where
schools, students, and teachers are provided with needed support and timely
interventions, “and holding people throughout the system accountable by developing
and consistently monitoring appropriate indicators of school and district performance
and practice” (Miles & Foley, 2005, p. 2). The emergent field of knowledge
management is common in business circles, but is just beginning to be applied to the
field of education. The Annenberg study states:
(D)ata warehouse technology is a powerful tool that can provide
educators and other district, school, and community leaders
with greater access to information and new opportunities to
create and act on knowledge. By making better decisions,
districts can improve practices that influence teaching and
learning and, ultimately, student achievement (p. 2).
Using data can help make a school a “results-driven organization.” Data analysis
has a reputation for depersonalization; however, King (1999) feels data "can actually
support schools in helping to identify problems, generate possible responses, and raise
issues connected to student achievement" (p. 167). Data-driven reform is not
accomplished by using one form of assessment and certainly not just standardized test
scores published in a newspaper. “Actual samples of student work are the best evidence
of how students are doing, what our standards look like in practice, and how our students
are doing in relation to those expectations” (King, 1999, p. 173). In fact, good data
analysis needs multiple indicators to flesh out an adequate portrait of how a student is
achieving.
Data-Driven School
Johnson (2002) points out that “(m)easuring and monitoring outcomes,
program effectiveness, and policies and practices at all levels of the institution
should become interwoven into the everyday life of the institution” (p. 10). This is
part of a challenging change process that encompasses changing the very culture of
districts and schools to become a “data culture.”
Becoming a data-driven school as a part of comprehensive school reform can help
schools:
• Replace hunches and hypotheses with facts;
• Identify root causes of problems, not just the symptoms;
• Assess needs, and target resources to address them;
• Set goals and keep track of whether they are being accomplished;
• And focus staff development efforts and track their impact (Bernhardt, 2000,
p. 2).
Bernhardt (2000) describes how schools must gather data beyond test scores to look
at students, teachers, and the school community in many different ways. Effective data
analysis of a school or program can include four different types of data:
1. Student learning data describe an educational system in terms of standardized test
results, grade point averages, standard assessments, and other formal
assessments.
2. Demographic data provide descriptive information on items such as enrollment,
attendance, grade level, ethnicity, gender, home background, and language
proficiency.
3. Perceptions data help us understand what students, parents, teachers, and
others think about the learning environment. Perceptions data can be gathered in a
variety of ways, such as questionnaires, interviews, and observations.
4. School process data define programs, instructional strategies, and classroom
practices. To collect school process data, educators must systematically examine
their practice and student achievement, making sure both are aligned with
specifically defined, desired student outcomes (pp. 2-3).
The real power of data analysis begins as these categories are crossed two, three, or even
four at a time. An example of how three measures could intersect at a school might be:
Do students of different ethnicities perceive the learning environment differently, and do
they score differently on standardized tests in patterns consistent with these perceptions?
(p. 3).
The above is an example of the convergence of demographics, perceptions, and student
learning data. Using data to consistently drive instructional improvement can really
make a difference at a school engaged in comprehensive school reform or other school
reform measures.
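As a minimal sketch of what such crossing might look like in practice, the following
Python example (using pandas; all names and figures are hypothetical, not drawn from
the study) joins demographic, perceptions, and student learning data in one table and
compares group means:

import pandas as pd

# Hypothetical records: ethnicity (demographic data), a 1-5 climate-survey
# rating (perceptions data), and a standardized test score (student learning).
students = pd.DataFrame({
    "ethnicity":      ["Hispanic", "White", "Hispanic", "White", "Black"],
    "climate_survey": [2.1, 3.8, 2.4, 3.5, 2.9],
    "test_score":     [615, 742, 601, 730, 655],
})

# Do groups that perceive the learning environment differently also
# score differently? Compare the group means side by side.
print(students.groupby("ethnicity")[["climate_survey", "test_score"]].mean())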
Data Use at the School and Teacher Level
In the context of current standards-based reform movements, teachers are
being asked to use data at very sophisticated levels when they have often not been
sufficiently trained to do so. They are being asked to 1) improve instruction in their
classrooms by aligning curriculum to standards, and 2) make sure students meet
these standards and can demonstrate their proficiency on a standardized test.
Schools and teachers are then held accountable for their students' gains or losses on
the standardized tests. They are being asked to do this in school cultures where there
typically has been little data analysis used to improve practice, little collegiality, and
little time or money invested in professional development to make teachers proficient
at data analysis. Research tells us that teachers rarely use data to inform educational
practice. In fact, "while teachers report they feel pressure to improve test scores, they
believe such scores are not particularly useful in helping to drive instruction in a
positive way" (Khanna et al., 1999, p. 3). An achievement test can be a powerful tool
for a trained educator, and training in assessment practices should take place early on
in a teacher’s career “to avoid allowing misunderstandings or unproductive practices
to become ingrained” (p. 203). Shepard et al. (2005) add on to this thought and
discuss the fact that large-scale assessments point out educational problems with
76
poor test results, but do not come with a manual on how to fix the educational
problems. “Ironically, test and measurement textbooks tell preservice teachers how
tests are made and the meaning of stannine and percentile scores but not how to
improve instruction in response to test results (pp. 313-314). Schools and teachers
must begin to look beyond standardized testing data and into emerging data use
practices and innovative assessment practices.
Teacher Responsibility for Data Use
As teachers are confronted with the effects of various federal, state, and district
policies trickling down to the classroom level, Brian Caldwell (1999) asserts there will
surely be “a profound change in the role of the teacher” (p. 48). There will be a new
professionalism associated with teaching; one where professional development is
continuous. Caldwell (1999) suggests that teachers must continue to build upon their
current knowledge of a given subject area; in essence, becoming an expert in that area.
“They need to be skillful in using an array of diagnostic and assessment instruments to
identify precisely what entry level and needs exist among their students and the resultant
approaches to learning and teaching are different in each classroom” (p. 49). Caldwell
(1999) presents two strategic intentions for the new professionalism regarding data use:
1. There will be planned and purposeful efforts to reach higher levels of
professionalism in data-driven, outcome oriented, team based approaches
to raising levels of achievement for all students.
2. Substantial blocks of time will be scheduled for teams of teachers and
other professionals to reflect on data, devise and adapt approaches to
learning and teaching, and set standards and targets that are relevant to
their students.
Using Data to Monitor and Assess within Reform Movements
Hill and Crévola (1999) build on Caldwell’s concept of a new
professionalism and stress how important monitoring and assessing student progress
is to the Standards-Based Reform Movement and in Comprehensive School Reform.
Assessment is central to establishing the growth targets a student needs to meet,
which must then be monitored to make sure the student has met the standards.
Even more importantly, a teacher becomes a diagnostician "to find out as much as
possible about each student, to establish starting points for teaching, and to use this
diagnostic information to drive classroom teaching programs” (p. 127). Having
detailed diagnostic information on each student tends to be very time consuming, but
infinitely worthwhile because it provides a teacher with a “detailed diagnostic profile
of each student and the information necessary for making their teaching efficient and
focused” (p. 127). “Teachers work as part of a team, and they devote much time out
of class to preparation, and in briefing and debriefing meetings to assess the
effectiveness of approaches to plan new ones” (p. 49). Fullan, Hill, and Crévola
(2006) feel that the “missing piece” in assessment for learning is creating a
“manageable system for going from data to instruction” (p. 20). In essence,
streamlining “current sporadic data collection” and creating data analysis that is
automated and “individualized instruction is delivered on a daily basis in every
classroom” (p. 20).
Many teachers are experts at making successful applications of educational
assessments in a variety of forms; “Every capable teacher can provide numerous
examples of ways in which formal and informal means of assessing student achievement
helped to diagnose a learning problem, document progress, identify an effective
instructional approach, and produce numerous other desirable outcomes” (Behuniak,
2002). When teachers use classroom data (which may come in the form of traditional
assessments, alternative assessments, portfolios, or teacher observation) to drive student
instruction, they are meeting the needs of each individual learner and pushing them
toward or beyond the bar set by standards. From social reformists to gifted education
researchers, the need to have students go beyond basics is heartily felt. Learners need to
be challenged to “stand on mental tiptoes.”
Student Involvement in Data Process
Chappuis (2005) discusses how important it is to involve students as decision-makers;
in doing so, "teachers acknowledge the contributions that students make to their own
success and give the opportunity and structure they need to become active partners in
improving their learning" (p. 43). McTighe and O'Connor (2005) also stress how
important it is to make students an integral part of the assessment process through the
use of choices,
rubrics, etc.; “Assessment becomes responsive when students are given appropriate
options for demonstrating knowledge, skills, and understanding” (p. 15).
Classroom Assessments and Data Use Practices
McTighe and O’Connor (2005) define the three categories of classroom
assessments:
1) Summative assessments summarize what students have learned at the conclusion of
an instructional segment. Examples include tests, performance tasks, final exams,
culminating projects, and portfolios. By themselves, summative assessments are
insufficient tools for maximizing learning.
2) Diagnostic assessments, or pre-assessments, typically precede instruction and
provide information to assist teacher planning and guide differentiated instruction.
Examples include skills checks and surveys. Normally, teachers don't grade the
results.
3) Formative assessments occur concurrently with instruction. They provide
specific feedback to teachers and students for the purpose of guiding teaching to
improve learning. Examples include ungraded quizzes, oral questioning, teacher
observations, concept maps, and portfolio reviews. Formative assessments probably
have the most direct effect on data-driven instructional practices.
Herman and Baker (2005) describe formative assessments (also called
benchmark tests, progress monitoring systems, etc.) as filling in the gap left by large-
scale standardized assessments to “provide both accurate information about how well
students are progressing toward mastery of standards and useful diagnostic feedback
to guide instruction and improve learning” (p. 49). Formative assessments in effect
can serve as instructional scaffolding. Shepard (2005) considers formative
assessments as a form of instructional scaffolding similar to Vygotsky’s (1978) zone
of proximal development: “…formative assessment is a dynamic process in which
supportive adults or classmates help learners move from what they already know to
what they are able to do next, using their zone of proximal development" (p. 66).
McTighe and O'Connor add this analogy to the discussion: "Like successful athletic
coaches, the best teachers recognize the importance of ongoing assessments and
continual adjustments on the part of both teacher and student as the means to achieve
maximum performance” (p. 11).
Receiving timely assessment data allows teachers to individualize learning
and provide needed support or acceleration for students. Formative assessments
provide this feedback in a much more timely manner than annual standardized tests.
Indeed, Guskey (2005) and Herman and Baker (2005) both comment on how data from
annual state tests is often "too little, too late." Herman and Baker (2005) go on to
discuss the “utility” of tests; “utility represents the extent to which intended users
find the test results meaningful and are able to use them to improve teaching and
learning.” Formative assessments provide high utility for schools and teachers. As
long as test results can be quickly returned to teachers, then teachers can use these
results to make instructional changes to the class content. The Annenberg Data
Study (2005) calls this “dynamic data,” and recommends using “data warehousing”
to make manipulating data simple and effective. They describe one district's data
program, which allowed teachers to see all their test scores and then connect those
scores in “endless combinations” (Miles & Foley, p. 16).
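As a minimal sketch of that kind of flexible recombination (using pandas; the field
names and scores are hypothetical, not taken from the district described), the same
score table can be pivoted by teacher and standard:

import pandas as pd

# Hypothetical warehouse extract: one row per student-assessment result.
scores = pd.DataFrame({
    "teacher":  ["Lee", "Lee", "Cruz", "Cruz"],
    "standard": ["ELA.3.1", "Math.3.2", "ELA.3.1", "Math.3.2"],
    "score":    [78, 64, 81, 70],
})

# One of many possible combinations: mean score per teacher per standard.
print(scores.pivot_table(values="score", index="teacher", columns="standard"))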
This brings us back to the important concept of training teachers to be
flexible and instructionally responsive to new information gleaned from periodic
assessments. Herman and Baker (2005) write, “…schools must ensure that (the
teachers) have the pedagogical knowledge and access to alternative materials that
they need to bridge identified learning gaps” (p. 54). It is crucial that districts and
schools do make sure that teachers have this pedagogical knowledge to deftly
analyze, synthesize, and apply the data to student and curricular needs.
Unfortunately, Guskey (2005) is just one of many researchers who point out that
“Rarely, however, do teachers get help in…. developing classroom assessments that
not only address standards accurately, but also help identify instructional weaknesses
and diagnose individual student learning problems” (p. 32). Leahy, Lyon,
Thompson, and Wiliam (2005) explored a number of ways to train teachers on
assessment practices; they found that there was no "one-size-fits-all package." Just
like students, teachers needed instruction about assessment tailored to their needs. It
is for this reason that they recommend:
Given this variability, it is important to offer teachers a range of
techniques for each strategy, making them responsible for
deciding which techniques they will use and allowing them time
and freedom to customize these techniques to meet the needs of
their students (p. 20).
Professional Development to Achieve Effective Reform Using Data
Professional learning for teachers regarding data use is a key element for
successful implementation of district and school practices. Hill and Crévola (1999)
found in their experience that
the great majority of teachers are able to improve their
effectiveness as professionals, given the right conditions and
support. But achieving quantum improvements in teacher
effectiveness is difficult if not impossible using traditional
models of professional development and inservice training (p.
129).
Indeed, according to the U.S. Department of Education, 99% of American public school
teachers participated in some type of professional development over the course of the
year. “However, most teachers participated in these activities for only 1 to 8 hours, or
for no more than 1 day” (Mayer, Mullens, & Moore, 2001, p. 40). In contrast, Hill and
Crévola (1999) present an example of innovative professional development used in the
Early Literacy Research Project (ELRP). The ELRP professional development consists
of school and university connections and really exemplifies a learning community.
Primary teachers at each participating school are put into professional learning teams. A
coordinator is assigned to each team and is provided with release time. Each team has
been provided with eight full days of off-site university-based professional development
in the first year and four days in the second year. The learning teams meet weekly for
various activities designed by the coordinator: modeling, demonstrating, coaching,
mentoring, and observing each other, and through visits to professional learning teams
operating in other ELRP schools. A university-based professional development
coordinator also comes to the school for a full-day visit. Hill and Crévola (1999)
describe the results of the ELRP as “dramatic improvement” in student performance and
“equally dramatic improvement in the morale and feelings of efficacy and achievement
among teachers and school administrators” (p. 139).
Lesson Study
Increasing collegiality is an important component for actively engaging a
community of teachers at a school, as is evidenced by Caldwell's (1999) description
of a new professionalism and as can be observed in the Early Literacy Research
Project (ELRP). Compared to the U.S., other countries have greater collegiality
and more advanced techniques for improving instruction and thereby student
learning. Berliner (2001a) comments on the training teachers in other countries
use to analyze and apply available qualitative data. In Japan, teachers engage in
“lesson study.” Teachers are given time off to go observe other teachers teach a
lesson.
And then they all de-construct the lesson and talk about it, and
then they all go away with an understanding of how that
particular lesson let’s say on the rain cycle or quadratics, or
whatever’s common to the curriculum, they all get a chance to
polish their lesson. Polish the stone (p. 4).
"The observed lessons, called 'research lessons,' are regarded not as an end in
themselves but as a window on the larger vision of education shared by the group of
teachers, one of whom agrees to teach the lesson while all the others make detailed
records of the learning and teaching as it unfolds. These data are shared during a
post-lesson colloquium, where they are used to reflect on the lesson and on learning
and teaching more broadly” (Lewis, Perry, & Murata, 2006, p. 3). Lesson study is
one form of a professional learning community that provides the “concrete”
professional learning that Fullan, Hill, and Crévola (2006) recommend.
This concept is slowly catching on within the parameters of standards-based
reform. For example, UTLA (United Teachers Los Angeles), one of the largest
teachers' unions in the country, has developed a book called Lesson Study: Integrating
Standards, Curriculum, and Assessments 2000. New emergency-credentialed teachers
have already been trained on the concept which includes:
(T)eachers, working together, to take a standard, write lessons
to teach the standard, develop assessments to determine if the
lessons taught the standard, and then test the lessons and
assessments in their classrooms. Teachers evaluate what they
have produced and change and refine it based on student
reaction (UTLA, 2002, p. 2).
Los Angeles Unified School District is tentatively encouraging Lesson Study in its
schools. Berliner (2001a) adds that there is a lot to be learned from international studies
and one of the main lessons is "lesson study." He writes, "giving people a chance to
engage in professional development is the best way possible, in a peer group that's task
oriented" (p. 6).
Altering the Structures of Schools for Reform Measures and Data Use
Learning Communities
In order for schools and teachers to effectively use data to drive instructional
practice, a shift in the structure of schools and specifically in professional learning
must occur. One shift will be the need for the creation of a professional learning
community where teachers can reflect and dialogue about data. Schmoker (2005)
defines a professional learning community:
It starts with a group of teachers who meet regularly as a team
to identify essential and valued student learning, develop
common formative assessments, analyze current levels of
achievement, set achievement goals, share strategies, and then
create lessons to improve upon those levels. Picture these teams
of teachers implementing these new lessons, continuously
assessing their results, and then adjusting their lessons in light
of those results. Importantly, there must be an expectation that
this collaborative effort will produce ongoing improvement and
gains in achievement (p. xii).
What Schmoker (2005) is describing is not only a professional learning community
but also a “culture of continuous improvement that accompanies this culture of
inquiry” (Datnow, 2004, p. 16). Datnow uses these descriptors to describe effective
districts that use data-driven decision-making in their regular practices.
Williams (1999) highlights some of the outcomes Darling-Hammond (1997)
reports for restructured schools that have put staff and time into creating learning
communities. Some of the characteristics of these schools include: 1) Active in-depth
learning; 2) Emphasis on authentic performance; 3) Attention to development; 4)
Appreciation for diversity; 5) Opportunities for collaborative learning; 6) A collective
perspective across the school; 7) Structures for caring; 8) Support for democratic
learning; 9) Connections to family and community (pp. 331-332). A five-year study
from the Center on Organization and Restructuring of Schools at the University of
Wisconsin, “concluded the most important factor in successful reform is the presence of
a strong professional community in which teachers pursue a clear, shared purpose for
student learning; engage in collaborative work; and take collective responsibility for
learning" (Education Commission of the States, 1996, p. 16).
Teacher’s Role in Reform
The discussion regarding the changing role of the teacher as a catalyst for
improving student achievement through adequately using information and data is
refreshing. Ideas for new teacher professionalism, learning communities, team
meetings, and lesson studies are tantalizing options. Caldwell (1999) calls for
“substantial blocks of time” to be scheduled for teachers to meet in teams.
Currently, if one is a conscientious teacher who actually plans lessons, adequately
assesses student work, and engages in long-term planning with peers, he or she is
spending two to three hours of personal, unpaid time per day. In reality, what a
school day looks like may have to radically change to provide time for teacher
observation, articulation, analysis of information, and implementation of
instructional changes. Where the time and money to accomplish this will come from
is a difficult question to answer. The suggestions of Picus (2000), Odden (1999), and
Schryver (1998) about creatively using current funding levels may provide some
answers. Sirotnik (2002) points out that corporations invest as
much as 10 times the paltry 1% or so that school systems spend on professional
development:
(B)etter teaching produces better results. However, the
magnitude of resources required for professional development--
-consistent with new developments in the disciplines and higher
expectations for teaching, learning, and assessment—is huge
compared to the minuscule amounts provided in current
education budgets (p. 668).
Berliner (2001a) also suggested adding days to the school year, including 10 days for
teacher professional development. Little (2002) adds another idea: "American schools
would do well to support more out-of-class time for teachers. Without altering the
number of paid teacher days or lengthening the official duty day, many 'restructuring'
schools have reorganized time to enable teachers to collaborate on a daily or weekly
basis" (p. 48).
Not providing adequate time and training for teachers to grow and impact student
learning is criminal when they are being held accountable for just that.
Rethinking the Role of Teachers
There is considerable new thinking about how data could be used at the teacher
level. Organizational learning and reflective communities of practice are very concerned
with how teachers engage with information, peers, and ultimately students to improve
instruction. Rollie (2002) describes reform approaches as being “simplistic and
inconsistent. Reformers have not strategically used what we have recently learned about
how to best help schools achieve superior results” (p. vii). For school improvement to
occur, professional development, collaboration, pedagogical improvement, and student
learning need to interact (Fullan, 2002). This helps define a learning organization,
which “means continually acquiring new knowledge, skills, and understanding to
improve one’s actions and results” (Fullan, 2002, p. 7).
Professional Communication and Collaboration
One of the most important elements of an effective school is professional
communication and collaboration. Schools can isolate teachers, support an “individual
artisan,” or have a collaborative culture (Little, 2002). The latter promotes “the power of
professional community to heighten teachers’ efficacy and strengthen the overall
capacity of a school to engage in change” (Little, 2002). Teachers at a school with a
collaborative culture “would more often communicate about the progress of students,
develop curriculum or assessments together, and spend time in one another’s classrooms.
Their week would incorporate regularly scheduled time for consultation and
collaboration in addition to personal planning time” (Little, 2002). Lipman (1998)
makes the assertion “that teacher collaboration will facilitate critical inquiry, reflection,
and dialogue essential to educational change” (Williams, 1999, p. 101).
Conclusion
There are many pathways to reform, and every year more reliable data are being
generated as to what reform measures are more successful than others. Reformers hoped
that Standards-Based Reform and NCLB would be the final push to raise student scores
and see success for all students. Four years into the legislation, this does not appear to
be happening. Standardized, structured, and “teacher-proof” curriculum and materials
were developed to raise student achievement; however, NAEP scores have not shown
positive gains since NCLB was instituted. Finally, it appears that teachers have been left
out of the equation and Fullan, Hill, and Crévola are touting their reappearance in the
educational reform process as a “breakthrough” or “tipping point” that will make the
final difference for learners. Their call is supported by many other researchers to
increase the proficiency of teachers in high-quality instruction (Darling-Hammond,
2000).
There appeared to be a disconnect between policy and high-quality curriculum and
instructional methodology. It was not until the most recent writing, namely
Breakthrough by Fullan, Hill, and Crévola, that the important role of the teacher
and high-quality instruction in all reform elements was reemphasized.
Darling-Hammond (2000) cites Brandt (1998), Danielson (1996), Schlechty (1997),
Wiggins and McTighe (1998) to describe "expert" or "distinguished" teaching:
(It) focuses on the understanding and skills of a discipline,
causes students to wrestle with profound ideas, calls on students
to use what they learn in important ways, helps students
organize and make sense of ideas and information, and aids
students in connecting the classroom with a wider world (p. 7).
This “expert teaching” is characterized by “differentiated instruction,”
“individualized instruction” or “personalization” (Darling-Hammond, 2000; Fullan,
Hill, & Crévola, 2006). Tomlinson (2000) defines differentiated instruction as
follows:
What we call differentiation is not a recipe for teaching. It is
not an instructional strategy. It is not what a teacher does when
he or she has time. It is a way of thinking about teaching and
learning. It is a philosophy….By definition, differentiation is
wary of approaches to teaching and learning that standardize.
Standard-issue students are rare, and educational approaches
that ignore academic diversity in favor of standardization are
likely to be counterproductive in reaching the full range of
learners (p. 6).
Sirotnik (2002) chimes in regarding the importance of individualization for students
and adds,
…educators should know more about any given child than any
test can tell us. And if they worked in organizations and
political settings that valued them as professionals and provided
the training, resources, and environments necessary to do their
work well, they could make good judgments about each and
every young person in our schools (p. 672).
Within individualized instruction, students' needs are constantly assessed
and reassessed. Data is an integral part of driving high-quality instruction
through the assessment process.
In order for teachers to become adept at “expert teaching” including using
effective data practices, much new learning must occur. Shepard et al. (2005) expand
upon this idea of new learning:
Assessment of student learning is an integral part of the learning
process. A generation ago, it was considered sufficient if
teachers knew how to give tests that matched learning
objectives….To be effective, teachers must be skillful in using
various assessment strategies and tools such as observation,
student conferences, portfolios, performance tasks, prior
knowledge assessments, rubrics, feedback, and student self-
assessment. More importantly, they must be able to use insights
from assessment to plan and revise instruction and to provide
feedback that explicitly helps students see how to improve (p.
276).
Designing and implementing an adequate data design plan for a district,
school, and teachers is a daunting task. Making sure the large policy framework
trickles down to classroom practice is difficult. Providing high-quality, effective
staff development is an important component in this process. Hirsh (2004)
recommends that school districts no longer engage in “adult pull-out programs” as
their primary type of staff development. She does not feel that this kind of learning
will produce positive results. Rather, she recommends that professional learning be
embedded in the school day. This might look like adults working in learning
communities using
disaggregated data to set priorities for adult learning, to monitor
students’ progress, and to help sustain continuous
improvement….Professional development focuses on
deepening educators’ content knowledge, applying research-
based strategies to help students meet rigorous standards, and
using a variety of classroom assessments (p. 13).
Resnick and Glennan (2002) describe nested learning communities and
nested learning plans. They see no reason why district, school, and individual
learning plans could not "nest" together to "become the strategic plan for the
improvement of instruction and learning for the district as a whole," becoming
"real instruments for change" (p. 21).
There are many elements that must work in tandem as this change process
occurs. This may create a disconnect, or "disequilibrium" as Fullan (2001) calls it.
Deal and Peterson (1994) write: “Most changes require transformation in deeper
patterns of people’s thoughts, feelings, and beliefs….For anyone, this process is
frightening, disorienting, and upsetting” (p. 105). Fullan (2001) describes the
dynamics behind these change processes in more detail:
For all these reasons, successful organizations don’t go with
only like-minded innovators; they deliberately build in
differences. They don’t mind so much when others---not
themselves---disturb the equilibrium…Successful organizations
and their leaders come to know and trust that these dynamics
contain just about all the checks and balances needed to deal
with those few hard-core resisters who make a career out of
being against everything---who act, in other words, without
moral purpose (p. 43).
Indeed, this definition of expert teaching aptly describes where our students
need to be in the 21st century. Friedman (2004) writes that creativity is the most
important attribute we can continue to nurture in our students, and it may be the one
advantage America has over global competitors.
On such a flat earth, the most important attribute you can have
is creative imagination---the ability to be the first on your block
to figure out how all these enabling tools can be put together in
new and exciting ways to create products, communities,
opportunities, and profits. That has always been America’s
strength, because America was, and for now still is, the world’s
greatest dream machine (Friedman, 2004, p. 469).
He goes on to write that we need the next generation to be more imaginative than
ever. Fortunately, the next generation, dubbed the "millennials," is characterized
by optimism, team playing, and confidence (Wilcox, 2006, p. 8). Within the mix of
standards-based reform, accountability, data-driven practices, etc., one cannot forget
the most important element of education: igniting passion in students.
CHAPTER 3
RESEARCH METHODOLOGY
Introduction
Thirteen Ed.D. candidates and two Ph.D. candidates at a large research
institution, under the guidance of Dr. David Marsh, Assistant Dean for Academic
Programs, worked as a cohort to investigate and describe how urban schools and
school districts throughout the state of California use data. These 15 doctoral
students had a wide range of experiences and positions ranging from being an
assistant superintendent to full-time graduate students. The researcher’s results
formed the basis of their dissertations. Three of these dissertations formed the
foundation for this comparative case study. Each researcher selected an urban and
ethnically diverse school within the state of California that had been recommended
as having advanced practices in data use. Each of the three cases looked at how a
district designed and used student performance data to increase student achievement,
where this design fell in the context of current and emerging trends in data use in
the State of California, and, finally, whether the design was implemented and adequate.
The purpose and focus of this cross-case analysis was to explore the effect
district data use policy had on the local school site and in what ways data was
effective at improving instructional practices in the classroom and thereby raising
student achievement. This chapter describes the design of the research study and the
methodology used for instrumentation development, data collection, and data
analysis.
Three research questions guided each of the case studies and the cross-case
analysis:
1. What is the district design for using data regarding student performance,
and how is that design linked to the current and the emerging state
context for assessing student performance?
2. To what extent has the district design actually been implemented at the
district, school and individual teacher level?
3. To what extent is the district design a good one?
This comparative case study is a collection of findings from three elementary
schools located in Southern California. The study used a multi-method qualitative
approach that focused on in-depth interviews, observations, and document analysis
(Best & Kahn, 1993; Patton, 1990). The study is a descriptive, analytic case study
that is qualitative in nature. Miles and Huberman (1994) describe the strengths and
defining characteristics of qualitative research: 1) Focus on naturally occurring,
ordinary events in natural settings, 2) Local groundedness---data was collected in close
proximity to a specific situation, 3) Richness and holism---strong potential for
revealing complexity (p. 10). Miles and Huberman (1994) go on to define and
contextualize qualitative analysis as such:
Qualitative data….are a source of well-grounded, rich
descriptions and explanations of processes in identifiable local
contexts….good qualitative data are more likely to lead to
serendipitous findings and to new integrations; they help
researchers to get beyond initial conceptions and to generate
or revise conceptual frameworks. Finally, the findings from
qualitative studies have a quality of "undeniability." Words,
especially organized into incidents or stories, have a concrete,
vivid, meaningful flavor that often proves far more convincing
to a reader---another researcher, a policymaker, a practitioner---
than pages of summarized numbers (p. 1).
Under the umbrella of qualitative analysis is the subset of a case study. “The
case study is a way of organizing social data for the purpose of viewing social
reality….The case study probes deeply and analyzes interactions between the factors
that explain present status or that influence change or growth” (Best & Kahn, 1993,
p. 193). This project focuses on multiple cases; two of the main purposes for looking
across multiple cases are to 1) increase generalizability and 2) deepen one's
understanding of the findings “to develop more sophisticated descriptions and more
powerful explanations” (Miles & Huberman, 1994, p. 172). This study utilizes
qualitative analysis techniques including interviews, observations, and document
analysis. It uses a multi-case study approach and ultimately strives for
generalizability and a more sophisticated explanation in the study of elementary
teacher data practices. Based on these definitions, a descriptive cross-case study is
appropriate for this study.
Sample and Population
This study focuses on three elementary schools located in Southern
California. All of the districts and schools in this study were selected based on four
main criteria that created a purposive sample: 1) The district has a design in place for
collecting and using data regarding student performance; 2) The district must use
multiple measures (including norm and criterion-referenced testing) to gauge student
achievement that are linked to the current and emerging state context; 3) The district
and school were identified as engaging in “best practices” for data use and showed
student achievement gains as a result of these practices; 4) The district must have a
diverse student population with a mid-range socioeconomic status. A purposive
sample is helpful in narrowing down a wide range of research study choices and
providing a “logic and coherence that random sampling can reduce to uninterpretable
sawdust” (Miles & Huberman, 1994, p. 27).
A district administrator and site administrator were interviewed from all the
districts and sites. The interviewees were preferably the superintendent or an
assistant superintendent and a principal. In addition, there were six teacher
participants interviewed at each school; the teachers came from both positions of
leadership at the school (i.e., coaches, coordinators, etc.) as well as typical classroom
teachers. All subjects participating in the interviews were assured anonymity.
The three cases represent three elementary schools located in Southern California
with three different populations. One of the districts was considered "inner-city" and
two were more suburban. The sample and population are summarized in the
following table for the three cases:
Table 3.1
Sample and Population With Key Features of Each District and
School for Each Case
Case 1: 10 elementary, 3 middle, and 2 high schools in the district; located in Los
Angeles; 15,000 students in the district; 82% minority students in the district; 523
students in the school; approximately 50% English language learners; 88% minority
students in the school.

Case 2: 34 elementary, 8 middle, and 5 high schools in the district; located in Orange
County; 45,892 students in the district; 27% minority students in the district; 1,006
students in the school; less than 3% English language learners; 13% minority students
in the school.

Case 3: 6 elementary, 1 middle, and 1 high school in the district; located in the Inland
Empire; 16,598 students in the district; 62% minority students in the district; 700
students in the school; 52% English language learners; 81% minority students in the
school.
There is quite a range in the size of the selected districts, from 15,000 students to
almost 46,000. The cases also range in diversity, from a school whose population is
13% minority students up to one at 88%.
Case 1 consists of a K-12 unified district, “Studio Unified School District”
and a K-5 elementary school, “Divine Elementary School.” This elementary school
is a 1997 California Distinguished School, and many other schools in the district
have this designation as well. 66.2% of the school population is Hispanic and
represents the largest minority population group. Over half the students who enroll
speak another language besides English as their primary language. This is also an
II/USP school in its 3rd year of plan implementation.
Case 2 is also a K-12 unified school district, “Sun City Unified” and a K-5
elementary school, “Southern Elementary School.” The district spans 195 square
miles and has doubled in enrollment since 1992. The district is composed of a 27%
minority population while Southern Elementary has only a 13% minority population.
Southern Elementary is surrounded by a National Forest; the school incorporates the
study of nature and the environment into the curriculum. The PTA is an integral part
of this school and hosts many family events. The school has a stable student
population with a low transiency rate. Less than 1% of the students are on a free or
reduced price lunch program. There is a large emphasis on technology at this school;
there are 150 Internet-connected computers. There is also a school television show
that highlights student achievement and school activities.
Case 3 is a relatively small K-12 district, and a K-5 elementary school,
“Grand Elementary School.” Grand Elementary School has a large minority
population of 88%. Grand has shown strong academic gains as measured by API
scores; from 1999 to 2001, its score increased by 41 points.
Each of the cases has varied personnel resources at the school sites.
Naturally, the larger the school, the greater the personnel resources. Only Case 2 had
an assistant principal on site. Two of the cases had at least a part-time literacy
specialist, and all of the cases had either an English Language Development Specialist
or aides to assist with second language learners. The schools also had some unique
personnel resources, including P.E. teachers, a program manager, and a computer lab
aide. Table 3.2 describes the personnel resources available at the schools in each of
these cases.
Table 3.2
Personnel Resources Available at the Schools in Each of These Cases.
Case 1: principal; 23 general education teachers; 3 special education teachers;
curriculum specialist; ELD specialist; part-time literacy resource teacher;
psychologist; 13 other classified staff.

Case 2: principal; vice principal; 42 general education teachers; psychologist;
speech/language teacher; 1 resource specialist; 2 P.E. teachers; 1 nurse (1 day per
week); 1 librarian; 2 ELD aides; 2 instructional aides; 4 other classified staff.

Case 3: principal; 27 general education teachers; 2 special education teachers;
program manager (known as a Compensatory Education Resource Teacher); literacy
specialist; 5 special education aides; 1 computer lab aide; 4 instructional aides;
4 bilingual aides.
Instrumentation
The design and instrumentation for this study was developed by a
collaborative team composed of thirteen Ed.D. candidates and two Ph.D. candidates
at the University of Southern California’s Rossier School of Education. Dr. David
Marsh led the research team in a seminar during the summer of 2001. The research
team collaboratively determined the focus of the study and research questions as well
as the instruments and conceptual frameworks needed. Current literature was
reviewed.
Conceptual Frameworks for the Instrument Content
Two conceptual frameworks were developed for the study; they defined the
elements for the design and implementation of data use policies and practices.
Conceptual Framework A (CFA): Description of Data Use Policies and Strategies:
The Design is defined in Table 3.3 below. CFA drew upon the theory associated
with the relationship amongst levels of organization.
Table 3.3
Conceptual Framework A: Description of Data Use Policies and Strategies:
The Design
________________________________________________________________
• Student Performance Assessed in the Context of Current and Emerging
Instruments
• Overview of the Elements of District Design of Data Use to Improve Student
Performance
• District Decisions and Rulings that Support Use of District Design
• Intended Results of Design Plans to Improve Student Performance (District,
School, and Classroom)
• Data Use Policy and Strategy Funding
Conceptual Framework B: Implementation of Data Use Policy and Strategy
in Practice used the work of Hall and Hord (2001) to define the implementation of
change practices.
Table 3.4
Conceptual Framework B: Implementation of Data Use Policy and
Strategy in Practice
________________________________________________________________
• Degree of Design Implementation
• Implementation of Current Data Practices (District, School Site, and Classroom)
• Implementation of Emerging State Data Practices (District, School Site, and
Classroom)
• Accountability for data use at district, school, and individual level
• Improving Student Achievement through Implementation of Data Use
Tables 3.3 and 3.4 outline Conceptual Frameworks A and B; however, each of the
bulleted elements has a number of sub-elements that guided the researchers as they
assembled their case studies (Appendices A and B). Conceptual Frameworks A and
B also clearly delineated current versus emerging data practices. Table 3.5 defines
some of the elements that distinguish the evolution of current from emerging data
practices. It is important to be able to distinguish between these two types of
practices because they serve as a barometer in each case study for seeing whether data
practices really are advanced and whether the data plans are adequate and well
implemented.
Table 3.5
Current vs. Emerging Data Practices
Current Data Practices
• Improving student performance using Stanford-9 scores and API ratings as defined by the State of California.
• Improving student performance using California Content Standards.
• Use of authentic assessments and norm-referenced assessments.
• Interventions with students linked to performance assessments.

Emerging Data Practices
• Use of criterion-referenced tests and performance assessments.
• Preparation for the High School Exit Exam (Sr. High only).
• Preparation for the California English Language Development Test (CELDT).
• Measurement of student performance against international performance standards.
The cross-case researcher was one of the Ph.D. students on the team, and her
contribution was an overall synthesis of the design and instrumentation elements
through the Data Study Case Study Guide. This guide organized the collection of
information needed about many aspects of the design and implementation of data use
policy and strategy at the district and school. The case study guide included the
following elements: directions for collecting research, various instruments, and
Conceptual Frameworks A and B. The methodology used in the case study guide
was anchored in Miles and Huberman (1994). The data collection instruments contained
in the Case Study Guide included:
• Teacher Questionnaire: A circled response, five-point Likert scale survey.
Responses range from “strongly agree” to “strongly disagree.” The survey
consisted of 34 items focusing on a) degree of design implementation of
current data practices; b) degree of design implementation of emerging state
data practices; c) accountability for data use at the district, school, and individual
level; and d) improving student achievement through implementation of data use
(Appendix C).
• Stages of Concern Questionnaire: A circled response, five-point Likert
scale survey based on the Stages of Concern Model (Hall, Wallace, &
Dorsett, 1973). The questionnaire consisted of 35 questions constructed to
determine what teachers who are using or thinking about using the district’s
design to use data to improve student learning are concerned about at various
times during the innovation adoption process (Appendix D).
• Situated Interviews: The same six teachers from the formal interviews were
interviewed to generate examples about the ways in which they use data in
their school/ classroom. The Data Study Case Study Guide contained an
interview guide for the situated interviews (Appendix E).
• A Researcher Rating Form: This form and its matrices assisted the
researchers after data collection in assessing the adequacy of the district’s
data use design. The instrument uses a five-point scale, with ratings ranging
from 1 (“not effective”) to 5 (“very effective”) (Appendix F).
Each data collection instrument had a clear link to one or more of the three
guiding research questions, which guided the researchers in their data collection:
Table 3.6
The Relationship of Data Collection Instrumentation to Research Questions

Data Collection Instruments                                RQ1:      RQ2:              RQ3: Adequacy
                                                           Design    Implementation    of Design
Case Study Guide (interviews with a district
administrator, a site administrator, and 6 teachers
made up of grade level/department leaders and
average teachers; artifact analysis/collection;
quantitative data)                                            X           X                  X
Situated Interviews (forming vignettes with the
same 6 teachers)                                                          X
Teacher Questionnaire                                                     X
Stages of Concern Questionnaire                                           X
Researcher Rating Form (post data collection)                                                X
Data Collection
All of the dissertation researchers collected their information over one to
two months in the fall or winter of 2001. Written permission was obtained from the
respective school districts, sites, and personnel for the case studies to be conducted.
On-site data collection typically lasted a number of days if not weeks and involved
at least two separate visits. Multiple data collection instruments validated the studies
by providing corroborative evidence through triangulation, using several sources and
eliminating biases (Gall, Borg, & Gall, 1996; Patton, 1987). For example, the
triangulation of survey questions, archival data, and interview questions helped
validate the findings for research questions 1 and 2 (Duncan, 2002). Consistency
among the cases through standardized data collection procedures was also important
and hopefully contributed to credibility, transferability, dependability, and
confirmability across the cases (Isaac & Michael, 1995). All subjects and school
sites were assured confidentiality.
The research team met for two intensive training sessions: once in October
2001 to review data collection procedures and once post data collection, in
February, to discuss preliminary findings. The Ed.D. researchers completed their
case studies with their dissertations. The cross-case researcher analyzed the findings
of the researchers who targeted the elementary level in their case studies.
Staff Interviews
Each of the researchers interviewed a number of people at the district, school,
and classroom level. All participants were assured of their anonymity. All
interviews were conducted using an Interview Guide, Conceptual Framework A
(CFA) and Conceptual Framework B (CFB). The entire teaching staff of each
school answered two questionnaires. Each individual researcher created data
collection instruments for the formal interviews with questions gleaned from
Conceptual Frameworks A and B. A formal interview was conducted with

at least one district leader, preferably the superintendent or an
assistant superintendent, one site principal and six teachers.
The teachers should come from both positions of leadership in
the school (i.e., chairs, coordinators, etc.) as well as typical
classroom teachers (Data Study Case Study Guide, 2001).
A “situated interview” was conducted with the same six teachers from the formal
interviews using the Interview Guide. The situated interviews were conducted in the
teachers’ classrooms or in a location where examples of data use could easily be
accessed.
accessed. “The purpose of these interviews is to generate stories and examples about
the way data are used (and not used) in the school you are studying. These stories
will be used to form vignettes that will help illustrate the information gathered
through some of the other instruments” (Data Study Case Study Guide, 2001). Table
3.7 defines the characteristics of the leadership and teachers interviewed.
Table 3.7
Characteristics of the Leadership and Teachers Interviewed

Case 1
District Leader: Coordinator of Student and Program Evaluation. White female with a doctoral degree, employed by the district for 41 years and in her current position for 5 years.
School Site Leader: Principal. Bilingual, Hispanic female at her site for 2 years.
6 Teacher Leaders: 5 of the teachers were classroom teachers selected because they had implemented the data use policy; 1 of the interviewees was a Curriculum Specialist.

Case 2
District Leader: Executive Director of Elementary Operational Services. White female in her current position for 1 year.
School Site Leader: Vice-Principal, designated by the principal because she was the most knowledgeable about data use and student performance at the school; she also serves as the testing coordinator and data trainer at the school site. White female in her 2nd year in this position, which is shared with another school.
6 Teacher Leaders: White teachers in grades K-4 with 5-31 years of experience.

Case 3
District Leader: Assistant Superintendent. White male who had been in his current position for 4 years.
School Site Leader: Principal. White female who had been in her current position for 1 year.
6 Teacher Leaders: 6 teachers.
Teacher Questionnaires
All of the teachers at the three school sites were asked to fill out two
questionnaires: Teacher Questionnaire and Stages of Concern Questionnaire. All
participants were assured that their responses would remain confidential. The
Teacher Questionnaire questions were based on Conceptual Framework B. The
second questionnaire, the Stages of Concern Questionnaire, is a circled response,
five-point Likert scale survey based on the Stages of Concern Model (Hall, Wallace,
& Dorsett, 1973). The purpose of the Stages of Concern Questionnaire was “to
determine what teachers who are using or thinking about the district’s design to use
data to improve student learning are concerned about at various times during the
innovation adoption process” (Data Study Case Study Guide, 2001). In Case 1, all 24 of
the 24 teachers present at a staff meeting completed the two questionnaires. In Case 2,
all 38 teachers present completed the two questionnaires at a faculty meeting. In Case
3, 18 teachers out of a total of 29 completed the questionnaires. Some indicated that
they did not complete the questionnaire because they thought it was too intrusive
(Duncan, 2002).
Observation
All the observations conducted by the researchers were informal in nature
and helped to flesh out an “environmental” picture of the school and district cases
(Data Study Case Study Guide, 2001). The researchers collected evidence from
district and school records, district and school policies, district and school
publications, and by other means available to them. Each researcher visited several
classrooms to view and document the instructional and data use practices (if
any) that were in place in the classrooms. Sample documents were collected during the
observations.
Quantitative Data Analysis
The case researchers collected a number of documents that supported both
current quantitative data (API reports, 1999-2001; SAT-9 scores, 1999-2001; and
STAR results) and emerging data (performance assessments; ELA, math, science,
and social science scores; and CELDT results).
Artifact Analysis
“The artifact analysis is an informal environmental school and district data
collection” (Data Study Case Study Guide, 2001). The researchers collected
evidence while informally observing and interviewing district and school staff
including:
• State Reports (State of California Department of Education web page, STAR
reports, API, etc.)
• District Records (Check with research department, Superintendent’s office, etc.)
• School Records (check files, with Office Manager, testing coordinators, etc.)
• District Correspondence (Bulletins, Memos, Addenda, Handbooks, School
Data Systems (i.e., SIS))
• School Correspondence (Bulletins, Memos, etc.)
• District Publications (District Newsletters, Newspaper Articles, Training
Manuals, etc.)
• School Publications (Monthly Newsletters, PTA Bulletins, School Handbook,
etc.)
• Classroom Publications (Parent Newsletters, Back to School Night Agendas,
Classroom Newspapers, etc.)
• Teacher Management Systems (goal setting charts, portfolios, rubrics, lesson
plans, etc.) (Data Study Case Study Guide, 2001).
The researchers especially focused on providing several examples of data use on the
district, school site, and classroom level in both the current and emerging context.
Some of the documents collected in Case 2 were: API, School Accountability Report
Card, miscellaneous memos, individual teacher systems of data use, and newspaper
articles (Dombrower, 2002). Case 3 collected archival data for a number of years
that included: SAT-9 data, API data, the School Accountability Report Card
(SARC), and district developed and purchased interim assessment reports (Duncan,
2002).
The original researchers used various methods for recording data:
written notes, tape recorders, laptop computers, etc. Finally, the researchers
reconciled the proposed district plan for using data with what they actually observed
at the school site.
Post-Data Collection
The Researcher Rating form was utilized by all three researchers to
summarize their observations regarding the adequacy of design for data policies.
Data Analysis
The cross-case analysis was accomplished through content analysis of the
dissertations. The analysis was done using secondary data that was publicly
available. The cross-case researcher had the task of analyzing three case-study
dissertations with the same research questions using qualitative analysis. Patton
(1987) makes an adroit observation when he writes: “The analysis of qualitative data
is a creative process. There are no formulas, as in statistics. It is a process
demanding intellectual rigor and a great deal of hard, thoughtful work” (p. 146).
During this process, information from the dissertations was coded, charted, and
compared to define patterns and themes across the cases. Gall, Borg, and Gall
(1996) describe examining case study data closely as “interpretational analysis”
whereby the researcher “find(s) constructs, themes and patterns that can be used to
describe and explain the phenomenon being studied” (p. 563). This will “…push
him or her to find the most ‘elegant’ organization of the themes” (p. 563).
The cross-case researcher followed the three steps for analyzing a case study
as defined by Patton (1987): 1) Assemble the case data, 2) Construct a case record
(“this is a condensation of the raw case data organizing, classifying and editing the
raw case data into a manageable and accessible package”), and 3) Write a study
narrative that is a “readable, descriptive picture…” (p. 149). The cross-case
researcher looked for details and patterns to weave together a story of how data was
used to drive instructional practice in the classroom. The research questions, conceptual
frameworks, and data collection instruments were used to generate coding elements
for a content analysis of the text. Data displays, as defined by Miles and Huberman
(1994), were used to classify and analyze the data through content analysis and to
aid in constructing the case record. Coding schemas were refined many times. For
some of the coding, written description was color coded to define areas of high,
medium, and low.
Strauss and Corbin (1998) define coding as “the analytic process through
which data are fractured, conceptualized, and integrated to form theory” (p. 3).
Miles and Huberman (1994) define qualitative analysis as consisting of “three
concurrent flows of activity:” data reduction, data display, and conclusion drawing/
verification (p. 10). Data reduction consists of choosing what data to highlight and
simplifying it; in essence making “analytic choices” (p. 11). Data displays take all
generated text and break it down into easily digestible chunks that might include
matrices, graphs, charts, and networks. “All are designed to assemble organized
information into an immediately accessible, compact form so that the analyst can see
what is happening and either draw justified conclusions or move on to the next step
of analysis the display suggests may be helpful” (Miles & Huberman, 1994, p. 11).
Finally, conclusion drawing and verification is an ongoing process of “noting
regularities, patterns, explanations, possible configurations, causal flows, and
propositions” (p. 12). These three “streams” “form an interactive and cyclical
process” (p. 12).
The cross-case researcher began the analysis by reading each of the three
dissertations, paying special attention to Chapters 4 and 5. Content analysis was used
to begin creating sub-categories or themes (the salient characteristics of each case)
(Gall, Borg, & Gall, 1996). Then the cross-case researcher looked for patterns
(explanations for phenomena) across the cases (Gall, Borg, & Gall, 1996).
Conceptual frameworks A and B and the researcher rating scale provided the
skeleton to begin coding the information from the dissertations. As the analysis
continued, data displays were continually refined. For example, the initial analysis
of the researcher rating form consisted of a display with the rating and textual
support for the rating. This same data display was also presented with just the
ratings color coded by rating to quickly and graphically see the patterns of high,
medium, and low ratings. Miles and Huberman (1994) describe this as
“partitioning” and “clustering” meta-matrices that “are progressively more refined,
usually requiring further transformations of case-level data into short quotes,
summarizing phrases, ratings, and symbols” (p. 178). This technique was used
throughout the analysis.
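
To make the partitioning and clustering concrete, the short sketch below is illustrative only; it is not part of the original study, and the category names, ratings, and band thresholds are invented. It shows how a meta-matrix of researcher ratings can be collapsed into the high/medium/low bands that the color coding represented:

import pandas as pd

# Illustrative researcher ratings (1-5) for a few design categories;
# the real categories and values came from the three case dissertations.
ratings = pd.DataFrame(
    {"Case 1": [5, 4, 2], "Case 2": [3, 2, 1], "Case 3": [4, 3, 3]},
    index=["Current assessments", "Emerging assessments", "Data training"],
)

def band(score):
    # Collapse a 1-5 rating into the bands used for color coding.
    if score >= 4:
        return "high"
    if score == 3:
        return "medium"
    return "low"

# The banded meta-matrix lets the analyst scan for cross-case patterns,
# much as the color-coded data displays did.
print(ratings.applymap(band))

A display of this kind preserves the case-level ratings while making cross-case patterns immediately visible, which is the purpose of the compact displays Miles and Huberman describe.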
Conceptual Framework A (CFA) served as the skeleton to analyze the
findings for research question 1, data design. A data display was created to compare
the three cases based on CFA. A textual analysis of the three dissertations was
conducted and findings gleaned from interviews, artifact analysis, and quantitative
data were coded in the data display to look for patterns across the cases. The data
display was continually refined.
This same technique was used to analyze Research Question 2, design
implementation. Conceptual Framework B (CFB) was used to create a data display
to compare the three cases using information from interviews (formal and situated),
artifact analysis, and quantitative data. The Teacher Questionnaire scores were also
placed on a spreadsheet by question to look for averages and patterns across the three
cases. Likewise, the Stages of Concern responses were placed on a spreadsheet by
question/ stage to look for patterns across the three cases. A distribution of strongest
teacher concerns by case was also created.
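
A minimal sketch of this spreadsheet work follows. The response values are hypothetical, and the sketch assumes the standard 35-item Stages of Concern instrument with five items per stage across seven stages (0-6); the actual item-to-stage key may differ.

from statistics import mean

# Hypothetical item averages; None marks questions a case did not report
# (Case 2 omitted several of the 34 Teacher Questionnaire items).
case_items = [3.4, 3.1, None, 2.8, None, 3.6]  # truncated for illustration
reported = [x for x in case_items if x is not None]
print(f"Average over reported items: {mean(reported):.2f}")

def strongest_stage(responses):
    # Return the stage (0-6) with the highest mean raw score for one
    # teacher, under the assumed five-items-per-stage grouping.
    stage_means = [mean(responses[s * 5:(s + 1) * 5]) for s in range(7)]
    return max(range(7), key=lambda s: stage_means[s])

teacher = [3, 4, 4, 5, 3, 2, 2, 3, 2, 1, 1, 0, 1, 1, 2, 0, 1, 0,
           0, 1, 2, 1, 2, 2, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0]
print(f"Strongest concern stage: {strongest_stage(teacher)}")

Tallying the strongest stage across all respondents in a case yields a distribution of strongest teacher concerns like the one described above.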
The coding for Research Questions 1 and 2 helped answer Research Question 3
regarding the adequacy of design; the interviews, artifact analysis, and quantitative
data pieces helped illuminate the findings for Research Question 3. In addition, the
Researcher Rating Scale was used to inform Research Question 3. A data display
was created using text and a numerical rating to compare the three cases. This
display was refined to a point where each category was color coded so the researcher
could quickly see patterns of high, medium, and low ratings across the three cases.
A number of problems arose during this cross-case analysis. The researchers’
original data was lost, leaving some holes in what was reported in the dissertations.
The largest issues arose around the Teacher Questionnaire: Case 2 did not report
data for 8 of the 34 questions, so averages for Case 2 were computed without those
items. These problems will be pointed out as they arise in Chapter 4.
One of the weaknesses of case study research is that it “will allow a minimal
amount of generalization and little opportunity to derive conclusions of causality”
(Isaac & Michael, 1995, p. 219). This is true: focusing on only three districts and
schools in Southern California cannot account for all the variables one would find at
the many different schools across the state of California, let alone the United States.
It can, however, shed light on the techniques and strategies through which districts
and schools that are purported to be using data effectively to drive instructional
practice, and thereby improve student achievement, are accomplishing this.
Summary
The cross-case analysis sets out to investigate how three elementary schools
use data in hopes of raising student achievement. Chapter 3 included a description of
the Sample and Population for the three elementary school cases. The
instrumentation used for the data collection process was also defined: Case Study
Guide, Teacher Questionnaires, Situated Interviews, and Researcher Rating Form.
The two conceptual frameworks used to guide the data collection were also
described along with the data collection methods used: interviews, questionnaires,
artifact analysis, and quantitative data collection. There was also a description of
data analysis practices using Miles and Huberman data displays to find patterns
across the three cases (Miles & Huberman, 1994). Chapter 4 follows; it provides
the analysis and interpretation of data and findings, with discussion organized by
research question.
CHAPTER 4
ANALYSIS AND INTERPRETATION OF DATA AND FINDINGS
Introduction
The purpose of this study was to look across the experiences of three districts
and three elementary schools within each of those districts as they grappled with
current and emerging state mandates regarding data use. Three research questions
guided the analysis and interpretation of data and findings: a) design for using data
regarding student performance and design linked to the current and emerging state
context for assessing student performance, b) implementation of district design at the
district, school, and individual teacher level, and c) extent the district design is a
good one. The data and findings for the three research questions will be presented in
order, across the three cases, and in great detail in this chapter. The cases used
multiple modes of collecting data including: interviews, questionnaires, informal
observations, rating forms, and review of documents (standardized test reports,
periodic assessments, district benchmark exams, etc.).
Research Question One: Description of Data Use Policies and Strategies: The Design
The first research question is: “What is the district design for using data
regarding student performance, and how is that design linked to the current and the
emerging state context for assessing student performance?” For this question
Conceptual Framework A guided the data analysis of the district data use policy and
practice. Interviews based on this Conceptual Framework provide the primary data
for evaluation as part of the Case Study Guide. In addition, the researchers used
artifact analysis and quantitative data to gather evidence for this research question.
Student Performance Assessed in the Context of Current and Emerging Instruments
Current Assessments
All three elementary cases reported that the district and school are focusing
on improving Stanford-9 (SAT-9) scores and API (Academic Performance Index)
ratings as defined by the State of California. Studio Unified, Case 1, has a
Strategic Master Plan (SMP) which clearly incorporates the need for assessment
procedures that are standardized and have validity and reliability. The SMP also
outlines the expectation for the District to develop performance standards related to
the content standards. In Case 2, the district and all schools wanted to improve their
SAT-9 scores and API ratings yearly. In Case 3, the Administrator of Assessments
stated: “Drastic improvements have occurred in the last two and a half to three years.
We're using data to make decisions. There's been excellent growth, especially at the
elementary school level at almost every one of our schools" (Duncan, 2002, p. 80).
There appears to be an awareness on the district and school levels of the Stanford-9
and API scores in all three cases.
Authentic assessments are beginning to be linked to state standards
throughout the district, school, and classrooms in the three districts. There are
curriculum embedded assessments for language arts and math in Case 1. Cases
1 and 2 have both introduced their own nationally normed testing from the
Consortium on Reading Excellence (CORE) in addition to the Stanford-9 (SAT-9). In Case
2, “CORE assessments assisted teachers in planning challenging individual and class
lessons, planning interventions for some students and making data rubrics that
evaluated academic growth” (Dombrower, 2002, p. 72). Case 3 has implemented
benchmark tests for the California Standards. The benchmark tests are criterion
referenced and test student progress toward mastery of the California state standards.
The tests are given four times a year and teachers receive performance data for their
individual classrooms. These districts have made inroads towards innovative testing
by contracting with an outside agency to bring in CORE tests, using benchmark tests,
and curriculum embedded tests. They are, however, just beginning to scratch the
surface of what would be called “authentic assessments.” Guba and Lincoln (1989)
define an authentic assessment as presenting students with real-world challenges that
require them to apply their relevant skills and knowledge. The cases are
implementing the elements in the “current” assessment trend; they are predominantly
using norm-referenced assessments through the SAT-9 and CORE. There is little
evidence of performance assessments being used except in Case 3 through written
assessments. Clearly, the three cases are not fully implementing some of the key
elements of standards-based reform: authentic assessments, performance
assessments, and criterion-referenced assessments.
All three of the cases have seen their scores improve since they began
standards-based reform and using data to drive instructional practice. A comparison
between 1998 and 2002 Stanford-9 scores demonstrates strong increases in all three
elementary schools in the cases, as shown in Table 4.1.
Table 4.1
Comparison of National Percentile Rank (NPR) for Stanford-9 Scores, 1998 and 2002

                   2nd Grade    3rd Grade    4th Grade    5th Grade    6th Grade
Total Reading      ‘98   ‘02    ‘98   ‘02    ‘98   ‘02    ‘98   ‘02    ‘98   ‘02
Case 1              36    71     30    57     33    54     22    52
Case 2              70    79     63    79     69    80     71    78
Case 3              24    39     14    32     15    31     15    30     17    35

Total Math         ‘98   ‘02    ‘98   ‘02    ‘98   ‘02    ‘98   ‘02    ‘98   ‘02
Case 1              47    74     40    71     48    73     41    71
Case 2              73    89     61    87     67    87     64    85
Case 3              38    47     24    57     22    38     19    41     26    53
1998 was the first year the Stanford-9 was used for statewide testing in the State of
California. The scores are the National Percentile Rank (NPR), which is based on the
mean Normal Curve Equivalent (NCE) score for each group. The increases are quite
strong, with jumps of over 20 points evident. All grade levels and subject areas show
growth. This same solid growth was exhibited in the Academic Performance Index
(API) score increases for the three schools in the case study from 1999-2002.
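
Before turning to the API results, a brief aside on the NPR/NCE metrics used in Table 4.1 may help: group NPRs are reported through NCEs because NCE scores, unlike percentile ranks, can be meaningfully averaged. The sketch below is illustrative only (it is not part of the original study) and uses the standard NCE definition, a normal score scaled so that values 1, 50, and 99 coincide with those percentile ranks:

from scipy.stats import norm

NCE_SCALE = 21.06  # chosen so that NCE 1, 50, and 99 match NPR 1, 50, and 99

def npr_to_nce(percentile):
    # Convert a National Percentile Rank to a Normal Curve Equivalent.
    return 50 + NCE_SCALE * norm.ppf(percentile / 100)

def nce_to_npr(nce):
    # Convert a (mean) NCE back to the corresponding percentile rank.
    return 100 * norm.cdf((nce - 50) / NCE_SCALE)

# Round trip using Case 1's 2002 Total Reading NPR of 71 from Table 4.1:
nce = npr_to_nce(71)
print(f"NPR 71 -> NCE {nce:.1f} -> NPR {nce_to_npr(nce):.0f}")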
Table 4.2
Comparison Between 1999 and 2002 API Scores

           1999    2002    Change
Case 1      603     770      +167
Case 2      808     876       +68
Case 3      520     600       +80
Once again, all three cases made solid improvement in their API scores. The
highest API score a school can receive is 1000, and anything over 800 is considered a
“high achieving school.” Case 1 made a huge 167-point jump over the three-year
period, though the progress made by Cases 2 and 3 was also very strong, with 68-
and 80-point improvements respectively from the beginning of API rankings (1999)
to the year data collection terminated for this study (2002).
All three of the cases had evidence to show that the schools are beginning to
link interventions for students to assessments (especially within the current
assessment trends). Case 1 had students pre-tested and then targeted instruction was
matched to their areas of need. In Case 2, teachers designed more challenging
studies by augmenting textbook lessons with computer programs in preparation for
standardized testing; this led to improved SAT-9 and API ratings. Case 3 linked
interventions to assessments through the use of technology including Accelerated
Reader, Accelerated Math, Waterford Reading, River Deep Math, and Edge Mart
sight word program. “These programs are tutorials that give students practice at a
particular skill and then provide the teacher with diagnostic feedback so that the
teacher knows how much progress the student has made toward skill mastery and
where the student is in the program lesson progression” (Duncan, 2002, p. 82).
Linking student performance assessments to interventions, however, does not appear
to play a strong part in any of the schools’ or districts’ designs. Once again,
using data in relationship to performance assessments appears to be an
underdeveloped and unclear area in the district plans and in the intended district
impact of data use.
This weakness in all three cases is a key area in need of improvement.
Fullan, Hill, and Crévola (2006) describe “critical learning paths” or CLIPS that they
feel are necessary to improve instruction and student performance (p. 57). They write
that, “Focused teaching requires that teachers have precise and continuously updated
information on students’ starting points and on their progress along the way” (p. 63).
At the heart of CLIPS is the concept of differentiated instruction linked to ongoing
authentic assessments. A critical learning path is made up of “indicators of
progress:”
(They) provide the basis for monitoring the progress of each
individual student within a given stage of development. They
provide feedback to teachers on the effectiveness of their
instruction and specific instructional foci for their daily lessons.
They also form the basis for providing feedback to students to
enable them to self-monitor their learning, to evaluate their
performance, and to know what constitutes an improved
performance. (p. 62)
The indicators of progress must be linked to an assessment plan made up of pre- and
post-testing and procedures for monitoring student progress on a daily basis; these
elements form an explicit profile for each student that makes up the “instructional
regime” of a classroom (p. 63). Technology will be very useful in providing the
ability to access and manipulate the data generated by ongoing authentic assessment.
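
As a purely hypothetical illustration of the kind of per-student record keeping such an instructional regime implies (the field names, scores, and intervention threshold below are invented for this sketch, not drawn from the cases or from Fullan, Hill, and Crévola), a student profile might pair a pre-test starting point with ongoing indicators of progress:

from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    # Explicit per-student profile: a starting point plus ongoing indicators.
    name: str
    pretest: float                 # starting point (proportion correct)
    indicators: list = field(default_factory=list)  # daily/weekly checks

    def add_indicator(self, score):
        self.indicators.append(score)

    def needs_intervention(self, threshold=0.6):
        # Flag the student when recent indicators fall below a mastery threshold.
        recent = self.indicators[-3:]
        return bool(recent) and sum(recent) / len(recent) < threshold

student = StudentProfile("A. Student", pretest=0.45)
for score in (0.50, 0.55, 0.58):
    student.add_indicator(score)
print(student.needs_intervention())  # True: recent mean is about 0.54

A profile like this is what makes differentiated instruction actionable: the teacher sees each student's starting point, trajectory, and whether an intervention is warranted.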
State-of-the-art technology is being used, to varying degrees in the cases, to
address current and emerging student assessments at the state, district, and school
levels. Case 1 has a well-established practice of using technology to facilitate data
use. A Coordinator of Student and Program Evaluation position was created six years ago;
the job description is to perform “organizational diagnosis” (Thompson, 2002, p. 99).
data. Case 3 also uses technology to harness all the data the district generates.
SPSS, CCC, AR, AM, Waterford, River Deep, Edge Mart, and Nova Net are used to
help students improve reading and math skills. The district is "constantly evaluating
the usefulness of intervention software programs that they have in place in their
schools….In sum, the district has utilized computer technology as an intervention
strategy for students who need additional resources to help meet mastery of
curriculum standards" (Duncan, 2002, p. 83). Case 3 was lacking “a central data-
base that provides the whole organization with a central source of data with the
capability of having it formatted in a way that may be meaningful to the particular
user” (p. 123). An example of this would be item analysis pages and matrix pages
122
that describe which standards are being mastered by students and which may need
review and reteach which could be found in a program like IDSM (p. 124). Case 2
does not put much of a focus on technology for their data management. In fact, Case
2 had so little technology support for data analysis that the elementary school
designed paper and pencil methods to chart data and student progress. “If the school/
teacher implementation of data utilization had been a technologically assisted
program of data collection and use, the school’s data program would lead to better
improvement of education performances (McLean, 1995)” (Dombrower, 2002, p.
140).
Emerging Assessments
The district and school data use design plans are beginning to address
emerging assessments. Case 1 clearly addresses emerging assessments in the SMP.
Their SMP was ahead of its time and continues to be modified as needed. The plan
was implemented during the 1997-1998 school year and consisted of summative
assessments. In the 1998-1999 school year, it was modified to include multiple
measures of student achievement including uniform interim assessments. "…(D)ue
to accountability efforts in California, this Action Plan will be sustained and
modified as needed to mesh with the emerging assessments tied to accountability"
(Thompson, 2002, p. 99). Case 3 is also responsive to the changing assessments.
The district is pointedly aware that the emerging practices of the state will eventually
place greater weight on the California Standards Test (CST), a criterion-referenced
test, in calculating API scores. As a result, they implemented the practice of
benchmarking student progress through the use of criterion-referenced tests in their
quest for improving student achievement scores and API outcomes. Case 2, again,
does not demonstrate a deep awareness of, or planning for, the impending emerging
assessments. None of the districts has preparation for the High School Exit Exam.
There was mixed data use for some of the emerging elements: CLAD, CA
Writing Assessment, criterion-referenced testing, and international performance
standards. Only Case 2 had preparation for the CELDT (California English
Language Development Test). The Hampton Brown and Peabody Series were used
to prepare students for the CELDT in addition to computer programs to enhance
English development classes in Case 2. Training (CLAD) was used to prepare
teachers to teach English language learners. Only Case 3 analyzed the data from and prepared
students for the California Writing Assessment. The district has developed writing
prompts tied to the state standards and is pilot testing them. Only Cases 1 and 3 use
criterion-referenced tests on the district and school levels. Case 1 used criterion-
referenced tests district wide for elementary school mathematics and language arts
assessments before content standards were in place in the state; currently, they use
curriculum embedded assessments. Case 3 uses the criterion-referenced benchmark
tests to assess student progress.
None of the cases has measured student performance against international
performance standards by participating in an assessment like NAEP (The National
Assessment of Educational Progress).
Overview of the Elements of District Design of Data Use to
Improve Student Performance
All state-mandated tests are collected throughout the three districts as a result
of California’s Public Schools Accountability Act of 1999. Most of
the districts are moving toward using a collection of assessments that includes
multiple measures: criterion-referenced assessments, norm-referenced assessments,
classroom grades, writing assessments, benchmark assessments, and diagnostic
reports from tutorial programs for reading improvement. Out of the three cases, only
Case 1 appeared to have a clear timeline for the receipt and use of data in the district
design. All of Case 1’s data was funneled and collected into the DataWorks
Program. The Testing Coordinator in Case 1 said that DataWorks “allows her to
know the status of every one of our 12,000 kids based on multiple measures and
interim assessments” (Thompson, 2002). In contrast, in Case 2, the teachers acted on
their own direction as they made flowcharts displaying student growth patterns; “A
school infrastructure system to assist their efforts of data utilization did not exist”
(Dombrower, 2002, p. 75). Case 3 had little district direction for the organization of
data. The elementary school site sets aside one day a month to analyze the data.
Teams of teachers work together.
The districts do provide schools with the results of these multiple measures.
In Case 1, this is evidenced by the administrative notebook as well as the in-house
data each school generates from interim assessments. The school sends the District
the scores on bubble sheets in June; however, the information from the assessments
and measures is used by the teachers throughout the year to “inform and plan
instruction, student grouping and design specific interventions to meet the needs of
students identified as lagging in the acquisition of these skills and knowledge"
(Thompson, 2002, p. 101). In Case 2, all SAT-9, SAT-9+, and CORE data is sent
from the Superintendent to the principals. Data disaggregation procedures are done
by the local schools and practices differ at each school site. In Case 3, the district
has committed resources to ongoing data use. There is a Program Evaluator who
handles the data needs of the district. Most data analysis is handled at the District
level, including the SAT-9, 4th grade writing results, and Benchmark Test classroom and
school reports. Then teams of teachers disaggregate the data based on their own
needs at the local school site. The district hopes to hire an Evaluation Technology
Specialist who can merge data from different software systems to provide more
succinct information for district administrators and other school personnel.
Once again, only Case 1’s district had a clear rationale as to what
methods to use to analyze data. In addition to utilizing DataWorks, the Coordinator
has school staffs do a six-variable analysis of student achievement on the SAT-9.
Cases 2 and 3 do have locally designed, school-level methods of analyzing data, but there
are no clear directives from the District. None of the schools used reputable research
to form the bedrock of the district’s design of data use. Only the principal in Case 3
alluded to the work of Doug Reeves and reported that the district has used the book
Results by Schmoker (1999). The Principal also used refereed journals for staff
development such as an article from Leadership Magazine on how high poverty
schools improve achievement (Duncan, 2002).
Integrating training on improving and modifying data use into the district/school
data use plan is a weak area for all three cases. All of the districts and teachers
reported the need for more staff development on this topic. Some training is going
on throughout the three districts. Case 1 has training from the Pulliam Group for
CORE. In Case 2, the Vice-Principal trained the teachers on how to use data and
chart the information into a planned learning log. In addition, the school district
offered six mandatory staff development sessions for teachers and would pay as
much as twenty-one additional hours of outside staff development. In Case 3,
teachers are encouraged to go to training sessions held by such speakers as Doug
Reeves or Mike Schmoker; however, even with these training options, teachers do
not feel that they are receiving adequate staff development.
District Decisions and Rulings that Support Use of District Design
The School Boards in the three cases have varying degrees of influence with
the data use design process in the districts. Case 1 is a good example of an involved
and participatory School Board whose rulings directly supported the district design
of data. The Coordinator in Case 1 said that the Board is “behind uniformity” and
“wanted multiple measures of student achievement” (Thompson, 2002, p. 108).
Currently, there is a draft policy entitled "Curriculum Review, Improvement, and
Implementation" that delineates the steps in which the district will implement
reform. The School Board in Case 2 seems to have limited influence on district data
policy. Historically, the District formed a committee to make the initial policy
proposal for an assessment program. The School Board of Trustees adopted the
policy in 1995. It became the CORE Reading Assessments, which were designed by a
non-profit organization out of the University of Chicago. The school board in Case 3
is not “directly involved in the structure or design of a district plan for data use”
(Duncan, 2002, p. 83). The district sees data use and implementation as a function of
administration under the Department of Instructional Services.
It is easy to see that the School Board in Case 1 has clearly considered State
Legislation. Thompson (2002) states that “(t)he reason the ‘Studio Unified School
District’ has chosen to implement a data use policy is to be in alignment with the
new state philosophy of academic success for all students. They have stated this in
their SMP Goals/ Action Plans” (p. 112). One cannot see such clear influence from
the other two school boards in Cases 2 and 3. The State mandates seem to have
trickled down to the local school sites even without a clearly laid out district data use
plan and without any influence from their school boards. Case 2 admits that “Sun
City Unified School District did not have a district design, but of importance was
their support of standards and the district’s state normed CORE tests” (p. 76). All
three cases are vague about any timeline to implement the district design, and there
does not appear to be any research studies guiding the board support of the district
design except for CORE in Cases 1 and 2.
Evidence that multiple stakeholders were involved in establishing and influencing
the boards, and ultimately the district data plan, was found only in Cases 1 and 2. It
was very important in Case 1 that, within the first phase of this policy reform,
representatives of all stakeholder groups would be a part of the "Board's continued
effort to align curriculum, resources, textbook selections, and technical assistance to
the state content standards…" (p. 108). Case 2 had less hands-on assistance from
multiple stakeholders, but the Superintendent was trying to communicate with the
community at large through a community newspaper and the district website.
Intended Results of Design Plans to Improve Student Performance
(District, School, and Classroom)
District
All three of the districts want to use data to increase student performance and have
stepped up their focus on doing so. In Case 1, the district does
want to use data to increase student performance (especially using multiple measures
and interim assessments). Their draft reform policy states that they "desire to
provide an educational program that prepares students for each school level and for
post secondary education” (Thompson, 2002, p. 111). Case 2, “Sun City Unified
School District, intends to improve student performances on tests through
challenging classroom instruction" that prepares students for the State’s required
SAT-9, SAT-9+ and CELDT (Dombrower, 2002, p. 76). The Executive Director
described the process used to improve student performance district wide: 1)
determine Multiple Measure Scores, 2) give the results to schools for teachers to
chart individual student growth patterns, and 3) enhance academic learning that
included the use of technology with students. The Superintendent in Case 3
described the desire to use data to increase student performance in terms of principal
evaluation: “When they come in for evaluation, we ask them some pretty poignant
things and it’s all data driven. I think a principal a couple of years ago would have
gotten a different evaluation” (p. 85). In Case 3, each school is expected, at a
minimum, to improve its API numbers in the manner prescribed by the State.
The districts do support the schools’ efforts to improve student performance
by providing student data. In Case 1, all administrators receive a copy of a notebook
of printouts and analysis of how their students are doing. The notebook includes a
copy of the California Content standards by grade level and a description of the
California Writing Standards for Grade 4. Each site is asked to do a six-variable
analysis of student achievement on the SAT-9. In Case 3, the district has committed
financial resources for ongoing data use. They have hired a program evaluator and
want to create a position called an "evaluation technology specialist whose duties it
would be to merge data from the different software systems to provide more succinct
information for district administrators and school building personnel" (Duncan,
2002, p. 85).
Case 1 is clearly in the process of creating a draft plan for data use, and it
ultimately intends to implement that plan at the school sites.
Many of the steps in the plan had already been implemented and were
evident in the "Studio Unified School District” at the time this research data was
collected. Cases 2 and 3 do not have specific written district data plans. There seems
to be a general idea of how a data plan should be constructed from the district’s
perspective, but the plans have not been made prescriptive district wide in the form
of a written plan. A great deal of the two cases’ data use design and planning is done
at the local school level.
School
All three schools are focused on using data constructively to either directly or
indirectly promote student learning. At "Divine Elementary School," in Case 1, the
district reform policy "has taken a back seat to the school's II/ USP (Immediate
Intervention/Underperforming Schools Program) plan. II/USP schools were given
leeway to have a 'special design' according to the site principal" (Thompson, 2002,
p. 113). This was not necessarily a bad thing, though. The main tenet at the heart of
the school’s plan and the district's plan is the same: improve student performance
by using "performance data from interim assessments to focus instructional
interventions where they are needed so every child is successful" (p. 113). In Case 3,
the principal said,
the commitment to using data is high….Directly, the school
site, grade level, and classroom data has been analyzed. Most
specifically it has helped grade levels look at instructional
practices and helped them to formulate and create instructional
practices that are lacking (Duncan, 2002, pp. 86-87).
In all three cases, the teachers are very involved with data management, and this
appears to be important to the school leadership teams. There does, however, appear
to be a discrepancy in Case 2 between how involved the district feels teachers are
and how involved teachers feel they are with data analysis. The Vice Principal "tried
to create and sustain professional grade level teams or 'communities' that were
focused on the success of students through her training for all teachers"
(Dombrower, 2002, p. 78). Teacher interviews in Case 2 indicated that there was
little collaboration across the school on data efforts.
In contrast, Cases 1 and 3 focused their effort to make the use of their data a
collaborative process. In Case 1, the staff and parent/community representative
wrote the II/USP plan with the assistance of the Pulliam Group, an outside
consultant. The plan that was developed had five main goals: 1) develop interim
assessments, 2) use student performance data to drive instruction, 3) create a matrix
of assessments by grade level, 4) level the field for students by giving them at least
three years of assistance, depending on need, and 5) have all teachers at the site
trained and using the CORE materials and strategies. In Case 3, the school
established a time once a week where teachers can collaborate and discuss
instruction, student progress, and assessment. One day a month is set aside solely for
the analysis of data. The teachers work in grade level teams during the analysis and
are guided through the process by the principal, assistant principal, and program
specialist. The teachers are given testing data (state testing, district testing, and
classroom testing) and a set of guiding questions to assist in the analysis. Case 3 has
no plans to improve or change the way data is gathered or analyzed. The principal
and program specialist meet every week to "ensure that they are getting what they
need out of the data. In terms of use, the school uses the data to develop an action
plan based on data analysis. This is an ongoing process that the school feels it needs
to reflect on, use and then follow through on” (Duncan, 2002, p. 87). Other than in
Case 1, there appears to be little long-term data planning or needs assessment to see
where improvements might need to be made.
Professional development is offered in all three cases to
improve administrators’ and teachers’ ability to use data to increase student
performance; however, the quality of this staff development varied among the cases.
In addition to being trained in CORE, the II/USP plan in Case 1 provided training for
staff members on using data to drive instruction and a program from the Pulliam
Group called Instant Data Management System (IDSM). Each year in early
September in Case 2, new teachers and returning teachers reviewed how to use a
flowchart to assess student data. All teachers had a timeframe to prepare their charts
or rubrics of SAT-9, SAT-9+ and CORE results. The school district offered six
mandatory staff development sessions for teachers and would pay for as much as 21
hours of additional outside development sessions per teacher. In Case 3, there did
not seem to be any “direct evidence that indicates that the district is allocating funds
for professional development in the area of data analysis” (Duncan, 2002, p. 86).
Rather, “the emphasis in the district is on data collection and dissemination to school
level administrators and teachers" (p. 86).
The School in Case 3 does seem intent on scaffolding the data analysis as
much as possible for the teachers with little guidance from the district. The school
has articulated its procedures for data analysis in its strategic plan; the plan stipulates
that teachers utilize a variety of assessments to determine students’ success. The
plan reads, "Analysis of the data from these assessments includes teachers, students,
and parents information needed to drive instruction. The data also informs
individual and group instruction needed to remediate and/or address areas of
weakness" (p. 87).
Clearly, each case’s school administrators support school-wide
implementation of a standards-based curriculum to improve student performance.
The administrators in Case 1 support a standards-based curriculum, the
implementation of the District SMP, and their II/USP plan, which is stewarded by the
Pulliam Group. In Case 2, standards-based curriculum was implemented district
wide. “It was the belief of the Vice-Principal of Southern Elementary, that the state
standards were geared to the median of improved academic growth and she wished
for the teachers to achieve higher growth than the mean” (Dombrower, 2002, p. 78).
Teachers were asked to select the standards they were to teach and gear their
instruction according to the flowchart information on each student; in this manner
the instruction was personalized (p. 78). Even with this scaffolding, “the principal
tells teachers ‘we can not use the students as excuses for lack of success.’" Case 3
states that "Based on all of the vehicles that have been put into place to promote data
analysis at the school, it is quite obvious that the administration does wholeheartedly
support school-wide implementation of standards based curriculum" (Duncan, 2002,
p. 89). The principal addresses standards at every Tuesday professional development
meeting. "Teachers break down where standards are in the curriculum, where
standards are in the instruction, and we allow time for teacher collaboration and
implementation for all the above" (p. 89).
All three cases found evidence that the school regularly informed the parents
and community of student performance. In Case 1, “…all stakeholder groups are
made aware of the alignment of standards and performance expectations throughout
the school year through multiple communications” (Thompson, 2002, p. 143). In
Case 2, students and parents were informed of academic progress through
instructional programs, assessment goal-setting conferences, report cards, or
retention letters sent to parents. The Superintendent also communicated with
students and parents through a local weekly newspaper column and online, as well
as through occasional bulletins. In Case 3,
The parents are provided with a bi-monthly newsletter as a
vehicle for raising parental awareness of state and district
standards, to provide information on how they can help their
children at home, and to inform them about the availability of
parenting classes and community references (p. 89).
Only Case 3 demonstrated that the effectiveness of instructional programs
was evaluated. The school evaluated the effectiveness of its instructional program for
each subject area, sometimes creating its own assessments (as in the case of spelling),
using SAT-9 data, benchmark results, API subgroup goals and other STAR
information, and ELL progress.
Two of the cases actively discussed the roadblocks or challenges they faced
in creating a culture of assessment to drive instructional decision-making. In Case 2,
the Executive Director felt that "interpretation of data by teachers" was a roadblock
and the teachers felt that lack of time was a roadblock (Dombrower, 2002, p. 86).
The principal in Case 3 felt that a roadblock was "getting beyond the no excuses
attitude. Trying to embed the high standards without alienating the staff. Helping
teachers deal with the feeling of being overwhelmed. Supporting the teachers over
time. At this school, it was not really a problem because there had been no real focus
in the past. It was a matter of bringing focus to the school" (Duncan, 2002, p. 91).
These two cases point out how important creating a culture of change is as a new
plan/ initiative is designed and implemented.
Classroom
All three cases show a tenuous relationship between the district, the school, and the
classroom teacher, who is ultimately responsible for directly impacting a student’s
learning. In all three cases, classroom teachers are analyzing performance data and
other forms of data. For example, in Case 1, “grade level
teams use data to plan their instruction, group students for intervention and monitor
the achievement of all students. Each month, teachers turn in a 30-day plan to the
principal at this site. These plans rank students, list needs, and identify intervention
strategies to be used with underperforming students” (Thompson, 2002, p. 114).
Whether or not teachers can effectively analyze data or have been provided
the necessary training to improve their ability to use data are other issues entirely.
Case 2 reports that there “was no uniform training provided by the district for all
teachers on how to improve student’s performance through the use of data”
(Dombrower, 2002, p. 80). Nevertheless, the teachers at Southern Elementary had
been trained to utilize data flowcharts as an aid in instruction. In Case 3, teachers
indicated that they “did not fully understand how to analyze and use the
data….Additionally, they did not really understand the district’s role in data use”
(Duncan, 2002, p. 92). One reason cited for this was because the “culture of data use
was not built on a foundation of empirical evidence. Therefore the teaching staff had
no basis for understanding the concepts and constructs of standards based
instruction, or the methodology for data analysis of standards-based outcomes”
(Duncan, 2002, p. 94). The principal outlined three ways they provide staff
development on data use for teachers: 1) model, 2) provide worksheets, and 3)
provide action plans. There are also mini trainings at monthly meetings to help
teachers use data. An artifact analysis showed the worksheets to be helpful because
the teachers were “guided through the analytical process in a concrete fashion” (p.
93). They tell teachers what they should look for in the data and what the data is
telling them (p. 93). Duncan (2002) surmised that the teachers were being provided
"exposure to data analysis, (but) they are not exactly highly focused training with the
staff on analysis of data provided to them about the children they teach" (p. 95). The
teachers in Case 3 did not find these professional development opportunities
sufficient to provide a complex understanding of how to analyze and use data.
Data Use Policy and Strategy Funding
Data use implementation is funded from several sources for the three cases
including II/USP money from the State of California, Federal Title I funds, money
received from the state for testing apportionment ($5.00 per student), donations, and
general funds. Schools have a definite role in the budgeting and funding of the
programs and strategies. In Case 1, the district divides the funding equally for all
schools. The school used Title I funds to make the 50% Curriculum Specialist
provided by the District a full-time position. The II/USP funds were also used for a
consultant (the Pulliam Group) and to purchase additional materials for teachers as well
as conference attendance. In addition, the funds were used to train teachers in
CORE. In Case 2, the schools have definite choices with regard to budgeting for
data use, especially beyond the general fund. The school controlled the budgeting and
funding of programs and strategies for data use, and grade levels did not have an
equal share of funds. Both Cases 1 and 2 see areas where additional funding might
be needed. In Case 1, the II/USP funding is ending at the end of the year. The
principal and teachers are concerned about where future funding will come from to
continue to fund CORE training for teachers new to the site. The principal felt this
was key to continuing the data strategies that have been used at the site. In Case 2,
teachers said they would like reimbursement for enrichment instructional materials
for their classrooms that support data-driven instruction. Case 3 makes no mention of
funding as part of data use policy and strategy other than the purchase of technology
to help students master curriculum standards.
Research Question Two: Degree of Design Implementation
The second research question is: To what extent has the district data plan
actually been implemented at the district, school, and individual teacher level?
Conceptual Framework B was designed to provide a framework to analyze the data
regarding the implementation of data use from the district, school, and teacher levels
in the three cases. The cross-case emphasis will be on the relationship between these
three entities (district, school, and classroom) and how this impacts the
implementation of the district data use design in the classroom. Three elements
made up Conceptual Framework B:
1) The degree of design implementation in the current and emerging contexts.
2) The accountability for data use at the district, school, and individual level.
3) Improving student achievement through implementation of data use.
Interviews with district administrators, site administrators, and teachers, along with a
Teacher Questionnaire and a Stages of Concern Questionnaire provided the tools for
the data collection for answering this second question. This combination of
information from a variety of sources paints a picture of implementation that is more
accurate than if only one source were used. The integrity of the inferences drawn is
supported through triangulation across multiple data sources
(Gall, Borg, & Gall, 1996).
Teacher Questionnaire Responses to Current and Emergent Assessments and
Evidence of Student Improvement
Teachers are the critical factor in any implementation process (Fullan, 2001;
Reeves, 2002; Schmoker, 2001). Research question two targeted teachers’ attitudes
towards district and school data plans and use. The table that follows summarizes
the average responses from the first part of the teacher questionnaire that focuses on
current data practices that address achievement on current state assessments. The
scores on this questionnaire are: “0” = don’t know, “1” = disagree strongly,
“2” = disagree somewhat, “3” = agree somewhat, and “4” = agree strongly.
There is an anomaly in the data presented: Case 2 did not report data for eight
of the thirty-four questions. The averages reported for Case 2 do not include these
items, so they are likely higher than they would be if the missing responses had been
included.
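To make the arithmetic concrete, the following short Python sketch (not drawn from
the case studies; all item means other than the two reported later in this chapter are
hypothetical) illustrates how a section average on the 0-4 questionnaire scale can be
computed when some items are unreported, and why omitting items shifts the average:

    def section_average(item_means):
        """Average the per-item means, skipping items with no reported data."""
        reported = [m for m in item_means.values() if m is not None]
        return sum(reported) / len(reported)

    # Hypothetical per-item means on the 0-4 scale for questions 1-16; None marks
    # an unreported item (Case 2 reported no data for questions 4, 5, 10, and 11).
    # Items 14 (2.78) and 15 (2.48) use the values reported later in this chapter.
    case2 = {1: 3.5, 2: 3.4, 3: 3.3, 4: None, 5: None, 6: 3.2, 7: 3.4,
             8: 3.1, 9: 3.5, 10: None, 11: None, 12: 3.3, 13: 3.2,
             14: 2.78, 15: 2.48, 16: 3.4}

    print(round(section_average(case2), 2))  # mean over the 12 reported items

If the four unreported items would have drawn low ratings, including them would pull
the reported 3.31 downward.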
Implementation of Current Data Practices
Table 4.3

Average Responses to Implementation of Current Data Practices to Improve
Achievement on Current State Assessments

                                          Case 1    Case 2    Case 3
Degree of design implementation
of current data practices                  3.38      3.31*     3.09

* Average with questions 4, 5, 10, and 11 missing.
An inspection of the responses in Table 4.3 reveals the relative strength of
design implementation at these elementary schools. All of the cases fell in the
"Agree Somewhat" range for the implementation of current data practices. The
earlier discussion of Research Question One also supports the assertion that the
cases make a strong showing in the area of current data practices.
Case 1 has done a sound job of implementing current data practices. The
District completely attained its goal/action plan in this area: "…the district
‘conducted norm-referenced, criterion-referenced assessments of students K-12,
analyzed resulting data, training site representatives and administrators, parents and
teachers to utilize formative and summative data to inform instruction and improve
student achievement’” (Thompson, 2002, p. 118). The teachers agreed that there had
been consistent implementation of current data practices. The mean score of all
responses to Questions 1-16 was 3.38. Two mean responses are also very high and
indicate that teachers “Agree Strongly” that they collect data (4.00) and use data to
compare past and present performance of an individual student (3.95). Questions 15
and 16 were the lowest mean scores (2.20 and 2.87 respectively). These questions
dealt with sending reports to parents on a regular basis and the school's reporting of
data implementation to the District.
The responses for Case 2 were solidly in the range of “Agree Somewhat.”
The teachers showed they were comfortable with implementing current data
practices. An example of this practice was the charted rubrics, a school directive that
had been phased in over the past two years at the elementary school. This procedure
charted student performance based on SAT-9, SAT-9+, and CORE data and allowed
for targeted intervention based on student need. The students were to be followed
longitudinally over a period of three to five years. In addition, the teachers kept track
of student performance in each academic subject area by academic standard
according to the weeks in a school year. This performance was also compared to the
previous year’s performance. Classroom progress was also compared. If a
classroom’s progress was not similar to other like classrooms, this might suggest that
some instructional strategy changes should be made in that particular classroom.
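The charting Case 2 describes amounts to a simple longitudinal record. A minimal
Python sketch of the idea follows (student names, assessments, and scores are
hypothetical; this is an illustration, not the school's actual system):

    from collections import defaultdict

    # chart[student][assessment] -> {year: score}
    chart = defaultdict(lambda: defaultdict(dict))
    chart["student_a"]["SAT-9 reading"] = {2000: 48, 2001: 57}
    chart["student_a"]["CORE writing"] = {2000: 2, 2001: 3}

    def year_over_year(student, assessment):
        """Return the change between the two most recent years on record."""
        scores = chart[student][assessment]
        years = sorted(scores)
        return scores[years[-1]] - scores[years[-2]]

    print(year_over_year("student_a", "SAT-9 reading"))  # 9

Comparing the same record across classrooms, as the school did, is then a matter of
aggregating these per-student changes by class.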
Two items in Case 2 were significantly lower than the other 14 items in this
section. Item 14, “I use data to compare students across the school in the same
grade,” received an average score of 2.78. Item 15, “Reports are sent to parents on a
regular basis (about once a month),” received the lowest average rating for this entire
section: 2.48. Both of these questions were in the “Disagree Somewhat” range and
indicated that there was still improvement to be made with current data practices in
Case 2.
Case 3 focused less on their non-existent district data plan and more on the
local school plan for data use. This case was especially concerned with whether
teachers were successfully implementing the findings from their data analysis into
their instructional practice. This case found that most teachers believed that there is
a high degree of implementation of current data practices. Seventy-six percent of the
teachers in Case 3 reported they “Agree Somewhat” or “Agree Strongly” with
questions 1-16 on the Teacher Questionnaire. This is supported by observational
evidence that teachers do look at data on a weekly basis and the district provides the
school with data related to current state data practices. In addition, the teachers
perceive that they are using data to guide their instructional practices through the use
of data to monitor student practice, improve student outcomes, and compare students
in class and across grade levels. It should be noted, though, that 24% of the teachers
either “Don’t Know,” “Disagree Strongly,” or Disagree Somewhat” that their
current data practices address achievement on current state assessments.
Only Question 15 is a perceived weakness across all three cases. Question 15
reads: "Reports are sent to parents on a regular basis (about once a month)." The
three cases are not sending home feedback to parents on a regular basis other than
report cards during the various reporting periods.
Implementation of Emerging Data Practices
Table 4.4 displays responses to questions that investigate the extent to which
teachers perceive that a district's design for data use has been implemented with
respect to emerging state assessments. The questions contained in this
subset focused on frequent professional development, frequent discussion of new
data practices with colleagues, and assistance from school administrators in
implementing new data practices.
Table 4.4

Average Responses to the Degree of Implementation of District Designs for
Data Use with Respect to Emerging State Assessments

                                          Case 1    Case 2    Case 3
Degree of design implementation
of emerging data practices                 2.81      2.72*     2.5

* Average with questions 22 and 23 missing.
There is a clear contrast between Table 4.3 and Table 4.4, and with it a
distinction between current and emerging assessments: teachers "Agree Somewhat"
that current data practices have been implemented but "Disagree Somewhat"
regarding implementation of emerging data practices.
Case 1 was on the higher end of “Disagreeing Somewhat” with a 2.81
average response. Teacher interviews demonstrate that teachers are aware of current
data use mandates and policies and are not as aware of the emerging design or the
emerging state accountability measures. The District and school use both criterion-
referenced and norm-referenced assessments and are moving from summative to
formative assessments. There are, however, areas where an emphasis on emerging
assessments is falling short. Teacher interviews indicated that none of the teachers
interviewed were aware that the CELDT had been administered to students, nor
were they aware of what the CELDT assessed. A review of interview responses for
Case 1 reveals that teachers have concerns about the quality of professional
development and the lack of an effective system, such as IDSM (which the district
hopes to bring online quickly), to quickly and immediately disaggregate data. The
teachers were also concerned with the data analysis lag from the district:
disaggregated data sent to the District in June is not seen again until the next school
year, too late to be of use for the children it addresses.
The weakest areas for Case 2 in the emerging context of data use involve
frequently discussing new data practices with teachers who are less experienced and
with teachers in different disciplines: 2.86 and 2.05 respectively. On a positive note,
Question 18, “I have attended professional development training in the past six
months related to new data practices” was quite high: 3.42. It seems that staff
development on data practices is beginning to happen regularly at the school, but
articulation amongst teachers on best data practices in the emerging context is
lagging behind. Implementation of these practices is also slow; the administration
and teachers indicated “time” as a roadblock in the implementation of the school’s
design of data use.
Case 3 found that the teacher’s perceptions of the implementation of
emerging state data practices are mixed. It does appear that a small majority of
teachers believe that the school is implementing some of the emerging practices—
51% of the staff reported that they “Agree Somewhat” or “Agree Strongly” with
questions 17-23 regarding design implementation of emerging state data practices;
32.4% “Disagree Strongly” that their school is implementing emerging practices.
Interviews with the teachers in this case indicated that they felt they had not received
a high degree of professional development training in the use of data and the
emergence of new data practices.
Accountability Linked to the Use of Data Strategies
Interview responses from Case Studies 1, 2, and 3 indicate that the majority
of teachers interviewed believe that the state holds the district accountable for
improving student achievement on state assessments. This translates into a district or
school site plan to use student performance data as a means for accomplishing that
goal. Students are also perceived as being held accountable by the explicit use of
standards in the classroom. Table 4.5 illuminates the teachers' average responses to
the degree to which accountability is linked to the use of data strategies.
Table 4.5

Average Responses to the Degree Accountability is Linked to the Use
of Data Strategies

                                          Case 1    Case 2    Case 3
Accountability for data use at
district, school, and individual
level                                      2.59      3.60*     3.13

* Average with questions 28 and 29 missing.
Cases 2 and 3 “Agree Somewhat” that there is accountability present from
the state, district, local school site, and teachers for data utilization. Their respective
scores were: 3.60 and 3.13. The mean score for Case 1 is 2.59, which means that
teachers "Disagree Somewhat" that there is accountability on the district, state, and
classroom levels. Only Case 1 specifically shared the low-rated results of questions
28 and 29 dealing with salary and promotion being dependent upon utilization of
data practices: 1.00 and 1.04 respectively. These two extremely low scores for
questions 28 and 29 probably lowered the average for Case 1 considerably. Clearly,
teachers do not feel these two elements are at all in place. Interestingly enough, the
rest of the scores in this section on accountability are quite high for Case 1 and range
from 3.00-3.75. The highest score for Case 1 (3.75) in this section is question 26,
“My school holds me accountable for data utilization.” Teachers in this case made it
clear that it was their responsibility to implement both their II/USP plan and the
District plan regarding data use. The teachers also feel that there are high levels of
accountability and the District is accountable to the State and the school is
accountable to the District.
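A back-of-the-envelope check shows how strongly those two items pull the section
mean down. Assuming the accountability section comprises the six questionnaire
items 24 through 29, and filling in hypothetical values (within the reported
3.00-3.75 range) for the item means the case did not report:

    items = {
        24: 3.40,  # hypothetical, within the reported 3.00-3.75 range
        25: 3.35,  # hypothetical
        26: 3.75,  # reported: "My school holds me accountable for data utilization"
        27: 3.00,  # hypothetical
        28: 1.00,  # reported: salary dependent on data utilization
        29: 1.04,  # reported: promotion dependent on data utilization
    }

    mean_all = sum(items.values()) / len(items)
    mean_without_low = sum(v for k, v in items.items() if k not in (28, 29)) / 4

    print(round(mean_all, 2))          # 2.59, matching the reported section mean
    print(round(mean_without_low, 2))  # 3.38, squarely in the "Agree Somewhat" range

Under these assumptions, removing the salary and promotion items alone would move
Case 1 from "Disagree Somewhat" to "Agree Somewhat."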
In Case 2, “Teachers were of the opinion that accountability was hierarchical.
Most interviewed teachers believed the State hold the districts accountable for data
utilization; the district holds the schools accountable; and each school hold the
teachers accountable” (Dombrower, 2002, p. 104). On the Teacher Questionnaire,
the teachers rate two questions regarding accountability quite highly. Item 24, “I
think the state hold the district accountable for data utilization” was rated 3.47. Item
25, “I think that the district holds the school accountable for data utilization” was
rated 3.88. In Case 2, the classroom teacher took on much responsibility as the final
link to data implementation in the accountability piece. Dombrower (2002) writes,
“The data were eventually sent to the individual school sites whereby the teachers
would have 100% responsibility to link instructional practices to data use.
Additionally, at risk (students) would need to be addressed by teachers for
intervention and/ or retention” (p. 103).
In Case 3, 54% of the teachers “Agree Somewhat” or “Agree Strongly” that
there is accountability for data use at the district and individual level. The school
level administration puts an emphasis on accountability through data use. Duncan
(2002) writes:
This sense of accountability is most likely rooted in the fact that
district administration and school level administration talk
about using data. Once again, the systems set in place by the
school for analyzing and discussing data contribute to the level
of accountability seen by the teachers. The expectation that the
teachers respond to the data in writing enhances the level of
individual accountability for teachers (p. 99).
All three cases also have evidence from interviews and document analysis to
demonstrate how data use accountability forms a chain starting from the state level
and ending up in a teacher’s classroom. Only Case 1 appears to have the
accountability built into their District plan.
Relationship Between Improvement in Student Achievement and Data Use
Table 4.6 represents responses to the remaining section of the Teacher
Questionnaire. Questions 30 through 34 attempt to uncover the teachers’ sense of
whether the implemented data use strategies are responsible for improved student
achievement. Ratings on this section of the questionnaire are important in that if
teachers perceive that the use of data positively impacts student learning, then they
will be encouraged to continue their use of those strategies.
Table 4.6

Perceptions Regarding Whether Improvement in Student Achievement is
Due to the Implementation of Data Use Strategies

                                          Case 1    Case 2    Case 3
Improving student achievement
through implementation of data use         3.00      3.19*     2.89

* Average with question 34 missing.
In Case 1, “The teachers were eager to share what this school has done, that
they were the driving force behind the District policy, and that the growth their
students achieved was attained because of the interim assessments and targeted
instruction they employ” (Thompson, 2002, p. 135). The teachers indicated in
interviews that accountability had been responsible for the API growth the school
had experienced over the past two years. Case 2 agreed that data use had aided
the student gains at the district level. The SAT-9 test score results for 2001
increased a total of 122 national percentile points for students across all grades and
subject areas. At the school level, the API score went up 64 points in 2 years, from an 808 to
an 872. The teachers felt that creating flowcharts of data elements from a standards-
based curriculum had been the most effective tool in bringing about student
achievement gains. In Case 3, the teachers used interim assessments and teacher-made
tests to track student achievement; the teachers were very aware of when
students had acquired mastery of a certain standard. The implementation of data
use was also used to determine the interventions needed to help students meet
academic standards. In this case, the school had implemented an after-school
tutoring program as an intervention measure. All three cases used different methods
for analyzing data; yet, all had developed personalized systems of analysis beyond
the typical, traditional yearly analysis of standardized test scores.
All three cases have fairly strong responses to the majority of the statements
in this section of the Teacher Questionnaire. Case 3
has the weakest response, with an average of 2.89; 62.4% of the teachers
agreed "Strongly" or "Somewhat" that student achievement can be improved through
data use. The weakest response for Cases 1 and 2 was item 33, "Student motivation
increases when data is present in my classroom,” 2.45 and 2.8 respectively. The
majority of teachers participating in the Teacher Questionnaire in all three of these
cases indicated through interviews that they believe that using student performance
data does improve learning.
Site Level Concerns Regarding Data Use Based on Response to
the Stages of Concern Questionnaire
The Stages of Concern Questionnaire provides a means of determining where
concerns exist with respect to efforts to implement innovations in an organization
(Hall, Wallace & Dorsett, 1973). There is a “natural questioning” that occurs within
the minds of individuals as change takes place around them (Duncan, 2002, p. 106).
The teachers were asked to respond to each item on the questionnaire in terms of
their present concerns about their involvement or potential involvement with the
district’s design to use data to improve student learning. The Stages of Concern
Questionnaire includes seven areas of concern: Awareness, Informational, Personal,
Management, Consequence, Collaboration, and Refocusing. These
stages show the progressive phases that an individual may move through in adopting
an innovation. The Awareness stage expresses the level of awareness people have
regarding the innovation while the Informational stage gauges the level of additional
information desired regarding the innovation. The Personal stage measures the level
of concern individuals have toward the innovation with respect to how its
implementation will personally affect his or her ability to implement the innovation.
The stage of Management investigates concern about the management demands
(process and tasks) of the innovation. The stages of Consequence and Collaboration
describe levels of concern regarding the impact of the innovation on students and the
ability to collaborate with others as a part of the implementation process. Finally,
the stage of Refocusing is concerned with modifications that the innovation design
permits as a part of maximizing the implementation of the innovation. This may
include the possibility of replacing the innovation with a more powerful alternative.
The Stages of Concern Questionnaire uses a Likert scale from 0 to 7. A response of 0
indicated that the question was irrelevant to the individual. A response of 1 indicated
that the question of concern was not true for the individual. A response of 2-4
indicated that the question of concern was somewhat true for the individual. A
response of 5-7 indicated that the question of concern was very true for the
individual.
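The strongest-stage tallies in the tables that follow can be reproduced with a short
sketch (the scores here are hypothetical): each teacher's mean score is computed per
stage, and the teacher is counted under whichever stage, or stages in the event of a
tie, has the highest mean.

    STAGES = ["Awareness", "Informational", "Personal", "Management",
              "Consequence", "Collaboration", "Refocusing"]

    def strongest_stages(stage_means):
        """Return every stage tied for a teacher's highest mean (0-7 scale)."""
        top = max(stage_means.values())
        return [stage for stage, mean in stage_means.items() if mean == top]

    # One hypothetical teacher's mean score per stage:
    teacher = dict(zip(STAGES, [1.2, 4.6, 5.8, 5.8, 3.0, 2.4, 1.0]))
    print(strongest_stages(teacher))  # ['Personal', 'Management'] (a tie counts in both)

Ties of this kind are why the score counts in Tables 4.7 through 4.9 exceed the
number of teachers surveyed.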
Table 4.7

Case 1. Distribution of Strongest Teacher Concerns

Stage of Concern     # of Teacher   Highest Teacher Score and Strongest Stage
                     Scores*        of Concern for Each Teacher
0 Awareness          1              5.60
1 Informational      1              4.60
2 Personal           8              3.40, 4.00, 4.60, 5.00, 5.40, 5.40, 5.80, 5.80
3 Management         7              4.00, 4.40, 4.40, 5.40, 6.00, 6.20, 6.40
4 Consequence        6              3.00, 3.80, 4.00, 4.00, 4.60, 5.00
5 Collaboration      6              4.40, 5.00, 5.00, 5.40, 5.40, 5.60
6 Refocusing         2              4.60, 7.00

* Column totals more than 24 because teachers had the same highest scores in multiple areas.
The mean scores of teachers in Case 1 were analyzed. High scores indicate a
higher degree of intensity of concern of teachers. The strongest concern across the
stages is Stage 2, Personal. Teachers reiterated this intensity of concern during
interviews with “…regard to teacher’s personal commitment to the policy and their
feelings of adequacy in meeting the demands placed on them by the data use policy
at this school site” (Thompson, 2002, p. 121). The second strongest area was Stage
3, Management. This area focused on teacher’s concerns with the “…lack of
available time to administer assessments, analyze data, plan targeted instruction,
collaborate with site personnel, and provide interventions to the degree they feel
would benefit students” (p. 121). The largest concentration of high mean scores fall
in Stages, 2, 3, 4, and 5. The overall mean scores in Stages 2 and 3 is 3.91, Stage 4
is 3.70, and Stage 5 is 3.83. This indicates that the teachers have a range of
concerns.
Table 4.8

Case 2. Distribution of Strongest Teacher Concerns

Stage of Concern     # of Teacher   Highest Teacher Score and Strongest Stage
                     Scores*        of Concern for Each Teacher
0 Awareness          5              3.00, 3.40, 4.20, 4.20, 5.50
1 Informational      17             5.50, 6.17, 4.50, 6.50, 4.17, 6.50, 4.33, 3.67,
                                    3.83, 4.00, 5.67, 4.83, 5.83, 5.83, 3.67, 6.17, 6.17
2 Personal           4              7.00, 4.00, 4.00, 6.80
3 Management         3              5.60, 3.60, 3.20
4 Consequence        9              5.02, 3.60, 6.60, 2.60, 4.80, 4.00, 3.20, 3.20, 4.40
5 Collaboration      3              2.60, 2.60, 5.80
6 Refocusing         1              2.60

* Column totals more than 38 because teachers had the same highest scores in multiple areas.
Overwhelmingly, the largest distribution of strongest teacher concerns falls in the
informational stage. The informational stage characterizes an individual as having a
general awareness of the innovation and is interested in learning about it in more
detail. “A characteristic of the Informational Stage is that the individual seems to be
unworried about self in relation to the innovation. The individual is interested in
substantive aspects of the innovation in a selfless manner such as general
characteristics, effects and requirements for use” (Hall & Hord, 1987). The second
highest cluster is in the area of Consequence. This stage turns the focus of concern
on the relevance of the innovation for students. The teachers in Case 2 have concerns
that appear to straddle what the innovation is and how it can be used to help students
increase their academic progress and learning.
Table 4.9

Case 3. Distribution of Strongest Teacher Concerns

Stage of Concern     # of Teacher   Highest Teacher Score and Strongest Stage
                     Scores*        of Concern for Each Teacher
0 Awareness          0
1 Informational      6              5.00, 3.00, 6.17, 6.83, 5.17, 6.00
2 Personal           6              5.00, 4.20, 5.20, 3.80, 5.40, 6.40
3 Management         1              2.60
4 Consequence        5              3.00, 6.40, 4.80, 2.60, 6.20
5 Collaboration      1              6.00
6 Refocusing         0

* Column totals more than 16 because teachers had the same highest scores in multiple areas.
In Case 3, the teacher’s strongest concerns are at the informational and
personal stages; their responses also tended to be quite high with 9 scores falling at
5.00 or over which is within the “very true” concern range. The Stages of Concern
Survey indicates that “although teachers believe that there is a high degree of
implementation of data usage, they are not quite comfortable with their ability to
manipulate the data for their own purposes and needs” (Duncan, 2002, p. 98).
Teachers indicated that they feel they had not received a high degree of professional
development training in the use of data and the emergence of new data practices.
Duncan (2002) wrote:
…the teachers are still looking to receive more information
about the innovation, trying to understand their place in the
innovation, are not sure of how to manage the innovation as part
of their instructional strategy, and do not understand how the
consequence or impact the innovation will have on their
instruction is pertinent to them (p. 109).
Teachers in the first stages of concern need a greater amount of staff development
training regarding the use of data as a methodology for improving student
achievement (p. 109). This can be said of the other two cases as well and can
probably be generalized to all teachers.
Just as in Case 2, Case 3 has a large cluster of scores in the stage of
Consequence. In this area, the attention is focused on the impact "of the innovation
for students, evaluation of student outcomes, including performance and
competencies, and changes needed to increase student outcomes” (Hall & Hord,
1987, p. 60). Some of the teachers’s main concerns now lie with how to implement
the data use plan and how to use data to drive instruction and provide targeted
interventions.
Overall, the three cases are in the beginning stages of concern, which reflects
the beginning stages of feeling comfortable with using data. The individual needs to
understand what the research says about using data, why using data is
important and how it fits into their world, and how it can be used to its fullest
capacity (Duncan, 2002, p. 110). Hall and Hord (1987) write that it is natural for teachers
to be at the beginning stages of concern when an innovation is introduced. In all
three cases, focused data use is relatively new (within a few years). The teachers are
going to need more information and practice to become comfortable practitioners of
data analysis to drive instructional practice. This is a complex and highly developed
skill.
Research Question Three: Adequacy of the Design
The third question for this study was: “To what extent was the district plan a
good one?” The adequacy of the district design was evaluated by the Researcher
Rating Matrix in tandem with the Case Study Guide. This guide provided the
background data to fill out the Researcher Rating Matrix using the following
instruments: mapping of data flow at the district and school site, interviews, artifact
analysis, and collected quantitative data. The Researcher Rating Matrix is a 1-5
Likert scale: one (not effective), two (somewhat effective), three (unclear), four
(effective), and five (very effective). The Researcher Rating Matrix had three
main determinants: District support for standards-based instruction and assessment,
District and school accountability to standards-based curriculum, and Degree to
which High Student Performance is aligned to Standards and Communicated to
Teachers, Students, and Parents. There is a rubric with four criteria for each
determinant. This enabled the researchers to rate the determinants against pre-
established criteria and to provide consistency across the cases. The Researcher
Rating Forms were completed post data collection by the primary researchers.
District Support for Standards-Based Instruction and Assessment
The District Support for Standards-Based Instruction and Assessment (the
intended District impact, and the observed impact on the school site) was evaluated
on the Researcher Rating Matrix considering the following criteria: 1) Clear
Performance Goals (aligned to state goals); 2) Disaggregated Standards-Based
Assessment Data; 3) In-Services on how to use Data; 4) District is Preparing Schools
for Emerging State Assessments. As illustrated in Table 4.10, the three cases rated
the first determinant and criteria quite differently.
Table 4.10

Adequacy Rating of District Support for Standards-Based Instruction
and Assessment

                            Elementary      Elementary      Elementary
                            Case #1         Case #2         Case #3
I. District support         5               2               2
for standards-based         Very            Somewhat        Somewhat
instruction and             Effective       Effective       Effective
assessment
Case 1 rated the district support for standards-based instruction and
assessment as “very effective.” The district’s intended impact of supporting
standards-based instruction and assessment from their data use plan has the very
observable positive impact that students have shown academic growth. Assessment
is ongoing and uses multiple measures to guide instruction. Standards-based
instruction is being provided at the local school site. Teachers and administrators are
aware of both the district and the state expectations for students based on SAT-9
scores and API rankings, state content standards, and the district alignment of
curriculum to the standards. All assessment data is disaggregated using DataWorks
or disaggregation methods at the local school site. Teachers have been receiving
staff development on data through monthly meetings at the local school site and
district sponsored trainings. There is also preparation in the district for the emerging
assessments including the HSEE and CELDT.
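Disaggregation in this context simply means breaking overall results into
subgroup-level summaries so that patterns become visible. The following generic
sketch (hypothetical records and subgroup labels; an illustration of the operation,
not the DataWorks product) shows the basic idea:

    from statistics import mean

    records = [
        {"grade": 3, "subgroup": "English learner", "score": 38},
        {"grade": 3, "subgroup": "English only",    "score": 55},
        {"grade": 3, "subgroup": "English learner", "score": 42},
        {"grade": 3, "subgroup": "English only",    "score": 61},
    ]

    by_subgroup = {}
    for r in records:
        by_subgroup.setdefault(r["subgroup"], []).append(r["score"])

    for subgroup, scores in sorted(by_subgroup.items()):
        print(subgroup, round(mean(scores), 1))
    # English learner 40.0
    # English only 58.0

The same grouping can be repeated by grade, classroom, or standard, which is what
allows a school to target instruction and intervention.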
Case 2 was rated “somewhat effective” for support of standards-based
instruction and assessment. Sun City Unified did not have a design of data use for
all schools, but intended to increase student performances with the use of an adopted
standards-based curriculum. The district would forward various assessments
including SAT-9 and CORE tests to the schools. There was no district directive on
how to assess and use this data. In essence, “(d)ata policies and strategies for Sun
City Unified School District were school driven” (Dombrower, 2002, p. 115). The
Vice-Principal trained the teachers on how to use data and chart information.
Teachers had a planned learning log for the year that was data driven; the logs consisted
of: 1) working skills according to what was presented in the textbook, 2) district-required
standards, and 3) an emphasis on remedial skills for "at-risk" learners. Both the
school and district wanted to increase student performance, “(b)ut, neither the
individual schools nor all teachers at each individual school site exchanged
information regarding their goals, strategies, and policies for increased student
performance” (p. 115).
The Case Researcher writes, “So it is difficult to say that a plan that
essentially does not exist is a good one or a bad one.” (Duncan, 2002, p. 113). Case 3
was rated based on the actions that the district and school had taken in the use of data
to improve student achievement in light of the fact that there was no substantive
district data use plan. The district did have an intended impact to encourage schools
to use multiple data sources, use standards-based instruction, and provide schools
information that would lead to improved API scores. The observed impact was that
schools were deluged with data and teachers had some understanding of student
achievement from the weekly analysis of data; however, there was insufficient
teacher training to really change instructional practice. For these reasons, Case 3
rated their district a “somewhat effective” for support of standards-based instruction
and assessment.
District and School Accountability to Standards-based Curriculum
The District and school accountability to the standards-based curriculum was
evaluated on the Researcher Rating Matrix considering the following: 1) motivation
to use standards-based assessment and instruction; 2) regular assessment of student
performance; 3) collaborative team review of data; and 4) instructional strategies
individualized to promote learning. The three cases rated the second determinant
and criteria very consistently and strongly as illustrated in Table 4.11.
Table 4.11

District and School Accountability to Standards-based Curriculum

                            Elementary      Elementary      Elementary
                            Case #1         Case #2         Case #3
II. District and            4               4               4
school accountability       Effective       Effective       Effective
to Standards-Based
Curriculum
In all three cases, student performance data is forwarded to the schools from
the districts. In Case 1, disaggregated data from the DataWorks program is provided
to the school principal in a comprehensive notebook at the beginning of the school
year. All assessment data received by Sun City Unified in Case 2 is forwarded to the
schools including: SAT-9, SAT-9+, and CORE. In Case 3, the district sends
disaggregated state assessment data results out to schools at the beginning of each
year. The district also uses interim criterion referenced tests and sends the results to
the schools on a quarterly basis.
Each one of the schools in the cases chose different ways to utilize the
student data to impact student performance. In Case 1, teachers meet monthly by
grade level to group students for instruction and intervention based on their
performance on the most recently administered assessments; instruction is
individualized. The teachers also provide the principal with a plan of action each
month based on the data results. In Case 2, the teachers manually chart SAT-9,
SAT-9+, and CORE information. Teachers meet at the beginning of the school year
with the administrators to set goals for each student enrolled in their class based on
data, and then meet again at the end of the school year to be evaluated. In
Case 3, the administration disseminates the data to the teachers. The teachers meet
in grade level teams on a weekly basis and analyze the data facilitated by the
administrators and lead teachers. The teachers look at the data and make changes in
instruction: e.g., changing questioning strategies or math questions, targeting specific
standards, and so on.
Degree to Which High Student Performance is Aligned to Standards and
Communicated to Teachers, Students, and Parents.
The degree to which “high” student performance is aligned to Standards and
Communicated to teachers, students, and parents was evaluated on the Researcher
160
Rating Matrix considering the following: 1) High Performance Rubrics; 2) Teachers,
Students, and Parents understand standards-based curriculum, 3) Report Cards
aligned to State Standards; 4) All students are progressing towards high
performance. The three cases had very mixed ratings regarding this determinant as
illustrated in Table 4.12.
Table 4.12

Degree to Which High Student Performance is Aligned to Standards and
Communicated to Teachers, Students, and Parents

                            Elementary           Elementary      Elementary
                            Case #1              Case #2         Case #3
III. The Degree to          5                    2               3
Which High                  Very Effective       Somewhat        Unclear
Performance is              (Case Researcher)    Effective       (Case Researcher)
Aligned to Standards
and Communicated            4                                    2
to Teachers,                Effective                            Somewhat Effective
Students, and               (Cross-case                          (Cross-case
Parents                     Researcher)                          Researcher)
In all three cases, the schools and districts have made great efforts to align
curriculum to California State Standards. All three schools in the cases have made
recent improvements on their API scores. All stakeholder groups are made aware of
the alignment of standards and performance expectations throughout the school year
through multiple communications in Case 1. Interviews, documents, and artifact
analysis reveal that the II/USP and District’s SMP forced teachers, students, and
many parents to understand standards-based curriculum including high performance
rubrics. The district has not yet aligned its report cards to the state standards. The
researcher in Case 1 rated the district as "Very Effective" in this area. Due to the
lack of standards-based report cards, this researcher feels that this was an
overinflated rating and that a rating of "Effective" (4) is more valid.
Case 2 found evidence of high-performance charted rubrics and standards-
based curricula for student progress. There was, however, no evidence that students
understood their progress on the charted rubrics. There was periodic communication
between home and school mainly through standardized report cards and weekly
bulletins from the Superintendent. Report cards aligned to state standards and a
comprehensive understanding of standards-based reform by all stakeholders were
lacking in this case.
Case 3 found evidence to support that teachers are using authentic
assessments, portfolios, and rubrics. It was not clear in Case 3 whether report cards
were aligned to state standards or whether all stakeholders understood standards-based
curriculum. Evidence of communication from the district or school to the
stakeholders was also not found. In addition, Case 3 could not find evidence that
large numbers of students were working towards high performance levels. Again,
the current researcher disagrees with the rating of "Unclear" (3) and feels that, due
to the lack of evidence uncovered to support this rating, a "Somewhat Effective" (2)
would be better justified.
Overall Adequacy Rating
Case 1 rated its district with a glowing "5" for support of standards-based
instruction and assessment and demonstrated that the adequacy of its district
design was "very effective." In contrast, Cases 2 and 3 could not rate the adequacy
of their district designs very highly because their districts did not have actual plans;
they ranked their districts under this determinant as "somewhat effective." Yet,
Accountability to Standards-based Curriculum was ranked as “effective” across all
three cases because the focus shifted from the district to the local site level. The
local school sites in all three cases were using data to drive instructional practice; for
Cases 2 and 3, this was without a district data plan. Cases 2 and 3 struggled to rate
highly the degree to which high student performance is aligned to standards and
communicated to teachers, students, and parents. According to the cross-case
researcher, Case 1 rates as having an "effective to very effective" (4.3 average)
district data plan, while Cases 2 and 3 each rate as having a "somewhat effective to
unclear" (2.7 average) district data plan.
Discussion
The schools sampled for this study were selected based on their advanced
data practices in 2001/2002. What the three cases demonstrated is that there needs to
be more development in data practices in response to standards-based reform and the
accountability measures set forth by the State of California. This discussion
section will delineate the data use areas that were successful in these three cases and
those that need improvement.
Summary of Findings for all Three Cases
Overall, Case 1 was very successful at integrating data into their district
design. The school district and school had a cohesive plan. This plan included
support for instruction, teaching strategies, and District support for teachers
(Thompson, 2002, p. 97). The plan was ahead of its time and included many
elements that would later become mandated by the State of California. The teachers
were trained with help from the II/USP funds. There was also a testing coordinator
overseeing all of the branches of data analysis at the local school sites. With all the
energy exerted on using data to drive instruction, scores did rise. This rise in scores
and API growth over a two-year period was attributed to the attention and
intervention students were receiving (p. 136). The weakest components of their plan
are in the areas of staff development and access to disaggregated data (p. 145). The
design for this case is in place, but “they still need to focus on staff development,
access to student performance data that is timely, funding tied to staff development
and data use, and aligning their student report cards to the state content standards”
(p. 148). Teachers agreed that current data practices had been implemented well;
however, they were not as aware of the emerging design or emerging state
accountability measures. They had no awareness of the CELDT exam. Thompson
(2002) rates the school "effective" rather than "very effective" due to the above issues.
The school district in Case 2 did not have a clear design for data policies and
strategies. The researcher rated this case as having a “somewhat effective to unclear”
(2.7 average) district data plan. The California Accountability program did force the
district to collect test data from a number of sources including SAT-9 and CORE.
The district also focused on standards-based instruction and their “top priority was
increased test scores using California standards” (Dombrower, 2002, p. 139). The
local school did use a standards-based curriculum to raise student achievement, and
data was used in the form of a charted learning log to support this. There was a
school directive to collect data and the teachers were comfortable with implementing
current data practices. Teachers used charted rubrics (charting student progress on
the SAT-9 and SAT-9+), longitudinal CORE testing, and interventions based on
student need. There was little emphasis at the local school site in the emerging
context other than training (CLAD/BCLAD) to teach ELL students. Students were
also prepared for the CELDT test through supplemental instructional materials
including computer programs in the English development classes.
Case 3 is another example of a strong school design for data use and a weak
district design. “There is no planned design for data use, which would include how
data is distributed to schools, articulated expectation for data use, staff development
training to ensure competency, and funding to ensure that the above was adequately
implemented and maintained” (p. 134). The teachers also do not receive the data in
formats that they can easily interpret and they are usually from only one source:
SAT-9. On the school level; however, data use is at a more advanced and
meaningful level than the district. The school had begun to develop a systematic
procedure for using data to improve student achievement; the school administration
guided the faculty in data analysis and provided opportunities for collaborative
strategic planning efforts. “However, the development of their procedures came as a
result of outside consultants and not as a result of a district level plan” (p. 123). Data
became a priority in Case 3 partially from the influence of an outside evaluator (a
function of the II/USP process). A leadership team was set up at the school; this
team became the mechanism to set up using data to inform instruction (p. 104). One
teacher interviewed commented on the data use at the school: “Institute the weekly
meetings, planning and implementation, providing conference opportunities, putting
it out there and saying this is where we need to be are all ways that data use became
a priority in this school. The teachers have been empowered to make a difference”
(p. 105). The elementary school made strong academic gains, which one can
attribute to the use of data to drive instruction. The teachers in Case 3 believed they
were successfully implementing the findings from their data analysis into their
instructional program. There was a high degree of belief in the implementation of
current data practices. Teachers did not feel comfortable with emerging data
practices mainly due to a lack of training in this area. Case 3 rates as having a
“somewhat effective to unclear” district data plan (2.7 average).
Reflection on Findings
The State of California's reforms do not come with a manual on how to
appropriately use data and engage in comprehensive school reform. Districts and
schools are finding their way with mixed results. The three cases are clearly utilizing
more data practices in the traditional context of data use rather than in the emerging
context. Student performance data is primarily analyzed through standardized
testing data with little use of more innovative techniques. One would assume that
the District level, school site level, and classroom level would need to work in
tandem to use data effectively and increase student achievement. The three schools
in this study did show increases in student achievement, but they certainly did not have
all three levels working fluidly together. This achievement may have been the result
of any number of additional factors besides data use. And yet, evidence does support
the notion that one of the most likely reasons the scores increased was the
standards-based reform movement, of which data use is an important element. In
addition, when the State of California placed a stronger emphasis on test scores and
created a system to rank schools based partially on those scores (API), schools
invested their energy into looking at how data can best be used to drive instructional
practices.
Many positive things can be said about the three cases in this study. All three
schools and school districts focused on improving student scores on the Stanford-9
battery of standardized, state-initiated exams. Student performance data from the
Stanford-9 assessments were presented to school sites in disaggregated formats. All
of the cases aligned their curriculum, interim assessments, and instructional materials
with the California State standards. The use of data by teachers and administrators
was significantly higher than prior to the enactment of the State’s accountability
system. All three cases had the perception that if they continued to use data, student
performance would continue to improve. There were, however, a number of areas in
need of improvement in the three cases. Three key topics emerged to frame this
discussion: training, technology, and funding.
Teacher Training
In Case 1, there was a discrepancy between how the District and teachers
reported their training on data use. The District reported that… “The teachers at this
site were all trained in administering interim assessments that are now required by
the District, and on using student performance data to target instruction” (Thompson,
2002, p. 118). The District went on to say that the teachers meet monthly to review
assessment data and target instruction. Teachers are given quality time to meet, plan,
and articulate. In addition, CORE training was paid for from the II/USP money. In
contrast, the teachers reported that “… they needed more staff development that
relates to specific needs at school sites, not ‘mass training’ from the district” (p.
127). All interviewed teachers stated that the staff development and training they
received was insufficient (p. 130). They did emphasize that the Literacy Coaches,
Facilitators, Curriculum Specialists, and Mentors “are an invaluable asset to them as
teachers” (p. 127). They considered them on-call if they needed any help or
additional training.
Case 2 also agreed that their staff development training was insufficient. On
the Teacher Questionnaire, the teachers “somewhat disagreed” that the school
offered frequent professional development to raise awareness of new data practices.
The school seemed to offer staff development on data use infrequently (p. 102).
Case 3 teachers likewise felt that their training was insufficient. "The
teachers are unable to articulate, in any concrete manner, that the school or district
has provided them with any training on the use of data” (p. 102). The district does
say that it provided data training. One teacher explained this training as such: “The
leadership team met and viewed a video on how to analyze data. It talked about
quintiles and things like that. I still walked away with confusion on how to read
class profiles. I did not receive enough information to understand the complexity of
the reports that we get” (p. 102). Yet, Grand Elementary does collect and analyze
data effectively. This appears to have been developed through their participation in
the II/USP process (as in Case 1). The district provides the data and the school
designed the process to analyze the data. The administration sets aside one
collaborative staff meeting a month for data analysis. Teachers meet in grade level
teams to determine how students are performing and adjust their instruction
accordingly. They use standards based instruction and rubrics to drive instructional
practices.
All of the schools provided training for learning how to analyze and use data
to improve student achievement. However, most of the teachers requested more
training and collaborative time to analyze and use the performance data. The teachers
felt they had not received a high degree of professional development and training in
the use of data and the emergence of new data practices. These concerns were seen
across all three cases and may be the strongest criticism regarding implementation
from the teacher perspective. Teachers at many of the schools wanted more
information on how to tailor data use to their specific site needs as well as how to tie
data to instruction.
Indeed, in addition to comprehensive district plans being absent in two of the three
cases, there appears to have been little forethought given to creating manageable and
effective staff development for data use. Remapping approaches to instruction and
assessment requires adequate buy-in and time for development. "Schools should build professional
development into the school day and calendar and sustain it, align it with the content
of curriculum, and focus it on improving instruction with activities centered on the
classroom” (Manning & Kovach, 2003, p. 41). One wonders if the districts in these
three cases are even knowledgeable or prepared to train teachers to implement school
wide reforms that target improved student achievement?
Technology
Most of the case studies made it clear that the potential of technology to
enhance data collection, analysis, and availability was not being tapped. The teachers
expressed frustration over an inability to quickly and immediately disaggregate data
that was in an understandable form. In many of the cases, the data results were not
presented until the following year when a teacher had a brand new class. Without
availability and comprehensibility in place with data use, it is unlikely that the data
will be used effectively. It is through these two key elements that technology can be
seen as a catalyst for advanced data use. There was a call from the players in the
cases for an investment in technology programs (like IDSM) that would provide
ease of use for disaggregating data and immediately seeing results. The IDSM
program, for example, would allow anyone given access to download student data
from the Internet after entering an appropriate password; the program would also
allow users to enter information into the system and then disaggregate it according to
their needs. The cost for this program is $5.00 per student. The Annenberg Data
Study (2005) discussed the importance of creating data warehousing programs with
an interface that was as easy to access as an ATM machine and capable of providing
a “wide variety of spontaneous queries and the printing of helpful reports” (p. 16).
Fullan, Hill, and Crévola (2006) write about the need for an "inference system" in
data warehousing programs, one in which the data system could generate a "Focus
Sheet" for the next lesson indicating: 1) the suggested instructional foci, 2) the
number and composition of small instructional groups, 3) the suggested teaching
strategies to be used, and 4) the resources available to the teacher (p. 82).
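A minimal sketch of such an inference system follows. The field names, mastery
threshold, and grouping rule are hypothetical illustrations of the four components
listed above, not Fullan, Hill, and Crévola's specification:

    from dataclasses import dataclass

    @dataclass
    class FocusSheet:
        instructional_foci: list   # 1) suggested instructional foci
        groups: list               # 2) number and composition of small groups
        strategies: list           # 3) suggested teaching strategies
        resources: list            # 4) resources available to the teacher

    def build_focus_sheet(results, mastery_cutoff=0.7):
        """results maps student -> {standard: proportion correct}."""
        weak = {}
        for student, by_standard in results.items():
            for standard, score in by_standard.items():
                if score < mastery_cutoff:
                    weak.setdefault(standard, []).append(student)
        return FocusSheet(
            instructional_foci=sorted(weak),
            groups=[sorted(s) for _, s in sorted(weak.items())],
            strategies=["reteach with guided practice"] * len(weak),  # placeholder
            resources=["grade-level intervention kit"],               # placeholder
        )

    sheet = build_focus_sheet({"ana": {"NS 1.1": 0.55, "NS 1.2": 0.90},
                               "ben": {"NS 1.1": 0.60, "NS 1.2": 0.65}})
    print(sheet.instructional_foci)  # ['NS 1.1', 'NS 1.2']
    print(sheet.groups)              # [['ana', 'ben'], ['ben']]

The point of the sketch is the shape of the system: assessment results go in, and a
concrete, lesson-level plan comes out quickly enough to act on.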
Funding
The above example also brings forth another concern: fiscal resources to
support data analysis. Cases 1 and 3 used outside companies/programs to help them
with data management. Case 1 used DataWorks and would have liked to add
IDSM, but it was cost-prohibitive. In Case 2, general funds were used to purchase
and score tests at the district office; however, on the whole, "there was a lack
of funds to support data efforts" from which the school site could have benefited
(Dombrower, 2002, p. 122). Case 3 did not find any evidence that the district
allocated funds for professional development in the area of data analysis. As in Case
1, the Case 3 district did hire a Program Evaluator to oversee data collection, analyze
results, and submit the data and findings to the schools. Case 1 had a Testing
Coordinator whose function was similar. Funding for data analysis is key not only
for programs to analyze data, but also for training personnel to be adept at analyzing
data and for providing time for teachers to analyze the data (e.g., substitute days,
staff development days).
Teacher Concerns
Teacher’s had a large range of concerns across the three cases as measured by
interviews, teacher questionnaire and stages of concern questionnaire. Teacher
concerns went beyond issues of training, funding and technology. They were
concerned with time demands, administration of assessments, interventions for
students, the best way to use data from the assessments and multiple measures, and
the effects it would have on student achievement.
The teachers in the three cases fall in the beginning stages of concern and
need more support and training with the new innovation. The Stages of Concern
Questionnaire revealed that teachers are not comfortable with their ability to
manipulate the data for their own purposes and needs. Cases 1 and 3 have a cluster
of responses falling in the Personal Stage. In this stage, the teachers are trying to
understand their place in the innovation, and what impact the innovation will have on
their instruction. They want more training, and it is important that they are provided
with more in-depth information regarding data and data analysis. In Case 3, the
interviewed teachers “…did not feel strongly that the district was a partner with them
in the use of data. Though they knew that the district provided them with data, they
did not view the district as helpful in giving them the tools and training that they
needed to make sense of data” (p. 102). Data use was often quite personalized for
the teachers in the three cases. In Case 2, the teachers were attempting to make data
use part of their daily instruction through the mapping and tracking of student
performance. Each teacher created their own subsystems of data use; “For other
teachers data was stored in various locations including in boxes located in trunks
cars, classroom file cabinets, teachers’ desk drawers, binders in the classrooms,
student data portfolios; students’ conference folders; and on classroom computers”
(Dombrower, 2002, p. 105). In Case 3, rubrics are helping teachers to analyze the
effectiveness of the instruction by setting a standardized guide for determining
mastery of the curriculum on an ongoing basis. Each teacher interviewed confirmed
that they were using rubrics (pp. 102-103).
Cases 2 and 3 had a cluster of teachers in the Informational Stage. Within
this stage, individuals are concerned with the substantive aspects of the innovation in
a selfless manner. Case 2 also has a cluster of responses falling in the Consequence
stage, which shows that the teachers are concerned about the relevance of the
innovation for students. They are asking, “Is data use really going to help my
students learn?” Case 1 also has a cluster in the Management Stage, which shows
their concern with how to manage the innovation. They are concerned with a lack of
time to administer assessments, analyze data, plan targeted instruction, collaborate
with site personnel and provide interventions they feel would benefit students. It
would be interesting to repeat the Stages of Concern Questionnaire in a number of
years and see what stages the teachers are in after working with an innovation for a
longer period of time.
Fullan (2001) describes an “implementation dip” as a schools and teachers
move forward and encounter “an innovation that requires skills and new
understandings. All innovations worth their salt call upon people to question in
some respects to change their behavior and their beliefs---even in cases where
innovations are pursued voluntarily” (p. 40). Fullan goes on to add that people
experience two kinds of problems in the dip: fear of change and lack of skill. One
teacher who was interviewed in Case 3 aptly describes experiencing a dip: “At one
point in the past (data) was used to hit us over the head with, but lately we’ve been
looking at data to see how we can improve our teaching.” This quote nicely sums up
where most of the teachers are in all of the cases with respect to concern over
implementing the data use innovation: the beginning stages of slowly emerging from
an “implementation dip.” One can hope that, with the correct training and cohesive
support from the district and local school site, the teachers will progress to a higher
stage in the innovation and make data utilization successful and beneficial for
student achievement.
Using Data to Improve Instructional Practices
What is the best way to train and ask teachers to use data from the
assessments and multiple measures collected to improve their teaching? Much of the
staff training in the three cases was not guiding the teachers to make the leap to
connect data results to the instructional modifications that might need to occur to
meet the needs of all learners. Many of the school sites and teachers struggled with
the “What now?” question. What do you do with the data when you have it? How
do you systematically improve the achievement levels in schools by using data?
How can data inform instruction and change practice? In the future, emerging
practices will need to provide more immediate feedback than current data practices,
in which a school looks at data, analyzes it, and draws conclusions sometimes
months after the data was initially collected. For data analysis to be effective,
instructional practices must react quickly to the information the data provides.
Teachers must learn to respond to the students’ strengths and weaknesses that data
highlights; they must learn to adjust to the needs of their students. Whether these
students are English Language Learners, homeless, Title I, or gifted, they need
instruction that is tailored to their academic needs; this is called differentiation:
Differentiated instruction is designed to match the readiness
levels of the students in the classroom. Teachers customize
instruction to learners’ needs by adjusting the pace and level of
instruction and varying the products of learning to reflect
students’ best ways to learn. Differentiated instruction
recognizes and acts upon the reality that children learn
differently (Kingore, 2004, p. 3).
It might be fair to predict, “...once the staff truly begins to understand the process of
data analysis through increased information and training, their achievement results
would be even stronger than those of the current indicators” (Duncan, 2002, p. 123).
It is a tantalizing thought that increased teacher adeptness at data analysis could
drive instructional practice in a forward, thought-provoking
direction, close the achievement gap, and differentiate for the varied needs of
learners.
Currently, the schools highlighted in the three elementary cases are working
towards being instructionally reactive to new data. Case 1 is quite advanced in
this process: the district and local school site support this model and are creating
structures to make it a routine part of instructional practice. The School Board was
drafting a policy entitled Curriculum Review, Improvement, and Implementation as
the data for this case was being collected. This policy included a strong focus on
data practices to drive instruction and accountability. There was an emphasis on
using data to differentiate the curriculum: “Teachers shall
use student assessment data to determine patterns of student achievement, student
grouping, required intervention, the identification of general achievement trends and
the needs to modify instruction” (Thompson, 2002, p. 110). This policy recognized
the importance of teachers and included the importance of “staff development for
teachers to improve their instruction, assessment and monitoring of students in order
to provide students with appropriate interventions” (p. 110). Currently, at the local
school site, teachers use interim assessments to drive their instruction:
Grade level teams use data to plan their instruction, group
students for intervention, and monitor the achievement of all
students. Each month teachers turn in a 30-day plan to the
principal at this site. These plans rank students, list needs, and
identify intervention strategies to be used with underperforming
students (Thompson, 2002, p. 114).
The teachers are asked to administer the assessments according to a District timeline.
The assessments are administered in a “cascading manner.” Students are pre-tested
and then targeted for instruction or given another assessment to further define their
exact area of need.
This method of cascaded assessment defines a student’s exact
area of need and helps prevent any child from ‘slipping through
the cracks’ by providing them with instruction and intervention
when and where it is needed (Thompson, 2002, p. 101). The
site administrator is involved in the monitoring of this through
teacher evaluations (p. 118).
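To make the cascading logic concrete, the routing step can be sketched as a small
decision function. The sketch below is a hypothetical illustration in Python; the
cutoff scores and intervention labels are invented, not drawn from the Thompson
(2002) case:

    # Toy sketch of the "cascading" assessment routing described above.
    # A pre-test either clears a student, routes him or her to targeted
    # instruction, or triggers a finer-grained diagnostic. The cutoffs
    # and labels are hypothetical.
    def route_student(pretest_percent_correct):
        if pretest_percent_correct >= 80:
            return "on track: continue regular instruction"
        elif pretest_percent_correct >= 60:
            return "targeted instruction on the skills missed"
        else:
            return "administer a diagnostic to define the exact area of need"

    print(route_student(55))  # -> "administer a diagnostic to define the exact area of need"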
Cases 2 and 3 are not as advanced as Case 1 at using data to drive
instructional practices, but some elements of differentiation are beginning to be
utilized. In Case 2, teachers were designing more challenging lessons by enhancing
the standardized textbook lessons in preparation for State Assessments. CORE
assessments also assisted teachers in planning challenging individual class lessons,
planning interventions for some students, and making data rubrics that evaluated
student academic growth (Dombrower, 2002, p. 72). “Flowcharts or rubrics from
multiple assessments enabled teachers to adjust their classroom strategies of teaching
and personalize academic cores of learning for students” (p. 135).
There was some emphasis on using data to differentiate the curriculum in
Case 3. Teachers met in grade-level teams to determine how students were performing
and to adjust their instruction accordingly. The principal at the school site has pushed
the conversation regarding how to increase student achievement by challenging
teachers to “(f)ind new and creative ways to give children the practice they need,
think outside the box” (Duncan, 2002, p. 88). The district has also placed an
emphasis on using “technology as an intervention strategy for students who need
additional resources to help them meet mastery of curriculum standards” (p. 83).
177
Accountability
Clearly, there were many roles in addition to that of the teacher in the design
and implementation process of each of the cases’ data use plans. It is interesting to
look at all the players and how they interacted to the benefit or detriment of reform
efforts. Who took on the responsibility for designing and implementing the data use
demands made by the state in each of the cases? On the elementary level, it appears
the local school site, and specifically the teachers, are driving, if not creating, the data
use policies. Except for Case 1, there was no clear District Plan, and even in Case 1,
the teachers felt strongly that it was their responsibility to implement the II/USP plan
(school plan) and District data use plan. In fact, teachers at this school site exceed
the requirements of the district’s data use policy. The teachers administer interim
assessments to students on a monthly basis and use this data to create grade-level
plans for targeted teaching and intervention for small groups and individual students to
move them towards proficiency on state standards. The district only requires these
assessments to be conducted three times a year. In Case 2, information about data
use seemed to trickle down in a hierarchical manner from the state to the district to
the school and finally to the teachers; however, there was no clear plan that this
district had created regarding data use policies. The teachers created their own data
use practices. “…(T)eachers have 100% responsibility to link instructional practices
to data use and intervention/ retention” (Dombrower, 2002, p. 103). In Case 3, the
school-level administration put an emphasis on data use. The teachers were asked to
plan out how they would use data and submit this plan to their administrator. In this
case, the local school administration was driving data use practices, compensating
for a nonexistent district plan with the help of an outside consultant. Elmore (2005)
describes
the process that occurred in Case 3 with the principal and outside consultant:
And sometimes the development of a stronger set of collective
expectations---through the active agency of a leader and the
engagement of teachers---led to the creation of observable
internal accountability structures, informal and formal, that
carried real stakes and consequences for members of the
organization (p. 196).
In essence, each case has different elements driving the data plan’s design and
implementation. Table 4.13 summarizes these different elements:
Table 4.13
Relationship Between Design/Implementation of Data Plan Use

Data Plan Design
  Case 1: District data plan.
  Case 2: No district plan.
  Case 3: No district plan.

Implementation of Data Use
  Case 1: A local school design plan for II/USP, created with the aid of an outside
  consultant and significant teacher buy-in, complemented the district data plan
  during the implementation process.
  Case 2: What little data use was implemented was teacher-led at the local school
  site; the school “…(h)ad a primitive paper and pencil policy for utilization of
  data” (Dombrower, 2002, p. 134).
  Case 3: The school administration and an outside consulting group led data use
  implementation with a local site plan.
The mandated policy changes from the State of California regarding new
accountability measures served as the main agent of change in the three cases. Yet
different factors in the three cases led to varying degrees of implementation of a data
use policy. A district data plan was seen as an important element to set reform in
motion in only one case. In all three cases, buy-in from the teachers was essential
to beginning reform efforts, and in all of the cases bottom-up reform from the
local school site was the key piece in creating data use. In Case 3, the administrator
played an important role in guiding data reform at the school site. The writing of
the II/USP plan and the hiring of an outside consultant acted as outside levers to
stimulate reform and appear to have been instrumental in guiding how the schools
in Case 1 and Case 3 use data effectively. In addition, the outside consultant
scaffolded the process of designing and implementing a data use plan, which made
the process accessible to the local school site administrators and teachers.
Indeed, the three cases are a combination of top-down and bottom-up reform.
In Case 1, this led to a tension between the district and the local school site.
The district is attempting to standardize and centralize its programs, which goes
against the effective, independent data practices the school has created.
…(T)he principal stated that one roadblock in this whole
process is that the District is working very hard to centralize all
programs by saying, ‘All schools will….’ She implied that
while the District is moving in the right direction by helping
students, this ‘All schools will…’ philosophy is not a shared
decision making model, so teacher buy-in will be diminished
and morale will be lowered (Thompson, 2002, p. 128).
The II/USP status and funding from this program had given this school a great deal
of autonomy to design practices through a collaborative, team-building process with
a “higher probability of institutionalizing the good practices that they are using at
this site” (p. 128). There is also confusion in Case 2 about the relationship between
the district and the school site. One teacher interviewed in Case 2 described the
relationship this way:
No, I do not see the district as a partner with the school. I see
that they want us to use data, but sometimes it is
manipulated…I know that the administrator in special services
has provided us assistance from time to time. I don’t see the
district as a whole partnering with us. They’ve hired a lot of
people across the street (district office) to analyze data, but I
don’t see any benefit from it. I don’t see it coming down to me
(Dombrower, 2002, p. 105).
This teacher is clearly talking about the tension between top-down and bottom-up
relationships and the feeling, or lack thereof, of data resources “trickling down” to
the classroom level. It seems important that school sites give the district feedback
about which data practices are working; channels of communication are extremely
important.
Communication and Respect Amongst Stakeholders
The ultimate reason to collect and use data is to help
students learn and perform better. There are many different pathways to this point,
as discussed earlier in the section on levers. There is no single clear path to
implementing better data practices. Influences might include outside
consultants, a well-designed district plan, a visionary site administrator, or grass-
roots, bottom-up design of data use from the classroom teacher. One element that
must be present in all these pathways is communication. In the cases, there did
appear to be a tension between district ideals and teacher needs. “Generally, teachers
felt there was a lack of communication about data use among teachers and among
administrators. This caused districts to be reactive and produce district-glorified
systems that did not value the qualitative data teachers collect on a daily basis, such
as data from teacher-created rubrics” (Thompson, 2002, p. 138). In addition, during
the creation of the district plans (or lack thereof) there was “minimal advisory
participation” (Marsh, Hunt, & Lewis, 2003) from multiple stakeholder groups,
including teachers. “Collaboration with teachers as equal change agents is
superficial” (Marsh, Hunt, & Lewis, 2003). Downplaying the powerful influence
teachers have on the process of school reform is dangerous. Elmore (2005) points
out that external accountability systems will be relatively powerless in the
absence of changed conceptions of individual responsibility and collective
expectations within schools.
In order for internal changes to occur regarding data use practices, teachers
must become involved in reflective thinking regarding data use. Fullan, Hill, and
Crévola (2006) recommend “working from the classroom outward,” or “using
grounded learning opportunities” that are “school-based and embedded in a teacher’s
daily work” (pp. 24-25). They assert that “professional learning ‘in context’ is the
only learning that changes classroom instruction” (p. 25). Lesson study would be an
example of this---observational, reflective, grounded in practice, and definitely
capable of creating internal changes. Whatever route to data-use training a district
or school chooses, training is a key piece of the reform puzzle and needs to be given
due attention.
Chapter 5 will expand upon this thought and discuss the need to have teacher
buy-in and training to use strategies to change instructional practices, close
achievement gaps, and help all students improve academically through targeted
instruction.
CHAPTER 5
SUMMARY, CONCLUSION, IMPLICATIONS
Overview and Purpose of the Study
The State of California created a new accountability system largely based on
“high-stakes” testing. There is little research about the responses to this new data-
driven accountability system and even less research on the elements that go into
creating a functioning district data design plan that raises student achievement. The
purpose of the study was to evaluate the design, implementation, and adequacy of
data usage to improve student academic achievement in three cases at the district,
school, and elementary classroom levels. This study also explored the effect district
data use policy had on the local school site, and in what ways was data effective at
improving instructional practices in the classroom and thereby raising student
achievement. This study looked at three districts that were beginning to look at how
data can improve student achievement. This was a snapshot of the state of data use
four years ago, in 2002. This study aimed to illuminate effective practices across
multiple cases for utilizing data to improve the delivery of instruction and student
performance. Three research questions guided the cross-case analysis:
1. What is the district design for using data regarding student performance, and
how is that design linked to the current and the emerging state context for
assessing student performance?
2. To what extent has the district design actually been implemented at the
district, school and individual teacher level?
3. To what extent is the district design a good one?
This comparative case study is a collection of findings from three elementary schools.
Summary of Findings
Research Question One
Research Question One asked, “What is the district design for using data
regarding student performance and how is that design linked to the current and
emerging state context for assessing student performance?” Interviews based on
Conceptual Framework A guided the data analysis of the district data use policy and
practice. Table 5.1 summarizes the findings from Research Question One:
Table 5.1
Summary of Findings from Research Question One: Design
District Design for using Data Regarding Student Performance
• All three cases report that districts and schools are focusing on improving SAT-9
scores and the API as defined by the State of CA.
• There is awareness at the district and school levels of the SAT-9 and API scores
in all three cases.
• Only Case 1 has a clear district data use plan and a clear timeline for the receipt
and use of data. In addition, only Case 1 has a clear directive from the district on
how to analyze data at the local school site. Only Case 1 had an involved School
Board in the data plan design process. Only Case 1 used a technological program
to manage data, DataWorks.
• All three districts have seen scores improve and make large jumps between 1998
and 2002 since they began standards-based reform and using data to drive
instructional practices. API scores also rose significantly from 1999 to 2002.
• In all three cases, the districts do provide schools with the results of multiple
measures (though often not in a timely manner).
• None of the cases have a clear timeline to implement a district data design plan.
• All of the cases have some training occurring regarding data use; however, the
majority of teachers in the three cases felt the training was inadequate.
• Multiple stakeholders were involved in the district data plan to mixed degrees in
the three cases, mostly to a limited extent.
District Data Design Linked to Current State Context for Assessing Student
Performance
• The three districts are predominantly using norm-referenced assessments through
the SAT-9 and CORE.
• All three cases have evidence to show that the schools are beginning to link
interventions for students to assessments within the current state context.
• There was mixed usage of technology to enhance data collection and use.
District Data Design Linked to Emerging State Context for Assessing Student
Performance
• Mixed data use for some of the emerging elements: the California English
Language Development Test (CELDT), writing assessments, criterion-referenced
testing, and international performance standards.
• In all three cases, authentic (more progressive) assessments are beginning to be
linked to state standards throughout the districts, schools, and classrooms. This
is through curriculum embedded assessments in English Language Arts and
Mathematics in Case 1, nationally normed testing for CORE (Cases 1 and 2),
and benchmark, criterion-referenced tests for CA Standards (Case 3).
• There is, however, little evidence of performance assessments in the 3 cases, and
districts need to continue adding authentic assessments, performance
assessments, more criterion referenced assessments, and other reform elements
from the emerging context to the data mix.
Research Question Two
The second research question asked, “To what extent has the district plan actually
been implemented at the district, school, and individual level?” Three areas were the
focus of this research question:
1) The degree of design implementation in the current and emerging contexts.
2) The accountability for data use at the district, school, and individual level.
3) Improving student achievement through implementation of data use.
The Teacher Questionnaire, Stages of Concern, and interviews with district
administrators, site administrators, and teachers provided a number of key findings
for these three focus areas. Table 5.2 summarizes the findings for Research Question
Two:
Table 5.2
Summary of Key Findings for Research Question Two: Implementation
• Most teachers in all three cases believe there is a high degree of
implementation of current data practices.
• All three cases are not sending home feedback (progress of students) to
parents on a regular basis.
• Teachers’ perceptions of the implementation of emerging state data practices
are mixed. It appears that a small majority of the teachers in all 3 cases
believe that the school is implementing some of the emerging design or the
emerging state accountability measures.
• In all three cases, teachers had concerns about the quality of professional
development in the use of data.
• The majority of teachers interviewed in all three cases believe that the state
holds the district accountable for improving student achievement on state
assessments. Students are also perceived as being held accountable by the
explicit use of standards in the classroom.
• There is mixed agreement by the teachers in the cases that accountability is
linked to the use of data strategies. On the whole, the teachers “agree
somewhat” that there is accountability pressure from the state, district, local
school site, and teachers for data utilization.
• The teachers do feel that data use strategies are responsible for improved
student achievement.
• Teachers are in the beginning stages of feeling comfortable with data use and
why data is important for instructional practice. Teachers will need more
information and practice through staff development to become comfortable
practitioners of data analysis to drive instructional practice.
Research Question Three
The third research question, “To what extent was the district plan a good
one?” generated mixed adequacy ratings across the three cases. The cross-case
researcher used the Researcher Rating Matrix (a Likert scale of 1-5) to rate the three
cases to find the average overall adequacy rating. Case 1 was rated “effective to
very effective” (4.3), Cases 2 and 3 were rated as “somewhat effective to unclear”
(2.7 and 2.8 respectively). Table 5.3 delineates the main findings that generated
these ratings:
Table 5.3
Summary of Findings for Adequacy of District Plans

Data Plan
  Case 1: Strong district data use plan.
  Case 2: Weak plan; no substantive district design for data use.
  Case 3: No data use plan at the district level.

Curriculum Aligned to CA State Standards
  Case 1: Present.
  Case 2: Present.
  Case 3: Present.

Preparation for Emerging Assessments
  Case 1: Preparation in place.
  Case 2: Little to none.
  Case 3: Little to none.

Staff Development for Teachers
  Case 1: Stronger.
  Case 2: Weak.
  Case 3: Weak.

District Sends Disaggregated Data to Schools
  Case 1: District provided both standardized data and quarterly interim
  criterion-referenced tests.
  Case 2: District would send SAT-9 and CORE tests to the school with no
  directive on how to assess and use this data.
  Case 3: District handles some data analysis, and then teachers disaggregate the
  data based on their own needs at their own school site.

Data Use at School Site
  Case 1: Teachers provide the principal with a plan of action each month based
  on data results received from the district.
  Case 2: The school drives the data use policies; teachers created planned
  learning logs for the year and manually charted SAT-9, SAT-9 Plus, and CORE
  results.
  Case 3: Teachers meet in grade-level teams on a weekly basis to adjust
  instruction based on information from data.

Multiple Stakeholder Exposure to Standards-Based Instruction/Reform Elements
  Case 1: Yes; the community has been exposed to standards and performance
  rubrics.
  Case 2: Weak exposure of stakeholders to standards-based reform.
  Case 3: Weak communication with the community about standards-based
  reform.
Conclusions and Recommendations for Improved Practice
One of this study’s main purposes was to present both effective and
ineffective data practices across three cases. All three districts in the cases are
implementing standards-based reform with success---their test scores have risen
significantly in a short period of time. They are, however, struggling with the full
design and implementation of a data use plan. The conclusions and
recommendations for this study will focus on some best practices and areas of need
that were found in the cases. These are areas that other districts and schools will
want to pay close attention to as they grapple with their own design and
implementation of effective data use practices in the current and emerging state
contexts. These will also be areas that may benefit researchers and policymakers.
Ultimately, the question being asked is: how does a district, school, or teacher
systematically improve the achievement level of students by using data?
Training/Technology
In the cross-case analysis, teachers did not fully understand data analysis.
They need more specific and sustained training on how to use data to drive
instruction. Data collection without a purpose is meaningless. There is a continued
need for staff development on data use and how to tie data to instruction in all three
cases. Data must also be presented in a format that is understandable to teachers and
administrators; it must be easy to manipulate and use. If teachers are not exposed to
high-quality staff development, then their practice in data use will not evolve.
Rosenholtz (1991) writes: “Those with limited opportunity for professional growth
are not only ill-equipped for an inherently changing environment, but they
unwittingly become programmed for sameness and routine, exacting costs of
incalculable proportion” (p. 217).
Effective data disaggregation is important, and this is where technology such
as DataWorks and IDSM can be extremely helpful in expediting the disaggregation
process.
With the current state and federal accountability demands,
including the rigorous reporting and data analysis requirements
of NCLB, information management technologies also respond
to an urgent need for integrating, analyzing, and reporting data
on student achievement and school performance (Annenberg
Institute, 2005, p. 5).
Any data design plan should also include technology because the two have a
symbiotic relationship.
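As a concrete illustration of what such tools automate, the following minimal sketch
shows the core disaggregation step: breaking assessment results out by student
subgroup. The data and column names are invented for illustration; this is not the
DataWorks or IDSM interface:

    # Minimal sketch of disaggregation: mean scaled scores broken out by
    # subgroup so instruction can be targeted. All values are hypothetical.
    import pandas as pd

    scores = pd.DataFrame({
        "student":    ["A", "B", "C", "D", "E"],
        "subgroup":   ["EL", "EL", "Title I", "GATE", "Title I"],
        "ela_scaled": [312, 287, 355, 401, 298],
    })

    by_subgroup = scores.groupby("subgroup")["ela_scaled"].agg(["count", "mean"])
    print(by_subgroup)  # e.g., EL: count 2, mean 299.5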
Recommendations
• Make data-use staff development high quality, comprehensive, on-going, and
available to teachers and administrators. Hirsh (2004) adds that “professional
development functions more effectively when it is embedded into the district
or school plan and is seen as the primary strategy for achieving district or
school goals….improvement plans will help students achieve higher
standards only when the plans recognize that comprehensive professional
development must be a supporting piece of the plan” (p. 12).
• Have an accessible way for administrators and teachers to use data. Invest in
technology so schools can look at data in depth (with analysis pages) and provide
interim benchmark assessments so data does not arrive a year late.
Reform Levers
McDonnell and Elmore (1987) and Anderson, MacDonald, and Sinnemann
(2004) describe policy levers that can create change. In this study, the catalyst for
change across these three cases was the lever of mandated policy change from the
State of California through the Public Schools Accountability Act of 1999. One
expected to see this reform continue in a top-down direction with the district leading
the reform charge; however, that was only apparent in Case 1. Only Case 1 had a
clear District Plan for data use, and even in Case 1, it was the teachers and an
outside consultant targeting data use through their II/USP plan that stimulated the
most reform at the local school site. In Case 2, data use was teacher-led, and in Case
3 data use was administration-led in concert with an outside consulting group.
Indeed, in these three cases, top-down influence stimulated reform, but it was
bottom-up reform that responded more quickly and effectively. “Jeannie Oakes
recently completed a review of research comparing top-down models to teacher-
driven ones. She found that school-level control can lead to strong gains in learning
if local curricular decisions are accompanied by rigorous teacher training and
collaboration” (Helfand & Rubin, 2006, p. 2). It is unclear if this response could be
replicated district-wide in each case. This brings us to the difficult issue of
“scaling-up” across multiple schools and districts (Elmore, 1999; Borman, 2005).
Recommendations
• Johnson (2002) points out that “(t)he excitement about a new endeavor can
cause schools and districts to rush to implement reforms with no forethought
as to how they will measure whether the reforms are working or not. It is
important that we resist this temptation and devote time up front to thinking
very carefully about how to measure the results of the reforms we are
planning. A data-informed monitoring process allows for midcourse
corrections, reinforces positive directions, and rewards success” (p. 37). It is
recommended that districts and schools create a workable data use plan.
Communicate with all stakeholders and involve parents. Set clear
expectations for the analysis and usage of data. Develop a uniform data
analysis process across all schools. Staff development should be aligned to
this as well.
• Rosenholtz (1991) delineates the difference between “moving” and “stuck”
districts. In regard to reform elements, the treatment of schools and teachers
can have vast ramifications: “Where moving districts pulled one way, giving
teachers more autonomy to learn and improve their work, stuck districts
pulled doggedly in the other” (p. 211). Districts should respect and support
functioning data systems at schools and give teachers the autonomy that
Rosenholtz describes to continue refining data use practices.
Communication
There was a lack of communication between stakeholders and across levels in
a district (for example, between the district office and the local school site).
Recommendation
• Make effective vertical communication a priority between district, school,
and classroom and include multiple stakeholders in the dialogue of data use
design and implementation.
Funding
Funding issues were found across all three cases. There was a lack of
allocation of money for data use, personnel, technology, and training.
Recommendation
• Before beginning a large-scale reform like data-driven decision making,
make sure to adequately budget for the training and materials needed to make
the process successful. This should be a part of a long-range plan.
Time
Remapping new approaches to instruction and assessment needs adequate
buy-in and time for development. Differentiation for learner needs is a key element
to any reform plan.
Recommendation:
• Set aside time for planning and data analysis to drive instructional practice.
Teachers need time to digest the data results and apply them to their
instructional practice.
Change and Affect
The Stages of Concern Questionnaire revealed that the teachers had strong
concerns regarding the changes occurring in data use. Affective factors must be
considered when change occurs. In each of the cases the following behaviors could
be observed: primary (enhancement behavior), spearheading the design at the local
school site; collaborative (maintenance behavior), working with the district team or
local school team to effectively implement the data plan; and passive (protective
behavior), receiving the information regarding the plan with little understanding or
implementation (Wong & Wong, 1998, p. 288).
Recommendation
• Take the time to foster buy-in from all parties involved. Choose a path of
reform that works with the culture of your district and school.
Evolving Reform
The schools and districts in this cross-case analysis are predominantly in the
traditional rather than emerging context. All of the cases have aligned their
curriculum, teaching, and assessments to the state standards. The cases are
beginning to align student performance to targeted interventions. They will quickly
need to augment practices to meet the emerging state accountability pieces. They
need to look ahead to changing accountability measures and stay one step ahead by
embedding upcoming changes in a long-range plan.
Recommendations
• Continue moving toward the implementation of data use in the emerging
context. Include a variety of assessments to gather data: criterion-referenced,
benchmark, performance assessments, etc., to get the whole picture of a
student’s performance. These should include interim assessments to target
instruction throughout the year for students. Shepard et al. (2005) cite the
landmark study Knowing What Students Know (Pellegrino et al., 2001);
in this study, the authors make recommendations for improving assessment
systems. “They identified several alternatives, such as curriculum-embedded
assessments, that would make it possible to do a better job of representing the
complexity of student learning and still ensure comparability of data. For the
future, they envisioned a more balanced assessment system in which
classroom and large-scale assessment would work together in a more
coherent and supportive fashion” (p. 308).
• The State of California needs to continue incorporating more authentic
assessments into the reform process (performance assessments have been
implemented, and the CST (California Standards Test) is a criterion-referenced
test).
Recommendations for Future Research
Using data for targeted instruction has a powerful effect on student
achievement, and using it for targeted intervention or acceleration will cause
student achievement to increase.
Both effective assessment procedures and effective use of the
associated data are fundamental to a school’s continuing
achievement and improvement. With good data, teachers can
tell which groups of students are struggling and where their
problems lie. With such data teachers can 1) determine whether
their students are learning more or achieving at a higher level
than they did in the past, 2) compare their outcomes with those
of other teachers, and 3) evaluate whether existing curriculum
and instruction adequately prepare students to demonstrate
proficiency (Blankstein, 2004, p. 142).
Using data to drive instruction can be likened to the medical model applied
to education: diagnose and prescribe. Data analysis is the examination room where
as much information as possible on each child is gathered to understand his or her
needs: standardized tests, criterion-referenced tests, portfolios, writing samples,
observations, etc. Like finely trained surgeons, district personnel, administrators,
and teachers need to know how to interpret all of the data and come up with a plan
to rehabilitate the struggling learner and help the competent student excel to his or
her highest potential. Just as in medicine there is no one way to treat an illness, in
education there are many pathways to help students achieve academically. Indeed,
one size does not fit all. It is necessary for districts and schools to make sure that all
personnel are trained to be skilled at using data to best meet the needs of learners.
Data analysis leads us to differentiation, individualization, and intervention: a
prescription for academic issues.
There does not appear to be a clear answer in the research literature on how
to enable a district, school, and teacher to deftly manage data to drive instructional
practices. A number of questions would be beneficial for guiding future research
considerations:
1) How can personnel be trained effectively and efficiently to use data?
2) How sustainable is data use, and how can it be scaled up? What
is the most effective lever to stimulate educational data reform?
3) How can state reform systems include a balanced assessment system
(as Pellegrino et al., 2001, proposed) in which classroom and
large-scale assessment work together?
4) Are schools more effective with data use if an advanced technology
system is readily available to disaggregate data?
In addition, it would be interesting to repeat this study four years later to see how
the data use plans, procedures, and practices of these three cases evolved. Indeed,
the Case Update below gives a sneak peek at what may have occurred.
Finally, a recommendation to future researchers conducting a cross-case
analysis is to set clear timelines for centrally collecting and organizing raw data to
prevent loss of data.
Case Update
It has been four years since the data was collected for this study, and there
have been many changes to the state’s accountability system including a more
stringent and complex way of calculating the Academic Performance Index. The
current (2006) state context of reform in California centers around fulfilling the
parameters defined by federal legislation in the No Child Left Behind (NCLB) Act of
2001 and the Public Schools Accountability Act (PSAA) of 1999. Three major
components currently make up the California Accountability Progress Reporting
system: the Academic Performance Index (API) Report, the Adequate Yearly
Progress (AYP) Report, and the Program Improvement (PI) Report. The State has set
800 as the API target for all schools to meet. Schools are expected to meet annual
growth targets until they reach 800; the annual API growth target for a school is 5
percent of the difference between the school’s API and the statewide performance
target of 800. Once a school is at 800, it is expected to at least maintain that
level. Table 5.4 presents the API growth from 1999 to 2006 for the three cases:
Table 5.4
Comparison Between 1999 and 2006 API Scores

          1999    2006    Change
Case 1     603     799      +196
Case 2     808     900       +92
Case 3     520     660      +140
Indeed, all three cases have continued to make good growth in academic
achievement as demonstrated by the strong increases in API scores over an eight
year period---from the first recorded API calculation to the most recently reported
year of scores. One can surmise that the reform elements highlighted from the
school sites were somewhat effective: emphasis on standards-based reform,
professional development on data use, teacher buy-in, etc. In the State of
California, the state average API score grew to 720 in 2005-2006, an 11-point gain
from 2004-2005. Only 52% of schools met all of their API growth targets this
year.
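For readers unfamiliar with the growth-target arithmetic described above, a
minimal sketch follows (Python, for illustration only; the actual CDE calculation
also applies rounding and minimum-target rules that are not modeled here):

    # Annual API growth target: 5% of the gap between a school's API and
    # the statewide target of 800 (simplified; CDE rounding and minimum-
    # target provisions are omitted).
    def api_growth_target(current_api, statewide_target=800):
        if current_api >= statewide_target:
            return 0  # at or above 800, a school need only maintain its score
        return 0.05 * (statewide_target - current_api)

    # Case 3's first recorded API was 520, so its first annual target
    # would have been 0.05 * (800 - 520) = 14 points.
    print(api_growth_target(520))  # 14.0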
The AYP Report “shows how well schools and school districts are meeting
common standards of academic performance” (CDE, 2006). Every year, the schools
and districts must meet four sets of requirements to make AYP: 1) student
participation rate on statewide tests, 2) percentage of students scoring at the
proficient level or above in English-language arts and mathematics, 3) API Growth,
and 4) graduation rates (if high school students are enrolled). A subgroup is
considered numerically significant for the API and AYP if it includes 100 students,
or at least 50 students who make up 15% or more of the school’s total population.
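This numerical-significance rule can be restated as a short predicate; the sketch
below is an illustrative simplification of the rule just summarized, not the State’s
full definition:

    # A subgroup counts for API/AYP purposes if it has at least 100
    # students, or at least 50 students who make up 15% or more of the
    # school's total population (simplified restatement of the rule above).
    def is_numerically_significant(subgroup_size, school_population):
        if subgroup_size >= 100:
            return True
        return subgroup_size >= 50 and subgroup_size / school_population >= 0.15

    print(is_numerically_significant(60, 300))  # True: 60 students is 20% of 300
    print(is_numerically_significant(60, 600))  # False: 60 students is only 10% of 600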
Table 5.5 details the Adequate Yearly Progress for the three cases from 2004-2006.
Table 5.5
Comparison Between 2004-2006 for AYP

Case 1: Met AYP in 2004, 2005, and 2006. Current PI status: not in PI.
Case 2: Met AYP in 2004, 2005, and 2006. Current PI status: not in PI.
Case 3: Met AYP in 2004. Did not meet AYP in 2005 (did not meet its API growth
target, falling 4 points, and did not meet Percent Proficient in English-Language
Arts). Did not meet AYP in 2006 (did not meet the participation rate or Percent
Proficient in English-Language Arts). Current PI status: Year 1.
In 2004, all three cases made their AYP; however, beginning in 2005 Case 3 did not
meet its AYP goals for two years in a row. This has put it in Program Improvement
status for the 2006-2007 school year. “A school or district that receives Title I funds
is subject to identification as PI if it does not make AYP for two years in a row”
(CDE, 2006). Schools in PI status must offer various services and interventions to
parents and students during the year and may exit PI status after making AYP for
two years in a row. Examples of these services and interventions are offering school
choice with paid transportation to another public school in the Local Educational
Agency that is not in PI and tutoring eligible students in the school. When a school
becomes a PI school, it is placed on a State watchlist. If the school does
not show improvement, a State takeover can occur. “This year, 39 school districts
and county offices were newly identified for PI. With 26 LEAs exiting PI this year,
167 school districts and county offices are now in PI” (“Board of,” 2006).
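The entry and exit rule can be expressed as a small state machine. The sketch
below is a simplification keyed only to consecutive AYP results (Title I eligibility
and PI-year advancement are ignored); it mirrors Case 3’s trajectory in Table 5.5:

    # Simplified PI logic: a Title I school enters Program Improvement
    # after missing AYP two years in a row and exits after making AYP
    # two years in a row. Other PI rules are omitted for brevity.
    def in_program_improvement(ayp_results):
        misses, makes, in_pi = 0, 0, False
        for made_ayp in ayp_results:
            if made_ayp:
                makes, misses = makes + 1, 0
                if in_pi and makes >= 2:
                    in_pi = False
            else:
                misses, makes = misses + 1, 0
                if misses >= 2:
                    in_pi = True
        return in_pi

    print(in_program_improvement([True, False, False]))  # True, like Case 3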
Case 3 typifies the struggles that many schools are having in the State of
California. It is difficult to meet all the parameters laid out by No Child Left Behind.
In the State, 65% of all Title I funded schools made AYP for the 2005-2006 school
year. In addition, 483 elementary schools met their API growth targets with
schoolwide growth at least double the 2006 target, but did not meet AYP (“Board
of,” 2006). This also occurred in Case 3. Its API score went from 644 in 2005 to
660 in 2006; however, it did not meet AYP. Looking more closely at why, one finds
that not enough students with disabilities participated in the assessments and that
the Hispanic, socioeconomically disadvantaged, and English Learner subgroups did
not meet targets for percent proficient in English-Language Arts. These
are the same core issues surrounding closing the achievement gap.
Currently under NCLB, AYP targets are supposed to increase almost yearly
until 2013-2014 when all schools must have 100% of their students performing at or
above the proficient level on state tests. Fullan and other researchers have identified
several flaws in NCLB: “It focuses primarily on measuring growth in school
performance against fixed standards---the so-called adequate yearly progress (AYP)
requirement---and only incidentally on building the capacity of individual educators
and schools to deliver high-quality instruction to students” (Fullan, 2003, p. 6). Joel
Packer of the National Education Association predicts that “75 to 90% of all
schools will eventually receive a failure rating” under NCLB due to the AYP
(Adequate Yearly Progress) requirement (as cited in Franklin, 2006, p. 7). NCLB is up for
reauthorization this year and it will be interesting to see what changes will be made
in light of vigorous criticisms leveled at the Act.
Because Federal legislation (NCLB) does not focus on building capacity, the
current State context for reform also continues to center around high-stakes testing.
Recent studies have shown that “teachers have many concerns about high-stakes
testing, including that it narrows the curriculum, causes teachers to teach to the test,
dampens student and teacher motivation, and has an overall negative effect on public
education” (Jones & Egley, 2006, p. 767). By now, researchers and policymakers
know that “Paint-by-number approaches will fall short for all of us---teachers and
students alike ---because they abandon quality. Paint-by-number approaches will fail
teachers because they confuse technical expedience with artistry. They will fail
students because they confuse compliance with thoughtful engagement. Any
educational approach that does not invite us to teach individuals is deeply flawed”
(Tomlinson, 2000, p. 11).
Fullan, Hill, and Crévola (2006) echo the call for the State’s role to be
building capacity at all levels of education:
Districts and states must integrate pressure and support so that
everyone within the system seriously engages in capacity
building with a focus on results. Capacity building is what most
policymakers neglect. Capacity building involves the use of
strategies that increase the collective effectiveness of all levels
of the system in developing and mobilizing knowledge,
resources, and motivation, all of which are needed to raise the
bar and close the gap of student learning across the system (p.
88).
Fullan (2005) urges the fostering of leadership at all levels of education in order to
“form a critical mass of interacting and coalescing leadership for change across the
three levels of the system---school, district, and state” (as cited in Fullan, Hill, &
Crévola, 2006, p. 95).
Fullan, Hill, and Crévola (2006) make the excellent point: “It doesn’t matter
where the change starts as long as it is systemic thereafter. And systemic means a
focus on establishing expert instructional systems that serve the needs of all levels”
(p. 89). “Although, we don’t recommend that any level wait for another level to get
its act together, in the long run, sustainable systems run on quid pro quo synergy.
The idea is to create conditions that get all of the excuses off the table and then to
expect results” (p. 99). Each of the three cases had a different pathway to reform
(district push, innovative principal, engaged teachers) and all have shown strong
growth over the past eight years. The research examined also supported this notion
of different pathways to reform and different levers that can stimulate reform.
Perhaps what matters is not where or how the change occurs, but rather the
sustainability and generalizability of that change.
Educational researchers, policymakers, administrators, and teachers are all
struggling to make the right decisions and increase student performance. There has
been a huge influx of programs, systems, and movements over the past century in
education, and yet a strong achievement gap persists. Indeed, not all of our students
are making adequate yearly progress. Perhaps it is time to stop the onslaught of the
ever new and better idea, turn inward, and look at what we have to work with:
take the data we have, create a manageable system of usage, and individualize
instruction for students.
Education reform is at a stage where many of the components of
successful large-scale reform are evident in school’s collective
basements. One half of the solution is to seek out and identify
the critical elements that need to be in place; the other half is
combining them creatively. This is not simply a job of
alignment, but rather one of establishing dynamic connectivity
among the core elements (Fullan, Hill, & Crévola, 2006, p. 15).
That is the challenge.
REFERENCES
Albuquerque Public Schools (APS). (n.d.). Some useful definitions. Retrieved August
10, 2006, from http://www.aps.edu/aps/tls/Standards/Definitions.htm.
Anderson, B., MacDonald, D.S., & Sinnemann, C. (2004). Can measurement of
results help improve the performance of schools? Phi Delta Kappan, 85(10),
735-744.
Association for Supervision and Curriculum Development (ASCD). (n.d.). Lexicon
of learning. Retrieved August 15, 2006, from http://www.ascd.org.
ATLAS Communities. (n.d.). Retrieved December 22, 2002, from
http://www.edc.org/ATLAS/psable.htm.
Baker, J. & Brown, T. (2006). Pyramid of support. Leadership, 35(4), 12-15, 36.
Barton, P. E. (2006). The dropout problem: Losing ground. Educational
Leadership, 63(5), 82-83.
Behuniak, P. (2002). Consumer referenced testing. Phi Delta Kappan, 84(3).
Berliner, D. C. (1993). Educational reform in an era of disinformation.
Education Policy Analysis Archives, 1(2), 1-29. Retrieved July 7, 2001, from
http://epaa.asu.edu/epaa/v1n2.html.
Berliner, D. C. (2001a). Our schools vs. theirs: An international comparison. The
Merrow Report, 1-11. Retrieved December 12, 2002, from
www.pgs.org/merrow/tmr-radio/pgm26/.
Berliner, D. C. (2001b, January 28). Averages hide the true extremes.
The Washington Post. Retrieved, December 28, 2002, from
http://courses.ed.asu.edu/berliner/readings/timssroped.html.
Berliner, D. C. & Biddle, B. I. (1998). The lamentable alliance between the media and
school critics. In G.I. Maeroff (Ed.), Imaging Education: The Media and
Schools in America (pp. 1-18). New York: Teachers College Press. Retrieved
December 28, 2002, from
http://courses.ed.asu.edu/berliner/readings/alliancew.htm
Bernhardt, V. L. (2000, Winter). Intersections: New routes open when one type of data
crosses another. Journal of Staff Development, 21(1), 1-7. Retrieved December
12, 2002, from http://www.nsdc.org/library/jsd/bernhardt211.html.
Best, J. & Kahn, J. (1993). Research in education (7th ed.). Boston: Allyn and
Bacon.
Blank, R. & Wilson, L. (2001). Understanding NAEP and TIMSS results: Three types
of analyses useful to educators. ERS Spectrum, 19(1), 23-33.
Blankstein, A. (2004). Failure is not an option: six principles that guide student
achievement in high performing schools. Thousand Oaks: Corwin Press.
Board of education hears accountability, NCLB reports. (2006, September 15).
EdCal, p. 5.
Borman, G. D. (2005). National efforts to bring reform to scale in high-poverty
schools: outcomes and implications. Review of Research in Education, 29, 1-
27.
Bracey, G. W. (2002). International comparisons: An excuse to avoid meaningful
educational reform. Education Week, 21(19), 30-32.
Caldwell, B. J. (1999). Education for the public good: strategic intention for the 21st
century. In D. D. Marsh (Ed.), Preparing our schools for the 21st century (pp.
45-64). Alexandria: ASCD Publications.
California Department of Education. (2001). Standards-based reform in
California. Retrieved December 27, 2002, from
http://www.cde.ca.gov/iasa/standards/.
California Department of Education. (2005, August 31). O’Connell announces
significant gains in state API results, mixed progress in federal AYP results.
Retrieved August 15, 2006, from
http://www.cde.ca.gov/nr/ne/yr05/yr05rel103.asp.
California Department of Education. (2006, August). Parent/ guardian guide to the
school accountability progress reporting system. Retrieved August 29, 2006
from http://www.cde.ca.gov/ta/ac/ay/documents/parentguide06.pdf.
Chappuis, J. (2005). Helping students understand assessment. Educational
Leadership, 63(3), 39-43.
Christie, K. (2000). Monitoring what matters. Phi Delta Kappan, 82(91), 5-8.
Clark, L. (2005). Gifted and growing. Educational Leadership, 63(3), 56-60.
Congressional Digest. (1997a, November). International Comparisons, 262-263.
Congressional Digest. (1997b, November). National Educational Tests, 257.
Congressional Digest. (1997c, November). A Framework for Reform, 259-260.
Darling-Hammond, L. (Ed). (1994). Professional Development Schools: Schools for
Developing a Profession. New York: Teachers College Press.
Darling-Hammond, L. (1997). The right to learn: a blueprint for creating schools that
work. New York: Jossey-Bass.
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of
state policy evidence. Education Policy Analysis Archives, 8(1), 1-49.
Darling-Hammond, L. & Baratz-Snowden, J. (2005). A good teacher in every
classroom: Preparing the highly qualified teachers our children deserve. San
Francisco: Jossey-Bass.
Data Study Case Study Guide (2001). Unpublished guide.
Datnow, A. (2004, Spring/Summer). The age of accountability and the need for data-
driven decision-making. USC Urban Ed, 14-16.
Deal, T.E. & Peterson, K.D. (1994). The leadership paradox: Balancing logic and
artistry in schools. San Francisco: Jossey-Bass.
Delpit, L. D. (1999). Ten factors essential to success in urban classrooms. Fall Forum.
Retrieved December 26, 2002, from
http://ces.edgateway.net/pub/ces_dpcs/fforum/1999/speeches/delpit_speech99.html
Dombrower, D. M. (2002). Data: policies, strategies, and utilization public schools
(Doctoral dissertation, University of Southern California, 2002). ProQuest, UMI
Microform, 3094322.
Duncan, W. T. (2002). How effective schools use data to improve student achievement
(Doctoral dissertation, University of Southern California, 2003). ProQuest, UMI
Microform, 3093755.
Education Commission of the States. (1996, November). The progress of education
reform. Available from ECS Distribution Center: ED393183.
Education Commission of the States 1(3). (1999 September-October). A promising
approach for today’s schools. Retrieved December 20, 2002, from
http://www.ecs.org/clearninghouse/16/42/1642.htm.
Education Commission of the States 1(5). (2000a, January-February ). Setting the
standard: Will higher expectations improve student achievement?. Retrieved
December 20, 2002, from http://www.ecs.org/clearninghouse/16/50/1650.htm.
Education Commission of the States 1(6). (2000b, March-April). Assessing student
performance-tough choices. Retrieved December 20, 2002, from
http://www.ecs.org/clearninghouse/16/51/1651.htm.
Elmore, R.F. (1999, September). Leadership of large-scale improvement in
American education: What are the scale-up issues in your organization?
Paper presented for the Albert Shanker Institute.
Elmore, R.F. (2002). Local school districts and instructional improvement. In W. D.
Hawley (Ed.) The Keys to Effective Schools: Educational Reform as
Continuous Improvement (pp. 111-122). Thousand Oaks: Corwin Press.
Elmore, R.F. (2005). School reform from the inside out. Cambridge: Harvard
University Press.
Elmore, R.F. (2006, June). Video Lecture from Local District 4, Los Angeles
Unified School District.
Esch, C.E., Chang-Ross, C.M., Guha, R., Humphrey, D.C., Shields, P.M., Tiffany-
Morales, J.D., Wechsler, M.E., & Woodworth, K. R. (2005). The status of the
teaching profession 2005. Santa Cruz, CA: The Center for the Future of
Teaching and Learning.
Fashola, O.S. & Slavin, R.E. (1998, February). Schoolwide reform models: what
works? Phi Delta Kappan. Retrieved December 12, 2003, from
http://www.pdkintl.org/kappan/ksla9801.htm.
Franklin, J. (2006). NCLB a year before reauthorization: Squaring off on the issues
that will frame the debate. ASCD Education Update, 48(7), 1,7-8.
Friedman, T.L. (2004). The world is flat: A brief history of the twenty-first century.
New York: Farrar, Straus, and Giroux.
Fullan, M. (2001). Leading in a culture of change. San Francisco: Jossey-Bass.
Fullan, M. (2002). Educational reform as continuous improvement. In W. D. Hawley
(Ed.). The Keys to Effective Schools: Educational Reform as Continuous
Improvement (pp. 1-9). Thousand Oaks: Corwin Press.
Fullan, M., Hill, P., Crévola, C. (2006). Breakthrough. Thousand Oaks: Corwin
Press.
Fullan, M. (2003). A plea for strong practice. Educational Leadership, 61(3), 6-10.
Fuhrman, S. H. & Odden, A. (2001). Introduction. Phi Delta Kappan,
83(1), 59-61.
Gall, M., Borg, W., & Gall, J. (1996). Educational research: An introduction (6th
ed.). White Plains: Longman Publishers.
Gao, H. (2003, January 1). Law’s aim unmet. Los Angeles Daily News. Retrieved July
1, 2003, from http://www.dailynews.com.
Goals 2000: History. (n.d). Retrieved December 20, 2002, from
www.ed.gov/pubs/62KReforming/g2chl.html.
Goertz, M. E. (2001). Redefining government roles in an era of standards-based
reform. Phi Delta Kappan, 83(1), 62-66.
Goodlad, J. (1994). Common schools for the common weal: reconciling self-interest in
the common good. In J. I. Goodlad & P. Keating (Eds.), Access to Knowledge:
The Continuing Agenda for Our Nation’s Schools (pp. 1-22). New York:
College Entrance Examination Board.
Guba, E.G. & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park: Sage
Publications.
Guskey, T.R. (2005). Mapping the road to proficiency. Educational Leadership,
63(3), 32-38.
Hall, G.E. & Hord, S.M. (1987). Change in schools: facilitating the process. Albany:
State University of New York Press.
Hall, G., & Hord, S. (2001). Implementing change: Patterns, principles, and potholes.
Boston: Allyn and Bacon.
Hall, G.E., Wallace, R.C., & Dorsett, W.A. (1973). A Developmental
Conceptualization of the Adoption Process Within Educational Institutions.
Austin: Research and Developmental Center for Teacher Education,
University of Texas at Austin.
Helfand, D. (2003, January 9). State keeps education standards. Los Angeles
Times, pp. B1, B12.
Helfand, D. & Rubin, J. (2006, June 23). Villaraigosa’s plan for LA Unified faces
opposition in classrooms. Los Angeles Times.
Herman, J.L. & Baker, E.L. (2005). Making benchmark testing work. Educational
Leadership 63(3), 49-53.
Hill, P.W. & Crévola, C. (1999). The role of standards in educational reform. In D. D.
Marsh (Ed.), Preparing our schools for the 21st century (pp. 117-142).
Alexandria: ASCD Publications.
Isaac, S. & Michael, W. (1995). Handbook in research and evaluation (3rd ed.).
San Diego: EdITS.
Jago, C. (2000). California, the golden state. In A.A. Glatthorn & J. Fontana (Eds.).
Coping With Standards, Tests, and Accountability: Voices from the Classroom
(pp. 63-73). NEA Teaching and Learning Division.
Jencks, C. & Phillips, M. (1998). America’s next achievement test: Closing the black-
white test score gap. The American Prospect, 9(40). Retrieved December 20,
2002, from http://www.prospect.org/print/V9/40/jencks-c.html
Johnson, R.S. (2002). Using data to close the achievement gap: How to measure equity in our schools. Thousand Oaks: Corwin Press.
Jones, B.D. & Egley, R. J. (2006). Looking through different lenses: Teachers’ and
administrators’ views of accountability. Phi Delta Kappan, 87(10), 767-771.
Khanna, R., Trousdale, D., & Keil, J. (1999). Supporting data use among administrators: Results from a data planning model. Paper presented at the annual meeting of the American Educational Research Association, April 19-23 (pp. 1-20).
King, S.P. (1999). Leadership in the 21st century: Using feedback to maintain focus and direction. In D.D. Marsh (Ed.), Preparing our schools for the 21st century (pp. 165-184). Alexandria: ASCD Publications.
Kingore, B. (2004). Differentiation: Simplified, realistic, and effective. Austin: Professional Associates Publishing.
Kozol, J. (1995). Amazing grace: The lives of children, the conscience of a nation. New York: Harper Perennial.
Larson, K., Guidera, A.R., & Smith, N. (1998). Formula for success: A business leader’s guide to supporting math and science achievement. Business Coalition for Educational Reform. Washington, D.C.: Office of Educational Research and Improvement.
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005, November). Classroom assessment: Minute by minute, day by day. Educational Leadership, 63(3), 19-24.
Lewis, A.C. (2002). A horse called NCLB? Phi Delta Kappan, 84(3), 179-180.
Lewis, C., Perry, R. & Murata, A. (2006). How should research contribute to
instructional improvement? The case of lesson study. Educational
Researcher, 35(3), 3-14.
Little, J.W. (2002). Professional communication and collaboration. In W.D. Hawley (Ed.), The keys to effective schools: Educational reform as continuous improvement (pp. 43-55). Thousand Oaks: Corwin Press.
Lucas, C. (1997). Teacher education in America: Reform agendas for the twenty-first century. New York: St. Martin’s Press.
Manning, J.B. & Kovach, J.A. (2003). The continuing challenges of excellence and equity. In B. Williams (Ed.), Closing the achievement gap: A vision for changing beliefs and practices (pp. 25-47). Alexandria, Virginia: ASCD Publications.
Manski, C.F. (1987). Academic ability, earnings, and the decision to become a teacher: Evidence from the National Longitudinal Study of the High School Class of 1972. In D. Wise (Ed.), Public sector payrolls. Chicago: University of Chicago Press.
Marsh, D., Hunt, L., & Lewis, A. (2003). An analysis of the use of data to increase student achievement in public schools: A cross-case analysis. Paper presented at the meeting of the American Educational Research Association, Chicago, IL.
Maxwell, J. (1996). Qualitative research design: An interactive approach.
Thousand Oaks: Sage.
Mayer, D.P., Mullens, J.E., & Moore, M.T. (2002). Monitoring school quality:
An indicators report. Education Statistics Quarterly, 3, 38-44.
McDonnell, L.M. & Elmore, R.F. (1987). Alternative policy instruments. CPRE Joint Note Series.
McTighe, J. & O’Connor, K. (2005). Seven practices for effective learning.
Educational Leadership, 63(3), 10-17.
Meier, D. (2002). Standardization versus standards. Phi Delta Kappan, 84(3), 190-198.
Miles, M. & Huberman, A. (1994). Qualitative data analysis. Thousand Oaks: Sage
Publications.
Miles, T. & Foley, E. (2005). From data to decisions: Lessons from school districts using data warehousing. Annenberg Institute for School Reform. Retrieved August 20, 2006, from http://www.annenberginstitute.org/publications/DataWarehousing.html
Murnane, R J. & Levy, F. (1996). Teaching the new basic skills. New York: The Free
Press.
Murnane, R. J. & Olsen, R.J. (1989). The effects of salaries and opportunity costs on
duration in teaching: Evidence from Michigan. Review of Economics and
Statistics, 71, 347-352.
National Center for Education Statistics, U.S. Department of Education. (2000). Highlights from the Third International Mathematics and Science Study-Repeat (TIMSS-R). Washington D.C.: U.S. Government Printing Office.
National Center for Education Statistics, U.S. Department of Education. (2002). The nation’s report card: An overview of NAEP. Retrieved December 20, 2002, from http://nces.ed.gov/nationsreportcard.
National Center for Education Statistics. (2005). The nation’s report card: State
profiles. Retrieved August 15, 2006, from
http://nces.ed.gov/nationsreportcard/states/profile.asp.
National Clearinghouse for Comprehensive School Reform. (n.d.). About CSR. Retrieved December 12, 2003, from http://www.goodschools.gwu.edu/aboutcsr/index.html.
National Commission on Excellence in Education. (1983, April). A nation at risk: The imperative for educational reform. Washington, D.C.: U.S. Government Printing Office.
No child left behind. (n.d.). Retrieved December 27, 2002, from
www.nochildleftbehind.gov/next/overview/overview.html.
Noguera, P.A. (2001). Racial politics and the elusive quest for excellence and equity in education. Motion Magazine. Retrieved December 19, 2002, from www.inmotionmagazine.com/er/pnrp3.html.
North Central Regional Educational Laboratory. (n.d.). Program research and publications. Retrieved December 12, 2003, from http://www.ncrel.org/csri/overview.htm.
North Carolina Board of Education. (n.d.). Student accountability standards: Effective strategies for teaching students. Retrieved December 19, 2002, from www.dpi.state.nc.us/student_promotion/strategies.htm
Northwest Regional Educational Laboratory. (1997, May 28). A regional depiction: Standards-based curriculum reform in the northwest. Retrieved December 28, 2002, from http://www.nwrel.org/scpd/ci/regdepiction/introduction.html.
Odden, A.R. (1999). Making better use of resources for educational reform. In D.D. Marsh (Ed.), Preparing our schools for the 21st century (pp. 143-164). Alexandria: ASCD Publications.
Ogbu, J.U. (1994). Overcoming racial barriers to equal access. In J.I. Goodlad & P. Keating (Eds.), Access to knowledge: The continuing agenda for our nation’s schools (pp. 59-90). New York: College Entrance Examination Board.
Orland, M.E. (1994). From the picket fence to the chain link fence: National goals and federal aid to the disadvantaged. In M.C. Wang & K.K. Wong (Eds.), Rethinking policy for at-risk students. Berkeley: McCutchan Publishing Company.
Patton, M. (1987). How to use qualitative methods in evaluation. Newbury Park:
Sage Publications.
Patton, M. (1990). Qualitative evaluation and research methods. Newbury Park:
Sage Publications.
Picus, L.O. (2000, May). Setting budget priorities. American School Board Journal, 1-7. Retrieved June 29, 2001, from http://www.asbj.com/2000/05/0500coverstory.html.
Popham, W.J. (2006). Assessment for learning: An endangered species? Educational
Leadership, 63(5), 82-83.
Reeves, D.B. (2000). Accountability in action: A blueprint for learning organizations. ALP Press.
Resnick, L.B. & Glennan, T.K. (2002). Leadership for learning: A theory of action for urban school districts. In A.T. Hightower et al. (Eds.), School districts and urban renewal. New York: Teachers College Press.
Rollie, D.L. (2002). Preface. In W.D. Hawley (Ed.), The keys to effective schools: Educational reform as continuous improvement (pp. vii-xix). Thousand Oaks: Corwin Press.
Rosenholtz, S.J. (1991). Teachers’ workplace: The social organization of schools. New York: Teachers College Press.
Sanders, W.L. & Horn, S.P. (1995). Educational assessment reassessed: The usefulness of standardized and alternative measures of student achievement as indicators for the assessment of educational outcomes. Education Policy Analysis Archives, 3(6). Retrieved December 28, 2002, from http://olam.ed.asu.edu/epaa/v3n6.html.
Schmoker, M. (1999). Results: The key to continuous school improvement (2nd ed.). Alexandria: Association for Supervision and Curriculum Development.
Schmoker, M. (2005). Here and now: Improved teaching and learning. In R. DuFour & R. Eaker (Eds.), On common ground: The power of professional learning communities. Bloomington: National Educational Service.
Schoolwise Press. (n.d.). Glossary of educational terms. Retrieved August 1, 2006, from www.schoolwisepress.com/smart/dict/dict2.html.
Schryver, D.A. (1998). Truth in spending: The cost of not educating our children. The Center for Education Reform. Retrieved December 20, 2002, from http://www.edreform.com/pubs/truth.htm.
Shepard, L.A. (2005). Linking formative assessment to scaffolding. Educational Leadership, 63(3), 66-71.
Shepard, L.A., Hammerness, K., Darling-Hammond, L., Rust, F., Snowden, J., Gordon,
E., Gutierrez, G. & Pacheco, A. (2005). Assessment. In L. Darling-Hammond &
J. Bransford (Eds.), Preparing teachers for a changing world (pp. 275-326). San
Francisco: Sage.
Sirotnik, K.A. (2002). Promoting responsible accountability in schools and
education. Phi Delta Kappan, 83(9), 662-673.
Steele, C.M. (1999a). Expert report of Claude M. Steele. The compelling need for diversity in higher education. Retrieved December 20, 2002, from http://www.umich.edu/~ire;/admissions/legal/expert/steele.html.
Steele, C.M. (1999b, August). Thin ice: “Stereotype threat” and black college students. Atlantic Monthly. Retrieved December 20, 2002, from http://www.theatlantic.com/issues/99aug/9908stereotype.htm.
STEP (Stanford Teacher Education Program). (2000). Standards-based reform.
Retrieved December 19, 2002, from: www.stanford.edu/~hakuta/caltexsbr.
Strauss, A. & Corbin, J. (1998). Basics of qualitative research. Thousand Oaks:
Sage.
Thomas, M.D. & Bainbridge, W.L. (2001). “All children can learn”: Facts and fallacies. Phi Delta Kappan, 82(9), 660-662.
Thompson, R. (2002). Design, implementation, and adequacy of using student performance data and the design’s link to state context for assessing student performance: A case study. Unpublished doctoral dissertation, University of Southern California, Los Angeles.
Thompson, S. (2001). The authentic standards movement and its evil twin. Phi Delta Kappan. Retrieved December 21, 2002, from http://www.pdkintl.org/kappan/ktho0101.htm.
Thornburgh, N. (2006, April 17). Dropout nation. Time, pp. 32-40.
Tomlinson, C.A. (2000). Reconcilable Differences? Standards-based teaching and
differentiation. Educational Leadership, 58(1), 6-11.
Tomlinson, C.A. (2002, December). Proficiency is not enough. Intercom.
Tucker, M.S. & Codding, J.B. (1999). Education and the demands of democracy in the next millennium. In D.D. Marsh (Ed.), Preparing our schools for the 21st century (pp. 25-44). Alexandria: ASCD Publications.
United States Department of Education. (2002, November). Education Department issues final regulation for No Child Left Behind Act. Retrieved December 27, 2002, from http://www.ed.gov/PressReleases/11-2002/11262002.html.
United Teacher LA, XXXIII. (2002, December 13). Education.
Weiss, I.R. (1994). A profile of science and mathematics education in the United States: 1993. Chapel Hill: Horizon Research, Inc.
Wikipedia. (n.d.). Retrieved August 10, 2006, from http://en.wikipedia.org
Wilcox, J. (2006). A generation to define a century. ASCD Education Update, 48(6),
8.
Williams, B. (1999). Diversity and education for the 21st century. In D.D. Marsh (Ed.), Preparing our schools for the 21st century (pp. 89-116). Alexandria: ASCD Publications.
Winfield, L.F. (1990). School competency testing reforms and student achievement: Exploring a national perspective. Educational Evaluation and Policy Analysis, 12(2), 157-173.
Winkler, D. (2002). Division in the ranks: Standardized testing draws lines between
new and veteran teachers. Phi Delta Kappan, 84(3), 219-225.
Wong, H.K. & Wong, R. (1998). The first days of school. Mt. View: Harry K. Wong
Publishing.
Wormser, M. (2003, January 5). Districts looking for a few good teachers. Los Angeles
Daily News. Retrieved July 10, 2003, from http://www.dailynews.com.
Yahoo. (n.d.). Dictionary. Retrieved August 10, 2006, from http://education.yahoo.com/reference/dictionary/entry/data.
APPENDIX A
Conceptual Framework A
Description of Data Use Policies and Strategies: The Design
This conceptual framework addresses research question 1: What is the district
design for using data regarding student performance, and how is that design linked
to the current and the emerging state context for assessing student performance?
____________________________________________________________________
1. Student Performance Assessed in the Context of Current and Emerging
Instruments
Current
• Are the district and school focusing on improving Stanford-9 scores and API
ratings as defined by the State?
• Are the district and school using the California Content Standards to improve
student performance?
• Are authentic assessments linked to state standards throughout the district,
school, and classrooms?
• Are norm-referenced assessments used?
• Are interventions with students linked to performance assessments?
• Is there awareness on the district and school levels of the Stanford-9, API scores,
etc?
• Is state-of-the-art technology used to address current and emerging student
assessments on the state, district, and school levels?
Emerging
• Is there awareness on the district and school levels of emerging state
assessments?
• Is there preparation for the High School Exit Exam (Sr. High only)?
• Is there preparation for the California English Language Development Test (CELDT)?
• Is there evidence of district planning to prepare students for emerging
assessments?
• Are there examples of the use of criterion-referenced tests on the district and
school levels?
• Is there measurement of student performance against international performance
standards?
2. Overview of the Elements of District Design of Data Use to Improve Student
Performance
• What types of data are collected in the district design of data use?
• What is the timeline for the receipt and use of data in the district?
• What assessment instruments is the district using to collect data?
• What is the rationale for selecting these instruments?
• What methods are used to analyze these data?
• What is the rationale for selecting these methods of analysis?
• Did reputable research guide the district’s design of data use?
• What methods does the district have for disseminating data?
• What training does the district provide for improving/modifying data?
• Are multiple stakeholders involved in the decision-making process within the
district?
• What steps do district and schools take to use data to improve student
performance?
• What outside influences affect policy and strategy design?
• What inside influences affect policy and strategy design?
• What role do fiscal and funding issues play in the design of the policies and strategies?
• What state legislation was considered in the design?
• What current data practices are included in the policy?
• What emerging data practices are included in the policy?
3. District Decisions and Rulings that Support Use of District Design
• What board rulings directly support the district design?
• To what degree are multiple stakeholders involved in establishing/ influencing
boards?
• Are research studies guiding the board support of the district design?
• Does the board consider State legislation?
• What is the timeline to implement the district design?
• Is there board mandated training for the district design of the use of data?
• What is the process for establishing district-wide high-performance goals?
• Is there money allotted to develop and implement the district design?
• Is technology designated to be used in the implementation of the design?
• Are outside sources used to implement the design?
4. Intended Results of Design Plans to Improve Student Performance (District,
School, and Classroom)
District
• What are the intended results of district design?
• Does the district want to use data to increase student performance, and is there an
increased focus on improving student performance?
• Was there a process created to establish district-wide high-performance goals?
• Does the district support schools’ efforts to improve student performance by
providing student data?
• Does the district use multiple measures to increase performance and guide
instruction?
• Does the district provide schools with the results of these multiple measures?
School
• What is the school doing to use data (directly or indirectly) to promote student
learning?
• Are there any schoolwide plans/ efforts to improve how data are gathered,
analyzed, and used?
• Does the school try to improve administrators’ and teachers’ ability to use data to increase student performance through training, etc.?
• Does the school try to improve administrators’ and teachers’ ability to analyze data effectively to guide instruction?
• Is the school trying to raise student performance expectations with teachers and
parents?
• Does the school administrator(s) support school-wide implementation of
standards-based curriculum to improve student performance?
• Does the school regularly inform the students, parents, and community of student
performance?
• Was the effectiveness of instructional programs evaluated?
• Were there any roadblocks or challenges to making these improvements?
Classroom
• Are teachers able to effectively analyze performance data and other forms of
data?
• Are schools training teachers to improve their ability to use data?
• Are administrators and teachers using standards and rubrics to improve student
performance? Are teachers executing high-level standards-based curriculum?
• Are classroom teachers using data to evaluate the effectiveness of instructional
programs?
• Are teachers placing a high priority on improving student performance?
218
• Are students expected to raise their ability and achieve high performance goals?
• Are classroom teachers regularly informing students, parents and community of
student learning performance and how they can assist student improvement?
5. Data Use Policy and Strategy Funding (District, and School Site)
• How is data use implementation funded?
• Does the district control the budgeting and funding of the programs and
strategies, or do schools have a role?
• Do all grade levels and subject areas receive equal shares of funding?
• How is the funding affecting curricular and instructional decisions?
• Where is additional funding needed?
219
APPENDIX B
Conceptual Framework B
Implementation of Data Use Policy and Strategy in Practice
This conceptual framework addresses research question 2: To what extent has the
district design actually been implemented at the district, school, and individual
teacher level?
____________________________________________________________________
1. Degree of Design Implementation (in the current and emerging contexts)
• To what degree does the school have knowledge of the district design?
• Were the district policies and strategies fully implemented at the school?
• How long did this implementation take? Was it phased in? What was the
timeline?
• Who was given the job of implementation at the site level?
• What factors were considered important in the implementation?
• How comfortable is the school with the district design?
• What outside influences affected implementation?
• What roadblocks were identified in implementation?
• Was technology used to implement the design?
• Were outside sources used to implement the design?
• Was there a timeline for the receipt and use of data from the district or state?
• What data assessment instruments are used at the school site?
• How supportive is the principal (and other site administrators) of the district
design?
• Are school inservices being offered on how to use data effectively?
• Can the school site link implementation of the district design with improved
student performance?
• Does the school have a method for communicating standards-based curriculum
and assessments to students, parents, and community?
• What data was collected on student performance?
• Are current data practices implemented?
• Are emerging data practices being implemented at the school?
• Is the district plan for implementing current data practices occurring (SAT-9, API, norm-referenced tests, etc.)?
• Is the district plan for implementing emerging data practices occurring (criterion-referenced tests, performance assessments, international comparisons, High School Exit Exam, etc.)?
220
2. Accountability for data use at district, school, and individual level
• How are the district policies and strategies monitored for effectiveness?
• Where is the data stored?
• How is the data disaggregated?
• What is done with the data?
• Who is accountable for the implementation of the district’s plan to improve
student achievement through the use of data at the school site?
• What responsibility do classroom teachers have for linking instructional practice
to data use?
• What responsibility do students have to be accountable for their achievement?
3. Improving Student Achievement through Implementation of Data Use
• Does the implementation of data use show any student achievement gains at the
district level and school site level?
• What assessments were used?
• What elements of data use are the most effective in bringing about student
achievement gains?
• How is data used in the classroom to improve student performance?
• Do classroom teachers keep track of students’ growth? How?
221
APPENDIX C
Teacher Questionnaire
Thank you for agreeing to participate in this study. We ensure complete
confidentiality of your valued contribution. As part of this study, your responses will
provide educators and policy-makers in California with much needed information.
Accordingly, please take time to answer each question carefully and completely by
circling the number that best corresponds to your view.
We would appreciate it if you would provide the following demographic data for
purposes of the study only. Again, complete confidentiality will be maintained.
Credential(s)
(Indicate if it is an emergency credential)
Years of Experience
Years in current position
Grade level(s) currently teaching
Courses Currently Teaching
(By department only)
Gender
Ethnicity
0 = Don’t Know   1 = Disagree Strongly   2 = Disagree Somewhat   3 = Agree Somewhat   4 = Agree Strongly
Degree of design implementation of current data practices
1. I am aware of the design for
using data.
0 1 2 3 4
2. I use data in my classes on a
weekly basis.
0 1 2 3 4
3. I collect data on a weekly
basis.
0 1 2 3 4
4. I use data to monitor student
progress.
0 1 2 3 4
5. I use data to guide my
instruction.
0 1 2 3 4
6. I use data to improve student
outcomes.
0 1 2 3 4
7. I collect data on test scores. 0 1 2 3 4
8. I collect data on class
participation.
0 1 2 3 4
9. I collect data on student
attitudes.
0 1 2 3 4
10. I collect data. 0 1 2 3 4
11. My department head
collects data.
0 1 2 3 4
12. I use data to compare the
past and present performance of
an individual student.
0 1 2 3 4
13. I use data to compare
students within my class.
0 1 2 3 4
14. I use data to compare
students across the school in the
same grade.
0 1 2 3 4
15. Reports are sent to parents
on a regular basis (about once a
month).
0 1 2 3 4
16. The school completes
reports of data implementation
for district databases.
0 1 2 3 4
Degree of design implementation of emerging state data practices
17. The school offers frequent
professional development to
raise awareness of new data
practices.
0 1 2 3 4
18. I have attended professional
development training in the past
six months related to new data
practices.
0 1 2 3 4
19. I frequently discuss new
data practices with teachers
who are about as experienced as
I.
0 1 2 3 4
20. I frequently discuss new
data practices with teachers
who are more or less
experienced than I (mentor/
mentee format).
0 1 2 3 4
21. I frequently discuss data
practices with teachers in
different disciplines from mine.
0 1 2 3 4
22. School administrators have
assisted me in implementing
new data practices.
0 1 2 3 4
23. School administrators have
monitored my utilization of new
data practices.
0 1 2 3 4
Accountability for data use at district, school, and individual level
24. I think that the state holds
the district accountable for data
utilization.
0 1 2 3 4
25. I think that the district
holds the school accountable for
data utilization.
0 1 2 3 4
26. My school holds me
accountable for data utilization.
0 1 2 3 4
27. I hold my students
accountable for improved
performance through the use of
data.
0 1 2 3 4
28. My salary is dependent
upon utilization of data
practices.
0 1 2 3 4
29. My promotion within the
school is dependent upon
utilization of data practices.
0 1 2 3 4
Improving student achievement through implementation of data use
30. I have seen student
achievement visibly improve
when data is used as a
benchmark for students to
reach.
0 1 2 3 4
31. I have seen student
achievement visibly improve
when I use data to inform my
teaching.
0 1 2 3 4
32. Intervention is more easily
employed through the
utilization of data.
0 1 2 3 4
33. Student motivation
increases when data is present
in my classroom.
0 1 2 3 4
34. Student motivation
increases through the
dissemination of data to
parents.
0 1 2 3 4
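Note on tabulation: because the questionnaire pairs a "Don't Know" option (0) with a four-point agreement scale (1-4), responses of 0 should be set aside when item means are computed rather than treated as the lowest level of agreement. The sketch below is not part of the original instrument; it shows one way responses could be tabulated, and the data structure and function names are illustrative assumptions.

# A minimal sketch for tabulating teacher questionnaire responses.
# Assumption: each response is a dict mapping item number -> circled value,
# where 0 = "Don't Know" and 1-4 = increasing agreement.

from statistics import mean

def summarize_item(responses, item):
    """Mean agreement for one item, excluding "Don't Know" (0) answers."""
    values = [r[item] for r in responses if r.get(item, 0) > 0]
    dont_know = sum(1 for r in responses if r.get(item) == 0)
    return {
        "n": len(values),
        "mean_agreement": round(mean(values), 2) if values else None,
        "dont_know": dont_know,
    }

# Hypothetical example: three teachers answering item 5
# ("I use data to guide my instruction").
teachers = [{5: 4}, {5: 3}, {5: 0}]
print(summarize_item(teachers, 5))   # {'n': 2, 'mean_agreement': 3.5, 'dont_know': 1}

Reporting the "Don't Know" count alongside the mean preserves information that averaging the 0s in would otherwise distort, and the same tallies could then be compared across the case-study schools.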
APPENDIX D
Stages of Concern
(Teachers)
Name (optional)
_________________________________________________________
In order to identify these data, please give us the last four digits of your
Social Security number: ______________________
This is a questionnaire about the district’s design to use student data to
improve student performance. The purpose of this questionnaire is to determine
what teachers who are using or thinking about using the district’s design to use data
to improve student learning are concerned about at various times during the
innovation adoption process. Many of the items on this questionnaire may appear to be of little or no relevance to you at this time. For the completely irrelevant items, please circle “0” on the scale. Other items will represent those
concerns you do have, in varying degrees of intensity, and should be marked higher
on the scale.
Please respond to the items in terms of your present concerns, or how you
feel about your involvement or potential involvement with the district’s design to use
data to improve student learning. We do not hold to any one definition of this
innovation, so please think of it in terms of your perception of what it involves.
Remember to respond to each item in terms of your present concerns about your
involvement or potential involvement with the district’s design to use data to
improve student learning.
Thank you for taking time to complete this questionnaire.
Please circle the number that best reflects your response to each statement based on
the following rating scale:
0 = Irrelevant   1-2 = Not true for me now   3-5 = Somewhat true for me now   6-7 = Very true for me now
____________________________________________________________________
1. I am concerned about student attitudes toward the district’s design to use student
data to improve student performance.
0 1 2 3 4 5 6 7
2. I now know of some other approaches that might work better than the district’s
design to use student data to improve student performance.
0 1 2 3 4 5 6 7
3. I don’t even know what the district’s design to use student data to improve
student performance is.
0 1 2 3 4 5 6 7
4. I am concerned about not having enough time to organize myself each day
because of the district’s design to use student data to improve student
performance.
0 1 2 3 4 5 6 7
5. I would like to help other faculty in their use of the district’s design to use
student data to improve student performance.
0 1 2 3 4 5 6 7
6. I have a very limited knowledge about the district’s design to use student data
to improve student performance.
0 1 2 3 4 5 6 7
7. I would like to know how the implementation of the district’s design to use
student data to improve student performance would affect my classroom, my
position at my school and my future professional status.
0 1 2 3 4 5 6 7
8. I am concerned about conflict between my interests and responsibilities with
respect to implementation of the district’s design to use student data to improve
student performance.
0 1 2 3 4 5 6 7
9. I am concerned about revising my use of the district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
10. I would like to develop working relationships with both our faculty and outside
faculty while implementing the district’s design to use student data to improve
student performance.
0 1 2 3 4 5 6 7
11. I am concerned about how the district’s design to use student data to improve
student performance affects students.
0 1 2 3 4 5 6 7
12. I am not concerned about the district’s design to use student data to improve
student performance.
0 1 2 3 4 5 6 7
13. I would like to know who will make the decisions in the district’s new design to
use student data to improve student performance.
0 1 2 3 4 5 6 7
14. I would like to discuss the possibility of using the district’s design to use
student data to improve student performance.
0 1 2 3 4 5 6 7
15. I would like to know what resources are available to assist us in implementing
the district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
16. I am concerned about my inability to manage all that is required by the district’s
design to use student data to improve student performance.
0 1 2 3 4 5 6 7
17. I would like to know how my teaching or administration is supposed to change
with the implementation of the district’s design to use student data to improve
student performance.
0 1 2 3 4 5 6 7
18. I would like to familiarize other departments or people with the progress of this new approach of using the district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
19. I am concerned about evaluating my impact on students in relation to the
district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
20. I would like to revise the district’s design to use student data to improve
student performance.
0 1 2 3 4 5 6 7
21. I am completely occupied with other things besides the district’s design to use
student data to improve student performance.
0 1 2 3 4 5 6 7
22. I would like to modify our use of the district’s design to use student data to
improve student performance.
0 1 2 3 4 5 6 7
23. Although I don’t know about the district’s design to use student data to
improve student performance, I am concerned about aspects of the district’s
design.
0 1 2 3 4 5 6 7
24. I would like to excite my students about their part in the district’s use of student
data to improve student performance.
0 1 2 3 4 5 6 7
25. I am concerned about time spent working with nonacademic problems related to
the district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
26. I would like to know what the use of the district’s design to use student data to improve student performance will require in the immediate future.
0 1 2 3 4 5 6 7
27. I would like to coordinate my efforts with others to maximize the effects of the
district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
28. I would like to have more information on time and energy commitments
required by the district’s design to use student data to improve student
performance.
0 1 2 3 4 5 6 7
29. I would like to know what other faculty are doing in the area of implementing the
district’s design to use student data to improve student performance.
0 1 2 3 4 5 6 7
30. At this time, I am not interested in learning about the district’s design to use
student data to improve student performance.
0 1 2 3 4 5 6 7
31. I would like to determine how to supplement, enhance, or replace the district’s
design to use student data to improve student performance.
0 1 2 3 4 5 6 7
32. I would like to use feedback from students to change the district’s design to use
student data to improve student performance.
0 1 2 3 4 5 6 7
33. I would like to know how my role will change when I am using the district’s
design to use student data to improve student performance.
0 1 2 3 4 5 6 7
34. Coordination of tasks and people in relation to the district’s design to use
student data to improve student performance is taking too much of my time.
0 1 2 3 4 5 6 7
35. I would like to know how the district’s design to use student data to improve
student performance is better than what we have now.
0 1 2 3 4 5 6 7
36. I am concerned about how the district’s design to use student data to improve
student performance affects students.
0 1 2 3 4 5 6 7
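Scoring note: in the published Stages of Concern Questionnaire (Hall & Hord), responses are scored by grouping items into seven stages of concern (0-6), five items per stage, and summing the 0-7 ratings for each stage. The minimal sketch below is not part of the original instrument; the item-to-stage key shown is the one published for the standard 35-item SoCQ and is only an assumption for this adapted 36-item version (item 36, which repeats item 11, is not scored).

# A minimal sketch of raw stage-score tallying for the adapted Stages of
# Concern questionnaire above. ASSUMPTION: the adaptation preserves the
# published 35-item SoCQ key; item 36 (a repeat of item 11) is ignored.

SOCQ_STAGE_KEY = {
    0: [3, 12, 21, 23, 30],   # Stage 0: Awareness
    1: [6, 14, 15, 26, 35],   # Stage 1: Informational
    2: [7, 13, 17, 28, 33],   # Stage 2: Personal
    3: [4, 8, 16, 25, 34],    # Stage 3: Management
    4: [1, 11, 19, 24, 32],   # Stage 4: Consequence
    5: [5, 10, 18, 27, 29],   # Stage 5: Collaboration
    6: [2, 9, 20, 22, 31],    # Stage 6: Refocusing
}

def stage_raw_scores(answers):
    """Sum the 0-7 ratings of the five items keyed to each stage."""
    return {stage: sum(answers[item] for item in items)
            for stage, items in SOCQ_STAGE_KEY.items()}

example = {item: 4 for item in range(1, 37)}   # hypothetical flat response set
scores = stage_raw_scores(example)             # -> 20 for every stage
peak_stage = max(scores, key=scores.get)       # highest-scoring stage

In practice, raw stage scores are converted to percentile scores against published SoCQ norms before a respondent's concern profile is interpreted.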
APPENDIX E
Situated Interviews
You will need to conduct six situated interviews with the same six teachers with
whom you conducted formal interviews. The purpose of these interviews is to
generate stories and examples about the way data are used (and not used) in the
school you are studying. These stories will be used to form vignettes that will help
illustrate the information gathered through some of the other instruments. Therefore,
these interviews should aim to be factual yet personal. The bases for these questions
are from research question 2 about implementation. Each section that follows comes
directly from the conceptual framework headings for this research question. The
following is a non-exclusive list of questions you can ask to elicit the desired types
of responses.
Degree of design implementation of current data practices
1. Tell me your impression of how aware staff members are of the district design for
using data.
• Talk about a teacher who exhibits a high level of awareness.
• Talk about a teacher who has a low level of awareness
2. Can you give me an example of how data has been used in your classroom?
• What kinds of data have been used?
• Where does this data filter down to you from (i.e., Who sees it before you? After
you?)
3. Tell me how data has been used to improve your instructional practices.
• What was the problem?
• What was the relevant data?
• What was the eventual solution/outcome of the problem?
Degree of design implementation of emerging state data practices
1. Tell me about any professional development training you’ve been involved in to
learn about emerging state practices.
• Who organized the training?
• Who attended the training?
• What policies were discussed?
• Was design or implementation the focus of the training?
2. Tell me about any professional development training you feel you would like to
have received but haven’t.
• Is there a cohort of teachers interested in this same topic?
3. What other existing avenues outside of formal training have you been involved
with?
• Teacher-teacher informal discussions
• Teacher-teacher formal meetings
• Beginning teacher to experienced teacher informal discussions/formal meetings
4. In what ways has the school provided time for you to analyze data?
• Time release
• Professional development/non-teaching days
Accountability for data use at district, school, and individual level
1. How has your accountability been affected by the use of data?
• Promotion within the school
• Salary-dependence
• Required reporting to subject coordinators
Improving student achievement through implementation of data use
1. Tell me about a student with low performance whose performance improved
through the use of data.
• Who formulated an intervention plan based on the data?
• How long did it take from the time the data was analyzed to the time
improvement was seen?
2. In what ways is data effective in motivating student achievement?
• “fear factor” (of retention, of parental dissatisfaction, etc.)
• in-class competition
• internal motivation to improve
APPENDIX F
Researcher Observation and Rating Form, Question #3
District support for standards-based instruction and assessment
Intended District Impact:
____________________________________________________________________
Observed Impact on Site:
____________________________________________________________________
Researcher’s rating on how effectively district design improves student performance
as demonstrated in standardized assessment results.
1= not effective 2= somewhat effective 3=unclear 4= effective 5= very effective
____________________________________________________________________
Researcher’s Rating: 1 2 3 4 5
____________________________________________________________________
Researcher Observation and Rating Form, Question #3
District and school accountability to standards-based curriculum
Student Performance Data forwarded to School:
____________________________________________________________________
How School is Utilizing Student Data to Impact Student Performance:
____________________________________________________________________
Researcher’s rating on the degree to which district-provided student data is used by the school.
1= not effective 2= somewhat effective 3=unclear 4= effective 5= very effective
____________________________________________________________________
Researcher’s Rating: 1 2 3 4 5
____________________________________________________________________
Researcher Observation and Rating Form, Question #3
Degree to which High Student Performance is aligned to Standards and
Communicated to Teachers, Students, and Parents:
Student Performance Data forwarded to School:
____________________________________________________________________
Researcher’s rating on how effectively high student performance is developed
throughout the school learning community.
1= not effective 2= somewhat effective 3=unclear 4= effective 5= very effective
____________________________________________________________________
Researcher’s Rating: 1 2 3 4 5
____________________________________________________________________
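Aggregation note: the rating scale on these forms embeds "3 = unclear" in the middle of an otherwise ordinal 1-5 effectiveness scale, so a plain average across forms would treat "unclear" as mid-range effectiveness. The minimal sketch below shows one way the three ratings for a site could be combined; the form labels are hypothetical, and treating "unclear" as missing is an analytic choice, not part of the study's design.

# A minimal sketch for combining the three researcher ratings above for one
# school site. Ratings of 3 ("unclear") are treated as missing rather than
# averaged in as mid-range effectiveness. Form labels are hypothetical.

from statistics import mean

def combine_ratings(ratings):
    """Average the 1-5 effectiveness ratings, skipping "unclear" (3)."""
    usable = [r for r in ratings.values() if r != 3]
    return {
        "mean_effectiveness": round(mean(usable), 2) if usable else None,
        "unclear_count": sum(1 for r in ratings.values() if r == 3),
    }

site = {  # hypothetical ratings for one case
    "district_support": 4,
    "school_data_use": 3,
    "alignment_communication": 5,
}
print(combine_ratings(site))   # {'mean_effectiveness': 4.5, 'unclear_count': 1}

Reporting the count of "unclear" ratings alongside the mean keeps visible the information the scale's midpoint would otherwise hide.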
ABSTRACT
There has been much written about the educational challenges facing the United States of America today. These challenges include, but are not limited to, educational inequity, low test scores, teacher competency and training, and achievement gaps. An era of standards-based reform and comprehensive school reform evolved as a reaction to these challenges. These reform measures have also brought increased assessment of, and accountability for, student progress. Key elements of school reform include effectively collecting and utilizing data: using data to monitor student achievement, holding districts, schools, and educators accountable for student progress, and making the teacher an integral part of the reform process.