MULTIPLE PERCEPTIONS OF TEACHERS WHO
USE DATA
by
Lorena Frances Rayor
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2010
Copyright 2010 Lorena Frances Rayor
Dedication
This dissertation is dedicated to my mother, Eda Lopez Avila, who passed away
in 1994 but was with me in spirit during this challenging and rewarding adventure. She
was a bold and courageous risk taker who embraced life and its challenges. She was my
mentor, my friend, and my hero. She believed in me.
Acknowledgments
I would like to take this opportunity to thank and acknowledge the many indi-
viduals who offered their support and encouragement in my scholarly pursuit at the
USC Rossier School of Education. First, I would like to thank the distinguished faculty
members who served on my dissertation committee: Chairperson Dr. Guilbert
Hentschke, along with Dr. Richard Brown and Dr. Amanda Datnow. Drs. Hentschke,
Brown, and Datnow were very generous with their time, expertise, and feedback while
maintaining a busy schedule of global travels, teaching loads, and research endeavors.
Dr. Hentschke deserves a special mention here because of his thoughtfulness and
understanding as he continued to provide his guidance and helped turn obstacles into
opportunities.
I am grateful to all of the administrators, teachers, and students from
the charter high school in this study who shared their time, knowledge, and per-
ceptions as participants in this research endeavor. Without their participation, there
would have been no study. Thanks to these educators and students, I was able to con-
duct my study.
Finally, I could not have completed this journey without my immediate family,
the most loving four-legged children in the world. Thanks to Trudie, Tucker, and Travis
for always providing unconditional love and encouragement when I needed it most. Of
course, I thank Thomas for coming into my life when I needed laughter and joy to get
me over that final hurdle to finish the race. I am grateful to Trudie, Tucker, Travis, and
Thomas for waiting patiently for their walks and keeping me company late into the
night and the early morning more times than I can recall. I “paws” for them.
Table of Contents
Dedication
Acknowledgments
List of Tables
Abstract
Chapter 1: Introduction
Statement of the Problem
Data Use for Instruction Intervention
Teacher Expertise in Using Student Data
Time for Data Analysis
Receiving Data on Time
Other Contributing Factors That Affect the Data-Based Process
Purpose of the Study
Chapter 2: Literature Review: Results-based Pressures
NCLB Mandates
The Impact of NCLB
Data-Driven Decision-Making Skills
Making Use of Resources
Mixed Message About Best Practices
Current State of Efforts to Implement Data-Driven Decision-Making Processes
Research Questions
Chapter 3: Methodology
Sample and Population
Background Information: Brio Charter High School
Data Collection Procedures
Data Analysis Procedures
Procedural Considerations
Limitations of the Study
Chapter 4: Findings
Research Questions and Underlying Themes
Research Questions
Common Goals and Other Similarities
Common Perceptions About Teachers Who Used Data
Common Perceptions of What “Use Data” Means
Variability in Perceptions
Conclusion
Chapter 5: Summary and Implications of Findings
Implications for Future Research
Implications for Policy and Practice
Student Representation
Professional Development
Conclusion
References
Appendices
Appendix A: Interview Questions for Administrators
Appendix B: Interview Questions for Teachers
Appendix C: Interview Questions for Students
Appendix D: Brio Charter High School: Graduation and College Entrance Requirements
Appendix E: Department Chair Meeting Agenda
Appendix F: School Site Council Meeting Agenda
Appendix G: Grade Team Meeting Agenda
List of Tables
Table 1: Types and Number of Interviewees (N = 20)
Table 2: Brio Charter High School (Grades 9-12): Enrollment by Grade, 2006-2007
Table 3: Brio Charter High School (Grades 9-12): Enrollment by Subgroup, 2006-2007
Abstract
There is currently no clarity, consistency, or consensus in the manner in which
U.S. K-12 schools go about implementing a
data-driven decision-making process. Simply stated, there is no clearly articulated
direction provided to school systems regarding policies and procedures for data analy-
sis. School systems are left to their own devices to develop, design, and implement their
own data-based approach. Furthermore, standards-based testing requirements and
accountability systems set in place by multiple federal, state, and local legislative
mandates require that school administrators and teachers assume the role of data ana-
lysts. Little consideration has been given for variability in teacher or administrator
preparation or consensus at all educational levels on what matters most: student
achievement or student learning (Wade, 2001). Equally important is how to implement a
data-based process for optimal effectiveness, because effective implementation is key to
improving student achievement. To that end, this study focused on the possibility that stakeholders at the
school site level, such as administrators, teachers, and students, might have varying
perceptions of teachers who used data to improve student achievement and that this
variability might have an impact on the effectiveness of implementing a data-driven
decision-making process.
The purpose of this case study was to analyze the perceptions of administrators,
teachers, and students with regard to teacher use of data to improve student
achievement. The method used for data collection consisted primarily of onsite inter-
views of administrators, teachers, and students. One of the major findings resulting
from this study was that administrators, teachers, and students shared common goals
and perceptions of teachers who used data. These administrators, teachers, and students
believed that using data involved a process of analyzing data, identifying weaknesses,
and modifying instruction to intervene. The behaviors involved in this process were
what differentiated a teacher who used data from a teacher who did not. A major as-
sumption underlying this study was that because administrators, teachers, and students
played different roles in the data-driven decision-making process, there
would be variability in their perceptions of teachers who used data. Consequently, this
variability would have an impact on the effectiveness of the process. Contrary to this
assumption, the data collected revealed that in general, the stakeholders in this case
study shared similar goals and perceptions related to data. Thus, there was no basis in
the data for analyzing variability among the stakeholders’ perceptions and, subse-
quently, no way to measure whether variability would have impacted the effectiveness
of the process.
CHAPTER 1
INTRODUCTION
Presently, educators as well as policymakers and parents are calling for schools
to focus on educational standards, continuous academic improvement, teacher quality,
and accountability. The driving force behind this educational thrust has come from the
No Child Left Behind (NCLB) Act of 2001, noteworthy because it is the first time since
the passage of the Elementary and Secondary Education Act in the 1960s that federal
education policy enacted by Congress has attempted to improve student achievement through
increased testing and accountability. In doing so, it signaled a major reform of elemen-
tary and secondary educational programs in the United States (Simpson, LaCava, &
Graner, 2004).
Four underlying principles of the NCLB Act are the following:
1. Increased accountability for student learning and achievement results, re-
quiring annual testing of all students in Grades 3 through 8 in reading and math, and the
hiring of highly qualified teachers;
2. Greater flexibility for parents in terms of school choice, particularly for
students attending Title I schools that are failing to meet state standards;
3. Greater control and flexibility for states, districts, and schools to decide how
best to use federal education funds when test results meet state targets; and
4. Increased federal funding to promote the President’s Reading First Initiative,
which is available to help children learn to read by the end of Grade 3 (Odland, 2006b).
With all the pressures, both federal and local, to effect a change in the academic
performance of all students, one course of action, according to Mason (2002), is for
educators to use data effectively to enable schools to learn more about their organiza-
tion, to identify strengths and weaknesses, to pinpoint areas of improvement, and to
help assess the effectiveness of programs and instructional practices.
“Since the effectiveness of schools is being measured by performance indica-
tors, it is not surprising that educators are now using data for improvement” (Datnow,
Park, & Wohlstetter, 2007, p. 10). However, using data for improvement has tremen-
dous implications for teachers—first, because of the focus on performance indicators
and, second, because of the importance of not just collecting data but of processing and
interpreting data to intervene in the learning process. The assumption underlying the
tenets of NCLB is that educators will know how to analyze, interpret, and use data to
make informed decisions with regard to both student learning and professional develop-
ment. According to Newmann, King, and Rigdon (1997),
the assumption is that teachers will try harder and become more effective in
meeting goals for student performance when goals are clear, when information
on the degree of success is available, and when there are real incentives to meet
goals. (p. 43)
It is implicitly assumed that for teachers to improve student achievement, all they have
to do is collect student performance data, analyze the data, try harder, and then they will
become effective in meeting their goals. These assumptions are themselves open for
discussion, and in the literature they have been questioned and are still unverified, as
will be discussed later in this chapter.
The challenge for educators is to not only collect student performance informa-
tion but also to know how to use it effectively to guide instructional improvement
(Love, 2004). According to Raach (2000), collecting data does not necessarily lead to
making sound educational decisions in all areas of the school organization. Rather,
stakeholders such as teachers need to learn how to make data-driven decisions to
improve instruction, because in education, data-driven decision making—the process of
making decisions based on data—should serve as a means of making instructional
modifications and improving student learning.
This data-driven decision-making process, when implemented with fidelity,
can provide a continuous and timely feedback loop for those
closest to the learning—students and teachers (Kinder, 2000). According to Killion and
Bellamy (2000) and Raach (2000), teachers should be able to use the data effectively to
better understand their students by identifying their strengths and weaknesses, collect-
ing data, analyzing results, setting priorities and goals, and developing strategies to
improve student learning. However, there is so much variability among the stakeholders
as to what constitutes effective use of data by teachers that the lack of consensus and
ambiguity make it sometimes challenging for school organizations to institute data-
based approaches for reform.
Educators as well as researchers are recommending that school organizations
use a data-based approach to school improvement (Thornton & Perreault, 2002). Ac-
cording to Streifer (2000), data-based decision making provides teachers with the
“strategies most likely to succeed given specialized contexts of the learning
environment” (p. 66). McClean (1995) supported that notion by asserting that the effec-
tive implementation of a data-based approach, complete with collection and analysis,
will lead to educational improvement. Canada (2001), president of the American Asso-
ciation of School Administrators, suggested that by studying the data carefully, instruc-
tional leaders were often able to help the “individual students that contribute to those
numbers” (p. 44) and need to improve. Many researchers have discussed the importance
of data-based decision making (e.g., Bernhardt, 1998; Creighton, 2001; Holcomb, 1999;
Sparks, 2000) and espouse similar beliefs that such a process represents a noteworthy
opportunity for educational leaders to improve schools.
Statement of the Problem
Administrators and teachers today understand the importance of using data to
make sound decisions regarding teaching and learning to improve student performance.
The challenge lies in developing, establishing, and maintaining a school process in
which teachers share the same commitment to use data effectively to make instructional
modifications in order to meet the needs of students. According to the literature, some
teachers are using data effectively to intervene in the learning process. In fact, these
data-wise teachers collect, analyze, and interpret data. Then, based on this information,
they take the final step in the intervention process: mapping curriculum standards to
instruction. This final step restarts the cycle, looping back to data collection. Once the
teacher has implemented the intervention—
the modification to the instruction—the intervention is assessed for effectiveness, and
the data loop begins.
However, there are differing perceptions among principals and teach-
ers as to what constitutes a data-wise teacher. Jones and Egley (2006), based on their
survey of both elementary teachers and principals across Florida, found that some ad-
ministrators were primarily concerned with having teachers analyze data to increase test
scores, while teachers, on the other hand, took the test results more personally and
focused on student learning, development, and motivation. In this case, both parties
were analyzing the test data, but their lens for looking at the data was different, al-
though their ultimate goal might be perceived to be the same: namely, to improve
student achievement. According to Wade (2001), one way to address this dissimilarity
is for principals and other administrators to support the data-driven decision-
making process by providing vision and leadership, which, of course, will vary based
on leadership styles. Their impact will also vary based on factors such as the learn-
ing styles of teachers.
The U.S. Department of Education (USDOE), Office of Educational Research
and Improvement (OERI; 2002), has highly recommended that data become the foun-
dation and driving force behind the school improvement process. This directive,
although it may be lacking in specificity and appears to be similar to other state legisla-
tive actions as well, does underscore for school administrators that what is important is
that they support the data-based process with the necessary building blocks of resources
and time. Even when administrators follow these recommendations, this directive
does not explicitly define the administrators’ data-wise behaviors or those of the teach-
ers in this process. That lack of specificity and open-ended charge are not necessarily
negative. In fact, they could be perceived both as a challenge for data-based implementa-
tion and as an opportunity to implement this process with customized effectiveness.
In this era of high-stakes testing and accountability, teachers are getting mixed
messages about what constitutes a data-wise teacher. While some teachers are victims
of reprisal, others share in professional engagement (USDOE, OERI, 2002; Wade,
2001). Evident here are the two extremes of the teacher experience continuum:
(a) teachers who have received data analysis training in college and/or have had profes-
sional development opportunities to develop data analysis skills during their teacher
service and (b) teachers who have not received any data analysis training in college
and/or have not been afforded the professional development opportunities of data
analysis during their teaching career. This whole continuum is a mere reflection of the
variability that exists in schools when a data-based approach is implemented and
teachers, as human resources, are involved. According to Wade and the USDOE, OERI,
administrators will need to assure skeptical teachers that this process is being under-
taken not to point fingers, but rather to give both teachers and students the necessary
tools to succeed and improve.
Data Use for Instruction Intervention
One of the challenges that exists in some school systems today is associated
with teachers who do not trust the process. They may trust one another as colleagues
and peers, but they are suspicious of a process in which they fear the data related to
student achievement will be used to evaluate their performance. Without a clear defini-
tion of the criteria for evaluating a teacher’s performance, teachers are fearful and mis-
trusting of the school system. The challenge, therefore, is for school systems that
embrace data-driven decision making to build trust among teachers as well as admin-
istrators and to build consensus around their mutually agreed-upon goals for student
academic improvement and the criteria for teacher evaluation. According to Ingram,
Louis, and Schroeder (2004), the types of data that teachers use to make decisions about
their effectiveness do not necessarily match the types of data mandated by external
accountability policies. At times, this misalignment contributes to the lack of consensus
around the meaning of data-wise teacher characteristics and behaviors. That is one
reason why variability around the meaning of data-wise teacher characteristics and
behaviors may be inevitable; however, it does not preclude school educators from
seeking to implement a process and developing consensus around goals, terms, and
procedures.
An example of this misalignment of what constitutes teacher effectiveness can
be found in the study by Ingram et al. (2004), where teachers used achievement indica-
tors in conjunction with other types of indicators to assess their effectiveness. When
asked how teacher effectiveness should be defined and measured, one teacher re-
sponded as follows:
By the performance of the kids, on the curriculum. Everyone’s pushing towards
the test results. I guess that’s valid to a degree, but there are other things as well.
It’s their honesty and integrity in these kids. Are they treating each other and the
staff with respect? That’s part of it as well. Can you socialize in ways that are
positive rather than negative, and those things aren’t measured. Sometimes I
think that’s where the big push should really be, and it’s not. (p. 1270)
What this means is that teachers are using test scores (achievement indicators) and other
student-related indicators to assess their effectiveness in a holistic manner. They con-
sider other types of indicators to be as important and as critical to the learning process
as performance indicators. In the study by Ingram et al., teachers believed that tests do
not tell the whole education story. According to Mintrop (2004), for many teachers,
severe sanctions based on a lack of statistically significant performance effects appear
impractical and not credible.
In some cases, teachers have described situations in which they felt the data
were misused or not used by others (Ingram et al., 2004). This perception of data use
and misuse has only fueled the ongoing debate over the variability and the
lack of consensus regarding the data-based decision-making process and what teachers
perceive their roles to be. These situations illustrate how teachers perceive that “data
could be used as a tool to force a decision that had already been made rather than as
information to shape a decision” (p. 1276).
Misalignment also surfaced in the following study, in which teachers and
administrators held incongruent perspectives as to what
constitutes teacher effectiveness. According to Ingram et al. (2004), teachers judged
teacher effectiveness and school effectiveness by using both student achievement data
and other indicators. According to teachers, there were “limitations of achievement
data” (p. 1273) that necessitated the use of other indicators such as teacher assessments
and course grades, while administrators relied more heavily on student achievement
data to evaluate teacher effectiveness as well as school effectiveness. The implication
of these findings is that local stakeholders—teachers and administrators—
had not reached an agreement about what data to use to measure teacher effectiveness
and how to measure their school’s achievement goals. To that end, administrators
viewed achievement data as a reflection of teacher effectiveness, whereas teachers took
a more holistic approach in which achievement data were one of many factors (e.g.,
learning disabilities, English proficiency, socioeconomic status [SES], etc.) used to
measure teacher effectiveness. This was just another example of the lack of consensus
that exists between two critical stakeholders in the data-driven decision-making pro-
cess.
Given that there is no agreement on how to measure teacher effectiveness and
school effectiveness when discussing the improvement of student achievement, it
comes as no surprise that this inherent lack of consensus contributes to the difficulty of
achieving completely successful implementation and measuring its effectiveness
(Santo, 2005). In some cases, as in the aforementioned study, lack of trust and fear of
reprisals have hindered but not extinguished the implementation of varied types of
school reform efforts using the data-based approach.
Teacher Expertise in Using Student Data
Another theme related to the data-driven decision-making process that resonated
throughout the research literature was a lack of consistency in the preparation of
teachers and in the professional development in data use provided by school districts.
In fact, not only was there a lack of consistency, but
this inconsistency was also compounded by the lack of teacher efficacy—the lack of
belief in their ability to meet the diverse needs of their students (Reeves & Burt, 2006).
This finding by Reeves and Burt can be used to help explain the variability in the imple-
mentation of data-based approaches. Perhaps much of the variability in the implementa-
tion and the effects of the data-based approaches can be attributed to variability in
applied material resources and human resources (e.g., teacher capabilities). Hence, this
variability, although it is illustrative of the ambiguity that exists at the teacher level,
also underscores the notion that varied levels of teacher preparation and participation in
professional development programs lead to varied teacher behaviors and varied percep-
tions of teacher data-wiseness by administrators.
According to Reeves and Burt (2006), one factor that contributes to variability is
teacher preparation and knowledge in the data-driven decision-making process. Here is
a common frustration shared by some teachers:
Our teachers need to be trained. Our principals need to be trained. I know I’m
not telling you anything you don’t already know . . . [I]n our principal meetings
. . . we were talking about data and adjusting instruction. . . . I look around and I
know there are people in the room who buy into it [data analysis], but they are
not quite sure how to do it. (p. 67)
This is reflective of a knowledge gap, not a motivation gap. The teachers in this study
believed in data analysis but lacked the knowledge of how to apply student data to
improve their instructional practices. Cooley et al. (2006) also referred to this ability to
apply student data to improve instructional practices as a form of “accountability liter-
acy” (p. 11). What is more, teachers as well as administrators have reported having little
common knowledge regarding what data are important and what the data mean. To
further exacerbate matters, when overwhelmed by massive amounts of data, teachers
and administrators without accountability literacy are at a loss about how to use data
from multiple data streams to improve student achievement. This lack of uniform
levels of preparation of both teachers and administrators contributes to the variability
that is evident from school to school and from district to district.
Another variable related to teacher expertise is the variation among teachers in
their self-efficacy regarding data use. A lack of self-efficacy can cause educational
angst, because a low level of teacher efficacy can be an underlying barrier to student
improvement, thereby affecting expectancy outcomes as well as teacher beliefs about
student learning (Bandura, 1977). In their research, Ingram et al. (2004) were told by
teachers that their responsibility was to teach the curriculum imposed by state standards
or by the school district or by local agreement. “These teachers believed that if they
delivered the curriculum, then it’s up to the students to do the learning” (p. 1278). This
statement is notable because it reflects a cultural value of student attribution that is
contrary to the teacher accountability assumptions underlying NCLB. In this study,
teachers believed that their job was to teach and their students’ job was to learn. That
teacher belief, coupled with teachers’ perception of the federal and state mandates, also
contributes to the variability and the lack of consensus regarding goals in the drive to
implement a consistent data-based approach for academic improvement.
According to Mintrop (2004), as schools and teachers have been forced to
assume responsibility for critical conditions of student learning over which they feel
they lack control, some teachers have rejected accountability altogether (Ingram et
al., 2004). These problems of accepting responsibility and attribution make it less likely
that schools will see improved student performance simply by having a high-stakes
accountability system imposed on them (Mintrop). The qualities and characteristics of
the teachers who use data are critical components of the data-driven system; and the
diverse needs, interests, and abilities of the teachers, administrators, and students drive
the process. Clearly, with the lack of appropriate guidance and direction from federal,
state, and local policymakers, the pressures to implement a data-based approach for
student improvement have contributed to the variability in implementing the process.
This variability comes from school systems as they attempt to implement the process
based on the human resources (the teachers) at their respective sites with their collective
expertise in conjunction with the diverse needs of the students.
Time for Data Analysis
The use and allocation of teacher work time constitute another factor that affects
the data-driven decision-making process. According to the research, the challenge for administra-
tors is to set aside time for teachers to collaborate in the collection and use of data.
According to Reeves and Burt (2006), “teachers do not have time to analyze data or to
collaborate with one another regarding the meaning and use of data” (p. 69). Ingram et
al. (2004) also found in their study that teachers do not have time to collect data and
analyze information to make decisions about student learning. Teachers see data and
data collection as a time-consuming component of the process that is competing with
their “real job,” which is teaching students. In other words, “there is not sufficient time
for teachers to meet and analyze the data. Teachers are busy and often do not want to do
more than what they are contractually required to do” (Reeves & Burt, p. 69). Hence,
there is little or no collaboration time for teachers to get the data, “mull” over it, discuss
strategies, and think about how they can teach differently and share with one another
what they have done. According to Brown and Duguid (2000), there is no time for the
social process of creating and sharing knowledge through conversation. Put another
way, the synergy that comes from collective discourse is hindered by the lack of time to
mull over ideas for intervening in the learning process.
Receiving Data on Time
Another challenge related to time is making meaningful data available to teach-
ers and administrators in a timely manner. Teachers as well as administrators identified
that getting the data back (following testing) in a timely fashion was a challenge
(Reeves & Burt, 2006). Often they were forced to write their school improvement plans
based on incomplete data or old data (up to a year old). Consequently, they were always
a year behind in their ability to use the data effectively to make teaching and learning
modifications (Reeves & Burt). With data that old, the value of the entire exercise
dropped significantly in the eyes of many teachers. This is a condition of the data-based
process that critically impacts teacher collaboration. Without timely data and time for
collaboration, teachers cannot gather and usefully interpret data for decision making
(Ingram et al., 2004).
Other Contributing Factors That Affect
the Data-Based Process
Three additional components determine whether the intended benefits of data-
based decision-making processes are realized. According to Thornton and Perreault (2002), the three
components necessary for a school system to successfully implement a data-based
approach are “a shared vision, a trust-filled environment, and a principal who under-
stands data” (p. 87). Hallinger, Murphy, and Mesa (1999) found that when teachers and
administrators embraced a vision for school improvement and they (teachers) were
supported with resources and time by the administrators, then they (teachers) were less
confused about where they were going, thus making it easier for them to identify and
focus on the steps that should be taken to reach their goals.
According to Thornton and Perreault (2002), a trust-filled environment reduces
the fear of failure—a common response to implementing a data-based approach. Other
common emotional responses to implementing a data-based decision-making process
identified by Lachat, Williams, and Smith (2006) include reluctance and defensiveness.
Ingram et al. (2004) found that these types of responses will impede the use of data and
serve only to have teachers disassociate their own performance from that of their stu-
dents, thus leading them to overlook otherwise useful data.
Thornton and Perreault’s (2002) third component for successful implementation
of a data-based approach was having a principal who understands data and has devel-
oped the ability to collect and interpret data. According to these researchers, having this
ability would enable the principal to analyze the data and communicate the next steps in
the data-driven decision-making process to his/her staff. To that end, Reeves and Burt
(2006) maintained that a school without a principal who can shape this process will see
little progress in connecting data use with classroom instructional decisions. Arguably,
some schools may have a principal or a leadership team led by the principal that shapes
this process, and this condition can likewise contribute to the variability that exists in
school systems that implement a data-based approach for school improvement.
With the federal, state, and local pressures of accountability and the need for
school systems to improve student achievement, school organizations are looking at the
data-driven decision-making process as a method to effectively intervene in the learning
process. To some administrators, that translates to an increase in their schools’ aca-
demic achievement scores. To some teachers, that means improved student learning,
development, and motivation. To some students, that means testing and report card
grades. In this case, all three stakeholders share a common goal to increase student
achievement by implementing a data-based approach. However, the lens through which
they view it colors their perspective of that process. In other words, stakeholders have a different
view of the same process from their own perspective, because their role in the process is
different and they are affected by that process in different ways.
At the core of the data-based process is the teacher who is responsible for inter-
vening in the learning process. For numerous reasons, as reviewed in chapter 2, there is
variability throughout the data-based approach from school system to school system,
from school to school, within a school system, and from teacher to teacher within a
school. Variability exists throughout the data-based process and has important implica-
tions for students and administrators: students are affected by teachers using data to
intervene in the learning process, and administrators are affected by the outcomes of
those interventions for student achievement. Hence, it is the teacher whose behaviors
in the data-based process have significant implications for the implementation and
effectiveness of the process.
Purpose of the Study
The purpose of this study was to examine the perceptions held by administra-
tors, teachers, and students of teachers who used data to improve student achievement
and to uncover what consistency, if any, existed in these perceptions among the stakeholders.
According to the literature, variability exists in the way that data are used, teachers are
prepared, and time is allotted for analysis. These factors may mitigate the effectiveness
of a data-based approach at this school site. Given that the school site chosen for this
case study was implementing a data-based approach to improve teaching and learning,
did administrators, teachers, and students share a common goal or goals? Did they share
a common perception of teachers who used data? Did they share a common perception
of what using data means; or, given the inherent differences in their roles in the process,
was there variability in their perceptions, and did this variability affect the effectiveness
of the process?
CHAPTER 2
LITERATURE REVIEW: RESULTS-BASED PRESSURES
This chapter will explore the variability that arises from multiple sources at dif-
ferent legislative levels that impact the testing and accountability of student learning at
the school site. More specifically, there is an underlying assumption in the NCLB Act
of 2001 that teachers, just by virtue of implementing the mandates of this act, will use
data effectively to improve student learning. There is a presumption here of teachers
being data-wise. According to Killion and Bellamy (2000) and Raach (2000), a data-
wise teacher should be able to use data effectively to better understand students by iden-
tifying their strengths and weaknesses, collecting data, analyzing results, setting priori-
ties and goals, and developing strategies to improve student learning. However, because
of the lack of uniform guidance and direction from stakeholders at the federal, state, and
even the local levels, the impact of school reform using a data-based approach has been
varied.
NCLB Mandates
The landmark NCLB Act of 2001, signed by President George W. Bush in January 2002, is
potentially the most significant educational initiative to have been enacted in decades,
because it is the first time that Congress has attempted to improve student achievement
through increased testing and accountability. In so doing, it has had a significant impact
on elementary and secondary educational programs (Simpson et al., 2004). The NCLB
Act of 2001 has four overarching principles related to student achievement:
1. Increased accountability for student learning, achievement results, and
highly qualified teachers;
2. Greater flexibility in school choice for those parents with students in Title I
schools that are failing to meet standards;
3. Greater control and flexibility of federal education funds for schools that
meet or exceed achievement targets; and
4. Increased federal funding to promote the President’s Reading First Initiative
(Odland, 2006b).
In other words, the goal of NCLB (2001) “is to ensure that all children have a
fair, equal, and significant opportunity to obtain a high-quality education, and reach, at
a minimum, proficiency on challenging state academic achievement standards and state
academic assessments” (§1001).
To the extent that NCLB mandates that all students demonstrate adequate yearly
progress, it serves as the most rigorous standards-based accountability initiative enacted
for reforming schools in decades (Albrecht & Joles, 2003; Center on Education Policy,
2003). Under the guidelines of NCLB, schools that perform well on high-stakes assess-
ments may receive public recognition and financial rewards, while schools that have
students who perform poorly could receive sanctions and be at risk of state takeover. In
their research, Hardman and Mulder (2003) found that the NCLB Act expands the role
of the federal government “from ‘assisting states in setting standards and improving
local performance,’ to assigning fiscal sanctions and corrective action for both states
and schools that fail to meet set criteria” (pp. 5-6). Because of NCLB, the federal
government is much more actively involved in public elementary and secondary educa-
tion than in past decades. For example, the USDOE must now approve the testing
programs used by states to carry out NCLB, as well as the accountability systems that
determine the rules for how schools meet their annual target (Jennings & Rentner,
2006).
The Impact of NCLB
Although NCLB has led to higher academic standards for school districts and
schools across the nation and to improvement in student achievement (Odland, 2006b,
p. 32), serious problems have surfaced as a result of the expectations set for every child,
accompanied by the “one-size-fits-all” mentality of “teaching to the test” to meet aca-
demic targets, with little or no room for teachers to provide developmentally
appropriate instruction that meets the needs of all students. This one-size-fits-all men-
tality of teaching to the test is just one example that illustrates the mixed messages that
teachers receive. From the federal mandate comes the directive to improve student
learning, and from the school site administrator comes the directive to raise the aca-
demic achievement scores of the students or risk sanctions and corrective action. Teach-
ers are confused. In some cases, teachers are required to follow scripted lesson plans
that do not allow them the flexibility they need to teach students and to develop their
individual potential. Some of these problems relating to what is appropriate instruction
can be attributed to poor implementation of the data process and too much emphasis on
testing and reporting provisions (Jennings & Rentner, 2006).
However, a growing number of state legislators, school administrators, and
teachers are calling into question the mandates of NCLB because they believe that the
act is actually hurting efforts to improve the quality of public education and close the
achievement gap between students. Of particular concern to many teachers and school
administrators are the sanctions that can be imposed on school districts and schools for
failing to meet annual progress measures, as well as goals for students in reading and
math. These educators are also opposed to the idea of linking standardized test results
to the measurement of a school’s effectiveness and its ability to garner federal support
for public education (Odland, 2006b). The use of standardized test results to measure a
school’s success is not viewed among these stakeholders as using testing data effec-
tively as a means of improving student learning. In fact, there is a lack of consensus in
the aforementioned literature on how to use the testing data to improve instruction or
sanction schools.
As the national educational blueprint for increasing test scores incrementally so
that all students are proficient in reading and math by 2014, NCLB defined accountabil-
ity as the degree to which school systems would meet the required targets of student
achievement (Booher-Jennings, 2006). As a result of this mandated blueprint, the issues
of accountability and standardized testing have had both intended and unintended con-
sequences on school districts and schools due to the way in which test-based account-
ability systems are aimed at directing the behavior of teachers as it pertains to the
improvement of student achievement.
One of the unintended consequences of NCLB has been the poor implementa-
tion of the act, as evidenced by the overemphasis on scores at the expense of educational
benefit for all students. In other words, the administrator’s overemphasis on testing
results has led some teachers to focus their attention on those students who will elevate
the school’s academic index at the expense of providing educational benefit to all
students, regardless of their proficiency range (i.e., basic, below basic, far below basic,
etc.; Booher-Jennings, 2006). This act also has, by its very design, set up incentives
such that some teachers believe that they are rewarded for the percentage of students
who perform at a given level on standardized tests and not the percentage who improve.
According to Booher-Jennings, data can be used to improve student achievement while
targeting some students at the expense of others. For example, a teacher related the fol-
lowing in an interview with Booher-Jennings:
I guess there’s supposed to be remediation for anything below 55%, but you
have to figure out whom to focus on in class, and I definitely focus more atten-
tion on the bubble kids. If you look at her score [pointing to a student’s score on
her class test-score summary sheet], she’s got a 25%. What’s the point in trying
to get her to grade level? It would take two years to get her to pass the test, so
there’s really no hope for her. . . . I feel like we might as well focus on the ones
that there’s hope for. (p. 3)
That being the case, making data-driven decisions—identifying the needs of
individual students and introducing interventions to remediate the deficits—can take on
a new meaning in which schools slip between evaluating the individual needs of every
student and deciding which students to target in order to maximize the school’s per-
formance quickly (Booher-Jennings, 2006). That was evident at Beck Elementary
School, an urban elementary school in Texas, where teachers quickly learned to
recognize the difference between their administrators’ theoretical proclamations and
their expectations based on test results. The bottom line was that teachers understood
that in this numbers game, what mattered most to their administrator was the percentage of
students who passed the test. Here again, the teachers have received a mixed message in
which they are called upon not only to improve student achievement but also to increase
student achievement scores and meet state targets. As a result, because of the persistent
pressure to increase test scores, teachers have resorted to diverting their school’s re-
sources (e.g., additional time in class; enrichment sessions with the literacy teacher;
after-school, Saturday, and summer tutoring) to students who were on the threshold of
passing the test, the bubble kids. Thus, these bubble kids received valuable school
resources at the expense of students who were scoring below or far below the state
proficiency level (Booher-Jennings). Similarly, teachers in this study were directed to
improve student achievement as well as increase student achievement scores and meet
or exceed state testing targets.
Another broader consequence that has undermined the promise of NCLB is the
overemphasis on testing and reporting provisions at the expense of sound instructional
practices. In some cases, the notion of a teacher being data-wise has been transformed
into the concept of a teacher who merely uses data. According to Lewis (2005), state-level
testing requirements have doubled and states must report annual progress based on the
data they collect. As a result of this increase in state-level testing, some states have
abandoned their progressive assessment systems or opted to lower their standards in
order to comply with NCLB’s lowest-common-denominator policies. These policies of
curriculum push-out, lower standards, and emphasis on basic skills have had a sizable
impact on the curriculum and instruction of school districts and schools. In some cases,
the emphasis on basic skills resulted in narrowing the focus of the content taught to
students. Consequently, these policies add to a teacher’s angst to “teach to the test” at
the expense of teaching content depth and breadth. To that end, the emphasis on student
achievement, as reflected in standardized test scores, is evident in the literature as well
as this study.
As a matter of fact, according to Mathis (2003), statewide achievement tests do
not and cannot measure the vast number of curriculum standards set forth by states and
school districts. With tests designed to measure those things that are easily measurable
in an efficient and economical way, schools are now focusing on lower order thinking
skills. In an effort to face the increasing demands to improve test scores and avoid a
failing school label, schools will sprinkle the curriculum with some higher order think-
ing skills (Mathis). Given this emphasis on short-term test results and the fear of puni-
tive sanctions, the nation’s curriculum is narrowing, the level of expectations is being
lowered, and teachers are being put in a unique position of high-stakes decision making
and instructional impact with the sole purpose of meeting mandated targets (Crawford,
2004; Mathis).
According to Crawford (2004) and Gottlieb (2003), testing experts agree that
“achievement tests of questionable validity and reliability—or, indeed, a single test of
any kind—should not be used for high-stakes decision-making” (Crawford, p. 5). Of
particular concern are decisions related to individual students, such as grade promotion
or graduation from high school. In California, for example, high school administrators
and teachers are having to reconcile the differences between the results of a high-stakes
achievement test, such as the California High School Exit Exam (CAHSEE), and 4
years of high school course credits. Because of the policy of punishing schools based on
unreliable scores on a single test, high stakes for schools have become in many ways
high stakes for children. Schools threatened by labels and sanctions have responded by
tailoring instruction accordingly. Consequently, teachers know that their careers can be
jeopardized by the results on a single round of achievement tests that cover just two
subjects, language arts and mathematics. Some of these teachers are wrestling with
high-stakes testing and accountability at the expense of student learning. To that end,
some schools have interpreted this accountability policy in a way that reduces education
to language arts, mathematics, and test preparation (Crawford). According to Jennings
and Rentner (2006), 71% of school districts are reducing the time spent on other sub-
jects in elementary schools. One of the subjects most affected by this reduction in time
is social studies; physical education is the least affected. For teachers with varying
degrees of professional preparation, teaching in this era of standards-based accountabil-
ity has created uncertainty in not only what core content areas to teach but also what
other content areas or subjects to exclude.
There is no denying that NCLB has raised the educational bar for student
achievement as schools and teachers have been forced to assume responsibility for
critical conditions of student learning (Mintrop, 2004). Policymakers, educators, and
parents have all agreed that student achievement must be quantified and measured
regularly in order to ensure that public school students are learning and meeting
minimum standards of proficiency. However, districts and schools continue to struggle
with problems related to poor implementation of the data-based process and an overem-
phasis on testing. Although these problems may impede or slow down the school
improvement process, it is imperative not to lose sight of the fact that teachers who use
data can be the key agents and play an important and vital role in driving this data-
driven process. That is why examining the ambiguity that exists in the data-based
approach relative to teachers who use data effectively is so critical to the school reform
effort.
Data-Driven Decision-Making Skills
The underlying assumption driving NCLB is that using data will lead to school
improvement (Heritage & Chen, 2005). However, in order to use data effectively, edu-
cators must have the skills and organizational procedures necessary to make effective
decisions—decisions based on accurate information (Johnson, 1997). Without these
skills, administrators and teachers will be ill equipped to use data effectively for school
improvement. Yet, at present, these skills and procedures have not been clearly
delineated. Because of ambiguity in the literature, there is variability as to what consti-
tutes a teacher who uses data and a school system that implements a data-based ap-
proach.
The USDOE and the National Center for Research on Evaluation, Standards,
and Student Testing (as cited in Heritage & Chen, 2005) have identified five core skills
that are essential for using data effectively for school improvement. These core skills
comprise the following processes: (a) determining what one wants to know, (b)
collecting data, (c) analyzing results, (d) setting priorities and goals, and (e) developing
strategies. In order to improve, schools must focus their efforts on the “right problem.”
The right problem is one over which they have control, that is clearly defined, and that has high
potential for affecting student learning (Killion & Bellamy, 2000). After determining
the problem, schools must compile data in one place and organize them in formats that are
easy to read and comprehend, as well as accessible, accurate, and appropriate to those
making decisions (Killion & Bellamy). Once systematically collected and analyzed, the
data will provide educators with their only real evidence of the success or failure of
their educational programs (Wade, 2001). To that end, data can help pinpoint strengths
and weaknesses in students’ knowledge and skills, and this information can provide meaningful
guidance on how instructional practices can and should be adapted. After the analysis
process is complete, the next step is to identify priorities and set goals for school im-
provement (Heritage & Chen).
Heritage and Chen (2005) have described a procedure that, if implemented,
might have a positive impact on school improvement. However, for all intents and
purposes, there is no reason to believe that any school will proceed along this same
path. According to Killion and Bellamy (2000) and Wade (2001), the leadership team at
the school site simply needs to focus on their particular problems, set locally acceptable
goals, and identify a course of action. Yet what these researchers have not taken
into consideration is the variability in teacher preparation for such a task and the need
for consensus among administrators and teachers to move in the direction of data-based
decision making. That being the case, these are some of the variables that will affect the
implementation of a data-based approach and its effectiveness.
With regard to goals, Schmoker (1999) maintained that goals should be measurable, time sensi-
tive, focused on student achievement, linked to assessment, written in clear language,
realistic, and achievable. Schmoker’s (1999) theory is one that administrators and
teachers can use as a template for consistency of goals, because in practice, there have
been no uniform directions set on how to proceed when implementing a data-driven
decision-making approach. According to Heritage and Chen (2005), once the goals are
set, the team will be able to develop an action plan focused on school improvement.
This action plan will include setting targets for the goals and establishing clear strate-
gies for achieving the goals. Finally, after the action plan has been implemented and
completed, the team will be able to begin the feedback cycle of collecting data again to
measure the success or failure of their efforts (Killion & Bellamy, 2000; Wade, 2001).
This theory is just one of the myriad of possible implementation paths that a school
system can follow to use data effectively.
Making Use of Resources
In response to the aforementioned pressures, districts and schools are beginning
to realign resources and routine operations. They have come to the realization that data
are central to school improvement under NCLB, even though they have received no clear direction as to how to
proceed. In fact, many school districts have created the role of a school-based data
analyst to strengthen their school improvement efforts and to focus school-based deci-
sion making on student learning (Killion & Bellamy, 2000). The idea of a data analyst
illustrates the variability that can exist from school district to school district as each one
attempts to use data to improve its schools’ performance. Based on a given school’s
resources identified for the data-based approach, the variability in staff expertise and
professional preparation may necessitate the use of a data analyst. In this type of sce-
nario, a principal makes an investment in a particular human resource, a data analyst.
This data analyst is a teacher selected from among the school staff to encourage trust
and confidence among the staff for the process. This is one example of how a principal
can shape the process. What is more, this data analyst is a teacher who is a member of
the school’s leadership team and has been trained in data analysis, interpretation, and
display processes (Killion & Bellamy; Wade, 2001). Because he/she is on site, this data
analyst is easily accessible to teachers and administrators and can provide timely re-
sponses and assistance (Wade).
These staffing arrangements are intended in part to support professional devel-
opment functions, such as those involving data analysis and redesign of instructional
strategies with teachers. This staffing arrangement helps to reconcile the variability in
teacher preparation that exists among the staff and the lack of consensus that may exist
among the stakeholders as to how to support the implementation of a data-based ap-
proach so that the lack of expertise does not hinder the process. Data analysis is not a
panacea that will solve every school problem, although it can be used to help raise
student achievement (Wade, 2001). For the most part, when a school or a district
determines that professional development is the key to improving schools, then that
attitude will filter throughout everything that is done at the school and provide a sense
of direction (Richardson, 2000). As teachers develop the skills for data analysis, they
increase confidence among their colleagues, their students, and the community (Wade).
By learning to incorporate data analysis as a routine part of their professional activities,
teachers become more reflective about their teaching practices (Wade). However, although these postulations may hold some truth, there is no guarantee that teachers will actually behave in this manner.
According to Olson (2007), some data-wise school systems share some key
traits in common. In particular, they invest in professional development to support the
use of data, provide time for teacher collaboration, and connect educators across
schools to share data and improvement practices. In fact, several school districts are
now using student achievement reports as a resource to support and guide their teach-
ers’ professional development and connect instruction and student learning (Parsons,
2003; USDOE, Education Commission of the States, 2002). For example,
one district that tracks student progress on benchmarked objectives every six
weeks also uses the results to monitor teaching strategies. If many students in a
class miss a specific objective, then the principal requests professional devel-
opment for the teacher from another teacher who has successfully taught the
objective, a content resource teacher, or the district’s professional development
staff. (USDOE, Education Commission of the States, p. 2)
According to Parsons and the USDOE, Education Commission of the States, one way to address the variability in the type of professional staff development provided to teachers is for schools and districts to collect data regarding new strategies or interventions and then evaluate how well they are working to raise student achievement. Similarly, one way to address the haphazard trickle-down effect of NCLB would be for the USDOE to provide teachers and administrators with clearly articulated procedures and guidelines for effective implementation of a data-based approach to improving student learning.
On one hand, according to Schmoker (2003), “the most important school im-
provement processes do not require sophisticated data analysis or special expertise” (p.
23). Schmoker (2003) believed that teachers can easily be taught how to conduct analy-
ses that would provide them with the information they need to improve their teaching
and student achievement. What Schmoker (2003) suggested runs contrary to what Olson (2007) recommended with regard to professional development to support the use of data: Olson's recommendation includes an investment in professional development with a prescriptive process designed for data analysis, teacher collaboration, and the monitoring of teaching strategies and student learning. It is this ambiguity that creates confusion and inconsistency among school systems.
In addition, Schmoker (2003) has contended that the primary purpose of data
analysis is to improve instruction in order to achieve greater student success, and
teachers themselves can easily learn how to conduct data analyses that will have the
most impact on their students' learning. In fact, according to Schmoker (2003), most school districts using a data-driven decision-making process can ensure continuous improvement simply by employing a basic template for a focused improvement plan with annual goals for improving students' assessment scores (see also USDOE, OERI, 2002). That being said, Schmoker's (2003) oversimplification of this complex
process of data analysis was illustrative of the ambiguity that exists even among the
researchers who posit their own theories regarding what constitutes a data-wise teacher
or how extensive a teacher’s skills should be in order to use data effectively.
Some researchers believe that, in order to drive student performance to higher levels, instructional improvement depends on simple data-driven formats, such as teams identifying and addressing areas of concern and then developing, critiquing, testing, and modifying strategies based on results (Collins, 2001; Darling-Hammond, 1997; DuFour, 2002; Fullan, 2000; Reeves, 2000; Schaffer, 1988; Senge, 1990; Wiggins, 1994), without any regard for the variability in teacher preparation or the need for consensus among administrators and teachers regarding how to proceed with the implementation of a data-based approach. Other researchers believe that investment in prescriptive professional development is just as important to raising student achievement (Killion & Bellamy, 2000; Olson, 2007; Parsons, 2003; USDOE, Education Commission of the States, 2002; Wade, 2001). With this lack of consensus, even among researchers in the field, school districts are in the precarious position of having to make staffing and resource decisions without being fully informed about which investments will best enable their staff to use data effectively.
Mixed Message About Best Practices
To further compound the ambiguity that exists in the data-driven decision-
making process among school systems, schools and districts are discovering a myriad
of practices that appear to be useful. With no evidence to substantiate the usefulness of
these practices, school districts are at a loss for consensus on what practices to imple-
ment at their sites. Heritage and Chen (2005), Olson (2007), Schmoker (2003), and
Wade (2001) asserted that based on their research, school practices have a powerful
impact on student achievement; however, there was no consensus among these re-
searchers as to what best practices to incorporate. For example, on the one hand, Olson
believed that school districts should invest in professional development, while on the
other hand, Schmoker (2003) contended that data analysis is such a simple process that
it does not require any special expertise. However, based on the literature, regardless of
the level of training, some form of professional development related to data analysis is
necessary to impact student achievement. In this case study, Blue Dot provided the
professional development in the data-driven decision-making process.
In their studies, Williams and Kirst (2006) identified some best practices related to the data-driven decision-making process. These practices have also been widely
accepted and proven by more recent studies to have the closest correlation with a high
Academic Performance Index (API), which is a school’s composite academic score. For
instance, this correlation occurred among schools that tended to share four interrelated
practices at the core of their school system: “1) prioritizing student achievement; 2)
implementing a coherent, standards-based curriculum; 3) analyzing student assessment
from multiple sources; and 4) ensuring availability of instructional resources” (p. 8).
Here again are some best practices that can be added to the list of practices that appear
to be useful but illustrate the lack of consensus in this field.
Given a school system with some of Williams and Kirst’s (2006) best practices
in place, as described above, it appears that principals are using data to nurture a
schoolwide culture of standards and to drive instructional priorities (Thornton & Per-
reault, 2002), while keeping in mind that effective teachers are the linchpin of student
success. To that end, principals in high-achieving schools (i.e., those schools that have
met or exceeded their API targets) reported having teachers who shared the following charac-
teristics (in order of priority): “a demonstrated ability to raise student achievement;
strong content knowledge; characteristics that made them a good fit with the school
culture; training in curriculum programs; and the ability to map curriculum standards to
instruction” (Thornton & Perreault, p. 10). These characteristics, although not arrived at
by consensus, could help form the basis from which to describe what constitutes a
teacher who uses data effectively. In other words, this definition by Thornton and
Perreault could serve as a springboard for discussion in the development of a widely
accepted definition of a teacher who is data-wise and uses data effectively.
Current State of Efforts to Implement Data-Driven Decision-
Making Processes
The current situation in education is such that there is no clarity, consistency, or
consensus in the manner in which schools go about implementing a data-driven
decision-making process. This ambiguity stems from the federal mandate, NCLB, itself.
Simply stated, there is no clearly articulated direction provided to school systems re-
garding policies and procedures for data analysis. School systems are left to their own
devices to develop, design, and implement their own data-based approach. Furthermore,
the assumption is that because of the standards-based testing requirement and the
accountability system set in place by all the legislative mandates, school administrators
and teachers will automatically assume the role of data analysts. Little consideration has been given to variability in teacher or administrator preparation or to consensus at all
educational levels—federal, state, and local—on what matters most, student achieve-
ment or student learning. Equally important to improving student achievement is how
to go about implementing a data-based process for optimal effectiveness. This lack of
consistency has a significant effect on school systems and, in particular, teachers be-
cause they play a critical role in the improvement process at the school site level: the
action level. The underlying assumption is that a teacher’s ability to use data, or lack
thereof, can impact the degree to which a student’s academic achievement will increase
or learning will improve. However, there is a lack of clarity in the literature as to how a
teacher's ability to use data affects a student's achievement, or how that ability impacts the effectiveness of a data-based approach. Perhaps this lack of consistency and clarity in the literature is why
there is much variability in the effectiveness of the data-based approach in school
systems.
Research Questions
Given that there is a preponderance of research regarding the use of data to
improve teaching and learning and that teachers are key in this process, and given that
the school site chosen for this case study was implementing a data-based approach to
improve teaching and learning, do administrators, teachers, and students share a
common goal or goals? Do they share a common perception of teachers who use data?
Do they share a perception of what “use data” means; or given the inherent differences
in their roles in the process, is there variability in their perceptions, and does this vari-
ability impact the effectiveness of the process?
With so much ambiguity out in the field and so much at stake for students as
well as educators, this study looked at teachers who used data from multiple perspec-
tives and also examined the common perceptions shared by the school site stakeholders.
CHAPTER 3
METHODOLOGY
This case study examined the perceptions of teachers, students, and administra-
tors as they relate to teachers who are perceived to use data to improve student achieve-
ment. This case study was conducted at a charter high school in the Lions School
District (fictitious name for confidentiality purposes). By using a qualitative research
method, the researcher was able to collect and analyze the data and then report her
findings. The main method used to compile the data was interviews. To be more spe-
cific, the researcher interviewed teachers, students, and administrators who volunteered
to share their perceptions about teachers who used data. The purpose of this chapter is
to describe the methodological design of the study, the data collection process, the
process for analyzing the data, and the reporting process.
The main reason for using a qualitative method to study the perceptions of teachers, students, and administrators regarding teachers who used data to improve student achievement was to examine these perceptions and uncover, if
possible, any common themes or trends within or between groups. According to Patton
(2002), “qualitative methods facilitate study of issues in depth and detail” (p. 14). That
being so, in this case study the researcher focused on using Patton’s interview guide
approach by collecting her data through interviews that targeted administrators, self-
selected teachers, and students on an individual basis. The questions posed to the ad-
ministrators, teachers, and students were designed to elicit descriptions of teachers
whom they perceived to use data to increase student achievement. This case study
focused on the following questions:
1. Given that there is a preponderance of research regarding the use of data to
improve teaching and learning and that teachers are key personnel in this process; and
given that the school site chosen for this case study was implementing a data-based
approach to improve teaching and learning, do administrators, teachers, and students
share a common goal or set of goals for student achievement?
2. Do administrators, teachers, and students share a common perception of
teachers who use data?
3. Do administrators, teachers, and students share a common perception of
what “use data” means?
4. Given that there are inherent differences in their roles in the data-based
approach, is there variability in their perceptions? If so, does this variability affect the
effectiveness of the process?
For the purposes of conducting this study in a manner that would elicit thought-
ful and insightful information from the three participant groups, the interview method
of inquiry was selected as an effective means of gathering unique and discerning data
on the subject matter. According to Merriam (1998), by using the interview method in a
qualitative study, the process becomes more open-ended and more flexible. To that end,
the participants in this study were provided a less structured format with open-ended
questions to help reflect their individuality.
Sample and Population
The primary focus of this case study was a charter high school in the Lions
School District. At the time of the study, Lions School District was considered a low-
performing urban school district with a 2006 API of 667. Brio Charter High School
(fictitious name for confidentiality purposes) had been implementing a data-based
approach for improving student achievement for the past 3 years. Brio Charter High
School was considered a high-performing urban high school with a 2006 API of 651.
The researcher selected this high school for her case study because it had raised its API score by 36 points over the previous 3 years. However, due to
personnel changes, some of the administrators and teachers interviewed were not
present 3 years before when the formal implementation of the data-based approach
began. The seniority of the two administrators and seven teachers interviewed for this
study ranged from less than 1 year to more than 5 years. The two admin-
istrators had been at Brio for 2 years; three teachers, for more than 5 years; another
three teachers, for 2 years; and 1 teacher had been there for less than 1 year. Coinciden-
tally, the assistant principal had been a teacher at Brio for 3 years prior to assuming the
assistant principalship.
The demographics of Lions School District included the following ethnicities:
95.9% Hispanic or Latino, approximately 2.3% African American, 0.9% Pacific Is-
lander, 0.2% Filipino, 0.1% Asian, 0.4% White (not of Hispanic origin), and 0% Amer-
ican Indian or Alaska Native. These students had two options when they graduated from
eighth grade: (a) enrolling in the local comprehensive high schools, which were
underperforming with an average 2006 API of 605; or (b) applying and becoming one
of the few selected by lottery to attend a charter high school that was using a data-based
approach to improve student achievement. Thus, given that the purpose of this study
was to examine the perceptions of administrators, teachers, and students regarding teachers who used data for school improvement, the case study
focused on the stakeholders at Brio Charter High School because they had been in-
volved in the ongoing implementation of a data-based approach that had been in place
for at least 3 years.
At the time of this study, Brio Charter High School had a population of approxi-
mately 538 students, 28 teachers, and 2 counselors. Its administration consisted of 2 administrators: a principal and an assistant principal.
Twenty interviews were conducted with 11 students, 7 teachers, and 2 adminis-
trators. The interviews included 5 10th-grade students, 5 11th-grade students, and 1
12th-grade student; 3 10th-grade teachers, 3 multiple-grade teachers, and 1 11th-grade
teacher; in addition to 1 principal and 1 assistant principal. This was a self-selected
group of participants. That being the case, there was no assurance of balance, as evi-
denced by the lack of any 9th-grade student or teacher participants. The teachers who
volunteered to participate in the study all attended the first staff meeting in August
2007, where they were directed by the administration to use benchmark assessments in
the core content subjects of English/language arts (E/LA), mathematics, science, and
social studies at the conclusion of each quarter for the school year. Of the 7 teachers
who were interviewed, 1 taught college readiness, 1 taught Latino art/film, and 1 taught
special education. The other 4 taught core content subjects (see Table 1).
Table 1
Types and Number of Interviewees (N = 20)

Types of interviewees                       n
Administrators (n = 2)
  Principal                                 1
  Assistant principal                       1
Teachers (n = 7)
  Grade 11, English/language arts           1
  Grade 10, college readiness               1
  Grades 11-12, Latino art/film             1
  Grade 10, English/language arts           1
  Grades 9-12, special education            1
  Grades 9-10, algebra I/precalculus        1
  Grade 10, world history                   1
Students (n = 11)
  Grade 10                                  5
  Grade 11                                  5
  Grade 12                                  1
The purpose of interviewing administrators was to provide the school leaders' perspective of a teacher who used data. In addition, the purpose of interviewing students was to provide the learners' perspective of a teacher who used data. Finally, the purpose of interviewing teachers was to provide the perspective of the teachers themselves, who believed that they were using data to improve student achievement.
The interview protocols for each group of interviewees—teachers, students, and
administrators—were different and designed specifically for each group. Hence, all data
from each group had to be categorized and analyzed separately.
In order to conduct the study, the researcher was granted verbal approval in
March 2007 from the superintendent of the Lions School District to conduct this case
study at Brio Charter High School. In addition, the researcher met with the principal of
Brio Charter High School in May 2007 and received both verbal and written approval to
conduct her case study. Then, prior to the beginning of the school year, the researcher
met with the administrators and counselors to discuss the purpose of the study and how
to recruit teacher and student participants.
Background Information: Brio Charter High School
Brio Charter High School was chartered by the Lions School District, located in
southern California. Brio Charter High School followed a traditional school calendar
and received Title I funding based on the number of students eligible for free or
reduced-price lunch. Since its inception in 2001, the school had grown from approxi-
mately 125 ninth-grade students to a school that served approximately 538 stu-
dents in Grades 9-12. The 2006 API for Brio Charter High School was 651; it was up
36 points from its 2004 API.
Tables 2 and 3 provide demographic information about Brio Charter High
School. Table 2 displays the number of students enrolled in each grade in the school;
Table 3 displays the percentage of students enrolled at the school who were identified
as belonging to a particular subgroup.
Data Collection Procedures
Data collection procedures followed a semistructured interview format with
both structured and open-ended questions to guide the interview (see appendices A, B,
and C). This interviewing technique was selected to provide the structure needed to
guide the participants throughout the interview and also to elicit open-ended responses.
This kind of format allowed the researcher more flexibility in the interview process to
collect data.
Each interview with the administrators lasted approximately 60 minutes; with
the teachers, approximately 45 minutes; and with the students, the interview sessions
varied in length from 10 minutes to 30 minutes. All the interviews took place on the
school campus, with one exception. Each interview with the administrators was con-
ducted in each administrator’s respective office. Each interview with a teacher was
conducted in his/her classroom (with no students present). One teacher was inter-
viewed at a fast food restaurant during his preparation period. Each student was inter-
viewed in an empty counselor's office. All interviews were recorded and tran-
scribed solely by the researcher for purposes of data analysis.
Table 2
Brio Charter High School (Grades 9-12): Enrollment by Grade, 2006-2007

Grade level     Number of students
Grade 9         141
Grade 10        139
Grade 11        142
Grade 12        116

Note. N = 538. Data taken from DataQuest, by California Department of Education, 2008, Sacramento: Author. Retrieved May 12, 2008, from http://dq.cde.ca.gov/dataquest/
Table 3
Brio Charter High School (Grades 9-12): Enrollment by Subgroup, 2006-2007

Subgroup                                Percentage of total
Hispanic or Latino                      97.8
African American                        0.9
White (not Hispanic)                    0.2
Filipino                                0.2
American Indian or Alaska Native       0.0
Asian                                   0.0
Pacific Islander                        0.0
Multiple or no response (ethnicity)     0.9
Socioeconomically disadvantaged         90.6
English learners                        58.7
Students with disabilities              4.1

Note. N = 538. Data taken from DataQuest, by California Department of Education, 2008, Sacramento: Author. Retrieved May 12, 2008, from http://dq.cde.ca.gov/dataquest/
The interview questions for each group were divided into three sections: Setting
and Measuring Goals, Data Uses, and Data Users. These sections were designed to
examine the ongoing implementation of a data-based approach, as put into practice by
its primary user, the teacher. In the first section, the members of each group were to
describe their goals and how they were going to measure whether they had met their
goals. In section two, the questions focused on data uses and how teachers used data to
improve student achievement. Then, in section three, the questions focused on describ-
ing as well as identifying teachers who were perceived to be using data effectively to
improve student achievement.
Data Analysis Procedures
The next step, once the interviews had been completed, was to transcribe the
interviews in order to start analyzing the data. First, the researcher transcribed each participant's interview verbatim by listening to the digital recordings made on the Olympus WS-300M digital voice recorder and typing the conversations into a Microsoft Word® document using a MacBook Pro laptop. The result was approxi-
mately 200 pages of transcribed interview data.
After the transcribing process was completed, the researcher categorized and
coded the data. No formal type of qualitative software program was used for this
process. The researcher manually sorted and organized the transcription data into meaningful categories. The sectioning of the interview questions into data-related concepts helped the researcher during the analysis phase of the study.
Specifically, the way in which the interview questions were organized by sections
allowed the researcher to analyze the responses from multiple perspectives.
Once the data had been sorted and organized, the researcher began the analysis
of the data. At first, the researcher looked for trends and patterns in the participants’
responses by section and within a specific group, and then by section and between
groups. This type of data organization and reorganization continued throughout the
entire data analysis process as the researcher sorted the data around the multiple per-
spectives of the three groups who described teachers whom they perceived to use data
to improve student achievement.
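Although the sorting in this study was done entirely by hand, a short sketch may help make the within-group and between-group comparison concrete. In the illustration below, each coded response is tagged with a participant group, an interview section, and a theme code, and the codes are then tallied within a group and across groups; all names and sample codes are hypothetical and do not reproduce the study's actual codes.

    from collections import Counter, defaultdict

    # Hypothetical coded excerpts: (participant group, interview section, theme code).
    responses = [
        ("student", "Setting and Measuring Goals", "college acceptance"),
        ("teacher", "Setting and Measuring Goals", "content standards"),
        ("administrator", "Setting and Measuring Goals", "college acceptance"),
        ("student", "Data Uses", "benchmark results"),
        ("teacher", "Data Uses", "benchmark results"),
    ]

    within_group = defaultdict(Counter)    # theme tallies by (group, section)
    between_groups = defaultdict(Counter)  # theme tallies by section, across groups

    for group, section, code in responses:
        within_group[(group, section)][code] += 1
        between_groups[section][code] += 1

    # The most frequent codes per section point to themes shared between groups.
    for section, codes in between_groups.items():
        print(section, codes.most_common(2))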
Procedural Considerations
The Institutional Review Board (IRB) at the University of Southern California
approved this proposal, and the researcher adhered to the procedural safeguards of the
board accordingly. The researcher also received verbal approval from the Lions School
District superintendent, as well as written and verbal approval from the Brio Charter
High School principal to conduct this study at Brio. The administrator and teacher
participants consented verbally to participate in the study voluntarily, with the neces-
sary permission granted by the Lions School District. The students submitted consent
forms signed by their parents to ensure that the students’ participation was voluntary
and to acknowledge that the study was sanctioned by the school district. All recorded
interviews and transcriptions were placed in a secure location for storage and to protect
confidentiality.
The results of this study were valid based on the truthful representation of each
participant who identified himself/herself as a teacher, student, or administrator of the
charter high school involved in this study. The charter high school selected for this
study was chartered by a local school district that operated under the supervision of the
Los Angeles County Office of Education and the California Department of Education
and adhered to all of the California codes of education.
Limitations of the Study
As mentioned in earlier chapters, school systems across the nation are imple-
menting some form of a data-based approach to improve student achievement with
varying degrees of effectiveness. That being the case, conducting a case study in which
only 20 subjects were interviewed posed a major limitation. The participants in this
study included only 11 students, 7 teachers, and 2 administrators. Their opinions and
beliefs had a significant impact on the findings of this study.
Another limitation was that, in addition to the small sample size, only one school was involved in the study. The use of one school meant that
the results of this study were not generalizable, because the experiences of the three
participant groups might not have represented the majority of stakeholders’ perceptions
of teachers using data throughout the nation.
Further, the school selected was “high achieving,” suggesting that major differ-
ences in data-related understandings among administrators, teachers, and students may
not be as likely as in a school with a history of lower performance. However, the fact
remains that school systems across the nation are implementing data-based approaches
to improve student achievement and stakeholders are looking at these approaches
through different lenses, based on their respective roles in the school system. To that
end, this study focused on uncovering the relationship, if any, among the stakeholders’
perceptions of teachers who used data to improve student achievement. That being said,
this case study, based on the interviews of 20 stakeholders at a charter high school,
provided additional data related to the implementation of a data-based approach and the
teachers who were perceived by others to use data.
CHAPTER 4
FINDINGS
The landmark NCLB Act of 2001, signed into law by President George W. Bush, serves
as the most rigorous, standards-based accountability initiative enacted for reforming
schools in decades (Albrect & Joles, 2003; Center on Educational Policy, 2003). It has
led to higher academic standards for school districts and schools across the nation and
to improvement in student achievement (Odland, 2006b). It has also increased testing
and accountability. In doing so, it has had a significant impact on elementary and
secondary educational programs (Simpson et al., 2004). Specifically, one of the under-
lying suppositions of the NCLB Act of 2001 is that teachers, simply by following the
mandates of the act, will use data effectively to improve student achievement. To the extent that data are being used for instructional interventions to improve achievement, there is no consensus in the literature as to how to implement an ef-
fective, data-driven decision-making process. However, there is a preponderance of
agreement among supporters of data-based approaches that effective use of data helps
stakeholders learn more about their schools, identify weaknesses, select areas for
improvement, and evaluate the effectiveness of interventions and practices (Mason,
2002).
This chapter analyzes and interprets the data collected from this qualitative
study in which students, teachers, and administrators discussed their perspectives on
data. The purpose of this study was to examine, from the perspectives of the administrator, the teacher, and the student, the consistency of perceptions of teachers at a school site that was implementing a data-based approach to improve student achievement. The focus of this study was on the high school students, teachers, and
administrators at Brio Charter High School in the Lions School District.
Research Questions and Underlying Themes
This chapter is designed to examine and discuss the underlying themes that
emerged in the study and to connect them to the corresponding research questions. To
that end, it was after researching, coding, and analyzing the data collected from the students, teachers, and administrators in the study that the following themes emerged and were linked to the research questions. This chapter presents the data analysis that links the major themes to the research questions.
Research Questions
1. Given that there is a preponderance of research regarding the use of data to
improve teaching and learning and that teachers are key in this process; and given that
the school site chosen for this case study is implementing a data-based approach to
improve teaching and learning, do administrators, teachers, and students share a
common goal or set of goals for student achievement? Topics explored in the inter-
views included the following: (a) goals related to student achievement, (b) goals related
to college, (c) other significant goals, and (d) similarities in how to measure goals.
2. Do administrators, teachers, and students share a common perception of
teachers who use data?
3. Do administrators, teachers, and students share a common perception of
what “use data” means?
4. Given that there are inherent differences in their roles in the data-based
approach, is there variability in their perceptions? If so, does this variability affect the
effectiveness of the process?
Common Goals and Other Similarities
The goals and the process for measuring the goals related to student achieve-
ment that are presented in this section emerged from the analysis of data and were
indicative of the overriding similarity of purpose by all three stakeholders to increase
student achievement. School districts and schools around the nation are responding to
the legislative intent of the NCLB Act of 2001, which has resulted in increased testing and accountability. While increased testing and accountability have led to the implementation of data-driven decision processes at the school site level to use data to impact
student achievement, the findings in this section reveal that the stakeholders with dif-
ferent roles in the process sometimes shared the same goals. In particular, the configu-
ration of charter high schools in this given county region, under the direction of Blue
Dot (a not-for-profit corporation), has provided these schools with a foundational basis
for developing common goals at the school site level. According to Thornton and Per-
reault (2002), “having a vision that is truly shared and supported provides the school
team with guidance, and the ensuing strategic plan provides milestones to track prog-
ress toward goals” (p. 88).
More specifically, the administrators, students, and teachers at Brio Charter
High School share a vision with two sets of common goals. Both sets of goals are
related to the challenges of not only improving student achievement but also preparing
students for college. To that end, the data from this study have
suggested that administrators, teachers, and students shared two common sets of goals
related to student achievement and college. There was also a goal unique to the administrators that extended into a student's future beyond high school and college.
In addition to the student achievement and college goals, a process related to
how to measure the goals emerged from the data. It was evident from the data that all
three stakeholders shared similarities in how to measure the student achievement goals.
That process involved using different types of data (e.g., grades, CAHSEE results, California Standards Test [CST] scores) to assess student performance and demon-
strate results.
Goals related to student achievement. As discussed previously, at the time of
this study, Brio Charter High School had been implementing a data-driven decision-
making approach for several years, most recently under the leadership of the current administration for the past 2 years. Moreover, Blue Dot had been instrumental in pro-
viding the charter schools under its sponsorship with the compilation and distribution of
standardized and benchmark assessment data, as well as professional staff development
and training for teachers and administrators to assist in the analysis and interpretation of
assessment data. According to Wade (2001):
When systematically collected and analyzed, data provide an accurate way of
identifying problem areas in school programs. Data reveal strengths and weak-
nesses in students’ knowledge and skills, and they provide meaningful guidance
on how teaching practices can and should be altered. (p. 2)
This type of systematic support for a data-based process had allowed the administrators
at Brio Charter High School to fully immerse the staff in the process. Both administra-
tors and six out of the seven teachers interviewed at Brio had been working there for the
past 2 years, thus experiencing the systemic support for the data-based approach from a
regional level to a school site level.
At Brio Charter High School, students were assessed formatively and summatively throughout the year. The formative assessments varied from teacher to teacher. Depending on the number of core content standards being measured and the time line for mastery of the standards in a given pacing guide, the formal assessments included evaluative tools such as tests, midterms, finals, or benchmark assessments; informal assessments included other evaluation tools such as entrance/exit slips, quizzes, or homework assignments. The summative assessments, by contrast, were more formal and included a variety of standardized tests used not only to measure student achievement for a particular grade level and its corresponding core content standards but also to calculate a school's API.
For example, all students in Grades 2-11 are required to take the CSTs in E/LA and mathematics, while the science CSTs are administered in Grades 5, 8, 9, 10, and 11 and the social studies CSTs in Grades 9, 10, and 11. Another example of a
summative assessment is the CAHSEE. Commencing in Grade 10, all students in
California are required to take the CAHSEE. State law requires that a student pass both
sections of the exam, the E/LA and mathematics sections, with multiple opportunities
throughout Grades 10, 11, and 12, in order to graduate from high school. This require-
ment is significant because as a result of this state mandate, a student must now fulfill
two requirements to graduate from high school: passing the CAHSEE and earning the
appropriate number of course credits. At Brio Charter High School, a student must earn
approximately 240 course credits and pass both sections of the CAHSEE to graduate
and receive a high school diploma (see appendix D).
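As a simple illustration of these two requirements, the sketch below checks a hypothetical student record against the approximate 240-credit threshold and both CAHSEE sections. The function and parameter names are invented for this example and do not describe any system actually in use at Brio.

    REQUIRED_CREDITS = 240  # approximate credit threshold described above

    def eligible_to_graduate(credits_earned, passed_cahsee_ela, passed_cahsee_math):
        """A student must earn the required credits and pass both CAHSEE sections."""
        return (credits_earned >= REQUIRED_CREDITS
                and passed_cahsee_ela
                and passed_cahsee_math)

    # Example: enough credits, but the mathematics section is not yet passed.
    print(eligible_to_graduate(242, True, False))  # False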
These aforementioned formal assessments are high-stakes testing tools that
impact the teaching and learning environment of a school and all its stakeholders: administrators, teachers, and students. Crawford (2004) and Gottlieb (2003) argued that such
achievement tests, or any single test, “should not be used for high-stakes decision-
making” (Gottlieb, p. 5). However, there is no denying that based on the interview data,
these kinds of test results were employed as tools in conjunction with course credits to
make high-stakes decisions related to grade promotion and graduation at Brio Charter
High School.
At Brio Charter High School, this focus on assessment scores
had resulted in the development of two sets of common goals related to student
achievement. The first set of goals that emerged from the data was focused on student
achievement. From the student interviews emerged a multitiered goal to improve report card
grades and to maintain and/or improve one’s grade point average (GPA). One 11th-
grade student summed up this student achievement goal succinctly when she stated,
“Well, the goals I’ve set . . . basically, get really good grades, so that I can graduate with
a really good GPA.” Another 11th-grade student reported the following: “I want to be
able to get into a 4-year college and have really, really good grades and have an impres-
sive résumé.” There was also a 10th-grade student who admitted to having changed her
academic focus in the following manner:
Oh, my goals. I would say the biggest goals I would say are my grades. Because,
I would say, in the past, like my freshman year, I wasn't really focused on
school. But now, I’m really focused, ‘cause I really want to go to UC Davis,
‘cause I want to become a vet.
After interviewing the teacher participants, it became clear that some of them
were also focused on improving student achievement. Some teacher data revealed a
similar goal to improve student performance specifically related to content standards.
In the case of math, science, and E/LA, these teachers expressed a desire to work col-
laboratively in their departments to systematically design and scaffold support for
department goals (see appendix E). Johnson (1997) supported that idea when he
wrote that “effective educators make effective decisions, decisions based on accurate
information” (p. 2). By collecting student data and analyzing it, teachers are empow-
ered to adjust the curriculum to achieve whatever goals the school has chosen (John-
son). In this particular case, the goals chosen were department goals. For example,
the E/LA Department was working collaboratively in the 2007-2008 school year to
improve student writing skills. To that end, an 11th-grade E/LA teacher explained:
I would say one of our other departmental goals is to come up with a universal
writing rubric that can be applied throughout the 9th- to 12th-grade levels. . . .
We looked directly at the data. The data that came back from the CSTs and
we, if, I’m sure you know, or you might not know, we take three benchmark
exams a year that help the students prepare for the CSTs. So, those bench-
marks come back to us with a lot of data that analyzes student performance on
particular standards. And, so, that’s where we were able to get the data. What
needs to be improved in the department and there was a disconnect kind of
what was being done at each grade level. So, we’re trying to get rid of the
disconnect so everything flows from the 9th to the 12th grade, so the students
come in with a foundation where curriculum is being reviewed, but not re-
taught every single year. So, that’s where it came from.
The Math Department also emphasized the importance of improving student
achievement. According to the Math Department chair, closing the “content knowl-
edge” gap was a major goal that he felt compelled to address. Here’s what he re-
ported:
Yeah, it’s a big focus of ours. So, like the students that I have in precalculus
right now, when those test in the spring, we will do a unit, too, that goes back
and reviews geometry and stuff that is going to be on that test that they
haven’t taken before . . . and give them more practice tests and things like
that, too . . . I mean, I think their content knowledge has major gaps, and so
every math class is a blend of teaching them. Like every math course, here is a
blend of what it should be if it’s precalculus, trigonometry, and stuff like that,
and filling in all the gaps.
Similarly, the special education teacher described how using assessment
data helped improve student achievement. In fact, what she described was the collab-
oration that existed among teachers to achieve this goal:
We look at CAHSEE, all of the ALS benchmarks, so we get every student’s
score on the ALS [Academic Learning Standards], the benchmarks. And
pretty much, I create lessons that are more remediation based on their IEPs
[Individualized Education Plans]. Then we go ahead and support their general
education teachers, English and math, and we create supplementary lessons or
supports or what works to help them in those classes. So, the English teach-
ers, take for example, the English Grade 9 teacher will create some lessons
within certain standards that she noticed that most of our students did not
meet. So, she creates those lessons. She submits those lesson plans to me, and
then I figure out how I can support her on that as well.
By working collaboratively in their departments where teachers were grouped by
their content areas, teachers were able to effectively adjust their curriculum to focus
on improving student performance specifically related to content standards.
The administrator interview data revealed a student achievement goal similar
to the aforementioned teachers' goal. For the most part, the administrators' goal was to continue to improve student academic achievement and to raise achievement scores. In this case, the administrators' student achievement goal reflected the underlying
pressure that educational leaders feel with respect to meeting and/or exceeding
accountability targets. Although this goal was similar to the teachers’ goal of improv-
ing student achievement and increasing achievement scores, the pressure that admin-
istrators experienced was different. As administrators, their focus extended beyond an
individual student and was inclusive of the entire student body served by a school
site. For that reason, the principal’s remarks were noteworthy:
The goals are to continue improving the percentage rate of students passing
the high school exit exam—the percentage of students increasing their [profi-
ciency levels], the number of kids that we move from below basic or far below basic into basic and above.
Similarly, the assistant principal’s goals for student achievement were “to continue to
meet our API scores, to increase those scores, to increase our college acceptance rate
for every senior, and to provide the best quality education for our students.”
Hence, there was no need to look for common ground for these three groups
of participants when it came to student achievement. They may have differed in their
focus, but their overall goal was the same: They all wanted to improve student
achievement and increase student achievement scores. Their differences were related
to their focus. The students' and teachers' focus was personal and individual, whereas the administrators' focus was schoolwide. As
unique as their roles were as administrators, teachers, and students, they all shared
one overriding desire: to improve student achievement. For the students, achievement
was reflected in their grades; for the teachers and administrators, student achievement
was reflected in the students’ scores on standardized assessments. Thus, the data
revealed that although each participant group had a different role in the teaching and
learning environment of a school, the three participant groups at Brio Charter High
School demonstrated a common goal related to student achievement (see appendix
F).
According to Odland (2006b), the one-size-fits-all mentality of teaching to the
test to achieve academic targets hinders a teacher’s efforts to provide appropriate in-
struction to meet the diverse needs of students. In fact, this issue of accountability
and standardized testing has had both intended and unintended consequences on
schools and the way in which test-based accountability systems are aimed at directing
the behaviors of teachers. One of the unintended consequences, according to Booher-
Jennings (2006), has been to use assessment data to improve student achievement by
targeting some students at the expense of others. However, these findings are not con-
sistent with the findings in this study, where the administrators, teachers, and students
shared a common set of goals related to student achievement. Contrary to the empha-
sis in the research on targeting specific students to increase test scores, the
stakeholders in this study targeted all students to improve academic achievement and
to help them attain their academic potential.
Goals related to college. Brio Charter High School was popular in the region
because it was a school of choice. As a school of choice, it offered the community an
alternative to the traditional neighborhood high school. Brio Charter High School
played an integral role in cultivating a culture of high expectations, beyond the tradi-
tional expectation of simply graduating from high school. To this end, it was not sur-
prising to find that the second set of goals was related to college.
After interviewing the student participants, it became evident from the data
that the traditional goal of graduating from high school was merely another require-
ment at Brio that they needed to complete in order to meet the ultimate goal of being
accepted at a 4-year college or university. A 10th-grade student summed up this
student theme when she stated:
Well, I want to get accepted at a good college—that’s why I’m in high school.
And I want to take AP [Advanced Placement] classes to be able to get ac-
cepted to my dream college.
Similarly, there was another 10th-grade student who responded to the goals question
with the following remarks:
Well, currently, I’ve been trying to raise my GPA, so I can be eligible to go to
a 4-year [college] after high school. I’m not really sure what major I want to
go into or any of that yet. I just want to make sure that I can be able to go into
the 4-year [college]. So, that’s just my main goal now, just to make sure I can
go.
An 11th-grade student simply reported his goals in this manner:
One of my goals . . . is to go to college, at least go to a college. So we could
like succeed in life and everything. I’m mostly interested in like going to the
east coast colleges, like Boston University or MIT [Massachusetts Institute of
Technology], and engineering.
For the most part, the teacher participant data revealed a similar theme related to postsecondary goals. Not only did teachers want students to graduate from high school, but
they also wanted them to be accepted by an institution of higher learning and to be
prepared for the rigorous standards and expectations associated with higher education.
One teacher in particular, the 10th-grade E/LA teacher, described his department's
goal of preparing students to pass the CAHSEE early in high school so that they (the
students) could focus on preparing for college in their senior year. According to him:
Yeah, and we’re trying to . . . at a higher level, to get to where we’re not having
the stragglers . . . so, we’ll worry about that later. We want them worrying about
getting into college and not worrying about passing the standardized test. They
have the SAT [Scholastic Achievement Test] to worry about; we don’t want one
more thing for them to worry about. Who wants to worry about the CAHSEE
when you’re a senior?
Similarly, the algebra I/precalculus teacher reported that his department also focused on
preparing the students for college. He said:
The main thing that we have focused on, you know, preparing our kids for
college, we think is the most important thing. And we sort of hope that all the
little things can build up to that, whether it’s a standardized test or whatever
else. But, in the end, we want our students to be active problem solvers that can
adapt to college level math, and college level challenges, in general. With re-
search, you know, and be able to be given a situation they’ve never been in
before, like in a study or something, and really problem solve. That’s our big-
gest goal. And that is subdivided into a lot of little goals.
Then there was the college readiness teacher whose course was designed to focus on
college preparedness. For that reason, she reported using her class as a vehicle for de-
veloping postsecondary skills. Here she described her goal:
Okay. For the next 3 years, [the goal] is really focusing on the lifelong learner’s
aspect of it. Really teaching the students to . . . giving the students tools, but also
teaching them how to use the tools to be able to strive after college (high
school), whether they go to a 2-year, 4-year or they just decide to work or go and
take a short course somewhere.
A similar theme related to college preparation emerged from the administrator
interviews. Specifically, interview data from this group revealed a tiered goal in which
the students were (a) accepted to an institution of higher learning, (b) prepared for the
rigorous academic challenges at that level, and (c) provided with the academic and
psychological stamina to prevail through all 4 years of college and to graduate with a
degree. The principal of Brio made it quite clear when he remarked:
If I’m going to move Brio in the direction that I believe needs to be moved on
. . . I need to see how do we get our kids to graduate. And we’re doing that
already, at a high percentage rate. [The question is] How do we get our kids to
be accepted in a 4-year school, stay there, be successful, and graduate from a 4-
year school? So, I want to start eventually moving into data that I won’t be able
to see until they graduate from college.
In a similar vein, the assistant principal expressed the following remarks related to the
students as their administrative focus extended beyond high school graduation and into
postsecondary achievements, such as college:
I think our students are getting into schools [colleges] and then on top of that,
we're going to start to follow up with data on tracking students staying in school.
So that is going to be huge in the next couple of years, which will also impact
the quality of education that we’re having.
Based on the set of goals related to college, college preparedness, and comple-
tion of a college degree that the three participant groups showed in common, it ap-
peared that the goal of going to college was central to the high school experience at
Brio. It started with the student participants' goal of being accepted at a 4-year college, extended through the teacher participants' goal of preparing students for the rigors of college, and culminated with the administrator participants' goal of seeing students persist and earn a college degree.
Another consequence that emerged in the literature related to the overemphasis
on testing and reporting provisions was the idea that teachers use data to meet testing
targets at the expense of sound instructional practices to improve student learning
(Mathis, 2003). As a result of an increase in state-level testing, some schools were
abandoning their progressive assessment systems and adapting curriculum policies with
lower standards and an emphasis on basic skills (Lewis, 2005). Thus, these policies
were developed in an effort to face the increasing demands to improve test scores and
avoid a “failing school” label. However, in contrast to the literature, the findings in the
present study suggested that although the stakeholders emphasized the importance of
meeting academic targets in the short term, their shared set of goals for student aca-
demic achievement was long term and postsecondary.
Other significant goals. There was a unique goal that emerged from the princi-
pal interview data. This goal was significant not only because it was related to the
aforementioned goals of student achievement and college but also because there was an
added dimension of reciprocity associated with this goal. In other words, students from
this urban community were expected to graduate from college and to return to their
community as change agents. Their responsibility was to better themselves through the
academic vehicle of high school and college in order to become the change agents
needed to improve and enhance the community from which they came. The principal’s
statements were clear:
The vision of the school was open with the fact that, you know what, kids are
going to come to Brio. They’re going to graduate from high school, and they’re
going to go back to the community and be agents of change. They cannot be
agents of change if they don’t graduate from college. So, in order for me to
fulfill this school vision, I need to look at that, going in that direction. . . . If
we’re not looking at the graduation rate of kids from college, then we, you
know, we’re not—I don’t believe we’re being as successful as we should be. So,
that is the standard that I’m holding myself to.
The goal of graduating from college with the ultimate purpose of returning to
one’s community as a change agent was unique to the principal. He was the only partic-
ipant whose goal extended postcollege and into a new commitment of community
activism as a social instrument of change.
Similarities in how to measure shared goals. Not only did the administrator,
teacher, and student participants share similar goals related to student achievement and
college, but they also shared similarities regarding how to measure these aforemen-
tioned goals. A common theme that resonated throughout the interviews with the three
participant groups was the use of formal as well as informal assessment data to measure
the shared goals described in the previous sections.
The participants did not differentiate between the method and types of data they
used to measure the two sets of goals. However, what emerged from the interview data
was a distinct similarity between the responses from the student and administrator par-
ticipants. This similarity was notable because the teacher participants, although they
used data to measure these shared goals, revealed in their interviews a focus on
formative data. Alternatively, the data from the administrators and the students revealed
a focus on summative data.
When asked to describe how to judge whether they were meeting their goals, the
participants’ interview data revealed a striking similarity between the administrators
and the students. The overriding theme in their responses was to use standardized
assessment results of a summative nature as opposed to using the results from formative
assessments.
To that end, the students and administrators agreed in their collective responses
that to judge whether the students were meeting the common goals related to student
achievement and college, they were going to use the scores and/or results from such
formal assessments as the CSTs, the CAHSEE, the SAT, and other related summative
assessments. The assistant principal’s remarks were indicative of this results-based
theme when he discussed assessment data, as in the following:
Our SAT scores are new data that we are going to try to incorporate more into how we're doing, because that really speaks to how our students match to everyone else in the nation. . . . Those tests are really, everyone takes the same one,
it’s kind of like social leveling the playing field. . . . The SAT, I think, really
allows them to see what they don’t know and how well they’re able to manage
that information.
The principal summed up this results-based theme when he stated that “we’re looking
at our data from the high school exit exam, throughout the CSTs to the college place-
ment test. So, we’re looking at all that to see that we are on the right track.” One of the
11th-grade students indicated not only how he was going to measure his goals but also
what he was going to do with those data:
We took the CAHSEE last year, and I excelled in both math and the English. I
think I scored perfect on the math. . . . Yeah, my PSAT [Preliminary Scholastic Aptitude Test] scores are really showing me what I have to work on. And
they’re preparing me for the SATs that I’m thinking of taking in January.
Another 11th-grade student, when asked, “What data will you use to judge whether you
are meeting your goals?” responded:
Well, the CAHSEE is just to graduate, right? So that, I already have done. So
that’s not even . . . it’s like a sure thing to graduate, ‘cause it’s mandatory by
every school, so students can know they can graduate. But, other than that,
maybe the SATs, but just to make sure I get into a 4-year [college].
Then there was a 10th-grade student whose response, although brief, was to the point:
“Well, if I take AP classes, the AP exit exams.” Hence, the combined data from the ad-
ministrator and student responses indicated that these two groups of participants were
going to use summative assessment data to measure their goals.
Alternatively, the teachers’ response for measuring the student achievement and
college goals was to emphasize formative assessment data throughout the year (see Appendix G). Their collective response was that they would use results from such formal
and informal assessments as benchmark exams, mid-terms, finals, essays, and other
formative evaluations. For example, the 10th-grade E/LA teacher explained:
We use the Academic Learning Standards [ALS] benchmark program, that gives
us a complete computer printout of which questions, which standards, and what
percentage of kids in the class were weak on that particular question and they
have averages for the school district and for the grade level. . . . So, that’s basi-
cally what I have been doing using the data to point out the weakest areas and
attacking those.
Another 11th-grade E/LA teacher described the formative assessments that she used to
measure student achievement goals:
It’s not just the benchmarks. I look at their, I look at the obvious, test scores . . .
Yes, my teacher-made tests. My students do a lot of writing, so I assess their
writing via essays, in-class assessments, exit slips. We do something that is sort
of like an entrance slip, the same as an exit slip, but just a review of what we did
the last class to measure the transfer of information from class to class, if they
can keep it with them between classes. Just short quick quizzes, those are daily .
. .
Similarly, the special education teacher responded with a heightened focus on formative
assessments, particularly as a way to monitor student achievement without having to
wait for standardized test scores that, in her opinion, did not provide information in a
timely manner. Her remarks were as follows:
What types of data do I use? As in like what assessments do I use? I personally
like a lot of short assessments, so I like a lot of short informal [tests] because I
feel like a lot of the times the kids will get lessons and then they’re given the
test and it kind [of] like it hits so many different standards that it’s a little too
late. So, I’m more of like a quick exit slip, or a quick informal white board
assessment, or a quick group activity where they’re doing like a learning walk or
whatever. So, I like the quick activities and the quick assessments where I’m
able to kind of see and get like a pulse check of where everybody is at. I do like
the formal assessments ‘cause I do get kind of a baseline and scores and things
like that. But for me, just looking at the scores do not really assist me, because
the kids are really poor test takers.
To further emphasize this focus on formative assessments, the Algebra I/calculus
teacher reported:
Almost daily in algebra, I’ve had entrance and exit slips. Those are very
quick, pretty much just a subset of a standard, not even a complete standard,
where you’re saying, like, “Do you get the element of it?”
Here was another illustration of the teachers’ focus on formative assessments to
measure their goals as well as to inform their instruction.
The similarities in how to measure shared goals among the three participant
groups reflected the importance that all three groups placed on improving student
achievement and demonstrating results in the form of assessment data. Clearly, the
stakeholders at Brio Charter High School shared a vision for achievement with a
parallel strategy for demonstrating results as a form of accountability. According to
the literature, given this emphasis on academic targets and testing results, some
schools are using achievement tests for high-stakes decision making (Crawford,
2004). Of particular concern are decisions related to individual students, such as
grade promotion or graduation from high school (Gottlieb, 2003). According to both
researchers, no single test, particularly an achievement test of questionable validity or
reliability, should be used for high-stakes decision making. This overemphasis on testing, as identified by the researchers, has not only become high stakes for schools but in many ways has also become high stakes for students. These findings were consistent with those from the present study, where the administrators, teachers, and students used the results of a high-stakes achievement test, the CAHSEE, in conjunction with 240 course credits to determine a student’s graduation status from high school.
Common Perceptions About Teachers Who Used Data
After analyzing the interview data from the administrators, teachers, and
students regarding teachers who used data, a common perception of data use began to
emerge. The participants all seemed to share a perception of teachers who used
data that involved implementing a multifaceted process. In other words, using data
did not equate to one action—it equated to a process that involved analyzing student
performance and behavior, identifying weaknesses, intervening to remediate or
reteach, and assessing student achievement. The theory, according to Booher-Jen-
nings (2006), is simple: “Give students regular benchmark assessments; use that data
to identify individual students’ weaknesses; provide targeted instruction and support
that addresses those areas” (p. 1). When asked, “Do any of your colleagues use data
effectively to improve student achievement?” the college readiness teacher replied:
I know the Math Department does a great job with using data and helping the
students and they have, and I mean, they really have a strong department in
my opinion. And they really have support classes for the students and they
have tutorials for them. And I think they’re doing a great job. English, the
same way—I think they’re really using the data to help the students. So, I
think everyone with their part in the content areas, science, that they’re doing
a phenomenal job as well. . . . I think everyone is really motivated this year
‘cause now the pressure is on. We got this huge API score, and now we want
to maintain it and improve it as well.
The 10th-grade E/LA teacher described his perception of the science teachers who
used data in this way:
They have some sort of set plan for the kids to learn their standards. So I think
that they have been able to use the data effectively to target the weaker areas
than just preparing their kids for the test in general as opposed to just worry-
ing about the class as a whole. They’re able to break down in smaller
components, and so kids recognize, “Hey, I can master that one,” and they see
their progress. That helps the kids, too. There’s an end goal in sight, like they
have something to work towards, whereas with some other teachers, even
with me, you need standard 10.2, maybe that’s what we should be doing—
maybe that’s something to consider. But with them, I think that they have
been able to fairly successfully do that, based on their scores. That’s what I
have seen.
According to this E/LA teacher, not only were the science teachers using data, but
they were also using it effectively to improve achievement as evidenced by “their
scores. I think their scores on the last benchmarks—they scored significantly better
than some of the other schools and what not.”
Another 11th-grade E/LA teacher shared a similar perception of teachers who
used data in the English Department:
Well, in the English Department and throughout the 11th-grade level, I know
that we are implementing this data appropriately and effectively. We’ve taken
on a similar way of facilitating information to students. So, for instance, a
colleague of mine today delivered her benchmark results to her students and I
—we all sat down to plan and also put together a lesson plan that facilitates
the data to the students in a way that they understand and to the students in a
way that they can start to understand not only what they might have gotten
wrong on the test, but understand how they learn. It looks at their, like, the
approach to learning that the student is taking. So, yeah, I absolutely know
that my colleagues are implementing information appropriately and effec-
tively.
The special education teacher also shared her perception of the teachers who
used data. Her remarks were specific to the Math and Science Departments:
I know that the Math Department has a meeting every week to go over the
data that they have, like tests and quizzes that they’ve given, homework re-
flections, and then seeing how they can use strategies. They’re not just meet-
ing within the Math Department—they actually have a coach that comes in
from the people who invented and implemented California Math and Science
Teachers [CMAST]. They’re coming in and helping the Math and Science
Departments . . . and they’ve done really well with their test scores. . . . How I
know that it’s working, that the math scores have improved about this much,
but the science scores have improved 20 to 30%.
Furthermore, according to this special education teacher, not only were they improv-
ing their scores but they were also performing better than similar schools:
Based on the ALS benchmarks I was talking about, there are six other schools that are doing it in the Blue Dot district and Brio is the best in the science scores out [of] all those schools.
The principal corroborated this perception of teachers who used data when he
stated:
I would say a high percentage are looking at their data, are analyzing it, and
are using it. I would say over 60% of my teachers are using it. I know that
probably 95% of the teachers are looking at [it], analyzing it, and changing
some of their teaching strategies. Eventually, we’ll get more and more teach-
ers to look at it and really break it down the way that I want them to do and be
able to put it up.
The assistant principal discussed in more detail this perception of teachers
who used data when he described the following actions taken by the Math Depart-
ment:
So, take the Math Department . . . they created their own quarterly assess-
ments, matched them with the CSTs, matched them with their class, with the
standards. And, then, now what they do, they’re meeting and then they’re
testing and then they’re getting that information and they’re analyzing it
themselves . . . they’re looking at it as a department and saying, “Well, this is what we need to do.” So, if [in] Algebra I, we were supposed to hit the first five standards and the teachers didn’t and it shows [reflects in their tests], then they’re redirecting, they’re modifying their pacing plans to make sure they’re hitting those standards before they move on.
One of the 11th-grade students, when asked the following question, “Are
teachers using data to help you meet your goals?” responded:
Yes, we’re doing like benchmarks, like every quarter or so, or like every
semester to check if we know our standards like for chemistry or for English.
And they let us know. “Oh, you scored this and this.” Like, I know I scored
like proficient on all of chemistry. Chemistry was hard for me at the begin-
ning of the year—now I understand it.
There was an 11th-grade student who described the data being used by the teachers
and how it was presented:
Okay, well. Usually teachers, they might give us like a paper where they show
all the grades that we’ve been getting for assignments and all these things, so
that we can be aware of what we need to . . . Yeah, they’ll hand it to us. It’ll
have our grades for the assignments and recently like [I’ve] been receiving
also for the grades for all our classes. So, that helps out, too. ‘Cause like we
are aware of what we’re doing.
One student in particular, an 11th grader, reported the following types of data used by
teachers to measure their performance:
Well, they compare us. Like I have an English teacher—she compares us to
her last year’s class, their grades. And, like most of the teachers, they compare
us to other schools, to other Blue Dot schools and schools around the area,
like with the benchmark exams and all those state tests. They give us the data
and tell us, “Okay, this [is] where you guys are,” and “Okay, this is where we
want you to be.” This is where you scored last year, and they help us improve
that way.
In sum, despite the variety of sources for data that the teachers chose to use,
the overriding perception shared by the administrators, teachers, and students was
that teachers who used data were involved in a multifaceted process that formed a
loop that started and ended with assessment. This process involved what has previ-
ously been described as assessing student performance, analyzing student achieve-
ment, identifying weaknesses, intervening to remediate or reteach, and then assessing
student performance again. Kinder (2000) referred to this process as “a decision-
support system, one in which the feedback generated by this data can help schools
determine whether or not they’re on the right track to improved achievement and how
fast or slow they’re progressing” (p. 5).
Common Perception of What “Use Data” Means
Based on the interview data from all three participant groups, the underlying theme
that emerged from the data related to “use data” was tied closely to the shared percep-
tion of teachers who used data. These administrators, teachers, and students shared a
perception that teachers who used data behaved in a particular way, as described in
the previous sections. More specifically, it was the way in which the teachers inter-
acted with the data that defined the process of using data. The shared perception of
this process of using data included analyzing data, identifying weaknesses, and
intervening by modifying instruction.
Stakeholders shared a common definition of “use data.” The principal de-
scribed this perception of the “use data” process in the following manner:
They’re analyzing and they’re changing their teaching, their lesson plans,
based on what the data is telling them. . . . I know that, for example, last
year’s 9th-grade English teacher—she wasn’t too pleased about giving
benchmark [assessments], but then she looked at the data. And she made
goals with her class. She changed her teaching strategies a little bit more to
make sure that she was addressing the students’ needs.
The assistant principal described a similar perspective of the “use data” process in his
response to the question, “What does ‘use data’ mean?”
After they [the teachers] give a test, the Academic Learning Systems data,
they’re pulled out for a day and they’re looking at the data and it’s done
districtwide, I guess. And so they work within their department, and they’re
looking at the data. They’re having a full day of this, okay, saying this is what
it is. “What do I need to do? How well is it assessing what I’m teaching? Did I
only teach the first four standards? And then I got really, really good data on
that, and I didn’t teach the last three. So that’s why the students are perform-
ing badly on those.” And so, they’re really looking at that to fully inform their
instruction.
Similarly, a 10th-grade E/LA teacher described using data as a way of inter-
vening in a student’s learning. He reported his strategy of attacking weaknesses in
this way:
And then as far as I use data, like I said before, we get the individual reports to see which areas certain students are having problems with and like I said,
I attack those certain problems in their work. The good thing about that, you
can kinda see the changes, because when they’re having problems in certain
areas, it’s very blatant and out in the open and you’re able to see the changes
very quickly, because you know a lot of students have problems in a certain
area. . . . Once you sit down and break it down with them, then you tend to see
how they’re able to improve.
Similarly, another 11th-grade E/LA teacher explained how she used data to better
understand her students’ achievements and address their weaknesses:
I mean, I definitely do use it [data] to see where student achievement is suffer-
ing. The gaps . . . so, the gaps in student achievement—I used that data that I
received from the benchmark results to start to analyze why that particular
class is performing lower. And some of the reasons were very obvious. It’s
my largest class. I have the largest number of discipline problems in that
particular class—I’m constantly policing them. I also have the highest per-
centage of special needs students in the class which need extra help and atten-
tion, which is fine. And so, after that, I was truly able to analyze because with
(I didn’t have data before) having the data in front of me helped me truly
understand the impact that this is having on students in the class and the fact
that the class may need . . . that class needs to be addressed. The class size
needs to be addressed, at least.
When asked the question, “Do you use data?” the world history teacher’s
response included not only her description of using data but also an analysis of the
positive and negative aspects of using benchmark assessment data:
Yes, I do. The thing is . . . the only problem with ALS data is . . . the good
thing is, it’s preparing us for the CSTs in the sense that I understand which
standards I need to revisit before my students take the CSTs. The bad thing is
it usually comes at the end of the unit. So it’s not something that I can get to
re-teach a lot of right away. So, there’s the problem of the student possibly
forgetting everything about that specific standard, since number one, they
didn’t do well on it in the first place and number two, I’m not going to be
revisiting it until the beginning of April . . . so, that’s the tough part.
Student interview data revealed a perception of using data similar to the
process suggested by the administrators and the teachers. The students also believed
that using data meant analyzing student data, pinpointing performance weaknesses,
and intervening with modifications to improve student achievement. In fact, one
10th-grade student described how her college readiness teacher modified vocabulary
instruction to meet her needs:
Yeah. They’re doing a lot of stuff. Okay, Mrs. [college readiness teacher’s
name], right now, like okay, she told me to study like for some vocabulary
words, and then I was like, “I don’t get ‘em, and don’t get ‘em.” And, she like
made them short for me to understand the words, but with the same defini-
tions, but the way you have a sentence in your mind, it conducts [affects]
everything else. I memorized every word, and I got a 10 out of 10 in her quiz.
Another 10th-grade student described the “use data” process in the following simple
terms, “Some of my teachers use our test scores and just how we do in their classes to
minimize [modify] what we do that day.” Then, there was an 11th-grade student who
elaborated on the process by discussing the use of data charts:
Oh, I don’t know if you’ve been in any of the classrooms, but we have data
charts comparing our scores because we have these things called benchmark
tests, which like, I think they’re quarterly tests that are based on the California
standards of each subject, and basically we take the test and it’s sponsored by
Blue Dot. And, when we score, we’re ranked individually, by classes, by
grade level, and also against other Blue Dot schools and other schools in the
district. And so, basically those—they post them on the walls in every class,
and the teacher explains it to us, and we just see how we’re doing against
other schools.
For example, a 10th-grade student described how his teachers used his test results and
homework assignments to intervene and help him improve his grades. According to
him, they provided tutoring opportunities. More specifically, he stated:
We have like Mr. [Algebra I teacher’s name]. Like he’ll spend time with me.
Like if I didn’t understand it [algebra], I’ll ask him the question and he’ll
explain it to me in a pace that I can understand it. And then, after he teaches
me, I got it.
In sum, the common perception of what “use data” means was interrelated with
the common perception of teachers who used data. The definition, based on the data
analysis, was predicated on the behaviors of the teachers who used data. At Brio
Charter High School, to use data meant to use a process involving the analysis of
data, the identification of weaknesses, and the modification of instruction to intervene
and mitigate the impact of the weaknesses. In fact, Heritage and Chen (2005) supported the idea of using data as a process that leads to school improvement. Their contention was that educators who implement this process of collecting data, analyzing data, and setting goals and targets based on their analysis could reach their goals for school improvement. The literature review described the
variability that exists among researchers in what constitutes a teacher with skills to
use data in a school system where a data-based approach was being implemented.
According to Heritage and Chen, educators need to have the skills and organizational
procedures in order to be equipped to use data effectively for school improvement.
These researchers identified the following five core skills as essential for using data
effectively: (a) determining what one wants to know, (b) collecting data, (c) analyz-
ing results, (d) setting priorities and goals, and (e) developing strategies. Killion and
Bellamy (2000) corroborated this need for skills by suggesting that schools focus on
clearly defining the “right problem” so as to be able to systematically collect and
analyze data. To that end, these data can provide meaningful guidance on how in-
structional practices can be modified to address student needs (Wade, 2001).
What these researchers failed to take into consideration was the variability in
teacher preparation for such a task and the need for consensus among administrators
and teachers to implement a data-based decision-making approach. First, in light of
this oversight with regard to teacher preparation in the literature, the data from this
study reflected a major focus on the part of the administration and Blue Dot to pro-
vide teachers with professional development in the use of data for the purpose of
improving student achievement. Second, although there was no consensus among
the researchers regarding what set of skills or procedures to implement in a data-
based process, the data from this study revealed that the stakeholders at Brio shared a
common perception of what skills were necessary for teachers to implement in their
data-driven decision process. The data also corroborated the need for consensus
among the administrators and teachers at Brio. According to the data, these
stakeholders had, by virtue of their charter status and association with Blue Dot,
agreed to implement a data-based approach to monitor and improve student achieve-
ment; and this approach comprised the various steps previously described in
the assessment loop.
To that end, for example, Killion and Bellamy (2000) focused their research on creating the role of data analyst to strengthen school improvement efforts that strategically targeted student learning. This role reflects a need that schools implementing a data-based approach must address: the variability that exists in staff expertise and professional preparation. Clearly, with the focus on using data for school improvement, districts and schools have to realign their resources while continuing operations (Killion & Bellamy). The focus on realigning
resources was consistent with the approach taken in the present study. Blue Dot not only pro-
vided data analysts at the district level for data analysis sessions for administrators
and teachers but also provided professional development for teachers to develop
skills for data analysis and to address the variability in staff expertise and profes-
sional preparation.
Student addition: Communication. The student interview data also revealed an additional step in the process of using data that would complete the process and make it meaningful to students. That additional step was communication: sharing assessment information with the students. For example, one 11th-grade student stated:
I like to get that progress report so I know what I’m doing, how I’m doing.
Even though I’m turning in my assignments, if it’s okay or not, what I can do
to improve the assignment. Stuff like that.
To further corroborate the importance of communication and being informed, another
10th-grade student reported the following:
I ask my teachers for a progress report every week. So, I know how I’m doing,
so if I fall behind I can know what I have to work on. So, I can like yeah . . . I
haven’t taken my CAHSEE yet. I take that in January, I think.
Then there was an 11th-grade student who suggested that ongoing communication
and sharing of assessment information would improve student achievement on a
schoolwide level when he posited the following:
. . . if we want to like improve in score things or any way, then maybe we should get things every month or so, or every week, to see if we actually [improved] even by a little, even by a small percentage. So, that way, like we see, oh, this school has improved, even if [it] was a fraction.
Being informed and being made aware of their academic status was a desired
step in the process of using data that emerged from the student data. For the students,
sharing information based on the data was important. However, the desired goal was
not only to share the information but also to share with them, individually and/or
collectively, the analysis and interpretation of the data. For example, one 10th-grade
student indicated that her teachers had personalized assessment information for her in
the form of a pie chart:
Well, usually, they have a pie chart for me. To see like my balance of how I’m
doing from the beginning of the year. So that I think like that’s helping me to
see how I’m doing, progressing. . . . I have a problem with taking tests, so
they all got together and they make a pie chart of my project, of my progress.
Similarly, for another 11th-grade student, having the assessment data was valuable
information. She also wanted to know what these data meant as they related to her
academic achievement:
I think it would be [important], other than the grades, knowing what I get, like
I said. Like also teachers, when they go to us and they tell us that we need to
improve in this here and things like that. ‘Cause I know that it really matters.
It’s important to do good, to get like good grades.
Having the assessment data and having the teachers interpret that data were
also important to another 11th-grade student who explained his position in this way:
Yeah, because once they have all the data on the wall, not only do they ex-
plain it to us and show us what we need to improve on (‘cause our school is
very like college-bound, test-bound), and so like they have—they like change
their, how do you say, their lesson plans around test taking so they could help
us improve. Like they’ll put up test taking skills, tests, like test questions,
sample questions. And they’ll run it through the class. That’s how they’re
helping us.
This theme of sharing information was unique to only one teacher, the 11th-
grade E/LA teacher, who explained:
The data gets shared with the students; the company that we use is very re-
sourceful in the sense that they provide individual student reports back to the
student. They also send a letter home in Spanish explaining the nature of the
test and their student’s score. And then after we review each individual stu-
dent score, then I ask the students to put a realistic goal that they’d like to
achieve for the next benchmark exam.
In this particular case, not only had the teacher shared the information, but she had also assisted the students through the process of analysis, interpretation, and intervention—a process usually reserved for teachers and administrators. In fact, based on the student interview data, this was a process in which they, the students, wanted to be included.
For the students, sharing information based on their academic performance was a critical part of the process of using data. Although it was not defined as a step in the process in the previous section, it was the most meaningful and relevant part of the process for them. In that same vein, Kinder (2000) acknowledged that distributing the data, sharing it, and talking about it is a critical step in the data-driven decision-making process.
Variability in Perceptions
This section describes the variability in the participants’ perceptions of teach-
ers who used data and the meaning of “use data.” The assumption embedded in the
research question itself was that due to the inherent differences in the roles of the
administrators, teachers, and students, there would be variability in their perceptions.
In fact, according to Ingram et al. (2004), teachers and administrators have a difficult time agreeing on what types of data to use to measure teacher effectiveness as it relates to student achievement. Their findings suggested that educators draw on systematic data as well as anecdotal information, experience, or intuition to make decisions. These findings
were contrary to the findings in the present study. In contrast to the study by Ingram
et al., based on the themes that emerged from the interview data, there was a prepon-
derance of congruence in the stakeholders’ collective perceptions of teachers who
used data and the definition of “use data.”
Lack of variability. When describing teachers who used data, the shared per-
ception held by the stakeholders at Brio Charter High School was that teachers who
used data generally followed a process. This process involved several steps, which
formed a loop—an assessment loop. The loop started with assessment data and ended
with assessment data. In other words, this meant that teachers who used data assessed
student performance, analyzed student achievement, identified weaknesses, inter-
vened to modify instruction, and then assessed student performance again. For exam-
ple, the Algebra I/calculus teacher stated that in his department meetings, the staff came
together and had what he referred to as a “qualitative reflection.” Disappointed with
the quality of the ALS questions, he stated: “We wrote our own [questions] and have
compiled that data and looked at it.” Collaboratively, the Math Department agreed to
teach certain standards and target them on their teacher-made assessment. Then he
reported, “We can classify them [the students] as advanced or proficient and on down
and have a good idea when we look at the cumulative data that how did they do on
this standard and what we need to re-teach,” in order for the students to meet profi-
ciency and be prepared for the next benchmark assessment. To that end, the world
history teacher described her role in the assessment process in the following manner:
I decide on this test or quiz or the rubric, etc. And, I take that information, and
I’m like, “Okay this is what I need to get done.” Then I go backwards and
plan it so that I’m preparing them to be able to take that test. Then I give them
the test. Then I collect the test. I grade the test. I look at the information and
decide which questions are, percentage-wise, you know, by ratio . . . 80% of
my students really did not do well on this portion of the test. That means I
need to re-teach it. Or, not so much test more, but like a quiz or a comprehen-
sion check. So then, I’m taking that information and using [it] to re-teach. Or,
I know that they’ve mastered it.
Then there was the special education teacher who followed a similar assess-
ment process in order to target student weaknesses, intervene in the learning phase,
and consequently improve student achievement. This is how she described her assess-
ment loop:
So, I’m able to, basically, with my exit slips, with our warm-ups, with our
guided practice, in this class, I’m able to collect my own informal assessment
[data]. Are the kids understanding substitution? . . . Which students are get-
ting it? Which students are not? The students that were not getting it that I
feel need to get it over again, I might have them have a separate lesson. For
students who already get it . . . then I might meet with students that do not and
remediate.
It was Schmoker (2003) who supported this idea when he wrote: “Teachers
themselves can easily learn to conduct the analysis that will have the most significant
impact on teaching and achievement” (p. 23). In keeping with that same idea of
analyzing data, identifying weaknesses, and intervening, a 10th-grade student de-
scribed her E/LA teacher’s assessment loop in this way:
We take like weekly tests on the chapters. So, if like all the class doesn’t pass,
then he’ll like, “Oh, I have to focus on this chapter because they got confused
in this chapter.” So, that’s when he would go over it, read it out in class.
Similarly, another 11th-grade student related her perspective of teachers who imple-
ment this assessment loop of data analysis, interpretation, and intervention in the
following way:
I think they like, better, like tests. Homework is just something, like I said
before, it’s just practice. All right, of course, the teacher might get upset if you
don’t do the homework. But, I think for them the test is more important. And
it shows them that you’re either paying attention in class, or maybe they have
to improve in whatever it is they’re teaching, or they have to review that
subject once again.
The administrator data also revealed a shared perception related to teachers
who used data. In fact, it was the assistant principal who reported the following ex-
ample:
Our 9th-grade biology teacher and our 10th-grade marine biology teacher are
effectively using data to track their students. They’re using it on every assess-
ment that the students do, a quiz or a test.
Furthermore, reported the assistant principal, these aforementioned teachers identi-
fied advanced students and made them “super” tutors, who were then given teacher-
like responsibilities to tutor students after school and assess students in designated
classes. To that end, once a student had mastered a given standard with the super
tutor, he or she proceeded to the next assessment level, which was the teacher. Ac-
cording to the assistant principal, in this example
the teacher will be the ultimate assessment of what content is, and then from
there it’s tracked on a larger chart of whether how well they’re doing, whether
they achieve it or not. If they don’t get an advanced score, then they start over
again with that one particular standard.
The principal shared a similar perception related to the assessment loop when he de-
scribed an E/LA teacher’s nascent experience with data analysis:
For example, last year’s 9th-grade E/LA teacher, she wasn’t too pleased about
giving the benchmark [assessment]. But then, she looked at the data and she
made goals with her class—she changed her teaching strategies a little bit
more to make sure that she was addressing the students’ needs. She then, at the end of the year, came back saying, “I like giving the benchmark. I like
having the data. It helped me, you know, organize myself better.”
In sum, although the participants in this study had different roles and, given their roles, one would assume that their perceptions of teachers who used data would differ, that was not the case. On the contrary, the interview data from all three participant groups revealed congruence—a shared perception related to teachers who used data, as described above.
In addition to the shared perception of teachers who used data, the three
groups of participants in the study also had a shared perception related to the defini-
tion of “use data.” Based on the analysis of the interview data, a common perception
emerged in which using data was described as a process. This process included a series of behaviors that appeared to loop from analyzing assessment data, to identifying weaknesses, to intervening with modifications, and then to analyzing assessment data again. The princi-
pal described using data as “looking at, analyzing it, and changing some of their
teaching strategies.” The 10th-grade E/LA teacher described “using the data to point
out the weakest areas and attacking those.” Following that same theme, an 11th-grade
student described the process used by her English teacher in very simple terms: “She
tries to help us, like uses the scores to improve us.”
Although administrators, teachers, and students played different roles in the data-driven decision-making process, this difference was overridden by the perceptions they shared. Based on the interview data, two shared
perceptions emerged from among these three participant groups. The first shared per-
ception they held was that teachers who used data generally followed a process that
involved several steps. This process was defined by a loop—a data loop in which a series of behaviors both started and ended with the analysis of assessment data. These behaviors were those involved in using
data. According to the participants, a teacher who engaged in these behaviors used
data. Based on the interview data, the participants seemed to agree that the behaviors
associated with using data included assessing student performance, analyzing student
achievement, intervening by modifying instruction, and then assessing student perfor-
mance again after the intervention. This shared perception formed the basis for the
participants’ definition of what it meant to “use data.”
The descriptions of these shared perceptions served to negate the assumption inherent in research question 4: “Given that there are inherent differences in their roles in the data-based approach, is there variability in their perceptions? If so, does this variability affect the effectiveness of the process?” Contrary to the assump-
tion that the participants, because of their different roles in the data-based approach,
would have different perceptions, this study demonstrated that they shared similar
perceptions related to teachers who used data and the meaning of “use data.” Because
that was the case at the time of this study, variability did not appear to be a factor that
affected the strength of the process at Brio Charter High School. Hence, there was no
basis for discussion of variability and how it impacted the effectiveness of the data-
driven decision-making process at this school site.
Conclusion
This chapter included the analysis of the data collected through interviews of
administrators, teachers, and students at Brio Charter High School. The data collected
were used to address the research questions that focused on commonalities of stake-
holders with regard to goals and perceptions related to data. To that end, after
collecting and analyzing the data, several themes emerged that were linked to the
research questions. The purpose of this study was to determine whether administra-
tors, teachers, and students at Brio Charter High School shared common goals and
common perceptions of teachers who used data and what it meant to use data. Addi-
tionally, if there was evidence of variability among their goals and perceptions, the
study was to determine whether this variability impacted the effectiveness of the data-
driven decision-making process being implemented at this school site.
With the increasing pressures of legislative mandates from all levels of
government—federal, state, and local—many schools have moved toward a data-
based approach to increase student achievement. Brio Charter High School was one
such high school. Under the direction and guidance of Blue Dot, a not-for-profit
corporation that had cosponsored charter schools in the region, the school had imple-
mented a data-driven decision-making process to improve student performance. The
findings from this study suggested that administrators, teachers, and students shared
two common sets of goals. The first set of goals was related to improving student
achievement, and the second set was related to preparing students for college. There
was one goal unique to the principal that involved postcollegiate
service to the student’s community. In other words, the goal was for the student not
only to graduate from college but also to return to his/her community and become a
change agent who would improve and enhance the community in which he/she grew
up.
The data also revealed similarities among the three groups of participants as
to how to measure the aforementioned shared sets of goals. From the
interview data emerged a common theme that the participants would use assessment
data to judge whether their goals had been met. In addition, the administrator and
student interview data revealed an underlying shared focus on using summative
assessment data to measure the goals. Alternatively, the teacher data revealed a focus
on formative assessment data. That being said, these findings were significant be-
cause they reflected the value that all three participant groups placed not only on
improving student achievement but also on how to measure student achievement to
demonstrate results.
Based on the interview data, the administrators, teachers, and students all
seemed to share a common perception of teachers who used data. The data revealed a
shared perception in which teachers who used data were involved in a process where
student achievement data were analyzed, weaknesses were pinpointed, and instruc-
tional modifications were used to intervene. These behaviors were what differentiated
a teacher who used data from a teacher who did not. Similarly, these behaviors were
what helped to define the term use data. To be specific, it was the way in which teachers interacted with the data that defined the term, not only as an action but also as a
process. Therefore, at Brio Charter High School, administrators, teachers, and stu-
dents believed that to use data meant to use a process that involved analyzing data,
identifying weaknesses, and intervening with modifications.
There was an assumption underlying this study that because administrators,
teachers, and students had different roles in the data-driven decision-making process,
the data collected would reflect some inconsistencies in their responses. On the
contrary, the interview data revealed that in general, these stakeholders shared similar
goals and similar perceptions related to data. To that end, there was no basis in the
data for analyzing variability and, consequently, no way to measure whether
variability impacted the effectiveness of the process.
CHAPTER 5
SUMMARY AND IMPLICATIONS OF FINDINGS
Many of the recent legislative mandates have had a significant impact on the
testing and accountability of student learning. In fact, one of the consequences of this
increased focus on student learning has been the implementation of a data-based
process to improve student achievement. The purpose of this case study was to
examine the perceptions of administrators, teachers, and students about teachers who
were believed to use data to improve student achievement. At a time when schools
are implementing data-based approaches, this study was designed to provide educa-
tors with informative data on variability in the perceptions shared by stakeholders
directly affected by a data-driven decision-making process. To that end, the impact of
school reforms using data-based approaches was assumed to be varied due to the lack
of uniform guidance and direction, as stated in previous chapters.
The challenge faced by educators is not the implementation of a data-based
process but rather the lack of direction on how to implement a process effectively in
order to improve student achievement. The NCLB Act has had a significant impact
on elementary and secondary educational programs (Simpson et al., 2004). However,
mandating continuous academic improvement through increased testing and account-
ability without the appropriate support and guidance for using data to achieve this
goal is a recipe for variability across school districts, across schools, and even among
teachers.
Clearly, without uniformity and consensus on how to implement a data-driven
decision-making process, it stands to reason that the effectiveness of the process to
improve student achievement will vary. Given that variability is the consequence of a
lack of consensus, even among researchers, then many questions arise regarding the
use of data, particularly at a school site. Do administrators, teachers, and students
share the same goals? Are teachers using data to improve student performance? Do
administrators, teachers, and students share a common perception of teachers who
use data? Is there variability in their perceptions, and does this variability affect the
effectiveness of the process? The findings of this case study provided insightful infor-
mation with regard to stakeholders’ perceptions of teachers who used data and the
variability or lack of variability in their perceptions.
This case study included an analysis of stakeholders’ perceptions related to
teacher utilization of data to improve student performance. The method used for data
collection consisted primarily of onsite interviews of administrators, teachers, and
students. One of the major findings resulting from this study was that administrators,
teachers, and students shared common goals and perceptions of teachers who used
data. These administrators, teachers, and students believed that using data involved a
process of analyzing data, identifying weaknesses, and modifying instruction to
intervene. The behaviors involved in this process are what differentiated a teacher
who used data from a teacher who did not.
A major assumption underlying this study was that because administrators,
teachers, and students played different roles in the data-driven decision-
making process, there would be variability in their perceptions of teachers who used
data. Consequently, this variability would have an impact on the effectiveness of the
process. Contrary to this assumption, the data collected revealed that in general, the
stakeholders in this case study shared similar goals and perceptions related to data.
Thus, there was no basis in the data for analyzing variability among the stakeholders’
perceptions and, subsequently, no way to measure whether variability would have
impacted the effectiveness of the process. Another possible assumption, not consid-
ered in the study design but suggested by the findings, is that alignment of percep-
tions can result from the common, routine interactions of the people working together
in the same building (school). Individual differences can be mitigated by common
experiences and sustained interaction. This (untested) assumption may be especially
operative in a high-performing school such as Brio Charter High School.
In the literature, although there was no consensus among researchers regard-
ing what practices to implement at school sites to improve student achievement, there
were some best practices identified by Williams and Kirst (2006) that have been
widely accepted and supported by subsequent studies. These practices include “priori-
tizing student achievement; implementing a coherent, standards-based curriculum;
analyzing student assessment from multiple sources; and insuring availability of
instructional resources” (Williams & Kirst, p. 8). This research was consistent with
the findings in the present study related to school practices. Based on the data, Brio
Charter High School teachers and administrators shared a common set of goals
related to student achievement. Teachers implemented a standards-based curriculum,
as evidenced by the science teachers and their elaborate tutorial system based on
standards. Administrators provided teachers with multiple assessment data, such as
benchmark assessment results, CST scores, and CAHSEE results. Furthermore, Blue
Dot provided the fiscal resources for schools, such as Brio Charter High School, to
engage in clearly articulated procedures and guidelines that supported effective
implementation of a data-based approach to improving student learning.
Implications for Future Research
The use of data is central to the accountability and testing provisions stipu-
lated in NCLB. Academic targets have to be measurable and quantifiable. To that
end, data analysis and the ability of school systems to incorporate a data-based ap-
proach to improve academic achievement will continue to be a focus for the educa-
tion community. Thornton and Perreault (2002) suggested that it is important for
principals using a data-based approach not only to nurture a schoolwide culture of
standards to drive instruction but also to keep in mind that effective teachers are the
linchpin of student success. While the present study has provided valuable insight
into the perceptions of teachers who used data to improve student learning, future
research is needed to develop consistency and clarity of purpose in the effective
implementation of a data-based process that will improve student achievement. Two
areas that merit additional research related to the implementation of a data-based
process based on this study are the following:
1. Future research should focus on analyzing the procedures and guidelines
of schools that are using a data-based approach and demonstrating continuous im-
provement in student academic achievement. In this case study, the administration in
conjunction with Blue Dot had developed these procedures and guidelines for the
teachers. Future research in this particular area would provide valuable insight as to
how to develop consistency and clarity of purpose in the effective implementation of
a data-based process.
2. Additional future research should focus on analyzing the type of profes-
sional development being provided for teachers who are using a data-based approach
and demonstrating continuous improvement in student academic achievement. In this
particular case study, Blue Dot provided the training in data analysis and facilitated
the collaboration process. Future research in professional development could provide valuable information about how training in data analysis and data interpretation leads to instructional modifications that improve learning through the effective implementation of a data-based process.
Implications for Policy and Practice
Based upon the research and the findings of this case study, school systems
that are implementing a data-based approach to improve student achievement or are
moving in that direction in the near future should consider the following two recom-
mendations:
Student Representation
Administrators, teachers, and students in this study shared a common set of
goals related to student achievement. They also shared a common perception of
teachers who used data to effectively improve student learning. The one step that
could be refined in the data analysis process was student representation. Students in
this study expressed a desire to have teachers share assessment results more fre-
quently and help them analyze and interpret these results. The establishment of a
student academic leadership team could provide students with a forum for discussing
the data-based approach implemented at their school site and expressing the impact
that it has on their academic well-being, thus developing a sense of representation
and ownership among the student body in the process. This team could also provide
oversight of the process on an ongoing basis by collecting feedback from students
and making recommendations to the administration on how to improve communica-
tion and dissemination of information related to assessment data and progress moni-
toring.
Professional Development
Clearly, the recommendation for ongoing professional development related to
data analysis should be central to any school district implementing or interested in
implementing a data-driven decision-making process to improve student achieve-
ment. In this case, there was variability related to staff expertise and professional
preparation. During the initial phase of implementation, teachers as well as
administrators were trained in data analysis and interpretation. The next phase of pro-
fessional development included how to identify and address areas of concern. Specifically, teachers in this phase learned how to modify their instruction in order to meet the needs of the students. The last phase implemented in this study, the collaboration phase, was one in which the teachers had explicitly expressed a desire to have more time dedicated specifically to developing, critiquing, testing, and modifying instructional strategies based on assessment results. Based on the data from this
study, these last two phases should be ongoing and integral to effectively implement a
data-based process.
Conclusion
For many schools, the implementation of a data-based approach for improv-
ing student achievement is one way of addressing the national focus on testing and
accountability. Clearly, the effectiveness of this process lies with the teacher, who
plays an integral role in the use of data. This case study in particular has provided the
educational community with timely and insightful information regarding the percep-
tions of teachers who used data at a high school by highlighting the consistency that
existed among all three stakeholders when it came to improving student achievement.
However, much more research is necessary to examine the procedures and guidelines implemented in data-based processes where student achievement improves, as evidenced by rising achievement scores. The benefits of implementing a data-
based process are evident as administrators, teachers, and students use data not only
to improve student achievement but also to intervene instructionally. The use of data-
based approaches for school improvement in the future will be predicated on the
degree to which school systems articulate procedures and guidelines to administra-
tors, teachers, and students to produce measurable and meaningful results.
REFERENCES
Albrecht, S., & Joles, C. (2003). Accountability and access to opportunity: Mutually
exclusive tenets under a high-stakes testing mandate. Preventing School Failure,
47(2), 86-91.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change.
Psychological Review, 84, 191-215.
Bernhardt, V. L. (1998). Data analysis for comprehensive school improvement.
Larchmont, NY: Eye on Education.
Booher-Jennings, J. (2006). Rationing education in an era of accountability: The push
for accountability was originally cast as a way to ensure that schools would leave
no child behind. Phi Delta Kappan, 87, 756-767.
Brown, J. S., & Duguid, P. (2000). The social life of information. Boston: Harvard Business School Press.
California Department of Education. (2007). Student data files. Sacramento, CA:
Author. Retrieved July 21, 2008, from http://www.cde.ca.gov/ds/sd/cb/
studentdatafiles.asp
California Department of Education. (2008). DataQuest. Sacramento, CA: Author. Re-
trieved May 12, 2008, from http://dq.cde.ca.gov/dataquest/
Canada, B. O. (2001). Welcome more data, but apply it well. The School Administra-
tor, 58(4), 44.
Center on Education Policy. (2003). From the capitol to the classroom: State and
federal efforts to implement the No Child Left Behind Act. Washington, DC:
Author.
Collins, J. (2001, October). Good to great. Fast Company, 51, 90-104.
Cooley, V. E., Shen, J., Miller, D. S., Winograd, P. N., Rainey, J. M., Yuan, W., et al.
(2006). Data-based decision-making: Three state-level educational leadership
initiatives. Educational Horizons, 85(1), 57-64.
Crawford, J. (2004, September). No Child Left Behind: Misguided approach to
school accountability for English language learners. Paper presented at a forum
sponsored by the Center on Education Policy, Washington, DC.
Creighton, T. B. (2001). Data analysis in administrators’ hands: An oxymoron? The
School Administrator, 58(4), 6-11.
Darling-Hammond, L. (1997). The right to learn: A blueprint for creating schools that work. San Francisco: Jossey-Bass.
Darling-Hammond, L. (2004). Standards, accountability, and school reform. Teach-
ers College Record, 106, 1047-1085.
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-
performing school systems use data to improve instruction for elementary stu-
dents. Los Angeles: University of Southern California, Center on Governance.
DuFour, R. (2002). The learning-centered principal. Educational Leadership, 59(8),
12-15.
Fullan, M. (2000). The three stories of education reform. Phi Delta Kappan, 81, 581-
584.
Gottlieb, M. (2003). Large-scale assessment of English language learners. Alexan-
dria, VA: Teachers of English to Speakers of Other Languages.
Hallinger, P., Murphy, J., & Mesa, R. P. (1999, November). School district policies
and practices which promote school effectiveness. The Effective School Report,
n.v., 3-8.
Hardman, M., & Mulder, M. (2003, November). Federal education reform: Critical
issues in public education and the impact of students with disabilities. Paper pre-
sented at the Eagle Summit on Critical Issues on the Future of Personnel
Preparation in Emotional/Behavioral Disorders, University of North Texas,
Institute on Behavioral and Learning Differences, Denton, TX.
Heritage, M., & Chen, E. (2005). Why data skills matter in school improvement. Phi
Delta Kappan, 86, 707-710.
Hoefling, T. (2001). Working virtually: Managing people for successful virtual teams
and organizations. Sterling, VA: Stylus.
Holcomb, E. L. (1999). Getting excited about data. Thousand Oaks, CA: Corwin
Press.
Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and
teacher decision making: Barriers to the use of data to improve practice. Teach-
ers College Record, 106, 1258-1287.
Jennings, J., & Rentner, D. (2006). The big effects of the No Child Left Behind Act
on public schools. Phi Delta Kappan, 88, 110-113.
Johnson, J. (1997). Data-driven school improvement (ERIC Digest No. 109). Eugene,
OR: ERIC National Clearinghouse on Educational Management. (ERIC Docu-
ment Reproduction Service No. ED401595)
Jones, B., & Egley, R. (2006). Thinking about accountability. Phi Delta Kappan, 87,
767.
Killion, J., & Bellamy, G. (2000). On the job. Journal of Staff Development, 21(1),
27-31.
Kinder, A. (2000). D³M: Helping schools distill data. In M. Kroeger, S. Blaser, L. Raach, & C. Cooper (Eds.), How schools use data to help students learn (pp. 5-8). Naperville, IL: North Central Regional Educational Laboratory.
Lachat, M. A., Williams, M., & Smith, S. C. (2006, October). Making sense of all
your data. Principal Leadership, 7, 16-21.
Lewis, A. (2005). Washington commentary: States feel the crunch of NCLB. Phi
Delta Kappan, 86, 339.
Love, N. (2004). Taking data to new depths. National Staff Development Council,
25(4), 22-26.
Mason, S. (2002, April). Turning data into knowledge: Lessons from six Milwaukee
public schools. Paper presented at the annual conference of the American Educa-
tional Research Association, New Orleans, LA.
Mathis, W. (2003). No Child Left Behind: Costs and benefits. Phi Delta Kappan, 84,
679-689.
McLean, J. E. (1995). Improving education through action research: A guide for administrators and teachers. Thousand Oaks, CA: Corwin Press.
Merriam, S. B. (1998). Qualitative research and case study applications in educa-
tion. San Francisco: Jossey-Bass.
Mintrop, H. (2004). High-stakes accountability, state oversight, and educational
equity. Teachers College Record, 106, 2128-2145.
Newmann, F., King, B., & Rigdon, M. (1997). Accountability and school performance: Implications for restructuring schools. Harvard Educational Review, 67, 41-74.
No Child Left Behind Act of 2001, 20 U.S.C. 70 § 6301 et seq.
Odland, J. (2006a). Educators left behind. Childhood Education, 83(2), 98-99.
Odland, J. (2006b). NCLB: Time to reevaluate its effectiveness. Childhood Educa-
tion, 83(1), 32-33.
Olson, L. (2007, February 14). Data-wise school systems seen as sharing key traits.
Education Week, 26(23), 5.
Parsons, B. (2003). A tale of two schools’ data. Educational Leadership, 60, 66-68.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed. rev.).
Thousand Oaks, CA: Sage.
Raach, L. (2000). Measuring student success. In M. Kroeger, S. Blaser, C. Cooper, &
A. Kinder (Eds.), How schools use data to help students learn (pp. 13-15).
Naperville, IL: North Central Regional Educational Laboratory.
Reeves, D. (2000). Accountability in action. Denver, CO: Advanced Learning Press.
Reeves, P. L., & Burt, W. L. (2006). Challenges in data-based decision-making: Voices from principals. Educational Horizons, 85(1), 65-71.
Richardson, J. (2000). Learning benefits everyone. Journal of Staff Development,
21(1), 54-59.
Santo, S. A. (2005). Knowledge management: An imperative for schools of education. TechTrends, 49(6), 42-49.
Schaffer, R. H. (1988). The breakthrough strategy: Using short-term successes to
build the high-performing organization. New York: Harper Business.
Schmoker, M. (1999). Results: The key to continuous school improvement (2nd ed.).
Alexandria, VA: ASCD.
Schmoker, M. (2003). First things first: Demystifying data analysis. Educational
Leadership, 60, 22-24.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organiza-
tion. New York: Doubleday.
Simpson, R., LaCava, P., & Graner, P. (2004). The No Child Left Behind Act: Chal-
lenges and implications for educators. Intervention in School and Clinic, 40(2),
67-76.
Sparks, D. (2000). Results are the reasons. Journal of Staff Development, 21(3), 1-4.
Streifer, P. (2000). School improvement: Finding the time. NASSP Bulletin, 84(612),
66-71.
Thornton, B., & Perreault, G. (2002). Becoming a data-based leader: An introduction.
NASSP Bulletin, 86(630), 86-96.
U.S. Department of Education, Education Commission of the States. (2002). Data-
driven decision making (No Child Left Behind Issue Brief Report No. GP-02-
10). Washington, DC: U.S. Government Printing Office.
U.S. Department of Education, Office of Educational Research and Improvement,
North Central Regional Educational Laboratory. (2002). Data exploration: A
journey to better teaching and learning. Washington, DC: U.S. Government
Printing Office.
Wade, H. (2001). Data inquiry and analysis for educational reform (ERIC Digest No. 153). Eugene, OR: ERIC National Clearinghouse on Educational Management. (ERIC Document Reproduction Service No. ED461911)
Wiggins, G. (1994). None of the above. The Executive Educator, 16(7), 14-18.
Williams, T., & Kirst, M. (2006). School practices that matter. Leadership, 35(4), 8-
10.
APPENDIX A
INTERVIEW QUESTIONS FOR ADMINISTRATORS
Questions Related to Setting and Measuring Goals
1. What goals has your school set for the next 3 years?
2. What data will help you judge whether your school is meeting its goals?
Questions Related to Data Uses
3. How are you using data currently to improve student achievement?
4. How are teachers using data currently to improve student achievement?
5. What additional data should be collected and why?
6. In what ways are teachers involved in data collection and analysis?
7. What different types of data should teachers use when assessing student perfor-
mance?
8. How can teachers use data analysis effectively to target student achievement
gaps?
Questions Related to Data Users
9. Do you have teachers who are using data effectively to improve student
achievement? How do you know that they are being effective?
10. Who are the teachers who are using data currently to improve student achieve-
ment?
Prepared by Lorena Frances Rayor.
APPENDIX B
INTERVIEW QUESTIONS FOR TEACHERS
Questions Related to Setting and Measuring Goals
1. What goals has your department set for the next 3 years?
2. What data will help you judge whether your students are meeting these goals?
Questions Related to Data Uses
3. How are you using data currently to improve student achievement?
4. What additional data would you want to have collected and why?
5. In what ways are you involved in data collection and analysis?
6. What different types of data do you use when assessing student performance?
7. How can you use data analysis effectively to target student achievement gaps?
8. What data matter most to you and why?
9. What role do you play in collecting, processing, interpreting, and using data?
Questions Related to Data Users
10. Do any of your colleagues use data effectively to improve student achievement?
If so, how do you know that they are being effective?
11. Who are the teachers who are using data currently to improve student achieve-
ment at your school?
Prepared by Lorena Frances Rayor.
APPENDIX C
INTERVIEW QUESTIONS FOR STUDENTS
Questions Related to Setting and Measuring Goals
1. What goals have you set to graduate from high school?
2. What data (test scores, grades, CAHSEE results, etc.) will help you judge whether
you are meeting your goals?
Questions Related to Data Uses
3. How are you using data currently to maintain or improve your grade point average
(GPA)?
4. Are your teachers using data to help you meet your goals? If so, what types of data
are they using to assess your performance? (If so, what kinds of information are
they using to determine whether you have met the content standards?)
5. What additional data should be collected and why?
6. What data matter most to you? What data matter most to your teacher?
7. Who are the teachers at your school who are using data to help students?
Prepared by Lorena Frances Rayor.
APPENDIX D
BRIO CHARTER HIGH SCHOOL: GRADUATION AND COLLEGE ENTRANCE REQUIREMENTS

Requirements are listed as current BCHS graduation requirements and proposed graduation requirements; a single entry indicates that the proposed requirement is unchanged from the current one.

A. History/Social Science: 3 years (30 credits); 1 year World History; 1 year U.S. History; ½ year Civics; ½ year Economics
B. English: 4 years (40 credits)
C. Mathematics: 3 years (30 credits); must pass Algebra 2; must earn a C or higher in order to go on to the next level
D. Science: 2 years (20 credits). Current: 1 year physical science; 1 year life science. Proposed: must pass biology; must pass chemistry or physics
E. Foreign Language: 3 years (30 credits)
F. Visual/Performing Arts: 1 year (10 credits)
G. Electives: Current: 6 classes (60 credits). Proposed: 6 classes; must complete 10 credits of additional math or science course; must complete 10 credits of additional history or foreign language course; must complete 10 credits of additional VPA course; must complete 30 credits in any other elective course
P.E./Health: 1 year (10 credits); 5 credits used toward elective credit
College Readiness: 1 year (10 credits)
Grades: Students must earn a D or higher in all required courses to receive credit.
Exams: Calif. High School Exit Exam: English and Math
GPA: (no entry)
Retaking Courses: F = required courses must be repeated
College Applications: Must apply to 3 colleges minimum
Scholarships: Must apply for 3 scholarships minimum
Community Service: Student = 40 hours; Parents = 120 hours

Source: Principal, Brio Charter High School. BCHS = Brio Charter High School; P.E. = physical education; GPA = grade point average; VPA = visual/performing arts.
APPENDIX E
DEPARTMENT CHAIR MEETING AGENDA
Brio Charter High School
Department Chair Meeting Agenda
Wednesday, October 10, 2007
I. Principal’s SMART Goals
II. Department Goals
1. Academic rigor
2. Department choose
III. Day to Observe Your Department
IV. Task for the Year
1. Come up with a process or pathway in which we align the school to the
college standards
Department Chairs: Chairman A, Chairman B, Chairman C, Chairman D, Chairman E, Chairman F, Chairman G, Chairman H, Chairman I, Chairman J
Counselors: Counselor #1, Counselor #2
Prepared by school principal. Modified by researcher to keep names confidential. SMART = Specific, Measurable, Attainable, Realistic, and Timely.
APPENDIX F
SCHOOL SITE COUNCIL MEETING AGENDA
Brio Charter High School
School Site Council Meeting Agenda
Wednesday, October 24, 2007
I. Welcome
II. Principal’s Report
1. Budget
a. ADA month of October 95.6%
b. Nell Soto Grant—home visits
2. Calendar
a. No school 10/29
b. Grades will be mailed 10/31
c. Learning walks on 10/31
d. Parent conferences 11/5 to 11/9
e. Northern CA college trips 11/5 to 11/9
f. Lockdown drill—TBA
3. Testing
4. Discipline
a. Neighboring school district Director of Pupil Services
b. Student to another Davis School
c. Video cameras
5. Chat with the principal
a. 11th grade 10/26
b. 12th grade 11/2
III. Requirements to graduate from Brio
IV. Student Report
V. Special Ed Report
Prepared by principal. Modified by researcher to keep names confidential. ADA = average daily attendance; BCHS = Brio Charter High School; TBA = to be announced.
VI. Next meeting 11/28/07
Attendees: Department Chairs (Chairman A, Chairman B, Chairman C, Chairman D, Chairman E); 11th/12th Counselors; Principal; Assistant Principal; Student A; Student B; Parent 1; Parent 2; Parent 3
APPENDIX G
GRADE TEAM MEETING AGENDA
Brio Charter High School
10th Grade Team Meeting Agenda
November 2, 2007
8:00 a.m.
Attendance:
___ Teacher A    ___ Teacher E    ___ Counselor 1
___ Teacher B    ___ Teacher F    ___ Counselor 2
___ Teacher C    ___ Teacher G    ___ Asst. Principal
___ Teacher D    ___ Teacher H    ___ Principal
Time Topic Notes
5 min Announcements:
• 11/5 AMU Meeting
• 11/5-9 Parent Conferences
• 11/12 Veterans Day (No School)
• 11/21 Thanksgiving Luncheon
• 11/30 “So you think you got talent?”
10 min Action Items:
• Parent Conference Schedule
30 min Discussion Items:
• Parent Conference Plan of Action:
** What message are we trying to get across?
__ Conference Matrix
** Which meetings will our mixed-grade members be at?
• Semester 1 incentives? (Any incentives for ALS benchmarks?)
Prepared by school principal. Modified to keep names confidential. AMU = Asociacion de Maestros Unidos; ALS = Academic Learning System; MAST = math and science teachers.
Due by next mtg.
HW: G True Backwards design/MAST planning: what are you doing for your Final & are there any extracurricular things you're doing to prep the kids?
RESPONSIBILITIES:
Abstract
The current situation in U.S. K-12 education is such that there is no clarity, consistency, or consensus in the manner in which schools go about implementing a data-driven decision-making process. Simply stated, there is no clearly articulated direction provided to school systems regarding policies and procedures for data analysis. School systems are left to their own devices to develop, design, and implement their own data-based approach. Furthermore, standards-based testing requirements and accountability systems set in place by multiple federal, state, and local legislative mandates require that school administrators and teachers assume the role of data analysts. Little consideration has been given to variability in teacher or administrator preparation or to consensus at all educational levels on what matters most: student achievement or student learning (Wade, 2001). Equally important to improving student achievement is how to go about implementing a data-based process for optimal effectiveness. Implementing a data-based process effectively is the key to improving student achievement. To that end, this study focused on the possibility that stakeholders at the school site level, such as administrators, teachers, and students, might have varying perceptions of teachers who used data to improve student achievement and that this variability might have an impact on the effectiveness of implementing a data-driven decision-making process.