HOLDING ON AND HOLDING OUT: WHY SOME TEACHERS RESIST THE
MOVE TOWARD DATA-DRIVEN DECISION MAKING
by
Brian Steven Markarian
_________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2009
Copyright 2009 Brian Steven Markarian
TABLE OF CONTENTS

LIST OF TABLES
ABSTRACT
CHAPTER ONE
    Overview of the Study
        Introduction
        Background of the Problem
        Research Questions
        Significance of the Study
        Limitations of the Study
CHAPTER TWO
    Literature Review
        Introduction
        Educational Reform Context
        Defining Data-Driven Decision Making
        Research on Resistance to Change
        Internal Barriers to Data Use
            Assignment of Value/Questions of Validity
            Mistrust of Data/Political Use
            Teacher Efficacy
            Perceived Loss of Power
        External Barriers to Data Use
            Structural Supports for Data Use
                Time
                Technology/Technical Issues
                Professional Development
            Culture Around Data Use
                Leadership Support for Data Use
                School Culture
                Norms of Collaboration
        Summary of the Literature Review
CHAPTER THREE
    Methodology
        Introduction
        Sample and Population
        Overview of Districts and Schools
        Data Collection Procedures
        Data Analysis Procedures
        Ethical Considerations
        Limitations of the Study
        Researcher’s Subjectivity
        Summary
CHAPTER FOUR
    Data Analysis and Interpretation of Findings
        Introduction
        Research Questions and Thematic Underpinnings
        School Settings
        Issues Related to External Factors
            Lack of Time
            Use of Collaboration Time
            Technology Issues
            Disparities in Training
            Lack of Focus/Support from Site Leadership
        Issues Related to Internal Factors
            Resentment of Emphasis on Student Assessment Data
            Insecurity with the Sharing of Data
            Preference for Informal Data Use
            Validity Issues
        Effect of Teacher Characteristics
            Grade Level
            Gender
            Age
            Years of Service
            Involvement in the Process
        Conclusion
CHAPTER FIVE
    Summary and Implications of Findings
        Introduction
        Connections to Prior Research
        External Factors
            Quantity and Quality of Time
            Technology
            Professional Development
            Leadership Support
        Internal Factors
            The Role of Values
            Perceptions of Validity
            Mistrust of Data Use
            Self-Efficacy
        Summary
        Implications for Future Research
        Implications for Policy and Practice
        Conclusion
REFERENCES
APPENDICES
    Appendix A: Teacher Interview Protocol
    Appendix B: Principal Interview Protocol
    Appendix C: Code List
LIST OF TABLES

Table 1: Teacher Distribution – Mesa Elementary School
Table 2: Student Demographics – Mesa Elementary School
Table 3: Accountability Profile – Mesa Elementary School
Table 4: Teacher Distribution – Orchard Elementary School
Table 5: Student Demographics – Orchard Elementary School
Table 6: Accountability Profile – Orchard Elementary School
Table 7: Characteristics of Interviewed Teachers
ABSTRACT
As a result of its identification as a promising practice in numerous educational
studies and its prominence in recent legislative mandates, data-driven decision making is
a reform being implemented in schools and districts across the country. As with any
reform, the successful implementation of data-driven decision making often requires
overcoming some degree of resistance. This study examined the nature of resistance to
data-driven decision making by some teachers, in hopes of discovering patterns or themes
that may guide administrators in designing implementation strategies. Data for this study
were gathered through interviews with teachers at two elementary schools where the
teaching staff have exhibited varying degrees of acceptance of or resistance to
data-driven decision making. Various external and internal factors were identified as
having an effect on teachers’ receptiveness to the process of data-driven decision
making, and possible solutions for addressing these issues are outlined.
CHAPTER ONE
Overview of the Study
“What separates successful schools from those that will not be successful in their reform
efforts is the use of one, often neglected, essential element – data” (Bernhardt, 2004, p.1).
Introduction
Background of the Problem
While the notion of using data to inform school practices may seem relatively
new, researchers have been publishing literature on this topic for over thirty years
(Wayman, 2005). Despite the fact that data-driven decision making has existed in one
form or another for decades, this practice has received increased attention over the last
few years due to the prominent role that achievement test data play in current federal and
state accountability policies (Marsh, Payne, & Hamilton, 2006). Within the context of
the most recent push to reform the educational process, it is clear that “data have become
the vehicle of choice for ensuring accountability” (Earl & Katz, 2002, p.2).
Although some schools and districts have a long history of engaging in
data-driven decision making, this practice was not the norm prior to the introduction of state
and federal accountability mandates. This is evident in Earl and Katz’s (2002)
description of the practice of data use, or absence thereof, preceding government reform
efforts:
There was a time in education when decisions were based upon the best
judgments of the people in authority. It was assumed that school and district
leaders, as professionals in the field, had both the responsibility and the right to
make decisions about students, schools, and even about education more broadly.
They did so using a combination of intimate and privileged knowledge of the
context, political savvy, professional training, and logical analysis. Data played
almost no part in decisions. (p.2)
Slavin (2002) echoes this sentiment when he depicts changes in educational practice as
being influenced more by “swings in taste” than by data or rigorous evaluation.
The 2001 reauthorization of the Elementary and Secondary Education Act
(ESEA), known as the No Child Left Behind (NCLB) Act, resulted in a transformation of
the role of assessment data in the public school setting (Datnow, Park, & Wohlstetter,
2007; Earl & Katz, 2002; Supovitz & Klein, 2003). In addition to requiring states to
develop standards-based tests in English-language arts and mathematics, this legislation
also mandated that test results be disaggregated by student subgroups and that
information about the performance of students on these tests be included in annual report
cards developed at the state, district, and school levels (Learning Point Associates, 2006).
The results of these state assessments are used to determine whether or not schools and
districts have met the Adequate Yearly Progress benchmarks set by NCLB. Schools and
districts that fail to do so are subject to various layers of sanctions imposed by the state.
In announcing the signing of NCLB by President Bush in January of 2002, the
White House issued a press release stating the following:
Under the new law, states and school districts will develop strong systems of
accountability based upon student performance. The new law also gives those
states and school districts increased local control and flexibility, removing federal
red tape and bureaucracy and putting decision-making in the hands of those at the
local and state levels. Parents of children from disadvantaged backgrounds will
have options under the new law to participate in public school choice programs or
obtain supplemental services such as tutoring. And teachers around the country
will be encouraged to use teaching methods that are based upon scientific research
demonstrating that they work. (¶ 5)
This statement highlights the prominent role that student assessment data would play in
the context of this new accountability system. Furthermore, it suggests that the intent of
this legislation was not only to reform practice at the district and site level, but also to
permeate the walls of the classroom, affecting the way in which teachers instruct
students. Although the legislation calls for a focus on instructional strategies that are
backed by scientific research, Wayman (2005) notes that even with its increased focus on
data and student assessment, NCLB does not emphasize direct teacher involvement in
data-driven decision making. The mandates of data use apply mostly to state, district,
and site administrators. As Marsh et al. (2006) note, these state agencies and school
districts have collected large amounts of student data for years. The challenge has been
getting teachers to use these data to improve educational practices. While data-driven
decision making may be taking place at the state and district levels, it has not been
thoroughly implemented at the level at which it might have the most immediate and
effective impact on students: the classroom level. If current accountability policies and
their focus on data use are to be successful, they must result in changes to instructional
practice and assist in gauging the effectiveness of such changes (Ingram, Seashore Louis,
& Schroeder, 2004).
Although accountability policies might not explicitly call for teachers to employ
data-driven decision making, some researchers have highlighted the link between
teachers’ use of assessment data and increased standards of achievement (Black &
Wiliam, 1998). Such research has led educational leaders to strive to develop school
cultures that support data-driven decision making and a cycle of continuous reflection.
Heritage and Yeagley (2005) define a data culture as one in which “teachers and
administrators work together in a community of practice – trusting data, focusing on
results, and engaging in using data for systematic reflection and planning” (pp. 134-135).
Despite the existence of research that promotes the use of student performance
data as a means to improve the teaching and learning process (Datnow et al., 2007;
Supovitz & Klein, 2003), efforts to involve teachers in data-driven decision making are
often met with resistance. Such resistance may be the result of internal barriers, those
resulting from the teachers’ own feelings about data use or student assessment in general.
Researchers have identified factors such as teachers’ mistrust of the quality of student
data and a lack of motivation as barriers to data use (Marsh et al., 2006). Others
have cited a lack of value assigned to standardized achievement tests, a mistrust of data
use, and teacher efficacy (Ingram et al., 2004), as well as differing philosophical views
(Corcoran, Fuhrman, & Belcher, 2001), as being obstacles to the implementation of
data-driven decision making. It has also been found that many teachers lack interest in
examining data that they view as satisfying accountability mandates more than benefiting
classroom practice (Young, 2006). Finally, individual factors, such as a teacher’s years
of service, have also been shown to affect the level of resistance to data use (Winkler,
2002).
In addition to these internal barriers, previous research has also identified various
external barriers that might complicate or prevent the implementation of data-driven
decision making in the classroom. One such barrier is the absence of a user-friendly data
system (Wayman, 2005; Marsh et al., 2006). A lack of adequate time to collect and
analyze data is another obstacle that has been cited by researchers (Ingram et al., 2004;
Marsh et al., 2006). The challenge of finding enough time to analyze data is often
compounded by another barrier that Ingram et al. (2004) refer to as “data overload,” in
which schools collect so much data that teachers cannot use them effectively. School
culture and leadership may influence teachers’ perceptions of the importance of data use
(Young, 2006). A lack of support staff and professional development in the area of data
use have also been identified as being barriers to the implementation of data-driven
decision making (Marsh et al., 2006; Wayman, 2005; Lachat & Smith, 2005).
In examining both the internal and external barriers to the implementation of
data-driven decision making, it is helpful to consider this issue in the larger context of
resistance to change. It is plausible that even after addressing these barriers to data use,
some teachers may still resist this movement as part of a general aversion to any type of
change in the work environment. In studying resistance to change across a variety of
professions and fields, researchers have gained insights into the nature of such resistance
and have developed strategies to address it (Hollander & Einwohner, 2004; Waddell &
Sohal, 1998). Applying these concepts and strategies to the issue of resistance to
data-driven decision making amongst teachers may help us to better understand this resistance.
Clark and Estes (2002) also provide a useful framework for examining the root
causes of resistance to data-driven decision making amongst teachers. Their gap analysis
model involves analyzing issues to identify performance gaps and possible improvement
measures. By using gap analysis to determine whether the factors contributing to data
resistance amongst teachers are the result of gaps in knowledge or skills, issues of
motivation, or if they are related to larger organizational problems, it may be possible to
design measures to significantly reduce or even eliminate them.
While a handful of studies have focused on the barriers, both real and perceived,
that prevent some teachers from embracing data-driven decision making, more research is
needed in this area. If all students are to demonstrate proficiency on state assessments in
both mathematics and English-language arts by 2014, as is mandated by NCLB, then
teachers everywhere will likely have to begin to use assessment data to improve
instructional practices. In order to build a culture of data-driven decision making and
continuous improvement, educational leaders must be provided with research on effective
ways of working with teachers who are resistant to data use (Datnow et al., 2007).
Identifying barriers to data use amongst teachers and developing a profile of a “data
resister” are initial steps in an attempt to better understand the root of this resistance and
to develop strategies to increase teacher receptiveness to data-driven decision making.
Research Questions
The proposed study aims to identify barriers, either real or perceived, that prevent
teachers from applying data-driven decision making to their daily practice. Additionally,
the proposed study seeks to identify common characteristics amongst teachers who are
resistant to analyzing data or modifying instructional practices in light of data, in hopes
of identifying avenues to engage them in data-driven decision making.
Specifically, this study will address the following overarching question:
Why do some teachers resist using data-driven decision making?
The following sub-questions will also be examined through the course of this study:
a) How do various external factors, such as teacher support, technology, and site
leadership, affect teachers’ acceptance of or resistance to data-driven decision
making?
b) How do internal or motivational factors, such as the perceived value of data and
teacher efficacy, affect teachers’ acceptance of or resistance to data-driven
decision making?
c) What role, if any, do the background characteristics of teachers play in their
willingness to engage in data-driven decision making?
These questions will be addressed using qualitative methods, including interviews of
teachers at two schools.
Significance of the Study
Much of the research that has been conducted on data-driven decision making has
focused on schools where the staff has embraced this process. Less literature
focuses on the barriers that may be encountered when introducing
this process and the various reasons that teachers may be resistant to it. While
researchers have examined the implementation of data-driven decision making at the
district or site level, less attention has been paid to the role of individual teachers in
making this process a successful one. There is also a lack of research that examines the
role that personal characteristics of teachers play in determining their acceptance of or
resistance to data-driven decision making. This study proposes to add to the growing
body of literature on the barriers to implementing data-driven decision making and
attributes of teachers who are resistant to adopting this practice. This study should prove
to be useful to administrators who are seeking to implement the practice of data-driven
decision making at their school sites and to those who have been struggling to make this
process a successful one. The study should also inform administrators and coaches about
the possible pitfalls that they may encounter when implementing data-driven decision
making, as well as assist them in developing tailored support plans based upon the
varying levels of resistance that exist amongst their teachers.
Limitations of the Study
This investigation will be conducted at two schools, each operating within a
different school district. One limitation of the study is its small sample size; different
results might be obtained if a greater number of schools were examined. Another
limitation is the small geographic area to which the sample is confined. Selecting
subjects from schools outside of California may alter the results obtained. With regard to
these limitations, it is hoped that more comprehensive research will be conducted in the
area of teacher resistance to data-driven decision making. A further limitation is the fact
that only schools that claim to have established cultures of data use, in addition to
maintaining a data management system, were considered for this study. Due to this
selection criterion, the results of this study may not be generalizable to schools that do
not meet these conditions.
CHAPTER TWO
Literature Review
Introduction
With the ever-increasing demand for accountability in public education, many
districts are now closely analyzing data as a means to improve instructional practices and
to increase student achievement (Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006).
Accountability mandates, such as NCLB, require districts to rethink the ways in which
they collect and analyze data (Learning Point Associates, 2006). Although many districts
and schools have implemented processes that promote data-driven decision making, the
major components of these processes are often ignored by teachers in their daily practice
(Bambrick-Santoyo, 2007). In fact, some researchers have concluded that the application
of data to classroom practice is virtually nonexistent in most schools
today (Herman & Gribbons, 2001). Despite increased efforts to make data use an integral
part of the instructional process, districts continue to encounter numerous barriers in this
area (Wayman, 2005).
In light of the mounting pressures associated with high-stakes testing and school
accountability, developing strategies to promote data-driven decision making amongst
teachers will be a vital step for schools as they work to avoid the sanctions established by
NCLB and the various state accountability systems. The purpose of the review of
literature that follows is to examine resistance to reform strategies such as data-driven
decision making and to identify possible barriers to data use by teachers. More
specifically, the following sections will include:
1. A review of theories related to resistance to change, both in educational
settings and private industry.
2. An examination of the internal barriers to data use, such as teachers’
perceptions of the value of data, mistrust of data use, teacher efficacy, and
perceived loss of power.
3. An examination of the external barriers to data use, with a specific focus on
structural supports and school culture.
Before discussing these topics, I will present background information on the broader
historical educational reform context in which data-driven decision making is embedded,
highlighting the impact of resistance amongst teachers on these efforts.
Educational Reform Context
…frustrated leaders and policy makers…noticed that, after rendering a
decision about something that seemed momentous, absolutely nothing
happened in the classroom. The board adopted academic standards and
solemnly vowed that all children would meet them. Nothing happened in
the classroom. The superintendent announced a new vision statement,
along with core values and an organizational mission that the entire staff
would enthusiastically chant. Nothing happened in the classroom.
Millions were spent on new technology. Nothing happened in the
classroom. Staff development programs were adopted…Although seats
were dutifully warmed during countless trainings, nothing happened in the
classroom. (Reeves, 2004, p. 2)
While the current movement toward accountability and the focus on student
outcomes and achievement data might be considered new evolutions in the effort to
improve schooling, education has been the focus of various reform movements for
decades (O’Day, 2002). Since the start of the 20th century, criticisms of public education
led to massive efforts to bring about change (Marzano, 2003). Elmore (1996) refers to
the period leading up to the 1950s as the “progressive period,” during which reformers
focused on changing pedagogy, in hopes that exemplary practices would spread
throughout schools. The late 1950s and 1960s have been dubbed the “adoption era” by
Fullan (2000), due to the push by reformers to adopt numerous innovations to improve
public education. Despite these efforts to bring about change within schools, however,
researchers who examined the overall results of these reform efforts in the 1970s
concluded that there had been an “absence of change at the classroom level” (Fullan,
2000, p.6).
The publication of A Nation at Risk (National Commission on Excellence in
Education, 1983) in the early 1980s served to convince the public that, despite numerous
reform efforts, schools were still failing to properly educate students (Marzano, 2003). In
revisiting this document in 1998, influential leaders in politics, education, and business
decried the "inadequate education" being offered to students in the United States and
presented numerous recommendations to improve the current state of public schools. In
A Nation Still at Risk (1998), they called for the adoption of high standards for all
students, alternative means for the delivery of education, and increased efforts to inform
parents of the performance of their child and their child’s school. Many different reform
efforts were initiated following A Nation at Risk, including the implementation of
site-based management, state curriculum standards, and whole school restructuring. For
example, the early 1990s marked an upsurge in comprehensive schoolwide reform (CSR)
models, most of which were designed to change teaching practices, “the one area of
schooling that has proved the most resistant to change” (Desimone, 2002, p. 442).
What the reform efforts of the 1980s and 1990s lacked, however, was a strong
accountability component. Yet, with a majority of the public believing that students in the
United States were not receiving an adequate education and that they were falling behind
those in other countries throughout the world, the stage was set for the introduction of
mandates for increased accountability in education (Stecher, Hamilton, & Gonzalez,
2003). Most states established accountability systems in the 1990s to evaluate student
progress towards state standards. Most significantly, the signing of NCLB by President
Bush in January 2002 ushered in an unprecedented movement toward testing and accountability at
the federal level (Linn, Baker, & Betebenner, 2002; Mandinach, Honey, & Light, 2006).
Marzano (2003) depicts this “new era of school reform” as one that: 1) recognizes the
highly contextualized nature of reform, 2) includes a heavy emphasis on data, and 3)
approaches change incrementally (p.158).
With its requirement that states administer exams to students in Grades 3 through
8 and its inclusion of sanctions for schools and districts based upon the results of these
exams, NCLB has caused many states, districts, and schools to change their practices
(Linn et al., 2002). In order for these mandates to have the intended effect of increasing
student achievement, these changes will have to move beyond the district and school
level and impact decisions made by teachers in the classroom. In short, teachers will
have to be willing to use student data to make appropriate instructional decisions
regarding their students.
Defining Data-Driven Decision Making
In order to assess the state of data use by teachers, it is necessary to provide a
definition of data-driven decision making. The United States Department of Education
(2007) defines data-driven decision making in the educational context as “the analysis
and use of student data and information concerning educational resources and processes
to inform planning, resource allocation, student placement, and curriculum and
instruction” (p.1). For the purposes of this study, student data will refer to data gathered
from formal assessments, including state tests, district benchmarks, and unit or chapter
tests. Emphasis will be placed on teachers’ use of such data to make instructional
decisions for students. This expectation of regular data use by teachers is a relatively
new concept, as the pressure of accountability has yet to have much of an impact on their
daily work (Abelmann, Elmore, Even, Kenyon, & Marshall, 1999; Wayman &
Stringfield, 2003; Young, 2006). This is common to most reform efforts, as teachers
have been buffered from educational changes throughout the years (O’Day, 2002).
Data-driven decision making is a promising educational reform strategy (Datnow
et al., 2007; Earl & Katz, 2002). Researchers, such as Armstrong and Anthes (2001),
have credited regular and extensive data use with dramatic increases in student
achievement. The regular use of data has also been identified as a common element in
studies of “effective” schools (Wayman, 2005). Data-driven decision making in such
schools goes beyond the mere collection of data, however, and requires that data are
consistently and thoroughly examined for the purpose of refining practice (Means,
Gallagher, & Padilla, 2007). For this process to take hold in classrooms across the
country, many teachers will have to change their current practices and reexamine their
rationale for making instructional decisions. As accountability policies necessitate such a
change, educational leaders must be prepared to deal with resistance amongst teachers
who are opposed to this movement (Supovitz & Klein, 2003).
Research on Resistance to Change
As shown in the preceding section, public education has been bombarded with
calls for change throughout the last century. Many of the reforms that were intended to
bring about changes in schools, however, failed to do so. While it is highly likely that
some of these reforms were poorly designed or that structural and capacity issues may
have contributed to their demise, the failure of some school reform efforts can
undoubtedly be attributed to a resistance to change amongst school staff (Feldman &
Tung, 2001). While the topic under examination is teacher resistance to data use, it must
be acknowledged that some individuals may be resistant to change in any form (Guskin,
1996). For this reason, it is worthwhile to examine resistance to change in a broader
context, as well as in its relation to teacher resistance to data use.
Resistance to changes in conditions or demands in the workplace is a topic that
has been studied by researchers for decades. One of the earliest researchers to study this
issue was Kurt Lewin (1947), who focused not only on individual resistance to change,
but also the role of group dynamics in the acceptance of or resistance to such changes.
He highlighted the importance of group decision-making and concluded that one is more
likely to bring about changes in individual behavior after a group standard has been
changed. Lewin introduced a three-step model to implementing effective and lasting
change. This model consisted of an “unfreezing” of the current standard, a movement
toward a new standard, and, finally, a “freezing” of this new level or standard.
Coch and French (1948) examined responses to change amongst workers at a
manufacturing plant and documented the effects of group standards on individual
performance. They concluded that in order to decrease resistance to change in the
workplace, management must conduct group meetings during which they communicate
the need for the change and also solicit participation from workers in the change process.
While these studies laid the foundation for the examination of resistance to
change and the role of group dynamics within organizations, more recent studies have
also established a link between the characteristics of the individual who is subjected to
the change and the level of resistance that he or she exhibits. Van den Berg and Ros
(1999), for example, cited the meaning that teachers attach to a situation as being the
determining factor regarding their reaction to it. This meaning can be affected by such
personal factors as previous experience, preferences and styles, home life, and
self-confidence. These factors can result in individuals responding to changes or innovations
in ways that may be viewed as irrational.
Duffy and Roehler (1986) reached a similar conclusion when they stated that,
“Teacher educators and researchers often assume that teachers operate from a rational
model. We tell teachers about a sensible innovation and expect them to apply the
innovation directly to practice. However,…this is not the case” (p.57). They describe the
process by which teachers receive information, restructure it, and tailor it to fit their
perception of reality. Each individual teacher, therefore, filters information about a
change or innovation through his or her own reality. Regardless of the nature of the
change being made, individual teachers may exhibit resistance based upon their own
perceptions and circumstances.
In addition to their individual perceptions of events, personal characteristics of
teachers may also affect their level of resistance to change. Numerous studies have
established a connection between a teacher’s years of service and his or her resistance to
reform efforts. In his study of efforts to abolish tracking in one particular school district,
Welner (1999) noted that the veteran teachers were not as receptive to the change, and
that many of them were just waiting for it to go away, rather than making an effort to
implement the reform (p.206). He also identified teacher pride and self-image as factors
affecting individual levels of resistance. In his examination of teacher responses to the
implementation of state testing in Virginia, Winkler (2002) also noted differences in
teachers’ responses to change based upon years of service. He found that veteran
teachers associated the testing with a loss of power and professionalism, while newer
teachers associated it with gains in collaboration and pedagogical freedom. Accordingly,
those teachers with a greater number of years of experience were the most resistant to
implementing the state testing program. Hargreaves (1996) noted increased cynicism and
skepticism amongst teachers in mid-to-late career phases that affected their ability or
willingness to make the many changes required by school reform initiatives.
Other researchers have found that gender can play a role in resistance to program
implementation as well. In a study of teacher responses to attempts to implement
antisexist initiatives in British schools, Acker (1988) determined that characteristics such
as gender, age, and social class may affect a teacher’s level of resistance to reform.
Datnow (1997) also argued that different identifications, including gender, race, and
department affiliation, can influence a teacher’s acceptance of or resistance to school
reform efforts when that identification results in a dominance of the school culture.
While the above examples have highlighted research that depicts teacher
resistance as a negative force, it should be noted that some researchers have argued that
resistance amongst teachers toward reform is a practical response to some calls for
change. Others have even put forth the idea that resistance to change can yield positive
results.
In her review of literature on change in teacher practice, Richardson (1990)
concludes that “teachers change all of the time” (p.16). She decries the fact that too often
resistant teachers are portrayed as being recalcitrant, when, in fact, the changes they are
asked to make are not significant or worthwhile. She states that teachers must be allowed
to exercise more control over the changes being imposed on them and that flexibility for
teachers to develop alternative conceptions of practices that fit their own views is a key
ingredient to any successful attempt to bring about change.
Gitlin and Margonis (1995) point out that school change researchers often
“overlook the good sense embedded in teachers’ resistant acts” (p.379). They warn that
the tendency to view any form of teacher resistance to change as negative and
obstructionist contributes to the power struggle that has derailed many school reform
efforts.
In their examination of resistance to change, Waddell and Sohal (1998) describe
resistance as being the result of a combination of factors, including rational, non-rational,
political, and management factors. They indicate that, despite its negative connotation,
resistance to change can be a positive force in that it:
• Reminds those involved that not all change is inherently good;
• Brings greater stability to an organization;
• Draws attention to inappropriate aspects of a proposed change;
• Creates an influx of energy;
• Encourages the search for alternative methods that synthesize the conflicting
opinions.
As has been discussed in the prior section, resistance to change can result from a variety
of factors, some related to group dynamics and others to the characteristics of individuals.
Addressing this resistance may be a critical step in implementing the process of
data-driven decision making at a school site. In discussing this issue of bringing about change
within an organization, Bolman and Deal (2003) refer to Kotter’s (2002) eight steps to
achieving successful change initiatives. These steps include:
• Creating a sense of urgency;
• Pulling together an appropriate team;
• Creating an uplifting vision and strategy;
• Communicating this vision through words, deeds, and symbols;
• Removing obstacles and empowering people;
• Producing visible signs of progress;
• Sticking with the process and refusing to quit;
• Nurturing and shaping a new culture to support the change.
In Learning by Doing (2006), a handbook for establishing professional learning
communities, DuFour, DuFour, Eaker, and Many present Gardner’s (2004) steps for
leaders who are promoting a change. These steps include:
• Presenting reasons for the change;
• Providing appropriate research;
• Connecting the change to people’s feelings;
• Presenting information in different formats;
• Providing incentives to change;
• Presenting real-world examples of successes involving the change.
Gardner (2004) reminds leaders that even when these steps are followed, some resistance
will still linger. He stresses the importance of confronting this resistance, rather than
ignoring it. He considers this confrontation of resistance to be an important seventh step.
In the following two sections, various factors that might contribute to teacher
resistance to data-driven decision making are discussed. These barriers to data use will
be classified as being either internal in nature, a result of teachers’ perceptions or
feelings, or external in nature, a result of structural or logistical issues.
Internal Barriers to Data Use
When examining teacher resistance to data use, it is important to differentiate
between the various factors that might be contributing to such resistance. While some
factors may be the result of school structures or cultures, others are more internal, having
to do with the individual teacher’s feelings or emotions. Examples of such internal
barriers are a lack of value assigned to test data, a mistrust of data use, and teacher
efficacy. Each of these internal barriers will be addressed in the following section.
Assignment of Value/Questions of Validity
Value can be defined as a belief that something has benefits, either direct or
indirect, for an individual (Ormrod, 2006). When something is viewed as having little or
no value, individuals are less apt to be motivated to achieve it. In the context of data use,
if educators do not see any value in the data with which they are presented, it is unlikely
that they will use these data to make instructional decisions. In examining the perceived
utility of assessment data within schools, Supovitz and Klein (2003) found that the
teachers and administrators participating in their study assigned the least amount of value
to state and district assessments, and the greatest amount of value to teacher records and
student portfolios. This finding would indicate that these individuals would be less likely
to use data from state and district assessments when making decisions.
Firestone, Mayrowetz, and Fairman (1998) found similar results when they
examined teachers’ responses to data from state tests in Maryland and in Maine. In
interviewing teachers to determine how they used these results to make decisions, they
concluded that the actual effect of the implementation of state testing on teaching was not
significant. This was partly attributed to the fact that teachers did not place as much
value on the content covered on the state tests as they did on the curriculum that they
were teaching in the classroom.
In their study of teacher perceptions of the role of data in improving instructional
practices, Ingram et al. (2004) found that teachers placed much more value on
nonachievement indicators than on assessment data when measuring teacher effectiveness.
In gauging the effectiveness of a teacher, they were more likely to use indicators such as
“how they treat students” or whether or not the students have learned about “other things
in life” as a result of being in the class than they were to cite achievement data (p. 1269).
When asked about indicators of teacher effectiveness, one teacher stated that you can just
“see it on the face of their kids” (p. 1269). It is clear from these responses that these
teachers valued anecdotal evidence more so than assessment data in determining the
effectiveness of their colleagues. This preference for anecdotal indicators poses a
significant problem since the data valued by teachers “does not match the type of data
mandated in external accountability policies” (p. 1270).
Some researchers have attributed this lack of value for assessment data, in part, to
issues of perceived validity (Kerr et al., 2006; Ikemoto & Marsh, 2007). If teachers
believe that the items on a test are not valid or that the test is of low quality, they will be
less likely to value the data from this assessment. This was evidenced by the fact that
many teachers expressed "a preference for classroom assessments and reviews of student
work, which were seen as more meaningful and valid” (Kerr et al., 2006, p. 512). In his
study of data use in six Milwaukee public schools, Choppin (2002) noted that teams at
each school expressed concerns about the accuracy and validity of the data that they had
access to. Lachat and Smith (2005) reported similar concerns amongst high school
teachers they interviewed regarding issues of data quality. Doubts about the accuracy
and validity of data were also identified by Ikemoto and Marsh (2007) as factors affecting
teachers’ data use.
Mistrust of Data/Political Use
Mistrust of data and concerns about the ways in which data are used are additional
factors that have been identified by researchers as impediments to data use amongst
teachers (Mandinach et al., 2006). Ingram et al. (2004) provide numerous accounts of
teachers who experienced data being used politically by administrators, especially in the
context of making decisions. Coburn, Honig, and Stein (2005) describe how the political
landscape can affect the processes of decision-making and evaluation within a district.
Such misuses of data can lead to mistrust on behalf of teachers and thwart efforts to
implement data-driven decision making (Ingram et al., 2004). If this process is to be
successful, it is vital that data be used in a safe and non-threatening way (Togneri &
Anderson, 2003; Ingram et al., 2004; Ikemoto & Marsh, 2007; Mason, 2002).
With education continuously serving as a hot topic in political debates, politicians
at the local, state, and federal levels may, from time to time, make uninformed statements
about and draw inaccurate conclusions from assessment data (Heubert & Hauser, 1998).
Educators are constantly required to provide data to external bodies at all levels, but they
are often unsure of how these data will be used (Johnson, 2002). The politicization of
assessment data, with an emphasis on short-term growth, and the presence of the “long
arm of the government penetrating into schools" can result in resentment toward politically
sponsored reforms on the part of teachers and administrators alike (Earl & Katz, 2002, p.
8). Such resistance has been documented in reaction to NCLB and its mandate for state
testing. Teachers have organized rallies throughout California and in Nevada to protest
the provisions of this federal act (Melendez, 2007; Chanin, 2007). In Wisconsin, a
middle school teacher refused to administer the required state tests to his students citing
“moral objections” to NCLB (Foley, 2007). Since much of the data analysis being
performed in schools involves the examination of results from state testing or data from
benchmarks that some view as preparing students for state testing, it is plausible that
resistance to data-driven decision making by some teachers may be rooted in their
objections to NCLB and its testing mandates.
Teacher Efficacy
Ormrod (2006) defines efficacy as “the belief that one is capable of executing
certain behaviors or reaching certain goals” (p. 341). With regard to teachers, efficacy
can be viewed as a “belief in their ability to have a positive effect on student learning”
(Winkler, 2002, p. 223). Efficacy can influence a teacher’s willingness to embrace new
innovations or reforms. In his study of teachers’ reactions to the implementation of state
testing in Virginia, Winkler (2002) found that teachers who were receptive to the new
testing reported enhanced levels of efficacy. Conversely, teachers who exhibited
resistance reported diminished levels of efficacy. With regard to the adoption of new
innovations, teachers will typically only welcome such changes if they are confident that
they can make them work in the classroom and that student learning will not be
jeopardized as a result of the reform (Guskey, 1986).
For some teachers, the introduction of data-driven decision making challenges
their beliefs about the relationship between their instruction and the performance of their
students. Whereas these teachers could previously ignore outcome data, engaging in
data-driven decision making requires that they examine the connection between their
teaching and student learning (Ingram et al, 2004). This process can impact their feelings
of efficacy as it makes them reflect upon their ability to raise student achievement.
Perceived Loss of Power
Teachers are used to autonomy and the freedom to make their own decisions
(Wayman & Stringfield, 2005). For many teachers, the implementation of data-driven
decision making challenges this notion on two levels. The insistence that teachers refer
to data when making decisions regarding their students can be construed by some
teachers as ignoring their professional judgment in these matters (Wayman, 2005).
Where teachers once held the power to make all decisions based upon experience,
preference, and instinct, they would now have to defer to the data when planning
instruction and interventions. As Winkler (2002) found when interviewing teachers who
resisted state testing, many of these teachers equated the advent of the tests with a loss of
power.
On a broader level, resistance to reform may also result from perceptions by
teachers that such innovations are being “imposed” upon them, depriving them of their
ability to choose courses of action that they prefer or feel are more appropriate.
Richardson (1990) states that it is a common thread amongst all reform efforts that
“someone outside of the classroom decides what changes teachers will make” (p. 11).
Whether teachers view reforms as being the result of external pressure or part of their
own desire to improve student achievement can affect their level of acceptance of or
resistance to these changes (Diamond & Cooper, 2007; Elmore, 2002; Earl & Katz,
2002). In their study of resistance to the implementation of the Success for All reading
program, Datnow and Castellano (2000) express the need for teachers to “own the
process of change” (p. 778). Marsh et al. (2006) note that teachers who described
themselves as being data-driven “attributed their use of data to internal motivation to
reflect and improve on their craft,” as opposed to externally imposed accountability
mandates (p. 9).
External Barriers to Data Use
Structural Supports for Data Use
There are a host of structural supports that, when lacking, can inhibit teachers’
interest in using data. These include the provision of adequate time for data collection
and analysis, access to a user-friendly data warehousing system, and the presence of a
consistent and informed professional development program.
Time. Engaging in data-driven decision making requires an extensive commitment
of time (Learning Point Associates, 2004). Even after data have been collected, they
must be analyzed, synthesized, and interpreted (Marsh et al., 2006). Many researchers
have noted that a lack of adequate time has presented a barrier to data use by teachers
(Marsh et al., 2006; Mandinach, 2006; Ingram et al., 2004; Ikemoto & Marsh, 2007;
Feldman & Tung, 2001). While many districts have processes in place to collect and
analyze data, sufficient time must be allotted for teachers if they are expected to use such
data to make instructional decisions (Marsh et al., 2006). Allowing teachers the time to
engage in data-driven decision making has been shown to increase their receptiveness to
the process (Wayman, 2005). The allotment of adequate time has been identified as a
key ingredient to the success of implementing data-driven decision making at a school
site (Armstrong & Anthes, 2001; Datnow et al., 2007; Supovitz & Klein, 2003).
While providing time for teachers to analyze and reflect upon student data is a
vital component of the data-driven decision making process, it is a difficult task to
accomplish. In Marzano’s (2003) assessment of the sufficiency of classroom time in
light of the vast array of standards that must be taught, he concludes that teachers do not
have enough time in the day to adequately teach the required curriculum. This lack of
time is further complicated by a desire amongst many teachers to continue to engage in
activities that may no longer be related to state standards. The elimination of such
activities can be attributed by some teachers to the implementation of the reform, causing
further resentment and resistance (Winkler, 2002). It is not unlikely that even teachers
who are teaching to the standards will perceive that a trade-off exists between the
collection and analysis of data and classroom instruction (Ingram et al., 2004). Schools
that have effectively implemented data-driven decision making have been found to have
used a variety of strategies to provide teachers with adequate time to analyze and reflect
upon student data (Datnow et al., 2007; Wayman & Stringfield, 2005).
Technology/Technical Issues. The capacity to collect, store, and disaggregate data
has been cited as an essential element to effective data use (Lachat & Smith, 2005).
Despite the importance of this feature, a survey conducted by the United States
Department of Education found that less than half of the nation’s teachers had access to
an electronic data system containing student data in the 2004-05 school year (Means et
al., 2007). Access to a data system is especially limited amongst teachers in small school
districts (Marsh et al., 2006).
As more and more districts and schools begin to implement processes that rely
upon regular access to student data, the number and variety of data systems available to
them have increased (Wayman & Stringfield, 2005). Wayman (2005) identifies three
different types of student data systems available to districts:
• Student information systems (SIS) that provide information about daily school
functions, such as attendance and schedules;
• Assessment systems that organize and analyze current data;
• Data warehousing systems that provide access to various types of historical
data.
Today’s newer systems provide a wider range of options for a broader group of users,
regardless of technological proficiency (Wayman, 2005). The mere existence of such
data systems, however, has not eliminated the technological barriers to data use in some
schools and districts.
Numerous studies have cited technological barriers to data use on the part of
teachers (Lachat & Smith, 2005; Marsh et al., 2006; Mandinach, 2006; Kerr et al., 2006).
Even within districts where a data system was in place, the system was often not able to
perform the type of disaggregation or integration processes necessary to make data useful
for teachers (Lachat & Smith, 2005; Datnow et al., 2007). Of the 48 percent of teachers
who reported having access to a student data system in 2004-05, less than half
were able to use those systems to obtain what they considered to be "useful"
data regarding their students (Means et al., 2007). For districts that employed systems that
were capable of performing such tasks, Supovitz and Klein (2003) noted that most of
them were not using these systems to their full capacity. Kerr et al. (2006) identified
differences in technology and access to timely data as being a distinguishing factor
between districts where teachers were effectively using data and those where they were
not. Wayman (2005) assesses the issue of limited accessibility by describing schools as
being “data rich,” but “information poor” (p. 296), while Datnow et al. (2007) warn that
access to too much data can cause confusion amongst teachers as to what data are the
most important. In order to support the demands of a school that is truly data-driven,
Wayman and Stringfield (2003) describe the ideal data system as one that possesses the
following qualities, amongst others:
• Capacity to enable clean data that can be accessed quickly from anywhere;
• Software requires little training;
• Multiple ways to access information and various methods of data representation;
• Comprehensive query tools available to access a wide range of data;
• Longitudinal presentation of data available at every user level.
While the existence of such a system may support the effective implementation of data-
driven decision making, school leaders must remember that no matter how good the data
system is, it still takes a well-trained and committed staff to make the process work
(Mason, 2002).
Professional Development. Like any innovation, the implementation of data-
driven decision making requires professional development. Research on effective
professional development states that it must be engaging, relevant to the work of
teachers, and be consistent and ongoing (Supovitz & Klein, 2003; Bernhardt, 2002;
Elmore, 2002; Guskey, 1986). Marzano (2003) adds that professional development
should also provide opportunities for active learning and be seen as part of an integrated
program. This type of professional development is especially important in the area of
data use, as many researchers have indicated that school staffs lack the necessary skills
and knowledge to implement data-driven decision making effectively (Mandinach, 2006;
Marsh et al., 2006; Kerr et al., 2006; Wayman, 2005; Lachat & Smith, 2005). Wayman
(2005) found the structure of professional development programs around data use to be
ineffective at many schools. These programs were conducted on a large scale, did not include
ongoing support, and lacked an “expectation of comprehensive teacher involvement”
(p.302). In many cases, teachers receive no training in data use at all (Wayman, 2005).
Nationwide, only 63% of the 48% of teachers who reported having access to a student
data system in 2004-05 had received training in data use; that is, roughly 30% of all
teachers had both access and training (Means et al., 2007).
The design and delivery of a coherent and consistent professional development
program has been cited as a key element to the effective implementation of data-driven
decision making (Datnow et al., 2007; Supovitz & Klein, 2003). These programs
included components such as training for new teachers, the development of
knowledgeable support personnel, and continuous refinement of staff development
offerings based upon current data.
In addition to formal professional development programs, data use amongst
teachers may also be supported by the creation of rubrics and data tools to aid in the
collection and analysis of student data. The existence of such tools has been noted in
schools and districts that have implemented data-driven decision making (Datnow et al.,
2007). Bernhardt (2002) provides numerous examples of tools that staff can use to
collect and analyze data. While some districts contract with outside consultants to do
much of this work for their teachers, there may be some benefits associated with having
teachers do this work at their school site, either on their own or in groups. If teachers are
to complete such tasks, however, extensive training and adequate tools must be provided.
Culture Around Data Use
While the presence of structural supports may increase teachers’ interest in using
data, other external factors that may affect data use are more closely associated with
relationships and environments. These factors include the level of support for data use
exhibited by the school leadership, the culture of the school, and the established norms of
collaboration.
Leadership Support for Data Use. According to Marzano (2003), “leadership
could be considered the single most important aspect of school reform” (p. 172). He
stresses the need for principals to be the visible head of any reform effort and for them to
act as cohesive forces during the process of change. Kerr et al. (2006) advocate a strong
initial push by the principal to act as a catalyst for change, followed by a movement
toward distributed leadership with regard to data use. Research has shown that the
absence of strong leadership, whether on the part of the principal, coach, or leadership
team, can prevent data use initiatives from taking hold at a school (Kerr et al., 2006;
Young, 2006). Numerous studies have highlighted the role of effective leaders in the
implementation of data-driven decision making (Datnow et al., 2007; Kerr et al., 2006;
Young, 2006; Lachat & Smith, 2005; Wayman, 2005; Supovitz & Klein, 2003).
Although leadership was not the focus of their study, Supovitz and Klein (2003) noted
that “the fingerprints of strong leadership are all over the data activities” found in the
effective schools that they examined (p. 36).
Young (2006) identified leadership focused on data, especially in the area of
agenda setting, as a factor that contributed to either loose or strong connections between
teacher rhetoric and practice regarding data use. The importance of leaders establishing
norms and conditions for data use was also seen by Wayman (2005) as a key ingredient.
Datnow et al. (2007) state that leaders must make these norms clear and explicit, making
data use a “non-negotiable” practice. In addition to having a clear vision and message
regarding data use, it is also important that leaders develop an orientation toward using
data in their daily practice as well (Earl & Katz, 2002). Leaders who are technologically
savvy can inspire teachers to increase their use of data, while those who are data illiterate
can be an impediment to such a change (Mandinach, 2006).
School leaders play a vital role in affecting the behavior of teachers who are
resistant to any change initiative or innovation. The importance of such leaders
confronting teachers who fail to implement strategies or practices that have been proven
to be effective for students is stressed by Kohm and Nance (2007). They urge leaders,
and specifically school principals, to consider the cost of neglecting to address this
behavior on the part of teachers. Ultimately, they conclude, it is the students who pay the
price for this avoidance. Janas (1998) equated teacher resistance to a “sleeping dragon”
that some school leaders were afraid to awaken and confront. Such fear is not necessary,
she suggests, if leaders are aware of the resistance, can identify the source and type of
resistance, and have developed proactive strategies to manage the resistance. When it
comes to working with teachers who are resistant to innovations, including the
implementation of data-driven decision making, the role of the school leader cannot be
overstated.
School Culture. The examination of school culture is vital to the implementation
of a data use initiative at a school site (Wayman & Stringfield, 2005). Research has
shown that culture can impact the adoption of new practices within schools (Young,
2006). It has also been found that reform efforts can be influenced by the collective
school culture, subcultures, and by the contribution of individual teacher ideologies to the
culture of a school (Datnow & Castellano, 2000). Ikemoto and Marsh (2007) identify
school culture as a major distinguishing factor between schools that exhibited high levels
of data use and those that did not. They found the effect of school culture to be even
more pronounced when looking at the implementation of more complex data use at
school sites. Ingram et al. (2004) refer to school culture as being a “strong determinant of
how teachers use data to judge their effectiveness” and highlight the fact that it
“influences the type of data that teachers think is needed” (p. 1280). They identify four
cultural barriers to data use:
• Differences in personal metrics for judging teacher effectiveness;
• A heavy reliance on experience, intuition, and anecdotal information in
decision making;
• Lack of agreement on the importance of certain student outcomes;
• A disassociation between teacher performance and student learning.
A culture that is supportive of data use is one in which data can be discussed without fear
and where teachers can openly share observed strengths and weaknesses with colleagues
(Ikemoto & Marsh, 2007; Datnow et al., 2007). School culture can either contribute to or
eliminate teachers’ distrust of data initiatives (Wayman & Stringfield, 2005). In turn, an
increased emphasis on student data can result in positive changes in school culture
(Wayman & Stringfield, 2005; Johnson, 2002; DuFour et al., 2006). The effective
implementation of data-driven decision making at a school site may require a change in
school culture and, in particular, a change in staff attitudes. This change can be brought
about by working with staff to see the needs of students and how the use of data can help
to address these needs (Lachat & Smith, 2005; Johnson, 2002). Datnow et al. (2007)
provide numerous examples of ways in which site and district leaders have worked to
establish data-driven cultures within their schools.
Norms of Collaboration. Although autonomy has been the norm for most
educators (O’Day, 2002; Wayman & Stringfield, 2005), the implementation of data-
driven decision making can be supported by a school culture and structure that foster
collaborative teams in which teachers work interdependently toward a
common goal (DuFour et al., 2006; Marsh et al., 2006). A high level of collaboration has
been shown to support data use amongst teachers (Ikemoto & Marsh, 2007; Wayman,
2005; Love, 2004). Increased collaboration amongst teachers may also result in greater
use of technology to analyze student data (Wayman & Stringfield, 2005).
Collaboration and school culture are two areas that are very much intertwined. In
schools where the culture is one that values independence and where a safe environment
has not been established, collaboration will be difficult. It should be noted, however, that
the existence of collaboration in and of itself does not necessarily mean that an effective
environment for data-driven decision making exists. DuFour et al. (2006) stress that
collaboration is a “means to an end, not the end itself” (p. 3). While Young (2006) found
collaboration amongst teachers at some of the schools that she observed, she noted that
this collaboration sometimes consisted more of “sharing war stories” than analyzing data
or discussing student achievement. Datnow et al. (2007) provide useful examples of
norms that can be established to ensure that collaboration stays focused on student data.
While collaboration is not the only ingredient necessary to implement data-driven
decision making, the absence of collaboration has been shown to be an impediment to
this process (Ikemoto & Marsh, 2007).
Summary of the Literature Review
Earl and Katz (2002) describe a time when decisions in schools were based solely
upon the judgments of educators without regard for data. For many schools, this is still
the case. Despite the existence of a plethora of data regarding student achievement and
the effectiveness of instructional practices, many teachers still rely solely upon
experience, instinct, and preference when making decisions. Current accountability
mandates, however, stress student outcomes and include sanctions based upon student
data (O’Day, 2002). With these policies as a backdrop, data must become a key
ingredient in the school improvement process (American Association of School
Administrators, 2002).
As with any innovation, the movement toward data-driven decision making is one
that may be met with resistance by some teachers. This resistance may be attributable to
internal factors, such as a lack of value for data, a mistrust of data use, teacher efficacy,
or a perceived loss of power. Other barriers may be external, such as time, technology,
professional development, leadership, school culture, and collaboration. Identifying
specific causes of teacher resistance and analyzing these factors using gap analysis may
result in the development of strategies to reduce or eliminate them. This study will add to
the literature about the obstacles that may be encountered when schools seek to
implement the process of data-driven decision making. As leaders work to initiate this
change, they will inevitably encounter some degree of resistance. While undergoing the
difficult task of getting these teachers to embrace the process of data-driven decision
making, they may find solace in the words of Alan Guskin (1996, p. 34): “It must be
remembered that people join a change effort in different stages; few remain resisters to
the end.”
CHAPTER THREE
Methodology
Introduction
This chapter describes the design, sample, instrumentation, data collection, and
data analysis of the study. As discussed previously, the purpose of this study is
to identify barriers to the use of data-driven decision making amongst teachers. The
study also examines characteristics of teachers who have exhibited resistance to data-
driven decision making, with the purpose of looking for possible commonalities. This
study considers both internal and external barriers to data use. Two elementary schools
in southern California were examined in order to gain insights into the established
research question and sub-questions:
Why do some teachers resist using data-driven decision making?
o How do various external factors, such as teacher support, technology,
and site leadership, affect teachers’ acceptance of or resistance to data-
driven decision making?
o How do internal or motivational factors, such as the perceived value of
data and teacher efficacy, affect teachers’ acceptance of or resistance to
data-driven decision making?
o What role, if any, do the background characteristics of teachers play in
their willingness to engage in data-driven decision making?
The study employed qualitative research methods, in hopes of gathering in-depth
information regarding barriers to data use. Patton (2002) states that qualitative methods
“facilitate study of issues in depth and detail” (p. 14). The primary data collection
instrument was interviews with classroom teachers. Merriam (1998) refers to
interviewing as “the most common form of data collection in qualitative studies in
education” (p. 70). Patton (2002) finds interviews to be of great value as they “allow us
to enter into the other person’s perspective” (p. 341). By conducting semi-structured
interviews with teachers and administrators, I aimed to gain a better understanding of the
challenges, both real and perceived, that may be encountered when implementing data-
driven decision making.
Sample and Population
This study focused on teachers in two public elementary schools in the Southern
California area. The schools were purposefully chosen according to a number of criteria.
First, the districts in which these schools are located have made it part of their mission to
promote and support the use of data by teachers. Second, the two schools selected for
this study were chosen on the basis that the administration at both sites had claimed to
emphasize the use of assessment data for decision making. Finally, conversations with the
principals at both sites confirmed that the schools had on their respective staffs a mix of
teachers who engaged in data use regularly and teachers who were struggling with or who
had failed to embrace data-driven decision making. The selection of the districts and
schools was accomplished through referrals from colleagues.
Overview of Districts and Schools
Both of the schools selected for the study are located in Southern California.
Mesa Elementary School is part of the Mystic Valley Unified School District.¹ The
Mystic Valley Unified School District was established in 1965 and its 70 public schools
now serve over 48,000 students in kindergarten through twelfth grade. Mesa Elementary
School serves approximately 450 students in grades K-6. The tables below contain
information about Mesa’s students, staff, and past performance.
Table 1: Teacher Distribution – Mesa Elementary School
Kindergarten 2
1st Grade 3
1st/2nd Grade Combination 1
2nd Grade 3
3rd Grade 3
4th Grade 2
5th Grade 2
6th Grade 2
Source: 2007-2008 School Accountability Report Card – Mesa Elementary School
Table 2: Student Demographics – Mesa Elementary School
Total Enrollment 456
Hispanic 80.8%
Asian 9.0%
White 5.0%
Filipino 2.6%
African American 0.9%
Other 1.7%
English Learners 62.4%
Free/Reduced Lunch 79.3%
Source: California Department of Education – http://www.cde.ca.gov
¹ Pseudonyms are used for the purposes of confidentiality.
Table 3: Accountability Profile – Mesa Elementary School
2005-2006 2006-2007 2007-2008
Academic Performance Index (API)² 710 731 726
Statewide API Rank 3 4 n/a
Similar Schools API Rank 8 8 n/a
Made Adequate Yearly Progress (AYP) Yes Yes No
Program Improvement School No No No
Source: 2007-2008 School Accountability Report Card – Mesa Elementary School /
California Department of Education – http://www.cde.ca.gov
² Academic Performance Index (API) is a ranking system used by the state of California
to gauge academic performance levels and gains in student achievement.
Orchard Elementary is part of the Smithfield Unified School District. The
Smithfield Unified School District serves over 28,000 students in kindergarten through
twelfth grade. Orchard has an enrollment of approximately 800 students in grades K-5
and is one of 40 public schools within Smithfield Unified. Information regarding the
staff, students, and past performance of Orchard Elementary School is contained in the
tables below.
Table 4: Teacher Distribution – Orchard Elementary School
Kindergarten 5
1st Grade 6
1st/2nd Grade Combination 1
2nd Grade 7
2nd/3rd Grade Combination 1
3rd Grade 6
4th Grade 4
5th Grade 4
Source: California Department of Education – http://www.cde.ca.gov
Table 5: Student Demographics – Orchard Elementary School
Total Enrollment 809
Hispanic 89.2%
African American 10.3%
Other 0.5%
English Learners 76.5%
Free/Reduced Lunch 83.9%
Source: California Department of Education – http://www.cde.ca.gov
Table 6: Accountability Profile – Orchard Elementary School
2005-2006 2006-2007 2007-2008
Academic Performance Index (API) 683 720 747
Statewide API Rank 2 3 n/a
Similar Schools API Rank 8 9 n/a
Made Adequate Yearly Progress (AYP) Yes Yes No
Program Improvement School No No No
Source: 2007-2008 School Accountability Report Card – Orchard Elementary School /
California Department of Education – http://www.cde.ca.gov
As these tables reveal, the schools serve similar populations (majority Hispanic) and have
generally similar performance levels as evidenced by their API scores. However, Orchard
is almost double the size of Mesa.
Data Collection Procedures
Data collection took place over a period of several months, from October to
December. Eight teachers were interviewed at each of the schools selected for this study,
resulting in 16 total interviews with teachers. These teachers represented a mix of grade
levels from kindergarten through sixth grade. At both sites, the teachers were recruited
for the study by the principals, ensuring that a mix of those who regularly engage in data-
driven decision making and those who have been more reluctant to do so were included.
I specifically asked for a range of teachers, and my sense is that the principals complied
with my request. While the principals were aware of the range in teachers’ engagement
in data-driven decision making prior to the interviews, this information was not shared
with me until after all of the interviews were conducted, helping to ensure that the
interviews were unbiased.
The interview protocol focused on the topics addressed in the research question
and sub-questions, in hopes of gathering information on real or perceived barriers to data
use. Each interview lasted for approximately 35 minutes and was conducted in either a
conference room or an empty classroom. All interviews were digitally recorded and notes
were taken, as necessary. Each interview was then transcribed verbatim. The interviews
were conducted using what Patton (2002) refers to as “the general interview guide
approach” (p. 342). This approach consists of outlining the issues prior to the interview
to ensure that all relevant topics are addressed. Patton (2002) identifies this approach as
being very useful when the time in which to conduct interviews is limited. The protocol
for these interviews is included in Appendix A.
In addition to interviewing classroom teachers, I also conducted interviews with
the principals at the two schools to determine the supports that are in place to assist
teachers with using data in their classrooms. The interview with the principal at Mesa
was conducted in person, while the interview with the principal from Orchard took place
via the telephone. These interviews were also semi-structured and lasted for
approximately 30 minutes. Both interviews were digitally recorded and notes were taken.
These interviews were transcribed verbatim. The protocol for these interviews is
included in Appendix B.
Data Analysis Procedures
Data analysis is the process of taking what can be complex data in its raw form
and simplifying or making sense out of it (Patton, 2002). The transcribed interviews with
teachers and principals were analyzed thoroughly in hopes of identifying patterns or
themes. In total, the 16 interviews yielded over 190 pages of transcribed data. The
process of analyzing the data began during the data collection period. Merriam (1998)
stresses that data collection and data analysis are processes that should occur
simultaneously. Analyzing the data in the field during the initial interviews allowed for
more informed data collection in subsequent interviews. According to Patton (2002), the
first step of data analysis is to develop a “manageable classification or coding scheme”
(p. 463). With this in mind, I began by developing codes with which to search for themes
in the data. I coded the data according to the research question and the sub-questions.
These codes consisted of the various internal and external factors discussed previously.
A list of all codes used can be found in Appendix C. The structure of the interview
protocol facilitated the “chunking” of the data into the various codes. To assist me with
more thorough coding of the data, I used HyperRESEARCH® coding software. After
coding and sorting the data, I then identified themes within each code. In addition to
coding the interview data to identify themes related to the internal and external factors
that can affect responses to data-driven decision making, the characteristics of individual
teachers were also recorded on a chart and analyzed to identify common themes based
upon gender, years of experience, training, grade level, etc.
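Although the coding itself was carried out in HyperRESEARCH®, the tallying logic behind this step can be illustrated with a minimal sketch. The short Python fragment below is purely illustrative and is not the software or workflow used in the study; the code names and the sample excerpt data are hypothetical. It shows how coded interview excerpts, once reduced to (school, code) pairs, might be counted by code and by school so that frequently occurring codes can be flagged as candidate themes.

from collections import Counter, defaultdict

# Hypothetical coded excerpts, one (school, code) pair per coded chunk
# of transcript, as they might be exported from coding software.
coded_excerpts = [
    ("Mesa", "lack_of_time"),
    ("Mesa", "technology_issues"),
    ("Orchard", "lack_of_time"),
    ("Orchard", "validity_issues"),
    ("Orchard", "lack_of_time"),
]

# Tally code frequencies overall and per school; codes that recur
# across both schools point to candidate cross-site themes.
overall = Counter(code for _school, code in coded_excerpts)
by_school = defaultdict(Counter)
for school, code in coded_excerpts:
    by_school[school][code] += 1

for code, n in overall.most_common():
    print(f"{code}: {n} excerpts in total")
for school, counts in sorted(by_school.items()):
    print(f"{school}: {dict(counts)}")

Run as written, this sketch would report lack_of_time as the most frequent code and show its distribution across the two sites, mirroring on a toy scale the kind of frequency-and-pattern inspection described above.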
Ethical Considerations
Permission was obtained from each of the principals at both schools and from the
appropriate district personnel prior to the commencement of the data collection process.
Institutional Review Board (IRB) approval was granted through the University of
Southern California.
Ethical considerations included such issues as informed consent, confidentiality,
and protection of participants’ anonymity. Prior to the interview, each participant was
given an informational sheet describing the nature of the study and their right to withdraw
from the study at any time.
Because the study was conducted in just two schools, there was a risk that
anonymity might be compromised. I minimized this risk by guarding the names of all
participants from other participants. Data gathered during research, such as taped
interviews, interview transcripts, field notes, and other documents, was kept confidential.
Information included in my final dissertation is presented in ways that mask the
individuals’ identities.
At all times during the data collection, analysis, and reporting process, I followed
the University of Southern California’s procedures, as well as any procedures maintained
by the districts and schools that were studied. In particular, I adhered to their guidelines
for ethical conduct in research.
Limitations of the Study
The study was conducted at two selected schools in two selected districts. Factors
in the district and/or at the school may have affected the applicability and transferability
of this study to other sites. The current study gathered qualitative data from teachers and
administrators in two Southern California schools that have been identified as supporting
teachers in employing data-driven decision making in their classrooms. The study was
limited to 16 teachers and two administrators due to limited resources. In some cases, the
interviews had to be limited to a shorter period of time due to the late arrival of the
participant or difficulty in arranging for classroom coverage. The various internal and
external criteria used to select the two schools may affect the applicability of the study to
schools that do not exhibit these characteristics. While the sample may not allow for
generalization to other dissimilar schools and districts, the selection was purposefully
made with the hope that the data gathered would be relevant.
Researcher’s Subjectivity
While I was open to considering the views of participants in my study with regard
to barriers to data use, my own subjectivity may have come into play. As the principal of
an elementary school in the early stages of implementing data-driven decision
making, I am familiar with many of the factors cited by teachers as obstacles to this
process. I have also worked with many teachers who have been resistant to using data
regularly, and I may therefore be biased in my views about the importance of data use. During
the study, I tried to be cognizant of any potential bias that might have stemmed from my
own background. Since my intent in choosing this topic was to gain a broader
understanding of barriers to the use of data at school sites, I remained open to the views
expressed, in hopes of using this information to further data use at my own school.
Summary
In this chapter, I have described the data collection and data analysis procedures
that I used in conducting my case study. In the chapters that will follow, I will present
my research findings and an accompanying analysis.
CHAPTER FOUR
Data Analysis and Interpretation of Findings
Introduction
An increase in the availability of student assessment data and the prominence of
this data in legislative initiatives, such as No Child Left Behind (NCLB), have prompted
many districts and schools to emphasize the process of data-driven decision making
amongst their teaching staff. In an attempt to improve instructional practices and to
gather data prior to the administration of the state assessments, many districts have
implemented common chapter tests, benchmark assessments, or both. Although the
results of state and district assessments may hold consequences for schools and
administrators, this same level of accountability has not yet reached the individual
classroom teacher. While many districts and schools have encouraged teachers to use
student assessment data to guide instruction, these efforts have been met with mixed
responses. While some teachers have embraced the move toward data-driven decision
making, others have been reluctant to do so. This apparent lack of support may be due to
any number of factors. These factors may be related to external issues, such as time,
training, or technology, or internal issues, such as values, trust, or self-efficacy.
Identifying the impact of such factors on teachers’ receptiveness to the implementation of
data-driven decision making is crucial to the successful adoption of this practice as a
large-scale reform.
This chapter contains an analysis and interpretation of the data collected in this
qualitative study on the factors affecting teachers’ levels of engagement with the process
of data-driven decision making. The purpose of this study is to gain a better understanding
of the nature of some teachers’ resistance to using assessment data regularly to guide
instruction.
Research Questions and Thematic Underpinnings
The organization of this chapter is designed to facilitate the linking of the data
gathered through the interview process to the corresponding research questions. After
conducting research, coding the data, and performing thorough analysis, the following
major themes surfaced. This chapter will present an analysis of the sub questions below
and the major themes connected to each.
Research Question:
Why do some teachers resist using data-driven decision making?
Sub-Questions and Themes:
a) How do various external factors, such as teacher support, technology, and site
leadership, affect teachers’ acceptance of or resistance to data-driven decision
making?
1. Lack of time
2. Use of collaboration time
3. Technology issues
4. Disparities in training
5. Lack of focus/support from site leadership
b) How do internal or motivational factors, such as the perceived value of data
and teacher efficacy, affect teachers’ acceptance of or resistance to data-driven
decision making?
1. Resentment of emphasis on student assessment data
2. Insecurity with sharing of data
3. Preference for informal data use
4. Validity issues
c) What role, if any, do the background characteristics of teachers play in their
willingness to engage in data-driven decision making?
1. Grade level
2. Gender
3. Age
4. Years of service
5. Role in adoption process
School Settings
Mesa Elementary and Orchard Elementary are both schools that are making efforts
to become more “data-driven”. In their quest to adopt this process at their schools, the
principals at these two sites have run into some common problems and have had similar
experiences. Due to a variety of factors related to their district or school site, each
principal has also had to address challenges specific to her school.
Mystic Valley is a district that has a demonstrated record of supporting data-
driven decision making amongst its teachers for a number of years. This
commitment to data use has been documented by researchers and educational
organizations (e.g., Datnow et al., 2007; National Center for Educational Accountability,
2006). With support from the district office, administrators and teachers at Mesa have
been encouraged to use data regularly for many years. Like all schools in Mystic Valley,
Mesa uses the web-based program Data Director to store and organize student data.
Conversations with the principal, Brenda Davis, confirmed that the teaching staff consists
of both teachers who have embraced data use and those who have struggled with or have
been resistant to it.
Serving in her second year as principal at Mesa, Ms. Davis has communicated to
her teachers that the movement toward data-driven decision making is a priority for her.
Ms. Davis expressed her feeling that this message was not well-received by everyone.
She attributes this, in part, to a lack of emphasis on data use by the principal who
preceded her. As Ms. Davis stated, “I don’t think that the previous principal really
focused on data a whole lot.” She sees herself as taking the process much further than
her predecessor and acknowledged that not every teacher at her site is supportive of these
efforts. While noting the impact of factors such as time, funding, and training, Ms. Davis
also discussed the role of belief systems or more internal factors. She captured this
sentiment when she stated, “If teachers really, truly believe that looking at this data is
really going to change instruction, and it’s really going to make a difference with kids,
then I think the reluctance would go away.”
Over the years, the Smithfield Unified School District has instituted a number of
district assessments and benchmarks and has encouraged all administrators to engage
their teachers in discussions about the use of student data. Like Ms. Davis, Ms. Donna
Johnson, the principal of Orchard, has stressed the importance of data-driven decision
making with her staff. Similar to Mesa, the staff at Orchard consists of teachers who
regularly use student data and those who have been reluctant to do so. While Ms.
Johnson attributes some of this reluctance to the fact that many teachers do not truly
value assessment data, she feels strongly that the most significant impediment to moving
the process of data-driven decision making forward at her site is the district
office. Having been the principal at Orchard for six years, she has witnessed the
evolution of the push for data-driven decision making within her district and she does not
feel that all of the decisions made at the district level have been positive ones. She
conveyed her frustration over what she views as an “unmanageable number of
assessments” mandated by the district office and an “unrealistic expectation that
administrators and teachers can stay on top of so much data and still use it in a
meaningful way.”
While both principals have encountered similar obstacles while working to
implement the process of data-driven decision making at their sites, each has also had to
deal with some factors unique to her school. In the following sections, the impact of
multiple external and internal factors will be discussed in depth. Throughout these
sections, commonalities between these factors as they relate to both schools will be
highlighted, while any findings unique to just one school will also be examined.
Issues Related to External Factors
In this section, the five themes related to external factors that most commonly
emerged from the data will be discussed. These themes represent either real or perceived
barriers that the teachers who were interviewed feel make it difficult to effectively
engage in the process of data-driven decision making. These themes include a lack of
time, inconsistent norms of collaboration, technology issues, disparities in training, and a
lack of focus or support from site leadership. An analysis of the interview data revealed
that the existence, or perceived existence, of these factors may contribute to teachers’
limited engagement with or resistance to the regular use of assessment data. These
themes will be addressed individually in the sections that follow.
Lack of Time
With the many demands that teachers face, finding the time to implement any new
strategy or practice might be difficult. The emphasis on state assessments, particularly in
English-language arts and mathematics, have caused many schools to begin using
benchmark assessments and common chapter or unit tests throughout the year to help
teachers to gauge student mastery of content standards prior to the state exams. While
this practice has been implemented in both of the districts that were studied, many
teachers reported viewing the administration of these assessments and the analysis of the
resulting data as being too time consuming. As one teacher from Mesa stated, “I spend
all of my time testing and less time teaching.” While many of the teachers expressed a
desire to reteach or provide interventions based upon the results of any of these
assessments, none of them felt that they had adequate time to do so. Chapter or unit
assessments are administered on a weekly or bi-weekly basis in both districts, depending
upon the subject, and both districts administer district-wide benchmarks four times a
year. Teachers at Orchard shared the fact that the district had mandated weekly
assessments in both English-language arts and mathematics at the beginning of the school
year, but then changed to a bi-weekly schedule after hearing from many of the teachers
and administrators that the pace of the assessments made it difficult to use the data in a
meaningful way. This sentiment is summed up in the following remark by a teacher from
Orchard:
It’s basically push, push, push, push in our district. Like every week you have to
submit. By Friday at 4:00, you have to have it turned in. You have to have it
averaged. And then Monday, you’re on to teaching something else. The next
Friday you have to have it turned in and averaged.
The teachers at Orchard each expressed similar concerns over the weekly assessment
schedule and all of them favored the move from weekly to bi-weekly assessments. While
this move may have allowed for more time to gather and analyze student assessment data
at Orchard, many of the teachers at both schools still consider the time available for the
collection and analysis of the data to be inadequate. Multiple teachers described the
process as being “overwhelming” and one teacher from Mesa expressed his
dissatisfaction over his perception of how, when it comes to data analysis, the teachers
are “required to do more of it on [their] own time.”
The issue of time is not limited only to the collection and analysis of the data, but
also to the implementation of interventions designed in response to them. In light of the
heavy emphasis state assessment place on the areas of English-language arts and
mathematics, many districts have adopted pacing guides to ensure that all content
standards in these subject areas are covered prior to the state exams. While this measure
has been implemented in both districts in the study in an effort to increase student
exposure to the state standards, many of the teachers interviewed expressed concern over
the effect of these pacing guides on their ability to react to the data that they have
analyzed. At both schools, a majority of the teachers stated that the pacing guides keep
them from being able to reteach material that students might not have fully grasped. This
belief has led them to resent the practice of collecting and analyzing data, as they do not
feel that the ultimate goal of these processes, using the data to guide instruction, is being
achieved. Many of the teachers have admitted to “going through the motions” of the
process to satisfy school or district mandates, but no longer trying to find the time to act
upon the data. A teacher from Orchard stated it this way:
I still have to move on, so it’s kind of, you know, where is the time where I
actually get to go back and teach that? So, I would say that it’s in my lesson plan
where I actually set aside a time, but we took another test this week, so now I’m
analyzing data and I need to talk to, you know, this kid and this kid. I’ll have it
all neat in my lesson plan where I’m going to have this time to talk, but it’s not
like that.
The need for additional time to gather, analyze, and respond to student assessment
data was a recurrent theme. Analysis of the interview data showed that the mere
existence of additional time will not, in and of itself, ensure that all teachers embrace
the process of data-driven decision making. As is discussed in the next
section, the quality of such time may be as important as, if not more important than, the
quantity.
Use of Collaboration Time
Both Mesa and Orchard have developed structures that allow for regular
collaboration amongst teachers by grade level. These meetings occur weekly for
approximately one hour. The principals at both schools stated that the primary purpose
of this collaboration time is to allow teachers to analyze their data collaboratively and to
discuss possible interventions. While the teachers at both schools agreed that this is the
intended purpose of these meetings, very few of them felt that the collaboration time is
used effectively in that regard.
While research highlights the importance of structure, focus, and consistent norms
of collaboration to the success of collaboration meetings, the teachers at Mesa and
Orchard described their meetings using such terms as “free form,” “casual,” and
“general.” While some of them referenced agendas, none of them felt that these agendas
were fully adhered to. One teacher from Orchard stated, “I don’t think that there are set
norms of how our meetings should be run.” The common feeling amongst the teachers
interviewed was that the meetings lack focus and that the tendency to discuss other issues
prevents them from really analyzing assessment data or discussing their findings. It
became clear in speaking to the teachers that these other issues are not limited solely to
academic concerns. Many of the teachers expressed concern over the amount of time that
is spent on what they view as complaining, rather than on constructive issues. One
teacher lamented the practice of colleagues “bringing up their problems of the day” while
another referred to the collaboration meetings at her grade level as a “complaining fest.”
One teacher summed up the impact of this practice by saying, “if you start the meeting
off with grievances, you’re never going to get there.” It was clear from the interview data
that many teachers have come to view the collaboration meetings as not serving their
intended purpose and that they do not value them in their current form. This reality may
explain the lack of enthusiasm exhibited by many of these teachers with regard to
participating in the analysis and discussion of student data.
While an overwhelming majority of the teachers expressed a desire for more
focused and constructive collaboration meetings centering around the analysis of student
data, a couple of teachers expressed their belief that such a meeting would not be helpful
to them. These same teachers also shared their dislike of the common assessments being
used and did not feel that anything was to be gained from sharing the results of their class
with other teachers. Their responses suggest that designing a collaboration
structure that meets the perceived needs of every teacher may not be possible. With
regard to collaboration, it is clear, however, that the absence of clear and consistent
norms is an obstacle to the implementation of data-driven decision making.
Technology Issues
With an ever-increasing number of data storage programs and services becoming
available to districts and schools, the use of technology has become an integral part of the
process of data-driven decision making. Like many districts, both Mystic Valley and
Smithfield use web-based systems to store student assessment data. In speaking with the
teachers and principals within these two districts, however, it became clear that there are
distinct differences in the way in which these systems are used. Mystic Valley uses Data
Director to store student demographic information, state assessment results, and the
results of benchmark tests. Data Director is a web-based system into which student
assessment data is uploaded and then stored for teachers and administrators to access.
This data can be downloaded in a variety of formats, facilitating the process of data
analysis. Teachers have access to this system and they can log on at any time to check
the latest results for their students and to create any one of numerous reports merging
multiple sources of data. To store student demographic information and assessment
results, Smithfield uses Data Driven Classroom. Much like Data Director, Data Driven
Classroom is a web-based program through which teachers can access student assessment
data. Both teachers and administrators in Smithfield have been granted access to this
system. While users can access state assessment results for classes and individual
students, the results of benchmark assessments are not stored in this system. For the
benchmark tests, software developed by Pearson Learning is used to create reports for
teachers and administrators. Teachers do not access the Pearson program, but instead
wait to have their reports prepared for them by the site computer technician. While these
reports can be generated by the technician at any time, data cannot be merged between
the Data Driven Classroom and Pearson systems.
Within the area of technology, one theme that arose was a discrepancy in both
teachers’ knowledge of the databases available and the frequency with which they
accessed them. At Mesa, every teacher was aware of the existence of the Data Director
program and the fact that they could access this system periodically to view the results of
their students. This is not to say, however, that every teacher accesses the system with
the same frequency. Some teachers report logging on only “at the beginning of the year,”
while others stated that they access the system as often as “three times per week.” On
average, the teachers at Mesa reported accessing the system between one and two times
per month.
As might be expected, teachers at Orchard reported accessing Data Driven Classroom
less frequently than teachers at Mesa access Data Director, due to their inability to obtain
the results of benchmark assessments within the system. With this being said, two of the
teachers interviewed stated that they were not aware of the existence of either of the
systems. Needless to say, they had never accessed the Data Driven Classroom site. Of
the teachers who were aware of the system, they reported accessing it anywhere from
“five times a year” to “four or five times a month.” Interestingly, it was learned that
shortly after the start of the school year, an issue arose involving the log-in procedure for
Data Driven Classroom which caused the teachers to be locked out of the system. At the
time of the interview, this situation had yet to be resolved and the teachers were still not
able to access the system. Some of the teachers expressed frustration with this situation
while others stated, off the record, that they were glad to have “one less thing to do.”
The technology-related problems at Orchard were not limited to issues of access
or to Data Driven Classroom. While the Pearson Learning software allows for
benchmark tests to be scanned and for reports to be generated, many teachers reported
that the necessity for schools to share scanners created long lag times between when the
tests are administered and when the reports are received. The principal at
Orchard acknowledged the impact of the hardware sharing and expressed similar
concerns to those of her teachers with regard to addressing it.
While discussing the frequency with which teachers access the web-based data
storage programs at both schools, it became clear that the teachers had vastly different
levels of familiarity and comfort with regard to technology. Some teachers viewed
themselves as being very technologically savvy and reported no issues with learning how
to access the database or how to generate reports. Other teachers who were not as
comfortable working with technology made comments such as, “I am probably the least
tech person on this site.” One teacher at Mesa revealed that while he is comfortable
analyzing assessment data, he is not so comfortable having to access the data on his own.
He explained this by saying, “If it’s given to me, that’s one thing. If I’m having to go and
pull it, which is more and more how it’s operating now because of Data Director, it’s
more difficult.” This finding holds some importance as many of the teachers attributed
one’s ability to make good use of assessment data to his or her ability to use technology.
Teachers who view themselves as not being able to successfully use technology may feel
that they are not able to engage in the process of data-driven decision making effectively.
This perceived connection between technology and data use surfaced when teachers
discussed the attributes of someone who is able to effectively use student assessment
data. In response to this issue, one teacher shared the following opinion:
The system that we use, computerized system of accessing data, is really
convenient for people who know how to use it, but, you know, some teachers who
aren’t as technology inclined have a really hard time figuring out how to use the
system and apply the data.
Another teacher shared her feelings about the connection between a teacher’s familiarity
with technology and effective use of data by saying, “I see some teachers who are on the
computer and they create like these spreadsheets and are like, ‘Oh, look, I’m
manipulating the data to show me this’ and it’s very intimidating.” This notion of the
importance of the link between technology and data use was not shared only by the
teachers who considered themselves to be less able to use technology effectively. One
teacher who considered herself proficient in the use of technology summed up her
feelings about a teacher’s willingness to use student assessment and his or her ability to
use technology as follows: “I look at teachers that haven’t had the experience in other
careers to learn that and I’m like, ‘Wow, this would be really hard to use.’ I don’t know
if I would use it if I couldn’t do that.”
Clearly, the increasingly important role that technology plays in the storing and
analyzing of student data requires that teachers have some level of familiarity with using
a computer, accessing a program, and performing basic functions to generate reports.
Teachers who possess these abilities may be more apt to engage in the process regularly
than those who lack these skills. With this in mind, it is not too surprising that it is in the
area of technology that many teachers expressed a desire for additional training and
support.
Disparities in Training
Professional development in the school or district setting can cover many topics
and take a variety of forms. For the purposes of this study, teachers were asked about the
availability and perceived effectiveness of professional development or training specific
to the technology used for collecting or analyzing student assessment data, or to the use
of assessment data to inform instruction.
When discussing the availability of professional development related to data use
or technology, an interesting theme emerged amongst teachers at both schools. While a
few teachers stated that they were not aware of any training that had been conducted in
these areas, most of the teachers acknowledged that either the district or the school did
provide training in at least one of the areas. What was interesting was that many of the
teachers indicated that they had not been included in these trainings because they were
not part of either the data team or leadership team at their school. It was clear from some
of the responses that a few of the teachers harbor some resentment over the fact that they
were not included in these trainings. One teacher stated that “when we’re asked to go
somewhere else…the leadership team goes. A lot of the other teachers just kind of fall
out of the loop.” In contrast to the teachers who were not members of the data team or
leadership team, those who were on such teams reported receiving “so much” training or
“weeks of training” on these topics. One teacher who is a member of the leadership team
at Mesa recalled that “there was a Leadership Academy a couple of years ago and it was
helpful. They sort of walked us through the different ways you could look at data. We
put it into a pie chart and graph and looked at individual data as well.” The stark contrast
in responses may indicate that the inclusion of only certain teachers in the trainings
associated with data use or the use of technology, with no follow-up training later
conducted with the entire staff, may result in some teachers feeling left out and hence
removed from the process of data-driven decision making.
Although most teachers expressed interest in attending professional development
in the areas of technology and the use of assessment data to improve instruction, it is
important to note that not every teacher who attended such trainings found them to be
useful. For example, a teacher from Orchard who had participated in training on the use
of the web-based student data system stated that she “found it interesting, but not
necessarily useful.” Another teacher stated, off the record, that she considered the
training to be “a waste of time.”
With regard to the topics addressed through professional development, an
overwhelming majority of the teachers at both schools indicated that they desire
additional training specific to the use of technology to support the process of data-driven
decision making. Almost all of the teachers felt that training in this area would allow
them to use assessment data more effectively with their students. Teachers expressed
interest in learning how to create reports and various graphs representing student
assessment results. Far more teachers indicated a desire for further training in the area
of technology than for professional development specific to using the data to modify
instruction. This may be related to the perception amongst the teachers that those
teachers who are adept at using technology are necessarily better able to make use of
student assessment data in their classrooms.
Lack of Focus/Support from Site Leadership
While factors such as time, technology, and training may affect the level of
engagement that teachers demonstrate toward the process of data-driven decision making,
it is possible that the actions and attitude of the site leadership toward this process can
also impact the degree to which it is implemented at a school site.
Both of the schools that were chosen for this study met the criteria of having a
principal who has made an effort to implement data-driven decision making at her site.
While the degree to which each principal actually supports data use amongst her
teachers could not be accurately ascertained during the limited time spent at each site, the
responses from their teachers and comments shared during their interviews provide a
glimpse into the reality behind the rhetoric. When asked to articulate the ways in which
she supports teachers in using student assessment data, the principal at Mesa stated that
she mainly provides them with the data that is to be organized in a binder referred to as
“the Bible.” All teachers are expected to maintain this binder as the year progresses.
When asked the same question, the principal at Orchard also cited the distribution of data
and reports to the teachers as the primary way in which she supports data use by teachers.
The principal at Orchard also mentioned occasionally providing feedback to teachers
after their data has been submitted to her, but she acknowledged that time constraints and
a lack of support staff make it difficult to do so regularly. Beyond providing teachers
with their data and sharing some feedback, neither principal mentioned providing more
substantial support to their teachers as they attempt to analyze and use student assessment
data.
Confirming the statements made by both principals, the teachers identified the
practice of providing them with data as the primary way in which they are supported by
the site leadership. While a few of the teachers expressed appreciation for this support,
many of them were critical of the level of assistance, or lack thereof, that they have
received from their principals. In addition to the desire for more training described in the
previous section, multiple teachers also called for additional support with the
collaboration process. One teacher described her principal’s current role in the process
by saying that “she could walk in and see what, you know, you’re talking about…that’s
very rare though.” Another teacher described her principal’s approach to participating in
the collaboration meetings as “putting on the board ‘discuss assessments’ blah, blah,
blah,” but then leaving it up to the participants to decide how in depth they will go.
These teachers expressed a desire for “more consistent meetings” and “guided rules” set
by the principal.
With regard to the role of the site leadership in supporting data use, some teachers
noted that the emphasis placed on this practice starts off strong at the beginning of the
year, but that it wanes as the year moves on. As one teacher put it:
…it starts off strong and then, it starts, you know, things happen and other
mandates come, other things that they want people to do. So, it starts dwindling,
and, I mean, it’s more ‘turn in your report’ versus ‘drive the instruction’ and that’s
the way I see it.
Another teacher added that the state assessment results are discussed heavily during the
first month of school, but that for most of the year “we hardly really talk about it.”
While providing teachers with student assessment data is a form of support, a
majority of the teachers do not seem to feel that it is adequate. Areas where teachers
desire additional support include a more active role in the collaboration process and a
more consistent focus on the use of data throughout the school year. For these teachers to
truly engage in the process of data-driven decision making, it was apparent that they must
feel that the site leadership is dedicated to making the process a consistent and primary
focus.
Through the course of the interviews that were conducted, it became evident that
certain external factors may be affecting the teachers’ level of engagement with data-
driven decision making. The teachers identified issues such as a lack of time,
inconsistent norms of collaboration, problems related to technology, inadequate training,
and low levels of support from site leadership as standing in the way of them using
student data effectively.
Issues Related to Internal Factors
In this section, the four themes related to internal factors that arose from the
interview process will be examined. These themes deal mainly with teachers’ feelings or
perceptions and the resulting impact on their willingness to engage in the process of data-
driven decision making. These themes include a resentment of the increased emphasis on
student assessment data, insecurity with the sharing of data, a preference for more
informal data collection, and questions of validity. These themes and their possible
impact on the regular use of student assessment data by teachers will be discussed in the
following sections.
Resentment of Emphasis on Student Assessment Data
As student assessment data continues to assume a more prominent role in the
educational process, teachers are now being asked to gather and analyze more of this type
of data than ever before. Many teachers at both sites commented on the volume of
assessment data being collected. The common sentiment was that too much assessment
data is being collected. Some teachers felt that the kids themselves were being negatively
impacted by the frequency of the assessments and many felt that the amount of data that
they are being asked to manage is excessive. One teacher summed up the view of many
by lamenting that, “it just seems like everything is data, data, data now.” Another teacher
shared her feeling that, “when there was less data, it was more valuable.” It was clear in
speaking with the teachers that many of them feel that there is a trade-off between the
collection of so much data and the implementation of effective instruction. The current
level of data collection is viewed as “getting in the way” of what are seen as good
teaching practices.
In addition to dissatisfaction regarding the volume of data being collected, many
teachers also expressed frustration over the fact that they view the collection and use of
student assessment data to be something that is being mandated by their district, as
opposed to originating at the school site. When asked who, if anyone, requires the use of
student assessment data, most teachers identified their district office. One teacher at
Mesa stated that she views the use of student data as being a “top-down thing.” An
Orchard teacher echoed her sentiments and went further by sharing that she is “resistant
to all of these mandates that [the district office staff] are bringing down.” Her colleague
added that the district’s insistence on the use of student data “is making it unbearable for
us.” If the push for the use of student assessment data is viewed as being a district office
mandate, it is plausible that some teachers may resent having to engage in the process and
therefore will not do so whole-heartedly.
Beyond their resentment of what they view as district directives to use student
assessment data, many of the teachers expressed displeasure over the way in which such
data has been, or might someday be, used against teachers. At the site level, none of the
teachers reported having assessment data used against them in a punitive way.
Some did mention feeling uncomfortable when discussing their students’ test results with
their principal, and one teacher stated that “as a teacher, we punish ourselves,” implying
that external consequences for poor performance are not necessary. Another teacher
recalled her feelings after seeing that her students performed poorly on an exam by
saying, “no one comes up and says, ‘Oh, your kids did horrible so you’re a bad
teacher,’ but the implication is always there.” While none of the teachers interviewed
have experienced punitive actions being taken against them as a result of assessment data,
it was very clear that many of them are worried that this will be the case in the future.
Teachers expressed feelings such as “fear” when discussing the possibility of student
assessment data being viewed as a reflection of the quality of their teaching. This
sentiment was expressed clearly by one teacher who admitted that, “it’s a bit scary…to
think that your data might be discussed and how your students did and you feel very
accountable.”
Dissatisfaction over the use of student assessment data was not limited to the
school or district level. Almost all of the teachers expressed misgivings over the way in
which such data is used at the state and federal levels as well. The consensus was that the
use of such data is “unfair” due to the variety of factors that can affect student learning.
The teachers viewed student data as being “misused” by politicians and being presented
in a generally negative way. The sense of frustration this view causes is evident in the
following statement made by a teacher at Mesa regarding the role of student assessment
data in legislative initiatives: “I feel like I am being set up to fail a lot of the time…I, the
teacher, am being punished because I’ve chosen to work at a low-socioeconomic school.”
If teachers are expected to use student assessment data effectively, it stands to
reason that they must value this data and the reasons for which it is being used. The
issues addressed in this section demonstrate that some teachers at both Mesa and Orchard
resent the push to collect and use data, the origins of this mandate, and the uses, or
potential misuses, of the data itself.
Insecurity with the Sharing of Data
While teachers may use their individual classroom data to improve their own
instruction, the improvement of instructional practices schoolwide may benefit from the
sharing of data across classrooms and throughout the school. At both of the schools
visited, some degree of sharing of data has occurred, either in the form of reviewing
individual classroom data at grade-level meetings, or sharing overall grade-level
performance at meetings of the entire staff. It should be noted that none of the teachers
reported having their individual data shared with the entire staff at any meeting. The
sharing of assessment data beyond the individual classroom is not the norm in most
schools and this practice, in any form, has caused some anxiety for the teachers who were
studied.
As has already been discussed, many of the teachers have doubts about the
strength of the connection between their teaching and the results of their students on
district benchmark or state exams. Many teachers expressed fear of what they viewed as
the inevitability of being judged by their colleagues based upon the performance of their
students. One teacher stated that, “I am grateful that my particular class is not put up on
the overhead and shown.” Another teacher described it this way:
They gave us the results of our grade level and it was good because I was on the
high end…but then you saw teachers who were on the low end and…I’m
experiencing that…it might be a little uncomfortable for the teacher who is at the
low end if everyone knows…I think that’s something that doesn’t have to be
shared with everybody.
While these teachers focused on the effect that sharing data may have on a
teacher’s comfort level, others discussed the effect that seeing their students’ test
results has on them personally. A teacher at Mesa said, “I am encouraged when I
see the test scores go up…that makes me feel like I’m doing a better job.” She
also stated that a drop in test scores “makes you feel like maybe you’re not doing
everything you could.” A teacher from Orchard, who had witnessed her students’
test scores decline, shared how she, “started feeling a little inadequate in my own
teaching.”
Whether due to concerns about the accuracy of the data, insecurities about sharing
data, or a fear of developing doubt in one’s own abilities, the use of student assessment
data is clearly affected by a teacher’s own perceptions and confidence level. It is
possible that discomfort over this data being shared may turn
some teachers off from the process of data-driven decision making.
Preference for Informal Data Use
Data can take many forms in the classroom. For the purposes of this study,
teachers were asked mainly about the use of data from state assessments and benchmark
tests. While these assessments were the focus of the interviews, it became evident
throughout the data collection process that teachers preferred the use of more informal
assessments, such as questioning and observations, to these more formal assessments in
almost every case.
Teachers reported informally assessing their students in a variety of ways,
including “talking,” “sharing things,” “listening to kids read,” and gathering “anecdotal
information.” One teacher responded that he uses his “own judgment more than any
assessment.” When discussing his preference for informal assessment over formal
assessment, another teacher described how he puts more emphasis on “practice rather
than the big game.” The preference for this type of informal data over more formal
assessment data was consistent across all of the interviews. The teachers mentioned that
they favor informal assessment because they view it as being timelier, more accurate, and
more reflective of their teaching than formal assessments. They described how students
being scared, nervous, sick, or having “a bad day” might affect their performance on
more formal assessments. Two teachers discussed how their own experience with formal
assessments as students has affected their view of these assessments as teachers. A
teacher at Mesa described that because she knew “how to take tests,” she was placed in
classes that were at a higher level than she could handle. In contrast, a teacher at Orchard
recalled that, despite being capable, she was constantly placed in lower-level classes due
to her inability to perform well on formal exams. While these experiences are dissimilar,
they both have the same effect of coloring the teachers’ views on the utility of formal
assessments.
While informal assessments must certainly be taken into consideration when
designing lessons and interventions, most data analysis initiatives focus on the use of
results from chapter, benchmark, or state assessments to drive instruction. If teachers
prefer the use of informal assessments to these more formal assessments, as was stated by
most teachers in this study, it is possible that they will not show much enthusiasm for the
process of data-driven decision making, at least as it is conceptualized by the leadership.
Validity Issues
If teachers are expected to make the use of assessment data a part of their regular
professional practice, it stands to reason that they must believe that the data they are
receiving is valid. A theme that surfaced throughout many of the interviews was a clear
distinction in beliefs regarding the validity of different types of assessments.
When asked about the results from state assessments, most teachers indicated that
they did not find these results to be very useful. While some teachers mentioned
using them at the beginning of the year to group students, others have found no use for
them at all. This might be because they believe, as one teacher stated, that “the test is so
far above [my students] that it doesn’t give me any useful data.” Despite this feeling, two
teachers from Orchard did mention that they use the results from state assessments not so
much to design instruction for students, but rather to reflect on their own teaching. They
indicated that they use these results to assess their strengths and weaknesses as a teacher
so that they can improve their professional practice.
Compared to the state assessments, the teachers interviewed found the results of
the benchmark tests to be more useful. This is not to say, however, that they consider the
results from these tests to be useful, just more useful than the results from the state
assessments. In fact, of all of the areas discussed during the interviews, none inspired
stronger responses than the topic of the district benchmark tests. While a few teachers
mentioned using these formative assessments to regroup students or to guide instruction,
a majority of the teachers focused on what they consider to be flaws with the tests.
Teachers at both Mesa and Orchard described their district benchmark tests as being too
long, containing errors, and not being aligned to the district pacing guides. These
sentiments are expressed clearly by a teacher from Orchard who stated:
They’re very long and there’s lots of mistakes in them. You know, we go back
and we find lots of mistakes and, you know, it’s been a problem with the district.
So, I think, as far as my opinion, when I see these mistakes on them, I’m just kind
of like, ‘Let’s just take it and get it over with, because it’s something we have to
do.’ And some of the kids are like, ‘What’s this? We haven’t even covered that,’
but they include it.
A majority of the teachers expressed similar feelings regarding the perceived
misalignment of the district benchmark tests and the pacing guides. It was clear
that there is some resentment over being directed to follow the pacing guides,
which many of the teachers do not favor, only to face assessments that are
perceived to include material not covered in those guides. This is not the
only issue related to the pacing guides that came up during conversations
regarding the district benchmark tests. A few teachers mentioned that, although
they feel that the material on the benchmark tests is covered in the pacing guides,
their inability to keep up with the pacing guides has kept their students from being
exposed to some of the concepts included on the benchmark tests. One teacher
described how prior to a district benchmark test she told her students, “I’m sorry
that I haven’t taught you this. Just try your best babies and we’ll go from there.”
Obviously, the perception that the benchmark tests contain errors, cover material
that is not in the pacing guides, or contain subject matter that teachers did not
have the time to teach will affect teachers’ views on the validity of the data from
such exams and therefore alter the way in which they use this data.
Of the three forms of formal assessment discussed (state assessments,
district benchmark tests, and publisher- or teacher-created chapter tests), teachers
overwhelmingly favored the chapter tests as the most useful source of data. The
most common reason given for this response was that the chapter tests more
accurately reflect the instruction in the classroom. Some teachers indicated that
they alter the chapter tests to better match what was taught, a practice that they
feel gives them data that is more reflective of student learning.
Each of the three forms of formal assessment discussed is viewed differently by
the teachers who use them. Since some are seen as being more accurate and appropriate
for their students, their results are considered to be more valid. Teachers’ differing views
on the validity of any given assessment would naturally affect their willingness to use the
data from that assessment to drive their instruction. As one teacher from Mesa stated, the
effective implementation of data-driven decision making requires “a belief from the
teachers that it is legitimate…if the teachers aren’t on board and they don’t believe that it
is really reflecting what’s happening, they’re not going to want to use it.”
Effect of Teacher Characteristics
In addition to gathering data on the external and internal factors that might affect
a teacher’s level of receptiveness to data-driven decision making, this study also sought
to examine the individual characteristics of the teachers in the sample to look for patterns
that might lead to conclusions about common traits amongst teachers who have not
embraced or are resistant to the practice of analyzing assessment data to guide
instruction.
Table 7 indicates the grade level taught, gender, approximate age, and years of
teaching service for each teacher in the sample. The table also indicates whether the
principal designated each teacher as attempting or not attempting to use assessment data
to guide instruction.
Table 7: Characteristics of Interviewed Teachers
Teacher Grade Gender Age Years Teaching Designation
S1 4 Female 41-50 10 Not Attempting
S2 5 Female 31-40 10 Not Attempting
S3 3 Female 31-40 13 Not Attempting
S4 K Female 31-40 12 Attempting
S5 6 Male 31-40 7 Not Attempting
S6 2 Female 21-30 7 Attempting
S7 4 Female 50+ 10 Not Attempting
S8 1 Female 50+ 15 Attempting
W1 2 Female 21-30 6 Not Attempting
W2 2 Female 21-30 6 Attempting
W3 1 Female 31-40 13 Not Attempting
W4 4 Female 31-40 6 Attempting
W5 K Female 31-40 6 Attempting
W6 5 Female 41-50 7 Not Attempting
W7 5 Male 31-40 13 Not Attempting
W8 3 Male 31-40 8 Not Attempting
Based on the information in Table 7, the individual characteristics that were
recorded for each teacher will be discussed and any relevant themes or patterns will
be highlighted. It should be made clear that the teachers interviewed represent only a
portion of the teaching staff at their respective site and that any commonalities found
amongst these teachers may not be applicable to the larger group. It should also be
mentioned that the designations listed above were assigned by the principals and that
both principals admitted to having some difficulty in determining the exact degree to
which some of the teachers are attempting to use assessment data.
Grade Level
In total, 16 teachers were interviewed as part of this study. Of the 16, nine were
lower grade teachers, teaching kindergarten through third grade, and seven were upper
grade teachers, teaching fourth grade through sixth grade. Of the nine lower grade
teachers, four were designated as not attempting to engage in the process of data-driven
decision making. Of the seven upper grade teachers, six received this designation.
Within the sample, therefore, upper grade teachers (six of seven) were more heavily
represented in the “not attempting” group than lower grade teachers (four of nine).
Gender
Within the sample of 16 teachers, 13 of them were female and three were male.
Of the 13 female teachers, seven were designated as not attempting to use assessment
data regularly. Of the three male teachers, all three received this designation. While
there were obviously more females than males represented in the study, the high
percentage of male teachers in the “not attempting” group could indicate that there is
some connection between gender and levels of receptiveness to the process of data-driven
decision making.
Age
The teachers included in the sample represent a broad range of ages. To help in
identifying patterns in the data, teachers were placed in one of four age groups: 21-30,
31-40, 41-50, or 50+. Of the three teachers aged 21-30, one was designated as not
attempting to use data. Six of the nine teachers aged 31-40 were placed in this group, as
were both of the teachers aged 41-50. Of the two teachers over fifty years of age,
one was designated as not attempting.
While the small number of teachers represented in each group might make it
difficult to draw meaningful conclusions from the age distribution, the data show that
seven of the 12 teachers between the ages of 21 and 40 were designated as not attempting
to use assessment data. Of the four teachers who were 41 years of age or older, three of
them were assigned to this category. This data might indicate that age does affect one’s
receptiveness to this practice, with older teachers possibly being less receptive than
younger ones. It is possible that this finding is related to the fact that younger teachers
often have more experience and comfort with using technology.
Years of Service
The years of service amongst the teachers in the sample ranged from a low of six
years to a high of 15 years. Of the 16 teachers, eight had ten or more years of service and
eight had fewer than ten years. Of the eight teachers with ten or more years of service,
six were designated as not attempting to use data; of the eight teachers with fewer than
ten years of service, four had this designation. The slight difference between the two groups is
not significant enough to conclude that years of service might affect a teacher’s
receptiveness to the process of data-driven decision making.
Involvement in the Process
Although it was not specifically asked about during the interviews, an interesting
theme that emerged was the teachers’ view of the degree to which they were involved in
the adoption of the practice of data-driven decision making. It was clear that many of the
teachers felt left out at the early stages of the adoption of this process. Most of the
teachers who were interviewed were not members of the leadership team or the data team
at their school and, therefore, did not have access to the training, professional
development, or general information that teachers on these teams were privy to. It was
interesting to note that all six of the teachers who were identified as attempting to use
data were members of either the leadership or data team. Whether these teachers became
receptive to data use as a result of the training that they received as team members, or
whether their general receptiveness served as the criterion for their selection to the team,
is not known. The comments made by many of the teachers who were not on
these teams suggest that the assignment to the leadership or data team, along with the
inherent increased involvement in the adoption of the process of data-driven decision
making, may affect teachers’ willingness to engage in regular data use.
While there might not be any single, reliable profile of a teacher who is resistant
to engaging in the process of data-driven decision making, some observations can be
made about the characteristics common to the teachers in the sample who were identified
as not attempting to use data. Proportionally, data use was engaged in less frequently by
teachers in the upper grades, by males, and by those over forty years of age. More
extensive studies would have to be conducted to determine whether these patterns hold
when applied to a larger sample of teachers.
Conclusion
This chapter presented and analyzed the data gathered from teachers at Mesa
Elementary School and Orchard Elementary School regarding the reluctance of some
teachers to engage in the process of data-driven decision making. The
research questions focused on both internal and external factors that might serve as
obstacles to teachers’ involvement in this process, as well as an examination of the
characteristics of the individual teachers involved, in hopes of identifying common
characteristics amongst them. Through the analysis of the data collected during the
interview process, several prominent themes emerged. Each theme was critically
examined, linked to the appropriate research question, and analyzed thoroughly to
determine its bearing on that question. The thematic underpinnings included under each
research question represent the most pertinent data gathered for that specific topic.
Amongst the issues cited by teachers as hindering their involvement in the
process of data-driven decision making were a variety of external factors. The lack of
sufficient time allotted for the analysis of assessment data and for working with
colleagues to make use of the data was emphasized. With regard to the limited time that
is available, the teachers expressed a desire for more structured collaboration meetings
with clearly identified norms of collaboration. A recurrent theme was the impact of
technology issues on the teachers’ willingness to engage in the process of data-driven
decision making. Within this area, issues included a lack of proficiency in the use of
technology and also problems related to the hardware and web-based programs used. A
lack of training in both the use of technology and the effective use of data to improve
instruction served as another deterrent to teachers embracing the process. Finally, the
role of the site leadership in establishing and maintaining a focus on the regular use of
data was discussed, with many teachers expressing a desire for more active involvement
and more consistency.
In addition to the external factors already addressed, an analysis of the data
showed that multiple internal factors also affect the level of engagement that teachers
exhibit with regard to the process of data-driven decision making. Amongst many of the
teachers, it was clear that there is some degree of resentment over the emphasis on the
use of student assessment data. Whether due to an association between this data and
ill-favored legislative policies or a heightened sense of accountability, it was clear that
many of the teachers are not supportive of the extensive use of assessment data. Also
affecting their receptiveness to data-driven decision making were feelings of insecurity
over the sharing of student assessment data, a practice that is an integral part of the
process. For a number of reasons, most prominently a fear of being viewed by their
colleagues as being ineffective, most of the teachers were not comfortable with this
practice. A strong preference for informal assessment over formal assessment was
identified as another obstacle that is preventing these teachers from embracing the
process of data-driven decision making. Lastly, doubts regarding the validity of the
results from many of the assessments through which the data is derived have led some of
these teachers to discount the value of collecting and analyzing the data.
While most factors, such as time, norms of collaboration, training, assignment of
value, and mistrust of data use, were common themes cited by teachers at both schools, a
smaller number of factors appeared to affect one site more significantly than the other.
than another. Most notably, the impact of technology issues at Orchard was much more
profound than at Mesa. This was due to the complications caused by the use of two
separate systems to manage assessment data and also the need to share hardware with
other schools. Teachers and the principal at Orchard also expressed a much more
significant degree of displeasure over the impact of district office decisions and mandates
on the process of data-driven decision making. With regard to Mesa, the fact that the
principal has been at the site for less than two years has posed a significant challenge.
Promoting the implementation of data-driven decision making in the absence of
established relationships with her teachers has proven to be quite difficult.
In sum, there are numerous factors that were found to affect a teacher’s
willingness to engage in the process of data-driven decision making. Some of these
factors are external and may be addressed through changes in practice, while others are
more internal and might indicate a need for more extensive work with teachers to address
issues such as values and confidence. Regardless of the nature of the factors, these
obstacles must be addressed if administrators hope to have their teachers embrace the
processes of collecting, analyzing, and acting upon student assessment data.
The following chapter will provide suggestions for administrators who aim to
address these issues, as well as highlight the need for future research in this area.
CHAPTER FIVE
Summary and Implications of Findings
Introduction
In an effort to improve classroom instruction and increase student achievement,
many districts and schools are engaging in the process of data-driven decision making.
This practice often involves collecting and analyzing data from a variety of assessments
and using this information to guide instruction in the classroom. While this process has
been embraced by many teachers, others have been more reluctant to engage in it. The
purpose of this study was to examine the factors that contribute to teachers’ resistance to
the process of data-driven decision making and to determine whether or not certain
personal characteristics affect a teacher’s willingness to engage in this practice.
Determining the root causes of resistance amongst teachers to the regular use of
student assessment data to inform instruction requires the consideration of a broad range
of factors. This study examined the role of both external and internal factors in the
process of data-driven decision making and found that both types of factors can have a
significant effect on teachers’ receptiveness to it. With regard to external factors, the
concept of time can play a critical role. The lack of adequate time to collect, analyze and
respond to student assessment data, as well as teachers’ perceptions of the effectiveness
of collaboration time, might impact their level of engagement with data-driven decision
making. The accessibility and efficiency of various forms of technology and the presence
or absence of appropriate training must also be considered. The way in which teachers
perceive the site leadership and its commitment to the process of data-driven decision
making might also affect the implementation of this process at a school site.
In addition to these external factors, multiple internal factors must be considered
as well. Teachers’ views on assessment data and insecurities over the sharing of data
might impact their receptiveness to the process of data-driven decision making. A
mistrust of data use amongst teachers or doubts regarding the validity of the data received
could certainly explain a reluctance to embrace the practice. A preference for less formal
data collection on the part of teachers might also contribute to higher levels of resistance to
the process.
Beyond examining this wide array of external and internal factors, this study also
intended to determine the effect, if any, that personal characteristics, such as age, years
of service, assignment, and gender, might have on the level of engagement that teachers
exhibit toward the process of data-driven decision making.
This study involved an analysis of teachers’ responses to questions pertaining to
the role of assessment data and the perceived supports for and barriers to the regular use
of such data in the classroom. Data for the study were collected through individual
interviews with classroom teachers and principals. A thorough analysis of this data
resulted in a variety of findings that might be useful for administrators who are looking to
implement the process of data-driven decision making at their school sites. The study
found that providing teachers with time to analyze data is not an effective form of support
unless that time is structured and clearly focused. Technology was found to have both
positive and negative effects on the process, depending upon the capabilities of the
systems in place, the level of proficiency amongst the teaching staff, and the frequency
with which the technology is used. The role of site leadership in promoting and
supporting the process of data-driven decision making was found to be an element critical
to its successful implementation. The personal beliefs held by some teachers regarding
the utility and validity of assessment data, as well as their preference for informal
assessment, served as barriers to their embracing the practice of regular data use. The
implications of data potentially being shared amongst teachers were also found to affect
their level of receptiveness. Age appeared to affect teachers’ receptiveness to data use,
with younger teachers possibly being more receptive to the process. Gender
was also found to be a possible key factor, as the male teachers included in the study
were less likely to engage in the process than the female teachers. In sum, the causes of
teacher resistance to the process of data-driven decision making are complex and many.
Each factor plays a role, with some bearing more weight than others.
Connections to Prior Research
In reviewing the findings from this study, many interesting connections can be
made to current literature in the areas that were examined. Chapter 2 outlined research
on external factors, such as the quantity and quality of time, technology, professional
development, and leadership support, as well as internal factors, such as values,
perceptions of validity, mistrust of data use, and self-efficacy.
External Factors
Quantity and Quality of Time
As discussed in the Time subsection of the literature review, prior research has
shown that providing teachers with adequate time to collect, analyze, and collaborate
around student data is vital to the successful implementation of this process at a school
site (Supovitz & Klein, 2003; Armstrong & Anthes, 2001). This section also highlighted
Marzano’s (2003) assessment of the inadequacy of allotted instructional minutes, in light
of the extensive amount of material contained in the state content standards that are
required to be covered over the course of a year. This sentiment was consistent with the
findings of this study. Many teachers reported feeling overwhelmed by the quantity and
pace of the curriculum that they are teaching. The introduction of multiple assessments
and the expectation that time would be dedicated to analyzing the results from these
assessments was viewed by many teachers as infringing upon the precious minutes that
they have for instruction. This finding is consistent with that of Ingram et al. (2004), who
discussed the perceived trade-off between the collection and analysis of data and
effective classroom instruction.
Both of the schools in the study have collaboration models in place, designed to
allow teachers to analyze and discuss student data. While the expectation might have
been that most teachers would describe the amount of time allocated to this process as
being inadequate, this was not the major finding. With regard to the collaboration time, it
was the quality of the time, not the quantity, that most teachers viewed as an obstacle to
the successful implementation of this process. While some teachers deemed the time
allotted for collaboration to be adequate, the perceived lack of norms of collaboration was
seen as an obstacle by most. Although DuFour (2006) and Young (2006) identified the
need for such norms, the high value placed on norms of collaboration by teachers
themselves represents a new finding.
Technology
The Technology section of the literature review stressed the importance of
providing teachers with access to both technology and timely data, factors that Wayman
(2005) identifies as separating schools that effectively engage in the process of data-
driven decision making from those that do not. This notion was supported by the
findings from this study. The important role that technology plays in the process of data-
driven decision making was evident in the responses of both teachers and administrators.
Both groups identified the ability to use technology as having a major influence on a
teacher’s attitude toward the process. The strong link drawn between technological
proficiency and teachers’ levels of engagement with data-driven decision making is a
new finding in this field.
Although both districts maintained a database of student assessment results,
Smithfield did not use its system to its full capacity. Instead of storing both summative
and formative assessment results for teachers to access, the district chose to store only
state assessment scores in its system. This tendency to use only a portion of a system’s
capabilities was something that Supovitz and Klein (2003) had noted as being common in
many districts. It should be noted, however, that maximizing a database to its fullest
potential does not guarantee that teachers will take advantage of all that the system has to
offer. As was found in the study, teachers who are not receptive to the process of data-
driven decision making are less likely to access the database to examine student
assessment results.
Professional Development
Prior research has established the link between a coherent and consistent
professional development program and the effective implementation of data-driven
decision making (Datnow et al., 2007; Supovitz & Klein, 2003). According to Elmore
(2002), one of the hallmarks of effective professional development is that it is consistent
and ongoing. In contrast to these models, the professional development that has been
offered at both of the schools in the study was limited at best. Many of the teachers
involved in the study reported never having received any professional development in the
areas of data-driven decision making or using technology to support regular data use.
This finding is consistent with Wayman (2005), who determined that many teachers do
not receive the training necessary to use technology appropriately.
Any training that was offered at either of the two schools was limited to a small
number of teachers, mainly those on the leadership or data teams, and was not later
shared with the rest of the teaching staff. Datnow et al. (2007) describe how schools that
have successfully implemented the process of data-driven decision making require that
teachers who receive training regularly share their knowledge and expertise with the rest
of the teachers at their site. This practice did not occur at either of the two schools that
were studied and may indeed have been a barrier to some teachers’ receptivity to
data-driven decision making.
Leadership Support
A finding that was particularly clear in analyzing the data is that, when it comes to
engaging teachers in the process of data-driven decision making, leadership matters.
While prior research has already highlighted the prominent role that school leaders often
play in reform initiatives (e.g., Datnow et al., 2007; Kerr et al., 2006; Young, 2006;
Lachat & Smith, 2005; Wayman, 2005; Supovitz & Klein, 2003), the findings of this study
would indicate that the successful implementation of the process of data-driven decision
making might not be possible in the absence of strong, focused, and consistent support on
the part of the principal. These findings were quite consistent with those of Datnow et al.
(2007), Young (2006), and Wayman (2005) with regard to the critical role of agenda
setting and the establishment of clear and consistent norms of collaboration by the site
leadership. Many teachers in the study considered the absence of such norms to be a
major obstacle to the implementation of data-driven decision making at their school.
It may be in the area of leadership that this study produced its most significant
finding. While previous research on this topic has identified strong leaders as
contributing to effective use of data amongst teachers, this study found that leaders who
are not fully committed to this process may, in fact, serve as obstacles to its
implementation.
Internal Factors
The Role of Values
The level of engagement that teachers exhibit toward the process of data-driven
decision making is affected by more than external factors, such as technology and
training. This study found that values play a pivotal role in teachers’ responses to the
results from state examinations and district benchmark assessments. Consistent with the
findings of Supovitz and Klein (2003) and Ingram et al. (2004), teachers in the study did
not assign as much value to state and district assessments as they did to less formal
assessments and anecdotal observations.
Perceptions of Validity
If teachers are expected to wholeheartedly engage in the process of data-driven
decision making, it is necessary that they perceive the data that they are working with as
being valid and reliable. With regard to state assessments and district benchmarks, the
teachers in the study did not view a majority of the data generated from these tests as
being valid. This finding is consistent with those of Kerr et al. (2006) and Marsh (2007),
who cited a connection between questions of validity and a lack of value for assessment
data on the part of teachers. These questions of validity resulted in teachers discounting
the value of the assessments that lay at the heart of the data-driven decision making
process.
Mistrust of Data Use
To truly engage in the process of data-driven decision making, teachers must be
willing to share data openly and freely, without fear of reprisal. Ingram et al. (2004) and
Mandinach et al. (2006) identify a mistrust of the uses of assessment data as an obstacle
to the implementation of this process. The findings in this study are consistent with this
research, as many of the teachers who were interviewed expressed fear over possible
misuses of assessment data. It is interesting to note, however, that while they were
fearful of misuses of data, none of these teachers could recall any experience where they
or other teachers they know had had data used against them inappropriately. While the
research suggests that actual misuses of data can have ripple effects that deter teachers
from embracing the process of data-driven decision making, this study showed that the
mere fear of misuse can have a detrimental effect on the process, even when no actual
misuse has occurred.
Self-Efficacy
In the absence of data-driven decision making, it is possible for teachers to
confidently deliver instruction without ever fully grasping the extent to which their
students have mastered the material. Engaging in the routine of analyzing student
assessment data allows a teacher to develop an accurate picture of the effectiveness of
their instruction, which can result in an increase or decrease in self-efficacy, depending
upon the performance of their students. In their research, Ingram et al. (2004) noted the
link between data-driven decision making and self-efficacy. The findings of this study
serve to further establish this link, as many of the teachers shared that analyzing
their data occasionally makes them question their ability to reach all of their students
effectively.
Summary
While prior research has been conducted in the area of data-driven decision
making, this study is unique in that it focused on teacher resistance to this process. This
narrow focus yielded several new findings. These
findings include the high value placed on norms of collaboration by teachers, the
overwhelming attribution of technological proficiency to effective data use, the negative
effects of weak or inconsistent leadership on the implementation of data-driven decision
making, and the tendency to mistrust data use without knowledge of past misuses.
Implications for Future Research
Like most innovations, the successful implementation of data-driven decision
making depends upon a wide range of factors. The potential benefits of instituting this
practice in schools and districts throughout the nation would indicate the need for further
research aimed at deepening our understanding of the many factors surrounding this
issue. Listed below are suggested future areas of research on the obstacles to
implementing the process of data-driven decision making.
• Examining the relationship between teachers’ levels of engagement with the
process of data-driven decision making and their perceptions of the
commitment of site leadership to this process.
• Analyzing the impact of technology and teachers’ proficiency with such
technology on the implementation of data-driven decision making.
• Evaluating the effect of professional development and training on teachers’
receptivity to data use.
• Comparing schools that lack clear norms of collaboration with schools that
have established such norms, to better gauge the effect of these structures on
the efficiency of data use.
• Comparing student achievement levels amongst teachers who are receptive to
regular data use with those amongst teachers who are reluctant to engage in
this process.
• Examining more broadly the link between teachers’ characteristics and their
willingness to engage in the process of data-driven decision making.
Studies such as those outlined above would serve to further deepen our
understanding of the obstacles and supports for the implementation of data-driven
decision making at a school site.
Implications for Policy and Practice
In light of the findings of this study, in conjunction with prior research on the
implementation of data-driven decision making, administrators who are seeking to
engage their teachers in this process should consider the following:
Lay the Foundation
Before engaging their teachers in the process of data-driven decision making, site
administrators should work to establish a clear and pressing need for such a process.
Teachers must see the value in analyzing data and how this process will result in gains in
student learning prior to taking part in it. Building a broad base of support by involving
as many teachers as possible in the initial adoption of this process is crucial. Although it
might not be possible to satisfy the concerns of every teacher involved, an effort should
be made to ensure that every teacher has a chance to express their concerns and provide
feedback during the initial implementation. If this process is viewed by a majority of
teachers as being mandated by the district or as being driven by a select few individuals
at the school site, their buy-in will most likely be weak, jeopardizing the successful
implementation of the process.
District administrators can support site administrators in this area by providing
them with research and literature on the importance of data use. These materials can, in
turn, be shared with teachers. Districts can also ensure that meetings or workshops
regarding data-driven decision making are designed to accommodate multiple teachers
from the school sites, helping to build a broad base of support.
Ensure Access to Adequate Technology
Technology can either enhance or complicate the process of data-driven decision
making. Before implementing the process with staff, site administrators should first
assess the adequacy of the technology available to them. A plan should be developed
outlining the role and capabilities of all hardware, software, and web-based programs that
will support the data-driven decision making process. User-friendly technology that
allows teachers to easily access student assessment results on a regular basis should be
utilized. The site administrator should also be well aware of the level of technological
proficiency amongst the staff, which will facilitate the development of appropriate
training and support plans.
To support the implementation of data-driven decision making, district
administrators can assist principals with acquiring necessary technology and technical
support. Technological obstacles, such as requiring hardware to be shared between
schools, should be eliminated to the greatest extent possible.
Establish Clear Norms of Collaboration
If the process of data-driven decision making is to result in improved instruction
and increased student achievement, teachers must become accustomed to sharing data in
an open and constructive environment. In addition to providing collaboration time for
teachers to analyze and discuss data, administrators must work with teachers to develop
clear norms of collaboration, ensuring that this time is used effectively. Expectations
regarding the focus and purpose of the meetings, as well as the conduct on the part of
participants, must be clearly and regularly articulated by the site administrator.
Ensure the Availability of Training
Engaging in dialogue regarding assessment data and using the analysis of this
data to improve instruction are new concepts for many teachers. As with any new
concept, a structured, consistent, and relevant training program must be available to
support the practice of data-driven decision making if it is expected to be used
effectively. This training must address not only the practices of analyzing and making
use of data, but also the use of technology to support these processes.
District administrators can assist site administrators in designing effective
professional development programs for their teachers. Resources can be pooled across
the district and trainings can be developed to accommodate staffs from multiple schools.
Ensure the Validity of Data
What comes out of the process of data-driven decision making is a direct result of
what goes into it. If the data being analyzed is found to be invalid for any
reason, then analyzing this data will be a futile task. Teachers must be confident that the
data being analyzed and discussed is valid if they are being asked to make instructional
changes based upon it. Prior to implementing the data-driven decision making process,
site administrators should review all assessments with teachers to identify any areas of
concern and then work to address these issues before the assessments are administered.
District administrators can assist with this process by selecting valid benchmarks
that align closely to the material taught in the classroom. These benchmarks should also
be reviewed thoroughly for errors and should be reevaluated regularly.
Take an Active Role
Administrators must take an active role in the data-driven decision making
process at their school site if they expect this process to be valued and respected by the
teaching staff. While laying the foundation and providing verbal support are both
positive steps, administrators should make an effort to attend as many collaboration
meetings as possible, engage in substantive conversations with teachers regarding
assessment data, and maintain a clear and consistent focus on the regular use of data
throughout the entire school year.
Assess Staff
Every teaching staff consists of a varied mix of individuals with
different strengths, weaknesses, and prior experiences. When working with their staff to
implement the process of data-driven decision making, the site administrator must be
cognizant of the individual characteristics of the teachers at his or her school site. Factors
such as age, gender, and years of service might affect their level of receptiveness to this
initiative. By taking the time to assess the individual characteristics of one’s teaching
staff, the site administrator will be better able to anticipate potential barriers and
complications to the implementation process.
Conclusion
The adoption of data-driven decision making represents a shift toward more
informed and targeted instruction. While this study provides information regarding
factors that might serve as barriers to the implementation of this process, further research
is needed to clarify the extent to which each of these factors affects teachers’ levels of
receptiveness to it. The regular use of data to inform instruction is a practice that holds
much promise. Ultimately, the degree to which this process is implemented in districts
and schools will depend upon the ability of administrators to identify and remove any
potential barriers for teachers. Understanding these obstacles and further developing the
profile of a “data resister” may prove to be key steps in this effort.
REFERENCES
Abelmann, C., Elmore, R., Even, J., Kenyon, S., & Marshall, E. (1999). When
accountability knocks, will anyone answer? University of Pennsylvania Graduate
School of Education: Consortium for Policy Research in Education.
Acker, S. (1988). Teachers, gender and resistance. British Journal of Sociology of
Education, 9(3), 307-322. Retrieved March 2, 2008, from JSTOR database.
American Association of School Administrators. (2002). Using data to improve schools:
What’s working? Retrieved March 27, 2008, from
http://www.aasa.rd.net/files/PDFs/Publications/UsingDataToImproveSchools.pdf.
Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to
raise student achievement. American School Board Journal, November 2001, 1-4.
Bambrick-Santoyo, P. (December 2007-January 2008). Data in the driver’s seat.
Educational Leadership, 65(4), 43-46.
Bernhardt, V. L. (2002). The school portfolio toolkit: A planning, implementation, and
evaluation guide for continuous school improvement. Larchmont, NY: Eye on
Education.
Bernhardt, V. L. (2004). Data analysis for continuous improvement (2nd ed.). Larchmont,
NY: Eye on Education.
Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through
classroom assessment. Phi Delta Kappan, 80(2), 139-144.
Bolman, L. G., & Deal, T. E. (2003). Reframing organizations (3rd ed.). San Francisco:
Jossey-Bass.
Chanin, D. (2007, October 6). Nevada teachers rally against No Child Left Behind Act.
KOLO TV. Retrieved July 24, 2008, from
http://www.kolotv.com/home/headlines/10293792.html.
Choppin, J. (2002). Data use in practice: Examples from the school level. Paper
presented at the annual meeting of the American Educational Research
Association, New Orleans, LA.
Clark, R. E., & Estes, F. E. (2002). Turning research into results: A guide to selecting the
right performance solutions. Atlanta, GA: CEP Press.
Coburn, C. E., Honig, M. I., & Stein, M. K. (2005). What is the evidence on districts’ use
of evidence? In J. Bransford, L. Gomez, D. Lam, & N. Vye (Eds.), Research and
practice: Towards a reconciliation. Cambridge: Harvard Education Press.
Coch, L. & French, J. R. P., Jr. (1948). Overcoming resistance to change. Human
Relations, 1(4), 512-532. Retrieved February 17, 2008, from
http://hum.sagepub.com/cgi/content/refs/1/4/512.
Corcoran, T. B., Fuhrman, S. H., & Belcher, C. L. (2001). The district role in
instructional improvement. University of Pennsylvania Graduate School of
Education: Consortium for Policy Research in Education.
Datnow, A. (1997). Using gender to preserve tracking's status hierarchy: The defensive
strategy of entrenched teachers. Anthropology and Education Quarterly, 28(2),
204-228. Retrieved March 25, 2008, from JSTOR database.
Datnow, A., & Castellano, M. (2000). Teachers’ responses to Success for All: How
beliefs, experiences, and adaptations shape implementation. American
Educational Research Journal, 37(3), 775-799. Retrieved February 17, 2008,
from the JSTOR database.
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-
performing school systems use data to improve instruction for elementary
students. NewSchools Venture Fund. Retrieved February 22, 2008, from
http://www.newschools.org/about/publications/achieving-with-data.
Desimone, L. (2002). How can comprehensive school reform models be successfully
implemented? Review of Educational Research, 72(3), 433-479. Retrieved March
2, 2008, from http://rer.sagepub.com/cgi/content/abstract/72/3/433.
Diamond, J.B., & Cooper, K. (2007). The uses of testing data in urban elementary
schools: Some lessons from Chicago. In P. A. Moss (Ed.), Evidence and decision
making (National Society for the Study of Education Yearbook, Vol. 106, Issue 1,
pp. 241-263). Chicago: National Society for the Study of Education. Distributed
by Blackwell Publishing.
Duffy, G., & Roehler, L. (1986). Constraints on teacher change. Journal of Teacher
Education, 37, 55-58. Retrieved March 22, 2008, from
http://jte.sagepub.com/cgi/content/abstract/37/1/55.
DuFour, R., DuFour, R., Eaker, R., & Many, T. (2006). Learning by doing: A handbook
for professional learning communities at work. Bloomington, IN: Solution Tree.
Earl, L., & Katz, S. (2002). Leading schools in a data-rich world. In K. Leithwood, P.
Hallinger, G. Furman, P. Gronn, J. MacBeath, B. Mulford, & K. Riley (Eds.),
The second international handbook of educational leadership and administration.
Dordrecht, Netherlands: Kluwer.
Elmore, R. F. (2002). Bridging the gap between standards and achievement. Washington,
D.C.: Albert Shanker Institute. Retrieved March 21, 2008, from
http://www.ccsso.org/content/PDFs/ResearchonEffectivePDfinal.pdf.
Feldman, J. & Tung, R. (2001). Whole school reform: How schools use the data-based
inquiry and decision making process. Paper presented at the annual meeting of the
American Educational Research Association, Seattle, WA.
Firestone, W. A., Mayrowetz, D., & Fairman, J. (1998). Performance-based assessment
and instructional change: The effects of testing in Maine and Maryland.
Educational Evaluation and Policy Analysis, 20(2), 95-113. Retrieved March 25,
2008, from JSTOR database.
Foley, R. (2007, November 1). Wisconsin teacher protests no child law. ABC News.
Retrieved July 24, 2008, from http://abcnews.go.com/US/wireStory?id=3804768.
Fullan, M. (2000). The return of large-scale reform. Journal of Educational Change, 1(1),
5-27. Retrieved March 25, 2008, from
http://www.springerlink.com/content/hg346065u2031k70/.
Gitlin, A., & Margonis, F. (1995). The political aspect of reform: Teacher resistance as
good sense. American Journal of Education, 103(4), 377-405. Retrieved February
17, 2008, from JSTOR database.
Guskey, T. R. (1986). Staff development and the process of teacher change. Educational
Researcher, 15(5), 5-12. Retrieved March 22, 2008, from JSTOR database.
Guskin, A. E. (1996). Facing the future: The change process in restructuring universities.
Change, July-August, 27-37.
Hargreaves, A. (1996). Revisiting voice. Educational Researcher, 25(1), 12-19.
Retrieved March 21, 2008, from JSTOR database.
Heritage, M., & Yeagley, R. (2005). Data use and school improvement: Challenges and
prospects. In J. L. Herman & E. Haertel (Eds.), Uses of data for educational
accountability and improvement (National Society for the Study of Education
Yearbook, Vol. 104, Issue 2, pp. 320-339). Chicago: National Society for the
Study of Education. Distributed by Blackwell Publishing.
Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school
inquiry and continuous improvement: Final report to the Stuart Foundation.
National Center for Research on Evaluation, Standards, and Student
Testing. University of California, Los Angeles.
Heubert, J., & Hauser, R. (1998). High stakes: Testing for tracking, promotion, and
graduation. Washington, DC: National Academy of Sciences.
Hollander, J. A., & Einwohner, R. L. (2004). Conceptualizing resistance. Sociological
Forum, 19(4), 533-554. Retrieved February 17, 2008, from JSTOR database.
Hoover Institution. (1998). A nation still at risk. Retrieved March 25, 2008, from
http://www.hoover.org/publications/policyreview/3427891.html.
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the “data driven” mantra:
Different conceptions of data-driven decision making. In P.A. Moss (Ed.),
Evidence and decision making (National Society for the Study of Education
Yearbook, Vol. 106, Issue 1, pp. 105-131). Chicago: National Society for the
Study of Education. Distributed by Blackwell Publishing.
Ingram, D., Seashore Louis, K., & Schroeder, R. G. (2004). Accountability policies and
teacher decision making: Barriers to the use of data to improve instruction.
Teachers College Record, 106(6), 1258-1287. Retrieved February 2, 2008, from
ProQuest database.
Janas, M. (1998). Shhhhh, the dragon is asleep and its name is resistance. Journal of Staff
Development, 19(3). Retrieved May 1, 2008, from
http://www.nsdc.org/library/publications/jsd/janas193.cfm.
Johnson, R. S. (2002). Using data to close the achievement gap (2nd ed.). Thousand
Oaks, CA: Corwin Press.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons
from three urban school districts. American Journal of Education, 112(4), 496-
520.
Kohm, B., & Nance, B. (2007). Principals who learn: Asking the right questions, seeking
the best solution. Alexandria, VA: Association for Supervision and Curriculum
Development.
Lachat, M. A., & Smith, S. (2005). Practices that support data use in urban high schools.
Journal of Education for Students Placed At Risk, 10(3), 333-349.
Learning Point Associates. (2004). Guide to using data in school improvement efforts: A
compilation of knowledge from data retreats and data use at Learning Point
Associates. Retrieved March 12, 2008, from
http://www2.learningpt.org/catalog/item.asp?SessionID=954688929&productID=242.
Learning Point Associates. (2006). Using data as a school improvement tool. Retrieved
March 22, 2008, from http://www.ncrel.org/litweb/adolescent/qkey10/qkey10.pdf.
Lewin, K. (1947). Frontiers in group dynamics: Concept, method and reality in social
science; social equilibria and social change. Human Relations, 1(3), 5-41.
Retrieved March 25, 2008, from http://hum.sagepub.com/cgi/reprint/1/1/5.
Linn, R. L., Baker, E., & Betebenner, D. W. (2002). Accountability systems: Implications
of requirements of the No Child Left Behind Act of 2001. Educational
Researcher, 31(6), 3-16. Retrieved February 11, 2008, from JSTOR database.
Love, N. (2004). Taking data to new depths. National Staff Development Council, 25(4),
22-26. Retrieved March 28, 2008, from http://www.nsdc.org/library/publications/
jsd/love254.cfm.
Mandinach, E. B., Honey, M., & Light, D. (2006). A theoretical framework for data-
driven decision making. Paper presented at the annual meeting of the American
Educational Research Association, San Francisco, CA.
Marsh, J. A., Payne, J. F., & Hamilton, L. S. (2006). Making sense of data-driven
decision making in education. Rand Corporation. Retrieved February 21, 2008,
from http://rand.org/ pubs/occasional_papers/OP170/.
106
Marzano, R. J. (2003). What works in schools: Translating research into action.
Alexandria, VA: Association for Supervision and Curriculum Development.
Mason, S. (2002). Turning data into knowledge: Lessons from six Milwaukee public
schools. Paper presented at the annual meeting of the American Educational
Research Association, New Orleans, LA.
Means, B., Gallagher, L., & Padilla, C. (2007). Teachers’ use of student data systems to
improve instruction. United States Department of Education. Retrieved March 9,
2008, from http://www.ed.gov/rschstat/eval/tech/teachers-data-use/teachers-data-
use-intro.html.
Melendez, L. (2007, May 9). Local teachers campaign against no child left behind. ABC
News. Retrieved July 24, 2008, from http://www.kolotv.com/home/headlines/
10293792.html.
Merriam, S. B. (1998). Qualitative research and case study applications in education
(Rev. ed.). San Francisco, CA: Jossey-Bass Publishers.
National Center for Educational Accountability. (2006). High quality high schools;
Education commission of the states 2006 national forum on educational policy.
Retrieved May 23, 2008 from www.just4kids.org/en/files/Presentation
High_Quality_High_Schools-07-12-06.pps.
National Commission on Excellence in Education. (1983). A nation at risk: the
imperative for educational reform. Retrieved March 25, 2008, from
http://www.ed.gov/pubs/NatAtRisk/index.html.
O’Day, J. (2002). Complexity, accountability, and school improvement. Harvard
EducationalReview,72(3), 293-329.
Ormrod, J. E. (2006). Educational psychology: Developing learners (5
th
Ed.). Upper
Saddle River, NJ: Pearson-Merrill-Prentice Hall.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3
rd
ed.). Thousand
Oaks, CA: Sage Publications, Inc.
Reeves, D. (2004). Accountability for Learning: How Teachers and School Leaders Can
Take Charge. Alexandria, VA: Association for Supervision and Curriculum
Development.
107
Richardson, V. (1990). Significant and worthwhile change in teaching practice.
Educational Researcher. 19(7), 10-18. Retrieved March 22, 2008, from
http://edr.sagepub.com/cgi/content/abstract/19/7/10.
School accountability report card: Mesa Elementary School (2008).
School accountability report card: Orchard Elementary School (2007).
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational
practice and research. Educational Researcher, 31(7), 15-21. Retrieved February
24, 2008, from http://edr.sagepub.com/cgi/content/abstract/31/7/15.
Stecher, B., Hamilton, L., & Gonzalez, G. (2003). Working smarter to leave no child
behind. Rand Corporation. Retrieved February 21, 2008, from http://rand.org/
pubs/ white_papers/WP138/.
Supovitz, J., & Klein, V. (2003). Mapping a course for improved student learning: How
innovative Schools systematically use student performance data to guide
improvement. University of Pennsylvania Graduate School of Education:
Consortium for Policy Research in Education.
Togneri, W., & Anderson, S. E. (2003). Beyond islands of excellence: What districts can
do to improve instruction and achievement in all school. Washington, DC: The
Learning First Alliance and the Association for Supervision and Curriculum
Development.
Van den Berg, R., & Ros, A. (1999). The permanent importance of the subjective reality
of teachers during educational innovation: A concerns-based approach. American
Educational Research Journal, 36(4), 879-906. Retrieved February 17, 2008,
from JSTOR database.
Waddell, D., & Sohal, A. (1998). Resistance: A constructive tool for change
management. Management Decision, 36(8), 543-548. Retrieved February 17,
2008, from http://www.emeraldinsight.com/ Insight/viewPDF.jsp;jsessionid=
832F956B581B6A8276B06B1DD18B3F74?Filename=html/Output/Published/
EmeraldFullTextArticle/Pdf/0010360807.pdf.
Wayman, J. C. (2005). Involving teachers in data-driven decision making: Using
computer data systems to support teacher inquiry and reflection. Journal of
Education for Students Placed At Risk, 10(3), 295-308.
108
Wayman, J. C., & Stringfield, S. (2003). Teacher-friendly options to improve teaching
through student data analysis. Paper presented at the annual meeting of the
American Association for Teaching and Curriculum, Baltimore, MD.
Wayman, J.C., & Stringfield, S. (2005). Teachers using data to improve instruction:
Exemplary practices in using data warehouse and reporting systems. Paper
presented at the annual meeting of the American Educational Research
Association, Montreal, Canada.
Welner, K. (1999). They retard that which they cannot repel: Examining the role teachers
sometimes play in subverting equity-minded reforms. The Journal of Negro
Education, 68(2), 200-212. Retrieved March 2, 2008, from JSTOR database.
White House press release, January 8, 2002. Retrieved February 15, 2008 from
http://www.ed.gov/news/pressreleases/2002/01/01082002.html.
Winkler, A. (2002). Division in the ranks: Standardized testing draws lines between new
and veteran teachers. Phi Delta Kappan, 84(3), 219-225. Retrieved February 12,
2008, from the ProQuest database.
Young, V. M. (2006). Teacher’s use of data: Loose coupling, agenda setting, and team
norms. American Journal of Education, 112(4), 521-548.
109
APPENDIX A
Teacher Interview Protocol
Participant’s Name: __________________________ Date: __________________
Position: _______________________________________________________________
[Introduction: Begin with a few minutes of explaining the study, who you are, and the
purpose of the study. Explain that while the interview will be taped, their responses are
strictly confidential. Let them know that if there is something they would like to say off
tape, they can inform you and the recorder will be shut off for that comment. Also, let them
know the approximate length of the interview and ask if they have any specific questions
before beginning.]
I. Background- Laying the Foundation
1. What grade do you teach? How many years have you been teaching? How long
have you been at this school? What is your prior experience and training?
(background/research question #4)
II. Internal Factors
A. Assignment of Value/Questions of Validity
1. What are your beliefs about the use of student assessment data? (research question
#1/research question #3)
2. What role do assessment data play in your planning of lessons and interventions?
What other factors do you take into account when designing lessons and
interventions? (research question #3)
3. How do you use the results from state exams? (Ask about use of results from
benchmark assessments and chapter tests as well.) What other data do you use
when planning instruction? (research question #3)
B. Mistrust of Data/Political Use
1. How do you feel about the ways in which student assessment data are used at the
state and federal levels? (research question #3)
2. Have you ever experienced a situation in which student assessment data were used
in a way that you felt was unfair or unjust? (research question #3)
C. Teacher Efficacy
1. How do you respond when student assessment data show you that a particular
student is not achieving in one or multiple areas? (research question #3)
2. Has the use of data changed your feelings about your ability to increase student
achievement? (research question #3)
3. Are some teachers better able to make use of student data than others? If yes,
why? Where would you place yourself on the spectrum of effective data use?
(research question #1/research question #3)
D. Power Concerns
1. Do you have the authority to make changes in the education program as you see
fit, if those changes are based on data? (research question #3)
2. Are you required to use data? If yes, who requires you to do so? If you do not
use data, what are the consequences? (research question #3)
III. External Factors
A. Time
1. Is time allocated for collaboration among teachers, with respect to analyzing data
and developing action plans? How often? (research question #2)
2. Do you feel that the time provided for data analysis is adequate? (research
question #1/research question #2)
B. Technology/Technical Issues
1. Do you have access to a database of student assessment results? If so, how often
do you access this system? (research question #2)
2. Have you been provided with any training on the use of this database? (research
question #1/research question #2)
C. Professional Development
1. Have you attended any training on data-driven decision making? Has your
district/school sponsored professional development for schools that focuses on
using data to make decisions? In what specific areas? Is the professional
development offered voluntary or mandatory? (research question #2)
2. If you have attended trainings on data-driven decision making, which ones did
you find useful? If you think they were not useful, why? (research question #2)
D. Leadership Support of Data Use
1. In what ways, if any, does the site leadership support data use amongst the teachers?
(research question #2)
E. School Culture
1. Do you think this school has a culture of data-driven decision making? How
would visitors know that this was a data-driven school? What would they see and
hear? (research question #2)
2. How public are assessment data at this school? (research question #2)
F. Norms of Collaboration
1. Does your school have informal and/or formal grade groups, small learning
communities, or other collaboration opportunities for teachers to talk about
instruction and student achievement? (research question #2)
2. How would you characterize the culture of this school – collaborative,
competitive, or somewhere in between? (research question #2)
G. Conclusion
1. What do you feel would make the process of data-driven decision making work
better for you at your school site? (general)
[Concluding Remarks/Questions: Is there anything else I should know? Thank them for
their cooperation and time. Inform them that I might need to contact them for follow-
ups.]
APPENDIX B
Principal Interview Protocol
Participant’s Name: __________________________ Date: __________________
Position: _______________________________________________________________
[Introduction: Begin with a few minutes of explaining the study, who you are, and the
purpose of the study. Explain that while the interview will be taped, their responses are
strictly confidential. Let them know that if there is something they would like to say off
tape, they can inform you and the recorder will be shut off for that comment. Also, let them
know the approximate length of the interview and ask if they have any specific questions
before beginning.]
I. General
1. Tell me about the history of data use at this school.
II. External Factors
A. Leadership Support of Data Use
1. What are your beliefs about the use of student assessment data?
2. In what ways, if any, do you support data use amongst the teachers?
B. Time
1. Is time allocated for collaboration among teachers, with respect to analyzing data
and developing action plans? How often?
2. Do you feel that the time provided for data analysis is adequate?
C. Technology/Technical Issues
1. Do you provide access to a database of student assessment results? If so, how
often do most teachers access this system?
2. Has training been provided to the teachers on the use of this database?
D. Professional Development
1. Have you offered teachers any training on data-driven decision making? Has your
district/school sponsored professional development for schools that focuses on
using data to make decisions? In what specific areas? Is the professional
development offered voluntary or mandatory?
2. If trainings have been offered, which ones did your teachers find most useful? If
they thought any were not useful, why?
E. School Culture
1. Do you think this school has a culture of data-driven decision making? How
would visitors know that this was a data-driven school? What would they see and
hear?
2. How public are assessment data at this school?
F. Norms of Collaboration
1. Does your school have informal and/or formal grade groups, small learning
communities, or other collaboration opportunities for teachers to talk about
instruction and student achievement?
2. How would you characterize the culture of this school – collaborative,
competitive, or somewhere in between?
G. Conclusion
1. What do you feel would make the process of data-driven decision making work
better at your school site?
[Concluding Remarks/Questions: Is there anything else I should know? Thank them for
their cooperation and time. Inform them that I might need to contact them for follow-
ups.]
APPENDIX C
Code List
The following is a list of the codes used to analyze the data gathered through the
interview process:
access to database
alternatives to formal assessment data
characteristics attributed to effective data use
evidence of data use at school
feelings about use of assessment data
feelings about usefulness of trainings
mistrust of data use
political uses of data
power and autonomy issues
self-efficacy
sharing of data
sufficiency of time
support of site leadership
training
use of collaboration time
use of data from benchmarks
use of data from chapter tests
use of data from state assessments
validity of data
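For readers interested in how a code list such as this one might be tallied across a set of interview transcripts, the following is a minimal, hypothetical sketch in Python. It was not part of this study's procedures: the transcript folder name, the plain-text file format, and the convention of marking codes inline as bracketed tags (e.g., [self-efficacy]) are all assumptions made for illustration.

```python
# Illustrative sketch only: count how often each analysis code appears
# across a folder of interview transcripts. Assumes codes were applied
# inline as bracketed tags, e.g. "[self-efficacy]" -- the tag format and
# file layout are assumptions, not part of the original study.
from collections import Counter
from pathlib import Path

CODES = [
    "access to database",
    "alternatives to formal assessment data",
    "characteristics attributed to effective data use",
    "evidence of data use at school",
    "feelings about use of assessment data",
    "feelings about usefulness of trainings",
    "mistrust of data use",
    "political uses of data",
    "power and autonomy issues",
    "self-efficacy",
    "sharing of data",
    "sufficiency of time",
    "support of site leadership",
    "training",
    "use of collaboration time",
    "use of data from benchmarks",
    "use of data from chapter tests",
    "use of data from state assessments",
    "validity of data",
]

def tally_codes(transcript_dir: str) -> Counter:
    """Count occurrences of each bracketed code tag across all .txt files."""
    counts = Counter()
    for path in Path(transcript_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        for code in CODES:
            counts[code] += text.count(f"[{code}]")
    return counts

if __name__ == "__main__":
    # "transcripts" is a hypothetical folder of annotated interview files.
    for code, n in tally_codes("transcripts").most_common():
        print(f"{n:4d}  {code}")
```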
ABSTRACT
As a result of its identification as a promising practice in numerous educational studies and its prominence in recent legislative mandates, data-driven decision making is a reform being implemented in schools and districts across the country. As with any reform, the successful implementation of data-driven decision making often requires overcoming some degree of resistance. This study examined the nature of resistance to data-driven decision making by some teachers, in hopes of discovering patterns or themes that may guide administrators in designing implementation strategies. Data for this study were gathered through interviews with teachers at two elementary schools where the teaching staff have exhibited varying degrees of acceptance of or resistance to data-driven decision making. Various external and internal factors were identified as having an effect on teachers' receptiveness to the process of data-driven decision making, and possible solutions for addressing these issues are outlined.