BEYOND THE NUMBERS CHASE:
HOW URBAN HIGH SCHOOL TEACHERS MAKE SENSE OF DATA USE
by
Vicki Park
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(EDUCATION)
December 2008
Copyright 2008 Vicki Park
Acknowledgements
I started the journey towards my doctoral degree as a former teacher who had worked in
schools serving low-income minority students. I left my teaching job with a heavy
heart but also with a calm sense of assurance that my future vocation would always
center on ensuring that our educational institutions provided equal opportunities to all
students. Thus, my role as a front-line change agent shifted to that of a researcher and
hopeful policy influencer. This transformation could not have occurred without the
intellectual, emotional, and material support I received from a variety of people I had the
privilege of meeting throughout my graduate school tenure.
Being the subject of a study is never an easy position. Numerous teachers and
administrators gave generously of their time to participate in this dissertation project.
Educators are pulled in many directions at any given moment and their time and attention
are scarce resources that should never be taken for granted. Those who participated in this
project did so out of their desire to share their knowledge with the rest of the community
and to let others learn from their experiences. I continue to be humbled by their
willingness to be open and reflective about the rewards and challenges of being an
educator.
Throughout the past year and a half, my dissertation committee provided essential
support. Wading through theories and dense data in order to make a coherent whole can
be a murky process, and Dominic Brewer's advice on how to sharpen my overall
argument and to make clearer connections to policy implications was much appreciated.
Ron Astor's enthusiasm for my work and his general passionate attitude about…well,
everything, never failed to re-ignite my interest whenever I found myself suffering from
dissertation fatigue. His insightful comments and suggestions about improving my work
were always spot on, and he is one of the few individuals who can say, "I told you so"
without anyone being able to take offense. My chair and advisor, Amanda Datnow, has
not only guided me throughout the dissertation process but also made sure that I made it
through the doctoral program with some semblance of sanity intact. I appreciate her
spirit of generosity and her mentorship.
My successful completion of graduate school is also very much owed to the Ph.D.
program at USC, which has been my intellectual home for the past four years. The
financial assistance enabled me to focus on my research endeavors, and the support
structures for its graduate students should be a model for other institutions. In
particular, Dianne Morris, Aba Cassell, and Tamara McKenzie were available at a
moment's notice to offer assistance on all things, from mundane paperwork to the
complicated process of fulfilling various milestones. The cohort structure of the program
allowed me to become friends with a group of students whose work and intellectual
strengths continually spurred my own. Many of them have read my work and provided
feedback or have encouraged me by asking about my progress.
Finally, I would like to acknowledge my family and friends who often reminded me that
life existed beyond graduate school. I had to accept that my dissertation work is the
beginning rather than the end of my journey. Thus, I leave with more complex questions
and nuanced understandings of school reform.
Table of Contents

Acknowledgements
List of Tables
Abbreviations
Abstract

Chapter 1: Introduction
   Why Data-Driven Decision Making Matters
   Purpose Statement
   The Institutional Context and the Emergence of Accountability Systems
   Significance of the Study
   Definition of Key Terms
   Organization of the Dissertation

Chapter 2: Review of the Literature
   Research on Data-Driven Decision Making
   Limitations of the Current Research on DDDM
   Towards a Theory-Driven Understanding of DDDM
   A Theoretical Framework for the Study of DDDM
   Figure 1: Theoretical Framework
   Conclusion

Chapter 3: Methodology
   The Case Study Tradition and Mixed Methods
   Data Sources
   Data Analysis Plan
   Ethical Considerations: Criteria for Establishing Research Trustworthiness
   Study Limitations
   Conclusion

Chapter 4: Teachers' Sensemaking of Data-Driven Decision Making
   Teachers' Orientations towards DDDM
   The Prioritization of Data Types
   Criteria for Data Use
   Conclusion

Chapter 5: Power and Ideology in District Framing of Data-Driven Decision Making
   Creating a Shared Construction of DDDM
   Conclusion

Chapter 6: Structuring Sensemaking on Data
   Sensemaking Structures and Opportunities at Mesa High School
   Sensemaking Structures and Opportunities at Adams High School
   Conclusion

Chapter 7: Conclusion
   A Distributive Theory of Policy Implementation
   Connections to Prior Research and Theory
   Implications for Further Research
   Implications for Education Policy
   Final Remarks: The Need for Learning-Driven Policy Making

References

Appendix A: Interview Protocol
Appendix B: Focus Group Protocol
Appendix C: Data Meeting Observation Protocol
Appendix D: Classroom Observation Protocol
Appendix E: Survey
Appendix F: Coding List
Appendix G: Survey Results
Appendix H: Mesa's Data Discussion Protocol
List of Tables

Table 2.1: A Developmental Model of DDDM
Table 2.2: Empirical Studies on DDDM
Table 2.3: Summary of Major Theories
Table 3.1: High School Profiles
Table 3.2: Adams High School Participant List
Table 3.3: Mesa High School Participant List
Table 3.4: Overview of Data Sources
Table 4.1: Teachers' Orientations on DDDM
Table 4.2: Sources of Data Used to Evaluate Performance
Table 4.3: Most Useful Data at Mesa High School
Table 4.4: Most Useful Data at Adams High School
Table 4.5: Barriers to Data Use
List of Abbreviations

AIMS Arizona's Instrument to Measure Standards
AP Advanced Placement
API Academic Performance Index
AYP Adequate Yearly Progress
CAHSEE California High School Exit Exam
CAPA California Alternative Performance Assessment
CELDT California English Language Development Test
CAT/6 California Achievement Test, 6th Edition
CST California Standards Test
DDDM Data-Driven Decision Making
NCLB No Child Left Behind Act (2001)
PBA Performance Based Assessment
PIP Program Improvement Plan
STAR Standardized Testing and Reporting Program
Abstract
The growing body of research on data-driven decision making (DDDM) details
the implementation and positive results at the district and elementary school levels
(Armstrong & Anthes, 2001; Datnow, Park, & Wohlstetter, 2007; Supovitz & Klein,
2003; Togneri & Anderson, 2003). However, theoretically grounded studies of DDDM,
and of the impact of data use in high schools in particular, remain limited. This study
focused on how urban high school teachers conceptualized data use and how their local
contexts shaped their implementation of DDDM.
Using a case study research design and drawing on interviews, observations,
document reviews, and a teacher survey, I examined how high school teachers in
two urban schools perceived and used data for school improvement. To guide instrument
design and data analysis, I developed a theoretical framework building on organizational
theories and frame analysis, concentrating on how people construct and enact policy
messages.
This dissertation presents two major findings on how urban high school teachers
make sense of DDDM. First, teachers' sensemaking of DDDM was filtered by how they
conceptualized data, their motivations for data use, and the outcomes they expected from
using data. From these patterns, a typology of teachers' orientations toward DDDM was
developed, distinguishing teachers who viewed DDDM through an inquiry-centered,
solution-centered, bureaucratic-centered, or compliance-centered orientation.
Second, teachers' sensemaking of DDDM was mediated by the accountability,
district, and departmental contexts. Policy implementation is ultimately a distributive
process where multiple types of contexts play different roles in determining how teachers
make sense of data and ultimately, how they use data. The accountability system
prioritized the use of data, while framing at the district level shaped teachers' construction
of DDDM. Departmental structures and norms on sharing data along with the degree of
joint work determined the extent to which teachers used data to inform instructional
practices. Finally, even when these contextual factors align to promote a robust version of
DDDM practices, ignoring teacher learning and professional development will likely lead
to surface-level use of data for instructional decision making.
Chapter 1:
Introduction
In 2003, faculty members at Lincoln Public School, located in the heart of a large
city, confronted some difficult news: the state had identified the school as needing
program improvement. With a student population that was over 95 percent children from
immigrant backgrounds who also qualified for the Free/Reduced Lunch Program,
teachers and administrators put forth their best effort to help students learn. At the same
time, the overwhelming refrain heard from staff members when pushed to incorporate
higher standards or produce improvements in student performance was, "Look at our kids
and where they come from. We can't change who they are" or, "We're doing the best we
can. It's not us—it's the _________ (e.g., parents, students' behaviors, lack of support
from the administration, or lack of resources).‖ The truth was that the majority of
teachers were fulfilling their professional responsibilities to their utmost ability. Some
even went above and beyond. However, the deficit framework (Valencia, 1997) that
served to stereotype children from low-income minority backgrounds operated to
undermine faculty hopes and morale.
To get past the "we can't" attitude, the administrators shared student achievement
data from a neighborhood school serving the same types of students. Despite having the
same student population, the same number of students, and the same resource concerns,
the comparison school was recognized as a high performing school at the top of the
state's rankings. At the faculty meeting where these data were shared, there was complete
silence. For once, the tendency to blame kids or their families was challenged by the
superior student performance of a similar school. For once, teachers were forced to
consider that children and their families might not be the problem. For the first time,
space was created for teachers and administrators to reflect on what made the difference.
At that moment, there were infinite possibilities for change and growth. Yet, where
should this school go from there and what should they do with the data? How should
teachers make sense of the data and how should they harness it for improving instruction
and learning?
Why Data-Driven Decision Making Matters
Schools like Lincoln are not alone in facing the pressures of accountability and the
focus on examining student achievement data. The implementation of the federal No
Child Left Behind Act of 2001 (PL 107-110) ushered in a new era of standards-based
reform centered on improving student outcomes through accountability mechanisms.[1]
The policies
formulated under No Child Left Behind (NCLB), also known as the reauthorization of the
Elementary and Secondary Education Act of 1965, concentrate on three main themes:
accountability for schools, monitoring of student and school performance, and closing the
achievement gap for historically underrepresented groups (U.S. Department of Education,
2005). NCLB is particularly noteworthy for the attention it places upon populations that
have been served poorly by K-12 educational institutions:
The No Child Left Behind law ensures that schools are held accountable for the
academic progress of every child, regardless of race, ethnicity, income level or zip
code… Schools must have high expectations for every child -- the soft bigotry of
low expectations is no longer tolerated. (U.S. Department of Education, 2005)
[1] The legislation was signed into law on January 8, 2002.
The above quote highlights the U.S. Department of Education's efforts to re-envision K-
12 public education by holding state and local school systems responsible for student
outcomes. In one catch-phrase (i.e., leave no child behind), the federal government
pledged that the system would no longer be complicit in ignoring the needs of
America's diverse student body; it would now demand accountability at multiple levels.
In concert with accountability policies, policymakers and researchers have
emphasized "data-driven decision making" (DDDM) as a key strategy for improvement.
The main purposes of collecting and examining data are not only to demonstrate
accountability to external parties but also to enhance student achievement and
organizational performance. The emerging research supports the importance of DDDM in
creating schools with a focus on continuous improvement. However, the presence of data
alone does not lead to substantive or informed decision making. Research points to the
ways in which different structural factors constrain or enable data use (Armstrong &
Anthes, 2001; Datnow, Park, & Wohlstetter, 2007; Supovitz & Klein, 2003; Togneri &
Anderson, 2003). How teachers make sense of and use data has remained largely
unexplored. Since teachers are front-line change agents responsible for student learning,
their perspectives, experiences, and responses to DDDM need to be examined.
Purpose Statement
The growing body of research on DDDM details the implementation and positive
results at the district and elementary school levels (Armstrong & Anthes, 2001; Datnow,
Park, & Wohlstetter, 2007; Supovitz & Klein, 2003; Togneri & Anderson, 2003).
However, theoretically grounded studies of DDDM more generally, and of the impact of
data use in high schools in particular, remain limited. In light of these gaps in the
research of DDDM, this dissertation examines how teachers in two high schools grapple
with data use for school improvement. My study focuses on how urban high school
teachers conceptualize data use and how their local contexts shape their implementation
of DDDM. Specifically, the purpose of this case study investigation is to understand how
high school teachers in two urban schools perceive and use data for school improvement.
My overarching research question is: How do high school teachers make sense of data-
driven decision making? Following this line of inquiry, I developed these research
questions:
1. What types of data are high school teachers relying upon and for what purposes?
2. How do the institutional and district contexts shape teachers' sensemaking of
data-driven decision making?
3. How do high school teachers' sensemaking processes interplay with the structure
(e.g., department affiliation) and culture of the high school? What types of
conditions act as barriers to or facilitators of data use?
To address these questions, I examine data collected from a multi-site case study
of data-driven decision making. I employ a conceptual framework developed from
sociological theories, policy implementation research, and empirical studies on DDDM.
Sociological traditions focusing on meaning-making activities provide the analytical tools
with which to understand how people construct and re-construct the meaning of DDDM
given the multi-layered context of school reform. Building on policy implementation
research, I use sensemaking and frame analysis to analyze teachers' understanding and
use of data for instructional improvement. In this chapter, I first describe the institutional
context of DDDM implementation and the emerging high school reform agenda. I then
detail the study and discuss its significance.
The Institutional Context and the Emergence of Accountability Systems
The five decades of reform after Brown v. Board of Education have created the
current policy landscape in which accountability predominates. During the 1960s and
1970s, federal policies viewed equity in terms of equalizing educational opportunities.
Propelled by the Civil Rights Movement and the War on Poverty, reform efforts were
marked by an increase in educational funding and the development of intervention
programs aimed at mitigating racial and class inequalities (Tyack & Cuban, 1995). Policy
initiatives expanded student access to educational services through court rulings,
categorical funding, the development of supplementary services, and the delivery of
goods and services (Honig, 2006; Moore, Goertz, & Hartle, 1983). Specifically, court
mandates for desegregation and finance equalization attempted to produce equal access
and opportunities. Other federal initiatives such as Title I and the Education for All
Handicapped Children Act of 1975 were targeted at providing additional resources to
specific groups of students, with funds distributed through categorical services. In
general, the modus operandi for education agencies—including federal and state
governments and districts—was to allocate funds and broadly regulate schools (Fuhrman,
Goertz, & Weinbaum, 2007; Honig, 2006; Moore, Goertz, & Hartle, 1983) with little effect
on curricular and instructional practices (Berman & McLaughlin, 1978; Meyer & Rowan,
1977).
The Steady Move towards System Coherence
The 1980s saw the beginnings of a shift in the beliefs about creating school
change and improving learning opportunities for all children. The commissioned report,
A Nation at Risk (1983), was an important symbolic lever that provided a strong public
impetus for change. It raised serious doubts about the quality of America's educational
institutions by drawing attention to declining student performance, particularly with
respect to international competition. While some criticized it for being overly alarmist
and inaccurate (Berliner & Biddle, 1995; Nichols & Berliner, 2007), the nation began to
question the value of public education in terms of its effectiveness and outcomes given
the past two decades of increased investment.
With mounting evidence suggesting that American schools were unable to close
the performance gap between diverse groups of students and failed to compete on an
international level, the standards-based reform period gained traction during the mid-
1980s and 1990s (Firestone, Fuhrman, & Kirst, 1991; Murphy & Adams, 1998; National
Commission on Excellence in Education, 1983). Following the recommendations made
by A Nation at Risk, policy turned to improving academic and professional quality. The
scale of the change focused on students and teachers, with policies establishing
minimum competency standards for both groups. More
specifically, the recommendations focused on raising high school curriculum and teacher
education standards. Consequently, the main policy tools, besides the symbolic levers of
the commissioned report and the subsequent America 2000 summit meeting (which
called for national education goals), included raising academic standards, adding teacher
credentialing requirements, and intensifying school-related
practices (e.g., increasing school hours and types of services).
Despite these attempts at educational improvement, most state systems lacked
coherence in their overall approach to reform (Firestone et al., 1991). While beginning to
focus on mechanisms that would produce systemic changes, policymakers continued to
utilize policy instruments that focused on mandates, inducements, and symbolic gestures
while shortchanging tools to build capacity. Cohen, Moffitt, and Goldin (2007)
summarized this period of reform leading up to the current federal policy of No Child
Left Behind: "Weak policy instruments, modest capability in practice, and political
dependence combined with flexibility to promote weak response in practice" (p. 75).
Although the underlying hope of the policy initiatives was to improve teacher and student
performance, the policy designs focused on fidelity and establishment of programs rather
than the quality of programs. Adding to the complexity, states and local education
agencies tended to respond to federal policy agendas based on their unique histories and
habits. For example, states that historically relied on large-scale implementation continued
to do so while states that focused on incentives and mandates also followed their existing
patterns (Firestone et al., 1991; Walker, 2004).
As the standards-based reform era gained momentum, the policies of the 1990s
explicitly focused on improving the quality and delivery of school-related services,
especially instruction and curricula. With arguments and research against piecemeal
approaches to reform (Smith & O'Day, 1993; Fullan, Rolheiser, Mascall, & Edge, 2001),
the strategy of system alignment and coherence entered into the policy lexicon. Policy
instruments were developed to produce systems-level reform by emphasizing alignment
of resources, coordinating efforts amongst government agencies, and re-distributing
authority. During this period, new governance structures such as site-based management
and charter schools took the stage. Re-distribution of authority (e.g., vouchers),
comprehensive school reform models, and public-private partnerships also emerged as
important features of the large-scale reform landscape.
NCLB and the Emphasis on Large-Scale Reform
The shift to large-scale reform was crystallized in NCLB, which instituted a new
accountability system based on assessments and standards. In essence, NCLB marked a
culmination of previous federal policy efforts as it brought together elements from Title I
and Comprehensive School Reform. As the reauthorization of the Elementary and
Secondary Education Act (ESEA), NCLB followed up on the ideas set out in President
George H.W. Bush's America 2000 and President Bill Clinton's Goals 2000 plans. This new policy
gave the federal government unprecedented authority in several ways (Finn & Hess,
2002). Historically, U.S. education policy has been driven by values of equity,
efficiency, and the balance between local and federal authority (Fusarelli, 2004; Tyack &
Cuban, 1995). State and local communities controlled education policies while the
federal government acted primarily as a funding source (Fusarelli, 2004). With NCLB,
the federal government intruded on local communities' long-standing power to determine their
own educational goals and outcomes by setting a national system of performance
outcomes, timetables for progress, and sanctions for underperforming schools.
More importantly, NCLB has rhetorically shifted the notion of equity from one of
equal access and resources to one of high expectations and equal outcomes. The current
federal accountability system moves past the traditional focus on schooling "inputs" and
holds educators responsible for student achievement (Dembosky, Pane, Barney, &
Christina, 2005; Ingram, Louis, & Schroeder, 2004; Lafee, 2002). NCLB embodies high
expectations that have been operationalized in a progressively high-stakes system of
sanctions and incentives. It is the first comprehensive educational framework consisting
of standards, assessments, and an accountability mechanism. Under a system of
incentives and sanctions, the mechanisms for accomplishing these goals emphasize data-
driven decision making (i.e., test scores, yearly progress reports), the implementation of
practices evidenced by "scientifically-based research," and increased school choice for
parents. The legislation promotes the use of evidence to determine changes in schooling
practices and calls for measurable data as a primary indicator of success or needed
improvement.[2]

[2] The federal government sets guidelines for school improvement based on student achievement data
developed by the states, known as Adequate Yearly Progress (AYP). Specifically, it requires states to have
standards detailing content for student learning. Testing is mandatory for grades 3-8, and results must be
used to drive instruction and teaching practices. In addition, student performance data must be
disaggregated by major demographic classifications such as race/ethnicity, SES, gender, disability, and
English language learner status. Systematic testing is also coupled with prescriptive intervention remedies
for schools designated as not meeting AYP. Schools are pushed to improve under threat of sanctions that
ultimately allow parents to transfer out of low-performing schools. Additionally, guidelines for enhancing
teacher quality are laid out.
In attempting to balance federal and state powers, NCLB is a mixture of clarity
and ambiguity. While the deadline for meeting Adequate Yearly Progress (AYP) and
sanctions are specified, the actual content of the goals is ambiguous. The policy has a
built-in flexibility that allows states to define proficiency levels and the content of the
standards used for AYP. Overall, NCLB has strengthened the role of federal and state
governments in regulating educational matters while increasing pressure on districts and
schools to produce measurable progress. The driving force of NCLB policy is the
assumption that accountability will ensure equitable results for students who have been
consistently ignored or neglected in the educational system.
In contrast to past reform efforts, the implementation of standards-based
accountability under NCLB requires teachers to use data regularly to inform their
instruction. This system of accountability has set the stage for schools becoming "data-
rich" since one of the underlying assumptions of the policy is that educators should use
data to adjust their decision making processes. The spotlight on DDDM practices is
relatively new and has largely been attributed to policy mandates such as NCLB and the
Total Quality Management movement from the business sector (Deming, 1986;
Dembosky, Pane, Barney, & Christina, 2005; Ingram, Louis, & Schroeder, 2004). These
models typically contend that the success of an organization depends on its ability to
develop short-term, measurable goals and to respond to performance data to make needed
modifications (Deming, 1986). Paralleling these developments in business and
organizational learning, the theory of action regarding DDDM presupposes that once
educators receive data, they will examine the data to assess the strengths and weaknesses
of their school, and then adjust or re-work existing practices (Ingram, Louis, &
Schroeder, 2004).
The Need for High School Reform
The movement towards increased accountability and data-driven decision making
set the context for reform in American schools. In spite of significant systemic reform
efforts at the federal level, high school student achievement remains a pressing national
concern. Both research and practice provide ample evidence that the current
comprehensive high school models do not adequately or rigorously prepare students to
face the personal, intellectual, and occupational demands of the 21st century (Newmann,
1992; National Association of Secondary School Principals, 2004; National Center for
Education Statistics, 2007). Recent statistics highlight this pressing need. Approximately
1.23 million students fail to graduate from high school each year (Editorial Projects in
Education, 2007). Of these students, over one-third drop out by the end of ninth grade
and over half of those who leave are from minority groups. Although a whole
set of inter-related structural and contextual variables contribute to the problem, the lack
of student engagement is considered the major predictor of leaving high school
(Rumberger, 2004).
In response to this challenge, a critical mass of educators, policymakers, and
private sector advocates have come together to reform high schools (Martinez & Bray,
2002; EdSource, 2005; Harvey & Housman, 2004). Many of the proposed reforms, such
as creating smaller, more personalized learning communities for students and connecting
curriculum to real-world experiences, focused on changing the structure and the culture
of American high schools. Such approaches are evident in most high school reform
efforts today. Yet, in 2004, the National Association of Secondary School Principals published a
report emphasizing the need for the strategic use of data to produce continuous
improvement in high schools. The underlying assumption was that student performance
would not improve unless assessment results were examined and used for decision
making. More specifically, school personnel needed to develop the capacity to
"systematically use data for purposes of equity, accountability, and instructional
improvement" (Lachat, 2001, p. 33).
Although the research on DDDM offers important insights into how to enable
elementary school educators to use and act on data (Datnow, Park, & Wohlstetter, 2007;
Diamond & Spillane, 2004), literature on what data-driven decision making looks like at
the high school level needs to be further developed. It is likely that many of the
same conditions that enable effective use of data in primary schools hold true for
secondary schools as well. However, the organization, structure, and culture of high schools
present particular opportunities and challenges that might influence the implementation
of DDDM (Ingram et al., 2004; Siskin, 1994). For example, secondary schools are much
more complex, larger organizations than primary schools. They have a much greater
variety of departments, academic programs, and extra-curricular activities than
elementary schools. High schools also vary greatly in size, with some arguing that small
schools have environments that are more engaging for students and enhance
communication among faculty (Martinez & Bray, 2002; Harvey & Housman, 2004).
Additionally, reform in secondary schools is inevitably affected by the school context in
which it takes place. A school's location, particular mix of students, and history influence
educational reforms such as DDDM (Stoll et al., 2006). Other external influences, such as
district and state policies, and local community norms and politics, further affect change
efforts in schools. Overall, it is not clear to what extent high schools are fulfilling the
vision of systematically using data not only for accountability purposes but also to
enhance instructional practices and equitable outcomes.
Research does suggest that without a focus on local-level implementation,
transformation of high schools is unlikely. The groundbreaking RAND Change Agent
study helped to usher in a new era of policy research that started to take seriously the
importance of local-level implementation (Fullan & Pomfret, 1977; McLaughlin, 1987).
The five-year study (1973-1978) examining federal programs found that effective
projects were characterized by a process of mutual adaptation rather than uniform
implementation and that local factors dominated project outcomes (McLaughlin, 1990).
Although policies can enable preferred outcomes, even well-planned and well-supported
policies ultimately depend on individuals within the system and how they interpret and
act out the policies. As McLaughlin (1987) noted:
Change is a problem of the smallest unit…. Because implementation takes place in
a fluid setting, implementation problems are never 'solved.' Rather they evolve
through a multi-staged iterative process. Every implementation simultaneously
changes policy problems, policy resources, and policy objectives. New issues,
new requirements, new considerations emerge as the process unfolds. (p. 174)
The following three decades of research on policy implementation demonstrated that
while pressure from the center can provide the catalyst for change, it alone does not lead
to effective implementation or long-lasting reform (Firestone, Fitz, & Broadfoot, 1999;
Datnow, Hubbard, & Mehan, 2002). Thus, research needed to move beyond the emphasis
on top-down, macro-level change efforts, and concentrate on bottom-up, micro-level
implementation.
To understand the process of change at the local level, some policy researchers
have turned to sociological traditions that utilize institutional theory and sensemaking
theory (Coburn, 2001). Institutional theory concentrates on how organizations respond to
the larger legislative, normative, and cognitive environment and how these variables
affect their legitimacy. Sensemaking theory highlights how individuals and groups within
organizations interpret, construct, and then enact messages from the institutional
environment. As I discuss further in the upcoming chapter, because sensemaking theory
neglects to consider the role of power in shaping the construction of policy messages, I also
employ frame analysis to understand how local actors such as district administrators and
school leaders strategically attempt to shape teachers' sensemaking of data.
Teachers and DDDM
The focus on data as the source of decision making indicates the expectation that
transparency of school performance outcomes and disaggregated student data will act as
irrefutable evidence regarding the efficacy of public schools. Data, typically in the form
of student achievement on standardized tests, is the presumptive force in determining and
improving equity in schools (Ingram et al., 2004; O'Day, 2002). Given the current
educational policy landscape, with its focus on DDDM and evidence-based practices,
teachers are now required to grapple with using data for instructional improvement.
The emphasis on DDDM has altered teachers' roles. Under the previous
arrangement, student performance testing was often an annual ad-hoc activity
disconnected from the day-to-day decision making happening within schools (Tyack,
1977). Besides fulfilling state reporting requirements, test results were mainly utilized to
categorize and place children in differentiated classrooms (e.g., special needs, English
language learner, gifted), which tended to structure the rest of students' educational
experiences (Oakes, 1985). Of course, exceptional teachers used their expertise and
professional judgment to go beyond what was merely provided to them. However,
classroom lessons and pacing were driven largely by textbooks and curriculum packages
rather than students' day-to-day learning needs (McNeil, 1986). Furthermore, teachers
often lacked the knowledge, technical capacity, and tools to engage in systematic inquiry
in order to adjust their instructional practices (Herman & Gribbons, 2001; Labaree, 2004;
Talbert & McLaughlin, 1996). As noted by Herman and Gribbons (2001) immediately
before the advent of NCLB, "Rarely does teaching rhetoric include program planning,
performance-based decision making, or the intricacy of data collection, analysis, and
interpretation. These are new principles in the culture of schools" (p. 2). With these new
expectations for schools and teachers, developing the capacity to engage in these activities
and the ability to apply data to inform actions is likely to be a struggle without resources
and attention paid to the learning component of reform implementation.
Significance of the Study
This study aims to develop a theory of teacher sensemaking processes on DDDM
and contribute to the overall literature on high school reform. To reiterate, although
emerging research points to the ways in which different factors constrain or enable data
use (Armstrong & Anthes, 2001; Datnow et al., 2007; Supovitz & Klein, 2003; Togneri
& Anderson, 2003), there is still a need for more evidence on how high school teachers
define DDDM, how they enact the implementation of DDDM, and the potential
consequences DDDM practices may produce. Existing studies that include high schools
in their samples do not investigate the extent to which the unique organization and culture
of high schools might shape teachers' perceptions of data use (Ingram et al., 2004; Lachat
& Smith, 2005). More specifically, the research literature lacks an examination of how
high school teachers use data to influence their instructional decisions.
Because the evidence on DDDM is still in its nascent stage, researchers cannot
answer the question of whether the strategy is an effective intervention for school
improvement. Researchers also do not know the extent to which DDDM has become
embedded within schools or teachers' practices. Further complicating matters, data-driven
decision making has also become a catch-phrase for all things having to do with data.
There is also the possibility that some types of DDDM hinder rather than improve
teaching and learning. Rather than assuming that the presence and use of data will bring
positive changes, it is important to understand how, why, and in what contexts DDDM
leads to improvements in instruction, curriculum, policy, and school culture. Much
more theory building and data collection remain.
Examining how teachers make sense of DDDM and how this process may be
mediated by context can contribute to practice and policy. While NCLB provides a larger
framing context, how local actors make sense of the implementation of data-driven
decision making remains unclear. NCLB places importance on the collection and use of
data to monitor student performance, but it does not prescribe how schools should use
data to promote student achievement goals. These tasks are left to local school systems
and educators who have to operationalize data use in a meaningful way and develop a
process that actually informs their decision making.
Discourse surrounding education policy typically positions teachers as objects of
change rather than active mediators and agents of reform; yet, teachers are at the forefront
of implementing accountability reform (Bascia & Hargreaves, 2000; Darling-Hammond,
2004; Fullan, 2000). Reforms that focus on technical aspects of school improvement
neglect that schooling is not just a process of doing but is also a process of being. In
other words, how individuals carry out their responsibilities and interact with others
and with educational institutions develops from their belief systems, tacit assumptions, and
experiences (Firestone & Louis, 1999). Since material reality becomes subjective when
filtered through an individual's experiences and perceptions, macro-level policies such as
accountability reform interact with micro-level processes. Thus, the first step towards
critical reflection involves close contact with individuals and groups (e.g., teachers) who
are immediately situated within these processes. Understanding teachers' sensemaking of
data use may provide insights about how actors at the local level construct reform ideas
and what conditions support and hinder DDDM.
Definition of Key Terms
There are three basic key terms that I use throughout the study: data, data-driven
decision making, and sensemaking. To clarify their definitions and to ensure their
consistent usage, I review them here.
Data-Driven Decision Making (DDDM): refers to systematically gathering and analyzing
data to inform improvement decisions in education (Marsh et al., 2006). Some scholars
make delineations between concepts such as data, information, and knowledge
(Mandinach et al., 2006). In these cases, data are defined as "raw" facts while
information and evidence are described as interpretations of data. In light of the fact
that all data, including those collected from formal research projects, are designed and
gathered based on a theoretical and methodological perspective, this distinction narrowly
defines data and ignores the importance of data collection methods. In contrast to these
narrow definitions, Earl and Katz (2002) adopt a broader view of what constitutes data.
They argue that data are "summaries that result from collection of information through
systematic measurement or observation or analysis about some phenomenon of interest,
using quantitative and/or qualitative methods" (p. 4). Data are not characterized based on
their visual representation (e.g., whether they are numbers or words or "raw facts") but
by the quality of their collection and synthesis. Furthermore, evidence refers to the
interpretation arising out of data synthesis and analysis that is then used as a justification
to support a course of action and confirm or disconfirm assumptions (Lincoln, 2002).
Types of Data: refers to broad categories of information. In the school context, data are
categorized into four main types (Bernhardt, 1998):
1. Demographic data, including attendance and discipline records;
2. Student achievement data, which encompasses not only standardized test data but also
formative assessments, teacher-developed assessments, writing portfolios, and
running records;
3. Instructional data, which focuses on activities such as teachers‘ use of time, the
pattern of course enrollment, and the quality of the curriculum; and
4. Perception data, which provides insights regarding values, beliefs, and views of
individuals or groups (e.g., surveys, focus groups).
Sensemaking: is the process by which individuals and groups make meaning of
experiences and ideas (Weick, Sutcliffe, & Obstfeld, 2006). Sensemaking theorists
concentrate on how meaning is shaped because they view this process as an important
driver of action. They assume that individuals use their pre-existing beliefs and
experiences to interpret new situations. The process occurs during social interactions and
thus context is important. Sociologists from this tradition contend that sensemaking is an
active and dynamic process.
Organization of the Dissertation
In the subsequent chapters, I detail the literature and methodology that guided this
dissertation. Chapter 2 provides a review of the literature on DDDM to understand how
the reform strategy has been conceptualized and to identify the factors that are salient in
the implementation of data use. For the second portion of the chapter, the literature in
sociology is explored to examine policy implementation in general and in particular, to
understand the processes and conditions that may shape teacher sensemaking of DDDM.
I then intersect sociological theories on human meaning-making activities and the
implementation process of DDDM to build a conceptual framework for the study. In
chapter 3, the research methodology and the rationale for the study design, the data
collection process, and the data analysis plan are described. Relying on the research
questions and the conceptual framework as an outline, findings are then presented in
chapters 4 through 6. I conclude in chapter 7 with a summative discussion of the major
findings and implications for theory, practice, and policy.
Chapter 2:
Review of the Literature
As stated in the previous chapter, how urban high school teachers make sense of
and ultimately enact data-driven decision making (DDDM) remains largely unexplored.
Emerging research suggests important factors to consider in the implementation of the
reform strategy. This chapter reviews the literature on DDDM and policy implementation
theories. In the first section, I examine the literature on DDDM and detail the current
knowledge in the field. Because much of the work in this area has been descriptive and
atheoretical, I argue that the study of DDDM implementation might benefit from employing
organizational theories and frame analysis. Consequently, the second portion of the
review explores sociological theories and the ways in which they inform teachers'
sensemaking of data use. In the third section, I describe the theoretical framework
developed from the literature review, which guides the research methodology and
analysis of the dissertation.
Research on Data-Driven Decision Making
In the past decade, the concept of DDDM has flourished as a pivotal tool for
education improvement. Although the use of data cannot be deemed revolutionary, the
emphasis placed on DDDM by federal and state accountability systems is unique to the
current policy period. This growing importance is evidenced by a publishing industry that
produces "how-to" books and articles for educators on using data (Bernhardt, 2004; Celio
& Harvey, 2005; Holcomb, 1999; Johnson, 2002; Thorton & Perrault, 2002);
implementation research that focuses on examining district conditions that support
DDDM (Datnow et al., 2007; Dembosky et al., 2005; Kerr et al., 2005; Togneri &
Anderson, 2003); studies on practitioners' attitudes towards data use (Booher-Jennings,
2005; Skrla & Scheurich, 2001; Woody et al., 2004); evaluation of data management
technology (Chen et al., 2005; Light et al., 2005; Mason, 2002; Wayman & Stringfield,
2006); and research on school-level implementation challenges (Diamond & Spillane,
2004; Diamond & Cooper, 2007; Feldman & Tung, 2001; Halverson et al., 2005;
Herman & Gribbons, 2001; Ingram et al., 2004; Lachat & Smith, 2005; Young, 2006). In
this section, I summarize the substantive literature on the implementation of DDDM,
concentrating on seven themes: (1) the conceptualization of DDDM, (2) educators' uses
of data, (3) district structuring of data use, (4) roles played by technology and support
personnel, (5) school-level structures that shape teachers' use of data, (6) capacity for
data use, and (7) the influence of culture and politics on DDDM. Taken together, the
literature indicates that DDDM is a complicated reform strategy in which structure,
capacity, and culture at multiple levels intertwine to set the stage for how teachers make
sense of data. After reviewing the empirical findings, I discuss the current limitations of
this literature in informing the process of implementation.
Conceptualizing DDDM
The term "data-driven decision making" is sufficiently vague to be a catch-phrase
for all things having to do with data. With the presence of state and federal accountability
systems, schools and districts are likely to consider themselves data-driven, whether or
not they desire to be so. Studies that highlight the ways in which DDDM helps improve
performance (Feldman & Tung, 2001; Togneri & Anderson, 2003) may persuade some
schools that being data-driven is a good strategy to embrace, although the strategy itself
may not be completely defined. Considering the emerging understanding of what aspects
of DDDM may be helpful and the potential consequences of using data, some scholars
have undertaken the task of conceptualizing DDDM and explaining its variations (Earl &
Katz, 2006; Ikemoto & Marsh, 2007; Mandinach et al., 2006). In this portion of the
literature review, I outline some of the models and discuss how they help direct research
and implementation of DDDM.
One approach to defining DDDM has been to focus on information processing of
data. Building on the previous work of Light and colleagues (2005), Mandinach et al.
(2006) developed a DDDM model outlining how data convert into knowledge. They
delineate data, information, and knowledge based on how each is processed. According
to the authors, individuals collect and organize data, which are considered raw facts
without inherent meaning. Data are translated into information when analyzed and
summarized. In other words, information is defined as data with meaning. Finally,
knowledge is created when information is synthesized and prioritized. Knowledge, in
essence, is information deemed useful to guide action. For example, a list of raw test
scores becomes information when summarized by standard or student group, and
knowledge when that summary is used to decide what to re-teach. This information model of DDDM
highlights how the use of data is not a simple process of having and then using data. Data
must be interpreted and knowledge must be actively constructed in order for it to affect
decisions. However, the model is limited in that it emphasizes the sequential nature of
DDDM and the interpretation of data without explaining how differences in cognitive
capacity may shape the ability to engage in the transformation of data into knowledge.
Educators' ability to use data may be enabled or constrained by a host of structural and
cultural factors in real-life settings. As Ikemoto and Marsh (2007) noted,
this model fails to account for the complexity of DDDM implementation in school
settings and the competing demands on educators' attention and time. The model also
does not consider the research on sensemaking, which suggests that data interpretation is
affected by an individual's tacit beliefs and assumptions.
Going a step further, Earl and Katz (2006) present a developmental model of
DDDM that concentrates on the ability and capacity to use data. Table 2.1 lists the
stages of the model, which highlights the developmental nature of using data.

Table 2.1: A Developmental Model of DDDM (From Novice to Expert: Ability and Capacity)

Stage 1: No practical experience and dependent on rules.
Stage 2: Expects definitive answers, some recognition of patterns, and still relies on rules.
Stage 3: Analytical, locates and considers possible patterns, and has internalized some dimensions so that they are automatic.
Stage 4: Uses analysis and synthesis, sees the whole rather than the aspects, looks for links and patterns, and adjusts to adapt to the context.
Stage 5: Understands the context, considers alternatives in an iterative way, integrates ideas into efficient solutions, solves problems, and makes ongoing adaptations.

*Adapted from Earl & Katz (2006, pp. 101-103)
The authors take a cognitive and capacity approach to understanding implementation of
DDDM. They recognize that schools and individuals may be at different stages of
becoming experts at thoughtfully using data. Because the process is developmental,
educators require opportunities to learn and apply the skills in their context. This
conceptualization of DDDM underlines how DDDM is not merely a linear process of
finding and using information, but one of skill and learning. However, the model does not
address how the development of these skills is enabled or constrained by local
educational settings.
Taking into consideration the contextual features of DDDM, Ikemoto and Marsh
(2007) propose four different models of DDDM. These models are based on two
overlapping continuums: simple to complex data types and simple to complex data
analysis and decision making.[3] Based on these two dimensions, the authors conclude that
schools have a basic, analysis-focused, data-focused, or inquiry-focused approach to
DDDM. Schools that fall in the basic category tend to use simple data and engage in
simple analysis. Analysis-focused schools focus on the collection of simple data but
undertake complex analysis and decision making. Data-focused schools collect complex
data but engage in simple analysis. Finally, inquiry-focused schools collect complex data
and employ complex analysis and decision making. By constructing a range of DDDM
models, the authors underscore the importance of examining how different sets of
contextual factors may produce a given model and data use consequences. Ultimately,
they argue that DDDM is not a straightforward process and caution against viewing one
model as the ideal type; rather, they suggest that different types of models may be useful
for different purposes. Consequently, DDDM may be an adaptive rather than a sequential
strategy.
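Since the four approaches are defined by the intersection of these two continuums, they can be summarized in a simple matrix. The arrangement below is my own visualization of the categories as Ikemoto and Marsh describe them, not a table reproduced from their work:

                      Simple analysis/decisions    Complex analysis/decisions
Simple data types     Basic                        Analysis-focused
Complex data types    Data-focused                 Inquiry-focused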
[3] Simplicity or complexity of data types is defined by these elements: timeframe (e.g., snapshot vs. trend
data); types (single vs. multiple measures); data source; and level of detail. The degree of data analysis and
decision making is determined by reliance on assumptions vs. empirical evidence; use of basic vs. expert
knowledge; straightforward vs. sophisticated analysis techniques; individual vs. collective processing; and
single vs. iterative processing (Ikemoto & Marsh, 2007, pp. 110-101).
In sum, the development of DDDM conceptualizations is a promising start to
gaining a firmer grasp of how to define DDDM, its elements, and its possible
consequences. DDDM is often implicitly presented as a singular model in empirical
studies but the literature suggests that researchers need to define clearly how data are
conceptualized and used within specific contexts because there are different stages of
implementation as well as different models of data collection and usage. As the theories
above suggest, data alone does not guarantee substantive improvements. The emphasis on
evidence-based practice and the capacity to engage in DDDM need to be considered within
organizational settings and the context of the larger policy environment because they
structure how data are used and for what purposes. Concerning teachers' uses of data, the
district and school context—its structures, culture, and capacity building efforts—are
likely to shape the enactment of DDDM. Finally, the ways in which these organizational
elements work to mutually reinforce or contradict the relevancy of DDDM have
implications for the impact of data use. With these lessons in mind, I now turn to the
literature to explore how data are used for educational improvement.
The Uses of Data
Studies demonstrate that multiple types of data are being utilized to inform a wide
variety of decisions (Datnow et al., 2005; Marsh et al., 2006; Supovitz & Klein, 2003).
Uses of data may help reveal problems, target areas for improvement, forecast future
conditions, improve policy and practice, help evaluate program effectiveness, and
promote accountability (Bernhardt, 2004; Earl & Katz, 2006). Concerning the use of student
performance data, Supovitz and Klein (2003) suggest that external, school-wide, and
classroom assessments function differently. While external assessment may provide
initial direction to help identify struggling students, develop improvement plans, or set
long-term goals, school-wide assessments help refine instructional strategies and adjust
professional development needs throughout the year. Classroom assessments, especially
formative assessments, provide immediate, flexible, and customized data to guide
teachers' day-to-day practices. Research on formative assessments demonstrates that activities such as
checking for student understanding, providing student feedback, increasing wait time for
student response, enabling student self-reflection, and asking higher-order questions
contribute to learning (Black et al., 2003; Black & Wiliam, 2006; Heritage, 2007;
Herman et al., 2006; Leahy et al., 2005; Ruiz-Primo & Furtak, 2007).[4] Thus, research
suggests that day-to-day instructional and student engagement data are particularly useful
for teachers.

[4] Heritage (2007) defines formative assessment as comprising four elements: identifying the learning gap,
providing feedback, enabling student involvement, and linking to learning progressions (pp. 141-142).
Activities range from teachers' observations of students during tasks to classroom discussions, classroom
tests, and student self-reflections.
However, educators are still developing the necessary skills, capacity, and support
structures to use data. Using interview, focus group, and survey data from three districts
partnering with an external organization, Ikemoto and Marsh (2007) found that schools'
use of data ranged from basic to inquiry-focused models. The majority of schools, however,
described their practices as falling in the basic category (Ikemoto & Marsh, 2007).
Although teachers utilized formative assessments, Supovitz and Klein (2003) found that
school-wide data were "enormously underutilized" (p. 39). Larger patterned data that
inform general programmatic decisions at the grade and content levels were not
frequently used by the schools. At the high school level, Ingram et al. (2004) found that
teachers and administrators exhibited a strong tendency to make decisions about teaching
effectiveness based on anecdotal information and relied on intuition and experience
rather than systematically gathered data. Thus, while the accountability system may have
saturated schools with a wide array of data, educators are still figuring out how to
develop the capacity to use data in basic and sophisticated ways.
District Structures and DDDM
Districts play a critical role in directing reform efforts, especially DDDM
(Datnow, Park, & Wohlstetter, 2007; Dembosky et al., 2005; Kerr et al., 2006; Togneri &
Anderson, 2003; Wayman, Cho, & Johnston, 2007). They primarily do so by investing
resources and developing structures that build human capacity within the organization.
One way district structures enable DDDM is through the system alignment of curriculum,
instruction, and assessments. This type of system coherence gives staff members a
foundation on which to develop goals and best practices (Datnow, Park, & Wohlstetter,
2007; Togneri & Anderson, 2003). Districts have also helped schools develop
improvement plans with regular monitoring. Districts can further help schools prioritize
standards and instruction by developing both short-term and long-term goals tied to
measurable indicators of progress (Datnow, Park, & Wohlstetter, 2007). For example,
some districts have developed quarterly benchmark assessments to assess student
progress throughout the year rather than waiting for summative results from state
assessments. These common assessments allow districts and schools to compare their
results based on a variety of categories (e.g., across similar schools, against the district
average, across grade levels) and make frequent adjustments.
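To illustrate the kind of comparison these benchmark assessments make possible, here is a minimal sketch in Python; the table, column names, and scores are hypothetical, invented purely for illustration and not drawn from any of the districts studied:

```python
import pandas as pd

# Hypothetical quarterly benchmark records; schools, grades, and scores
# are invented for illustration only.
scores = pd.DataFrame({
    "school":  ["A", "A", "B", "B", "A", "B"],
    "grade":   [9, 10, 9, 10, 9, 9],
    "quarter": ["Q1", "Q1", "Q1", "Q1", "Q2", "Q2"],
    "score":   [71, 65, 58, 62, 75, 63],
})

# Compare average results across similar schools and across grade levels,
# quarter by quarter, instead of waiting for annual state results.
by_school = scores.groupby(["quarter", "school"])["score"].mean()
by_grade = scores.groupby(["quarter", "grade"])["score"].mean()
print(by_school, by_grade, sep="\n\n")
```

A comparison of this sort is what allows a district to notice mid-year that, for example, one school's ninth graders are falling behind the district average, and to adjust before the summative state assessment.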
At the same time, district alignment may produce conflicts that can hinder the
emphasis on using data to make decisions. For example, Kerr et al. (2006) found that
teachers in one district felt that instructional plans conflicted with the use of data. When
assessment results indicated the need for re-teaching, teachers felt constrained
by the pacing guide. The authors noted that, "Given the perceived pressure to stay on
pace, many teachers opted to follow the curriculum instead of the data" (p. 513). In
districts where DDDM practices became a core strategy for improvement, central office
administrators, principals, and lead teachers expected data to be used to inform and
justify decisions (Datnow, Park, & Wohlstetter, 2007). As part of this vision, a degree of
autonomy and flexibility for teachers was found to be necessary in order to maintain the
perspective that decisions are based on data rather than pre-determined conclusions.
Whether teachers have the flexibility to reorganize their student groups based on
benchmark assessments, re-teach previous topics outside the scope and sequence of the
curriculum, or alter the pace of the curriculum impacts the degree to which data will be
used to guide decisions. Consequently, findings suggest that school systems with success
in DDDM focus on balancing standardization and flexibility during implementation of
changes (Datnow et al., 2007; Kerr et al., 2006).
A district‘s organization of instructional units and division of labor may also
shape the implementation of DDDM. Coburn and Talbert (2006) found that district
organizational structure played a role in how staff conceptualized evidence because
different units and departments varied in their exposure and engagement in internal and
external reform initiatives. In one district, the majority of special education teachers
defined evidence in terms of psychometric criteria and scientific measurement. They
were also more likely to use these types of evidence to make decisions about student
placement. This resulted from the department's adoption of an initiative that focused on
regular monitoring of student progress through assessments. The special education unit
personnel at this district tended to have similar conceptions of evidence due to their
involvement in reform and shared professional norms.
Technology and Support Personnel
In addition to balancing a strong vision for DDDM with system alignment, district
investment in technology and support personnel can facilitate or hinder data use (Chen,
Heritage, & Lee, 2005; Herman & Gribbons, 2001; Kerr et al., 2006; Wayman & Stringfield,
2006). Several studies indicate that the lack of quality data was a major barrier to data use
(Diamond & Spillane, 2004; Ingram et al., 2004; Marsh et al., 2005; Mason, 2002;
Petrides & Nodine, 2005). Lack of easy access to timely achievement results meant that
schools could not make immediate changes or plan ahead. For example, both the
California testing system and federal indicators of AYP are typically reported to schools
on an annual basis. The ability to fine-tune instructional strategies and make curricular
decisions can be severely compromised when the data schools receive are already outdated. In
cases where schools rely on annual testing data, they make inferences about the
upcoming year with data that may or may not be relevant (Herman & Gribbons, 2001).
Hence, data that merely provide a snapshot picture of the organization‘s history do not
always lead to action-oriented decisions for the present or the future.
The format in which data are presented also matters. Although schools receive
data from multiple sources, including federal, state, and district offices, the data are
not always presented in a format that readily supports decision making. Different types
of data displays highlight different aspects of the data and communicate different information.
For instance, bar charts are useful for comparison purposes while line graphs are useful to
represent progress over time. Because schools are in the nascent stages of building
DDDM capacity, Herman and Gribbons (2001) suggest that data displays need to be
intuitive and self-explanatory. They found graphs and charts communicated information
much more effectively and efficiently than words or tables with numbers. Additionally,
how data are disaggregated affects classroom instruction. School-wide data can provide
a global picture of the school and suggest organizational needs, but they do not provide
sufficiently calibrated information to target instruction (Supovitz & Klein, 2003). To
prioritize instruction and standards and to intervene with specific students, studies
suggest individual teachers need their own grade-level and classroom data (Anthes &
Armstrong, 2001; Supovitz & Klein, 2003).
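As a concrete sketch of these display choices, the following Python fragment draws the two chart types the literature distinguishes; all school names and percentages are hypothetical, invented purely for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical proficiency figures; invented for illustration only.
schools = ["School A", "School B", "District Avg"]
pct_proficient = [62, 48, 55]
quarters = ["Q1", "Q2", "Q3", "Q4"]
school_a_trend = [51, 57, 60, 62]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Bar chart: suited to comparisons across categories (e.g., similar schools).
ax1.bar(schools, pct_proficient)
ax1.set_ylabel("Percent proficient")
ax1.set_title("Comparison across schools")

# Line graph: suited to representing progress over time.
ax2.plot(quarters, school_a_trend, marker="o")
ax2.set_ylabel("Percent proficient")
ax2.set_title("Progress across quarterly benchmarks")

fig.tight_layout()
plt.show()
```

The design point echoes Herman and Gribbons's (2001) suggestion: each display should be intuitive and self-explanatory, answering one question at a glance rather than requiring the reader to decode a table of numbers.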
Without access to quality data, school systems spend time and resources
developing their own data collection, management, and analysis; altering existing
datasets; creating their own data systems; or contracting with outside consultants
(Ingram et al., 2004; Marsh et al., 2005; Mason, 2000; Petrides & Nodine, 2005).
Supovitz and Klein (2003) also noted that the lack of ability to use computer technology
meant that school personnel used inefficient methods to process data. They cite examples
of schools where data were put into tables and were sorted by word-processing functions
rather than spreadsheets or database programs. The authors found that administrators
knew about the availability of these tools but did not know how to utilize them or easily
process the information needed. In their survey of school leaders, only 19 percent of the
respondents felt competent about using data to answer questions. In one district studied
by Lachat and Smith (2005), the lack of district data personnel significantly slowed data
processing and access for schools; to mitigate the burden on the district and to receive
data in a more timely fashion, high schools had to develop a Data Access Plan, which outlined
the data retrieval procedure.
The districts that emphasized DDDM devoted resources and personnel to
collecting, managing, and analyzing data. At different levels (e.g., district, school, or
grade level), a staff member was responsible for ensuring that data remained up to date
and could be shared with other staff members (Datnow et al., 2007). Within schools, the
principal or leadership teams shared this responsibility; other schools established a formal
data manager position. Wayman and Stringfield (2006) found that districts often
developed a system-wide plan for the implementation of data systems to support and
extend existing initiatives, or developed district-sponsored professional development for
data technology use. One large district developed a plan in which all teachers and
principals would become involved in the use of the data system to examine student data.
Under this plan, 500 educators were certified in the use of the data system. In turn, they
were responsible for preparing groups of colleagues to use the system. Furthermore, Kerr
et al. (2006) found that one district trained school-level instructional specialists in a
protocol to lead groups of teachers through a process of reviewing student work and tying
results to instructional practice (p. 507). Lachat and Smith (2005) also found that the
presence of a "data coach," who can provide procedural assistance and model data use,
helped build a data team and foster ongoing data use by school personnel.
The presence of a user-friendly data management tool also seems to enable
collaboration. Wayman and Stringfield (2006) found that providing teachers with a
user-friendly data management system improved professional collaboration. These
researchers discovered that the use of the data system increased teacher efficiency since it
reduced time spent on collecting and organizing data. Teachers reported that they gained
more time to analyze the data, respond to student needs, and reflect on ways to improve
practice. Collaboration was described as increasingly professional and curriculum
focused. Teachers also believed that the access to data provided them with a common
language, increasing interaction among faculty. Similarly, Chen, Heritage, and Lee
(2005) discovered that the web-based data system tool, Quality School Portfolio,
promoted collaboration and shared planning among educators. With the easy storage and
retrieval of standardized data, the authors also found that educators were increasingly
able to pay attention to formative assessments.
In sum, technology to aid in data management, storage, and analysis was found to
be a necessary ingredient before data could be used to improve school decision making.
School systems need data infrastructures that enable them to analyze and manipulate data
efficiently and frequently if DDDM is to become more than an episodic activity that
occurs once or twice a year. Overall, a district's attention to and investment in the
implementation process signals to staff the legitimacy and relevancy of DDDM. Districts
that take the time and energy to plan the implementation process, coordinate efforts at
various system levels, and build capacity make DDDM a more salient strategy for system
improvement.
School Structures and DDDM
In concert with district structures, school structures also have implications for
DDDM because they shape data use for teachers. In particular, Young (2006) suggests
that deliberate and strategic activities (i.e., agenda-setting) affect how teachers respond
to DDDM. These strategies included establishing a rationale and expectations for teachers'
use of data in addition to modeling data use and structuring collaboration time. School
leaders played a vital role in the agenda-setting process and ensured that district efforts at
implementing DDDM occurred at the school level. Because of their positions, principals
may derail the emphasis on DDDM. At one school, Young (2006) discovered that the
principal took a hands-off approach rather than pushing faculty to engage in peer
collaboration or classroom observations. Assuming that teachers already knew how to
apply data for instructional decision making, the principal did not engage in any agenda-
setting activities around DDDM. Consequently, teachers did not participate in team-based
inquiry as envisioned by the district.
Because the systematic use of data is a skill that teachers have to develop,
structured collaboration opportunities are considered a key element of DDDM
implementation. In their study of high schools, Ingram et al. (2004) concluded,
Teachers are more likely to mention systematic data collection and use when they
are involved in groups looking at school processes. It does not appear that
teachers are data-phobic but rather that they don't have recent experiences in
working with data to improve specific classroom practices. (p. 1280)
Schools in which DDDM was a core feature adjusted time in the school day to allow for
such professional development so that teachers could practice and develop expertise in
data use (Datnow et al., 2007; Feldman & Tung, 2001; Halverson, Grigg, Prichett, &
Thomas, 2005; Marsh et al., 2005). Group-based inquiry or "collaborative data teams"
were most successful in implementing DDDM due to the broad participation from a
diverse array of staff including teachers and administrators (Mason, 2002). School
systems are also starting to use data reflection protocols to guide these data meetings
(Datnow et al., 2007). These structured data discussions provided teachers with
continuous and intensive opportunities to share, discuss, and apply what they are learning
with their peers.
Once teachers identify instructional and learning gaps, improvement efforts may
be blocked if teachers are unaware of intervention or instructional strategies. In addition
to consistent structured time for collaboration and professional learning, schools need
strategies for planning, sharing, and evaluating their efforts (e.g., action plans or
scorecards) (Datnow et al., 2007). If data use is going to have a meaningful impact on
teacher practices, studies suggest that a collaborative environment is required, one in which
teachers and administrators have multiple opportunities and resources to examine and
interpret data, followed by time to develop an action plan to change behavior.
Capacity for Data Use
In addition to structural supports at the district and school level, teachers may
require a specific set of skills and knowledge to use data effectively. Several studies
suggest that teachers need to develop the ability to engage in systematic inquiry—skills
that are not necessarily built into the professional development of educators (Marsh et al.,
2005; Mason, 2002; Petrides & Nodine, 2005). Researchers found that educators often
lack the skills or knowledge to formulate questions, assess different types of data,
interpret results, and develop an action plan (Feldman & Tung, 2003; Mason, 2002;
Supovitz & Klein, 2003). Teachers require the basic knowledge to critically evaluate
what types of inferences can be drawn from different sources of data. While they need
not be experts in statistics, some scholars suggest that teachers should have a basic level
of assessment literacy and understand concepts such as distribution, variation, and
reliability (Mandinach et al., 2005; Webb, 2002).
Although there is a consensus in the research literature about the need for data
literacy, it is unclear which specific skills and abilities comprise this construct. Research
on quantitative and statistical reasoning provides some clues about what may be
important features of data literacy. Although quantitative literacy and statistical literacy
(i.e., statistical reasoning and statistical thinking) are interrelated concepts, some
researchers tend to make important distinctions between the two (Schield, 2005; Watson
& Callingham, 2003; Wilkins, 2000). Quantitative literacy (i.e., numeracy skills) refers to
the "aggregate of skills, knowledge, beliefs, dispositions, habits of mind, communication
capabilities, and problem-solving skills that people need in order to engage effectively in
life and work" (Steen, 2001, p. 7). Thus, the concept of quantitative literacy encompasses
the whole developmental range of thinking with and using numbers (e.g., from doing basic
arithmetic to using data to make judgments).
Unlike quantitative literacy, statistical literacy refers to a particular set of
knowledge and skills. More precisely, statistical literacy involves understanding such
concepts as probability, sampling, inference, measurement, ratio/proportion, functions,
and central tendencies (Garfield, 2003; Schield, 2005; Watson & Callingham, 2003).
Sometimes included in the notion of statistical literacy is the ability to carry out and
conduct statistical analysis as well as to use statistical concepts for problem-solving.
Therefore, statistical literacy extends beyond basic understanding of numeracy and
requires a more sophisticated set of skills. Given the complexity and broad scope of both
quantitative and statistical literacy, it might be useful instead to think about specific sets
of knowledge and skills that educators need to interpret and use data that they come
across.

Footnote 5: Some of these skills and types of knowledge have been included in the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990).
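To make two of these concepts concrete, consider a minimal worked example; the five benchmark scores below are hypothetical, invented purely for illustration. The mean summarizes central tendency, and the standard deviation summarizes variation:

```latex
% Five hypothetical benchmark scores: 62, 70, 75, 58, 80 (n = 5).
% Central tendency (mean):
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{62 + 70 + 75 + 58 + 80}{5} = 69
% Variation (sample variance and standard deviation):
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2
    = \frac{49 + 1 + 36 + 121 + 121}{4} = 82, \qquad s = \sqrt{82} \approx 9.1
```

A data-literate reading is that the group average is 69 but individual scores typically deviate from it by roughly nine points, so the mean alone would understate how spread out student performance is.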
Focusing on the need for data literacy development, Earl and Katz (2006) have
outlined six aspects of understanding and using data (pp. 18-20). First, they note that
data-literate educators think about the purposes of different types of data. For example, no
physician would use a patient's temperature to ascertain the fitness of the heart. Likewise,
educators must have the ability to discern what conclusions can be drawn from different
types of data. Second, data literacy means that educators will be able to distinguish sound
data from unsound data. Third, educators need knowledge of statistical and measurement
concepts because, "If leaders are going to use data to enhance rather than distort
educational decisions, they have a responsibility to understand the principles that underlie
statistics" (p. 19). Fourth, data are not limited to quantitative representations; educators
need to recognize that data come in a variety of forms, including observations of
behaviors and instructional practices. Fifth, educators need to have the capacity to engage
in the interpretation process and make conclusions based on data as well as considering
logical flaws. Finally, educators need to know how to represent data to a variety of
audiences (e.g., parents, students).
Standards and efforts to promote data literacy enable teachers to evaluate data but
they do not increase pedagogical skills and content knowledge. The use of formative
assessments not only requires assessment knowledge, but also domain specific and
pedagogical expertise for teachers to use data to enhance student learning (Heritage,
2007). Essentially, formative assessment is an ongoing process that continually assesses
student learning in order to close the gap between existing student knowledge and
instructional goals. Taken together, data literacy goes hand in hand with instructional
capacity. Both require a sophisticated set of skills and knowledge, which educators need
to apply in context. This suggests that educators need multiple and sustained
opportunities to develop and apply these skills and knowledge.
Culture, Politics, and DDDM
The cultures of schools and districts need to be considered in the implementation of
DDDM because they shape how teachers make sense of and use data. Given the
accountability context, districts and schools may not focus on using data for continuous
improvement, but may use them only to fulfill policy mandates (Booher-Jennings, 2005; Diamond
& Cooper, 2007). At the district level, leaders may develop different types of culture
around DDDM. Firestone and Gonzalez (2007) suggest that there are two types of district
cultures. The accountability culture focuses on student test scores, tends to have a short-term
time frame, and excludes teacher and principal voices. Data are used mainly to
identify problems and monitor compliance. In contrast, the organizational learning
culture focuses on student learning and instructional improvement, is long-term in scope,
and includes teachers' and principals' voices. Data are used to identify and diagnose
problems. The differences in cultures reflect whether districts are engaged in meaningful
continuous improvement efforts or are merely chasing numbers to avoid sanctions.
At the school level, studies comparing responses to accountability mandates
found that teachers in high-performing schools perceived data use as empowering
(Diamond & Spillane, 2004; Herman & Gribbons, 2001). In contrast, teachers in a
more diverse, low-performing school viewed data as demoralizing. For the low-performing
schools, data were used mainly to comply with accountability demands without focusing
on efforts to improve learning (Diamond & Spillane, 2004; Diamond & Cooper, 2007).
Because NCLB is premised on a system of high-stakes pressures and rewards, educators
often have ethical and professional concerns about how data will be used to evaluate not
only student performance but also their own. Despite the accountability system's effort to
promote data use for school improvement, teachers saw data as threats to their teacher
identity and sense of self-efficacy. Although it can be argued that teachers should be
judged on their students' academic performance, this perception can block the use of data
for continuous improvement in low-performing schools.
Within schools, teachers are also situated in subcultures that shape their responses
to DDDM. Ingram et al. (2004) note that data use is a new cultural value and norm:
In the recent past, teachers could ignore outcome data if they wanted to. They
could isolate themselves in their classrooms and teach their subject, in which case
there is very little organizational learning. The notion that teachers should,
collectively, take responsibility for student outcomes is both recent and
controversial. (p. 1278)
Indeed, pre-existing teacher norms and culture often affect how teachers use data. Young
(2006) conducted four case studies of elementary schools to examine teachers' use of
data. The author found that different grade level teams within the same schools exhibited
different professional norms. In terms of collaboration, groups ranged from "story-swapping"
to joint work with regards to data use. Teams also exhibited differences in
relation to their interaction styles, ranging from "team discord" to "team cohesion." For
example, the author found that a second grade team had relatively high cohesion, but the
content of their collaboration was not based on joint work. Rather, discussions
concentrated on one veteran teacher who offered advice to novice teachers. This is in
contrast to a third grade team at another school where the members jointly worked
together to develop grade-level consistency, shared lessons, and solved problems. Young
presents a heuristic tool for assessing the current norms of teacher subcultures. The two
continua that make up collaboration help researchers examine both the style of
interaction and the content of the interaction. As other research has noted, teachers can
also work cohesively to undermine reforms (Datnow, 1995). Teacher cohesiveness itself
does not lead to a focus on instructional improvement for the sake of student learning.
Additionally, the ways in which teachers collaborate matter. They may simply exchange
"war stories" without sharing lessons, developing common goals and strategies, or
offering a critical examination of one another's practices.
In the institutional context and the framing of school change by NCLB, data are
not simply "facts" but can be utilized for different purposes. According to Weiss (1980),
data and evidence function in multiple roles: they fulfill an instrumental, conceptual, and
symbolic role (cited in Coburn, Honig, & Stein, 2006). First, they can take an
instrumental role whereby evidence is gathered and used to formulate a solution to a
problem. Second, they play a conceptual role by helping decision makers generate new
ideas or generalizations to inform their perception of a problem. Both the instrumental
and conceptual use of evidence contributes to the sense that data aid problem solving and
knowledge building. These are the roles that evidence is expected to play under the
rational theory underlying data-driven decision making. However, evidence also fulfills a
symbolic role whereby it is used to legitimate or sanction decisions. For example, data
can be used to justify pre-existing preferences or political action (Ingram et al., 2004).
Not all symbolic uses of data produce negative consequences. Data are a symbolic and
visual representation of a school‘s strengths and weaknesses. Thus, they can be employed
to celebrate accomplishments or cultivate a sense of unity towards a common goal
(Supovitz & Klein, 2003).
Despite the three roles that data might take, research suggests that teachers see
data being used symbolically rather than instrumentally or conceptually. Because the
emphasis on data is situated within a high-stakes accountability framework, teachers view
the use of data as suspect (Ingram et al., 2004; Marsh et al., 2005; Mason, 2002). Many
studies found that teachers fear that data would be used to justify political or ideological
choices (Datnow et al., 2007; Ingram et al., 2004; Mason, 2002). In a study of secondary
school teachers across the nation, teachers viewed data as a sanctioning tool used to fit
pre-determined decisions rather than information used to shape the decision making
process (Ingram et al., 2004).
Teachers are less likely to be swayed by data if they are used mainly for symbolic
purposes. Teachers (and others) are more likely to use data for substantive and
instrumental goals when data are presented as a tool for improvement rather than as a
vehicle for ideological or political motives (Coburn & Talbert, 2006). Strong leaders, committed to
utilizing data for decision making and knowledgeable about the process, were found to be
essential to ensuring that a positive culture for data use is implemented in the school
(Dembosky et al., 2005; Marsh et al., 2005; Petrides & Nodine, 2005). They lead by
creating an atmosphere where DDDM practices are relevant for instructional decision
making. Schools that created teacher buy-in consciously and strategically focused on
shifting perceptions of data use. To shed negative perceptions of data use, leaders
continuously framed data as a tool for learning and improvement (Datnow et al., 2007).
For example, data were mainly used to assess the effectiveness of a teaching strategy
rather than to evaluate the teacher. By doing so, teachers were able to maintain their
sense of self-efficacy while working to improve their instructional practices. Mason
(2002) further suggests that data use is most effective when it is open, inclusive, and
transparent to all staff. To build such a culture, planning and goal development that
clarifies the purpose and usage of data increases the likelihood that different members of
the school community will buy in to the process.
The presence of professional learning communities and a culture of inquiry seem
to facilitate DDDM (Chen, Heritage, & Lee, 2005; Lachat & Smith, 2005). Feldman and
Tung (2001) found that teacher inquiry groups help faculty to form a professional culture
and re-assess their assumptions about teaching practices and student learning. More
specifically, participants reported that they had a deeper understanding of the process of
inquiry, were less likely to jump to conclusions or solutions, and were more likely to use
reflective practices by basing their comments on data rather than assumptions. In order to
build effective inquiry groups, however, schools relied on external support providers who
could train a large group of teachers (Datnow et al., 2007; Feldman & Tung, 2001).
The Consequences of DDDM
The research on DDDM suggests that the strategy can positively affect school
improvement and student achievement (Datnow, Park, & Wohlstetter, 2007; Feldman &
Tung, 2001; Hamilton et al., 2007). The use of data by teachers, in particular, was
reported to be beneficial to improving instruction and student learning (Datnow, Park, &
Wohlstetter, 2007; Feldman & Tung, 2001; Light et al., 2005). First, data helped teachers
decide how to pace their instruction, align their lessons to standards, identify lessons for
re-teaching, guide their flexible grouping of students, and target students for intervention
(Datnow, Park, & Wohlstetter, 2007; Dembosky et al., 2005; Marsh, Kerr, Ikemoto,
Darilek, Suttorp, Zimmer, & Barney, 2005). Dembosky and colleagues (2005) found
that teachers, especially in the elementary levels, often used data to differentiate
instruction. Second, data help to set and refine concrete goals (Armstrong & Anthes,
2001; Supovitz & Klein, 2003; Togneri & Anderson, 2003). Data enabled teachers to
pinpoint instructional strengths and weaknesses as well as encourage them to share best
practices (Datnow, Park, & Wohlstetter, 2007). For example, rather than generic
comments such as "your child is reading below grade level," data can be used to steer
conversations to learning goals and targeted benchmarks. Data may also be used to shed
light on a discrepancy between grades and test scores, which might indicate a need to
reexamine grading practices. Third, DDDM practices may foster a culture of inquiry and
work to reinforce school priorities by providing supporting information that aids
communication between teachers, students, parents, and the rest of the school community
(Earl & Katz, 2002; Supovitz & Klein, 2003).
The use of data may also help contest negative tacit beliefs and assumptions about
low-income minority students. In some instances when educators are confronted with
evidence that challenges their views about students‘ abilities, data can act as a potential
catalyst for changing perceptions (Bensimon, 2005; Datnow & Castellano, 2000; Skrla &
Scheurich, 2002). Anthes and Armstrong (2001) indicated that comparisons to high-performing
schools with similar student demographics helped teachers in lower-achieving
schools to stop blaming students' backgrounds for low academic results. Skrla &
Scheurich (2002) suggested that the Texas accountability system's emphasis on
disaggregating student data by subgroups helped to displace, but not eliminate, deficit
views of students. Similarly, Woody‘s (2004) survey of educators‘ view on California‘s
accountability system found that larger data patterns increased teachers' awareness of
inequities in student outcomes. At one high school, Lachat and Smith (2005) found that
reviewing the relationship between student assessment data and student absence data
challenged the belief that student absence led to poor academic performance. When
confronted with data that indicated students with regular attendance also demonstrated
low achievement, teachers were able to engage in conversations on improving both
student engagement and the quality of instruction. Overall, Lachat and Smith's (2005)
study of five low-performing high schools located in high-poverty districts reports that
"when data revealed false assumptions or hunches about specific groups of students, it
became easier to get school staff to recognize the importance of basing decisions on data"
(p. 22).
While there are many reported benefits of data use, a focus on DDDM
may also have detrimental consequences for improving learning for all students. Critics of
DDDM suggest the emphasis on testing data in particular leads to myopic diagnosis and
solutions. Shirley and Hargreaves (2006) argue,
With AYP deadline looming and time running out, teachers have little chance to
consider how best to respond to figures in front of them. They find themselves
instead scrambling to apply instant solutions to all the students in the problematic
cells—extra test-prep, new prescribed programs, or after-school and Saturday
school sessions. There are few considered, professional developments here, just
simplistic solutions driven by the scores and the political pressures behind them.
(pp. 2-3)
As the authors argue, the emphasis on using data to fulfill high-stakes accountability
mandates can lead to a focus on easy solutions over sustained continuous development.
Of the literature that takes a more critical look at data uses and abuses, there is a
consensus that data are tools that can be either helpful or detrimental to improving
schools; thus, context makes all the difference (Booher-Jennings, 2005; Diamond &
Spillane, 2004).
Two major problems have consistently emerged regarding the abuse of data: the targeting of
resources to "bubble kids" (Booher-Jennings, 2005; Diamond & Cooper, 2007; Lachat &
Smith, 2005; Light et al., 2005) and the narrowing of the curriculum (Booher-Jennings,
2005; Diamond & Cooper, 2007). Under what Gillborn and Youdell (2000) coined
"educational triage," schools sort students into non-urgent, suitable-for-treatment, and hopeless
cases in order to target their resources and attention (cited in Booher-Jennings, 2005, p. 241). For
example, one elementary school in Texas devoted its efforts to students hovering near
the proficiency mark (Booher-Jennings, 2005). Students who were identified as being
suitable for treatment (i.e., students who were on the bubble or cusp of making
proficiency) were targeted by teachers for remediation and additional tutoring, while the
"just too low" students (i.e., those with very low proficiency) were considered lost causes. Similarly,
from their observations of teachers' data reflection meetings at four K-8 schools,
Halverson et al. (2005) found that faculty data discussions overwhelmingly centered on
helping students below proficiency levels with few discussions focused on raising
students from proficient to advanced levels. Highlighting this trend, Light et al.'s (2005)
survey results indicated that school-level administrators frequently used data reports to
target "bubble kids" due to pressure from the district. These patterns of focusing on
specific groups of students at the expense of others have important implications for
equity, especially with regards to equal opportunity to learn. The premise of
accountability policies such as NCLB is to ensure that educators have high expectations
for all students. The legislation is intended to raise achievement across the board,
regardless of a student‘s current ability levels. By focusing on the use of data to avoid
sanctions or meet AYP, educators, intentionally or unintentionally, end up subverting the
rationale behind policies such as NCLB.
The narrowing of the curriculum is another negative trend suggested by research
on DDDM (Diamond & Cooper, 2007; Marsh et al., 2007). Accountability policies and
high-stakes testing have been found to direct schools to focus on math and language arts
at the expense of other subjects such as science or social studies (Diamond & Cooper,
2007). Even within targeted subjects such as math and language arts, teachers may
emphasize test preparation (e.g., learning test styles and formats) for state assessments
(Marsh et al., 2007). While such strategies may produce short-term gains on test scores,
long-term student progress may be hindered because test mastery rather than learning
is emphasized.
In sum, empirical studies on DDDM underscore the need to study the
implementation of DDDM in its institutional and organizational context. The emerging
research literature suggests that DDDM practices may positively influence continuous
improvement efforts when the focus is on organizational learning and student
improvement. Studies also offer a cautionary tale of data use when implemented in a
high-stakes accountability context with a focus on compliance. The research literature
affirms that DDDM is not characterized by sporadic examination of test results, but by
systematic and sustained reflection on a wide array of data. When DDDM is a central
part of the school planning and improvement process, it becomes embedded and infused
into the structure and culture of the organization. Practices, policies, and support
structures are the scaffolds that enable effective data synthesis and analysis, while culture
is the framework that provides these elements with meaning. Mutually supportive
structures, policies, and technical capacity framed within a culture of collaborative
inquiry were found to be necessary ingredients to ensuring that DDDM practices become
relevant strategies for improving student learning.
Limitations of the Current Research on DDDM
To date, the research on DDDM is confined to implementation studies that tend to
focus on structural enablers and constraints with little consideration of theory. Further,
analysis has tended to concentrate at the district or elementary and middle school levels.
Table 2.2 lists the empirical research on the implementation of DDDM, categorized by its
level of analysis and focus.
Table 2.2: Empirical Studies on DDDM

Level of Analysis | Focus | Author(s)
Administrators & Teachers | Institutional & sensemaking theories; examines how the organizational context shapes practitioners' conceptions of evidence | Coburn & Talbert (2006)
Administrators & Teachers | Reports attitudes towards the use of data and the impact of the California Accountability System | Woody et al. (2004)
Administrators/Superintendents | Assesses the impact of the state accountability system in Texas | Skrla & Scheurich (2001)
District/System | Examines the implementation of DDDM at charter management organizations & districts; focuses on support conditions | Datnow et al. (2007)
District | Reviews factors that impede DDDM in Southwestern PA districts | Dembosky et al. (2005)
District | Provides model for simple to complex DDDM uses and assesses types of data used by districts | Ikemoto & Marsh (2007)
District | Reviews strategies used to promote data use for instructional improvement in 3 districts | Kerr et al. (2006)
District | Reports system-level reform & uses of data | Togneri & Anderson (2003)
District | Evaluates implementation of DDDM | Wayman, Cho, & Johnston (2007)
Elementary School | Using institutional theory, focuses on how the organizational context shapes teachers' responses to data use | Young (2006)
Elementary School | Institutional theories; examines responses to the Texas Accountability System in urban schools | Booher-Jennings (2005)
Elementary School | Institutional stratification theory; analyzes how the accountability status of schools shapes their responses to data use | Diamond & Spillane (2004); Diamond & Cooper (2007)
Elementary & Middle School | Distributed leadership & organizational learning theories; focuses on how school leaders developed organizational capacity to use data | Halverson et al. (2005)
Elementary & Middle School | Reviews the uses of data in the America's Choice school reform design & implementation challenges | Supovitz & Klein (2003)
Elementary & Middle School | Examines how the presence of data management systems affects teachers' use of data | Wayman & Stringfield (2006)
High School | Organizational theory; highlights the culture of teaching that works against ongoing data use | Ingram et al. (2004)
High School | Highlights lack of technological infrastructure & culture of fear around data use as barriers in urban high schools | Lachat & Smith (2005)
K-12 School | Reviews conditions that facilitate or hinder DDDM | Feldman & Tung (2001)
K-12 School | Recounts lessons learned from implementation of DDDM | Herman & Gribbons (2001)
Technical Support | Examines the influence of data management software, Quality School Portfolio, on school improvement | Chen, Heritage, & Lee (2005)
Technical Support | Reports the attitude towards Grow Reports (a web-based report system) & data use among teachers and administrators | Light et al. (2005)
Technical Support | Examines the implementation of a data software, Quality School Portfolio, & conditions that facilitate or hinder DDDM | Mason (2002)
Technical Support | Overview of data software for schools | Wayman et al. (2004)
The table highlights the lack of research on high schools and DDDM. To date,
only two studies have looked at the implementation of DDDM in high schools, and of
these, neither has examined how the unique organizational structure of high schools
may shape DDDM implementation (Ingram et al., 2004; Lachat & Smith, 2005). While
there might be similarities in how DDDM is perceived and used across a variety of
contexts, research on sensemaking and teacher mediation of reform suggests that
organizational contexts matter. Research on secondary schools further indicates that
implementation of reform is mediated by subject-matter area in which teachers are
located (Grossman & Stodolsky, 1995; Siskin, 1994). That is, within the culture of each
school, different departments such as math and English also manifest subcultures that are
distinct from one another. In fact, Siskin (1994) suggests that math departments across
various school sites are likely to share more features in common than math and
English departments located in the same school. In addition, subject-matter areas
influence teachers‘ attitudes and orientation towards instructional practices. Unlike
elementary school teachers who tend to have a more holistic approach towards student
learning and development, high school teachers mainly identify themselves as "subject-matter
specialists" whose emphasis is teaching in a particular content area, rather than
attending to the needs of "the whole child" (Grossman & Stodolsky, 1995; Siskin, 1991).
Varied curricular programs or tracks such as Advanced Placement, Honors, and English
Language Development add another dimension of complexity. Thus, teachers in high
schools with different pedagogical assumptions and department affiliation may use data
differently from teachers in elementary schools.
Of the handful of studies that incorporate theory in assessing the implementation
of DDDM, researchers have utilized institutional theories (Booher-Jennings, 2005;
Coburn & Talbert, 2006; Young, 2006), organizational learning theories (Halverson et
al., 2005; Ingram, 2006), and institutional stratification theory (Diamond & Cooper,
2007). With the exception of Booher-Jennings (2005) and Diamond and Cooper (2007),
the majority of the theoretical studies begin with the premise that DDDM is a positive
strategy for school improvement. From this assumption, the focus is on how to improve
the implementation of DDDM rather than critiquing it. By doing so, they neglect to
examine the potential negative consequences of the emphasis on DDDM or to understand
how the political context may shape teachers' reactions to DDDM.
These studies identify factors that may facilitate or hinder the use of data but
remain underdeveloped because of their atheoretical nature. As a result, much of the
literature tends to be a list of variables that shape DDDM rather than an examination of
how these variables may work together. Additionally, the process of teachers' sensemaking
of data use and the extent to which it is shaped by the organizational and institutional
context remains largely unexplored. With these limitations in mind, I turn to sociological theories
to illuminate the DDDM implementation process.
Towards a Theory-Driven Understanding of DDDM
Given what is known about the complexity of DDDM implementation, I utilize
literature pertaining to institutional theory, sensemaking, and frame analysis to examine
teachers' perspectives on data use. All of these theories are rooted within the cultural-
sociological tradition. Each theory, however, provides a different lens through which to
view reform implementation. Neo-institutional theory offers a macro framework to
understand how people make meaning while sensemaking offers a micro level
examination of a phenomenon. Theories on collective action framing also add an
important dimension in that they attend to the ways in which sensemaking may be
situated within hierarchical social systems and power. All these theories are linked in that
they emphasize the important role played by context and social interactions. I then
elaborate on the key concepts, assumptions, and limitations of each theory before I move
on to the presentation of my research framework.
Institutional Theory
Rooted in organizational theory, neo-institutionalists have contributed to the
understanding of organizational development by emphasizing how organizations are
shaped by the larger cultural and regulative environment. Institutions are "multifaceted,
durable social structures, made up of symbolic elements, social activities, and material
resources" (Scott, 2001, p. 49). The basic premise of institutional theory is that
organizations are open systems that are shaped by the wider context. Institutional theory
moves away from viewing organizations as filled with individuals with unbounded
rationality who are purely motivated by self-interest (Rowan & Miskel, 1999). Unlike
traditional institutional theories which tend to stress formal structures and rules, neo-
institutionalists view individuals as social actors embedded in cultural-cognitive,
regulative, and normative environments that constrain and enable action (Powell &
DiMaggio, 1991; Scott, 2001). Although social actors continuously interpret the world
around them, they also are supplied with cultural-cognitive frameworks to orient and
guide their decision making (Powell & DiMaggio, 1991; Jepperson, 1991). These
models, schemas, and scripts provide the structure in which individuals or groups select
and process information.
In terms of context, the notion of organizational field is key. The concept of field
helps illuminate how schools, as organizations, are influenced by multiple institutional
pressures and are inter-related with other organizations. An organizational field is held
together by common regulatory, normative, and cultural-cognitive systems (Scott, 2001).
For instance, teachers receive normative messages about good pedagogy from the policy
environment and from their pre-service institutions and professional organizations. In the
field of education, besides schools and governmental agencies, teacher organizations,
researchers, and educational consultants are also included. This list of key actors
underlines the fact that schools are subject to multiple, overlapping, and sometimes
contradictory institutionalizing pressures (Rowan & Miskel, 1999; Friedland & Alford,
1991; Powell & DiMaggio, 1991). Thus, organizational fields are sites of negotiation and
contested forces.
To understand organizational responses to institutional pressures, neo-
institutionalists focus on the logic of a given field. Each organizational field, such as
education, has a central logic "which constitutes its organizing principles and which is
available to organizations and individuals to elaborate" (Friedland & Alford, 1991, p.
248). The authors go on to define institutional logic as a system of practices and symbolic
constructions embedded within the organizational structure and sector. As mentioned in
chapter 1, the current institutional logic of school reform focuses on system coherence,
alignment, and student outcomes. As such, much of the reform efforts at the
organizational level are likely to be focused on these strategies.
Organizational Responses to Institutional Pressures
With the shifting logic of school reform and increasing expectations on educators,
neo-institutional theories suggest several organizational responses. DiMaggio and Powell
(1983) argue that there are three mechanisms by which schools adopt reform: coercive
pressures, which tend to be associated with regulatory agencies and policymakers;
normative pressures from professional organizations and communities; and mimetic
pressures associated with copying best practices. Meyer and Rowan (1977) found that
organizations responded to institutional pressures by "decoupling" formal structures from
technical activities in order to maintain legitimacy while buffering internal units from
actual change. Despite numerous policy attempts to change schools, the penetration and
sustainability of reform efforts since then remain in question (Rowan & Miskel, 1999).
Neo-institutionalists argue that schools are likely to decouple structure from practice if
there are high symbolic gains from adoption but equally high costs of implementation
(Scott, 2001). With the emphasis on DDDM, it is unsurprising that schools are
increasingly identifying themselves as data-driven; this is a legitimized rhetoric and
method in the contemporary educational reform landscape. Institutional theory predicts
that the combination of coercive and mimetic pressures will lead to such an outcome.
However, neo-institutional theories suggest that changing an organization‘s formal
structure or work practices alone will not lead to transformative changes but will instead
lead to superficial engagement. Consequently, teachers may comply by attending peer
collaboration meetings or participating in data discussions, but this does not guarantee that
they will actually use data to inform decisions.
Yet, organizations do not simply respond to institutional pressures through
symbolic implementation. Schools are likely to respond to institutional pressures such as
accountability in a myriad of ways. Oliver (1991) offers a typology of strategic responses
that extends beyond "decoupling" rhetoric from practice. Organizations may respond by
conforming to pressures, negotiating, defying, or manipulating their field. Building on
Oliver's (1991) typology, Coburn's (2004) study on the implementation of a new reading
policy indicated that teachers responded to institutional pressures in five ways: rejection,
symbolic response, parallel structuring, assimilation, and accommodation. Of the five,
assimilation and accommodation involve important distinctions. When teachers
assimilate policy messages, they are more likely to implement the superficial aspects of
reform by changing the structure of their classes, instructional routines, and materials
without understanding the pedagogical assumptions (Coburn, 2001; Spillane et al., 2002).
In contrast, accommodation occurs when teachers fundamentally alter the underlying
beliefs related to teaching and learning approaches. Coburn (2004) found that teachers
were more likely to respond with accommodation and significant changes to their
practices when they encountered normative pressures. In this instance, normative
pressures were more important in producing transformative changes in teacher practices.
Thus, the mechanisms by which policy messages are carried (i.e., regulative,
normative, and cognitive) and the extent to which they overlap with one another
influence classroom and organizational practices.
Coburn (2004) further argues that the nature of the policy message and how it is
carried to teachers create a structure in which teachers take action. The extent to which
the policy message is congruent with existing beliefs, the intensity in which it is
communicated, its pervasiveness, and the degree to which policy is voluntary affect how
teachers respond. Although there are considerable variations in how teachers respond to
institutional pressures, similar encounters with policy messages tended to produce
comparable teacher responses. Their responses will likely hinge on their organizational
context, their frequency of contact with policy messages, and the intensity of the message.
Overall, institutional theory indicates that organizations are more likely to avoid
conforming to external regulatory requirements than to normative or cultural-cognitive
pressures. This has been borne out by the empirical research on policy
implementation (Coburn, 2004; McLaughlin, 1990; Spillane et al., 2002; Walker, 2004).
In addition to regulations, professional norms and taken-for-granted assumptions about
teaching and learning need to be changed as well. Studies examining implementation of
structural changes such as de-tracking indicate that these taken-for-granted assumptions
about teaching and learning often derail reform efforts. Datnow, Hubbard, and Mehan
(2002) found that teachers who attributed student achievement to innate intelligence
believed that reforms could not solve problems associated with students from
disadvantaged backgrounds. In contrast, teachers who perceived student achievement to
be based on effort believed that success could be nurtured and were more likely to
embrace changes to instruction and curriculum. Other studies also demonstrate how
educators‘ beliefs about student ability and intelligence constrain schools‘ restructuring
efforts aimed at equity (Lipman, 1997; Oakes, Wells, Jones, & Datnow, 1997). Thus,
institutional theory cautions researchers to probe both the underlying assumptions and
behaviors of educators when they implement DDDM. While schools and teachers may
describe themselves as data-driven given the current climate of accountability, they may
actually be maintaining their existing practices or engaging in structural tinkering with little
transformation. However, few studies have been conducted on the relationship between
the shifting institutional context and teachers' work (Rowan & Miskel, 1999; Coburn,
2004).
In sum, institutional theory suggests that researchers probe the rules, norms, and
implicit assumptions of educational organizations to uncover how change does or does
not occur. However, while all organizations within a field are affected by similar
institutional processes, they do not necessarily experience or respond to them in the same ways.
Institutional theory tends to neglect the role of human agency and employs a top-down
approach to studying organizational responses. Thus, institutional theories provide a
partial picture of implementation by showing how external pressures might condition an
organization‘s response. To bring the role of agency into the conversation, I now turn to
sensemaking theory to understand how teachers construct DDDM as a reform strategy.
Sensemaking Theory
In order to gain further insights on how the institutional context shapes teachers'
work, sensemaking theory has been employed to examine how local actors interpret and
enact policy (Spillane et al., 2002; Coburn, 2001). Based in social psychology and
organizational theory, the sensemaking framework presents social actors as complex,
meaning makers who do not merely react to external stimuli but engage in interpretation
in order to act upon their environment (Weick, 1995). Weick et al. (2005) summarize the
process:
Sensemaking involves the ongoing retrospective development of plausible images
that rationalize what people are doing. Viewed as a significant process of
organization, sensemaking unfolds as a sequence in which people concerned with
identity in the social context of other actors engage ongoing circumstances from
which they extract cues and make plausible sense retrospectively, while enacting
more or less order into those circumstances. (p. 409)
In other words, people socially construct their world as they interact with others and their
environment. The process always begins with "chaos" as a person or group attempts to
organize the constant flood of information that surrounds them (Weick et al., 2005). The
first step in organizing begins with attending to certain cues and then bracketing
information from their experiences. This is then followed by a process of labeling and
categorizing in an effort to construct meaning.
More specifically, Weick and colleagues (2005) stress the seven underlying
dimensions of sensemaking. First, sensemaking is grounded in identity construction.
Echoing the postmodern turn on identity, sensemaking is an iterative process in which
images of oneself and others are constantly constructed and reconstructed. Second,
sensemaking is always retrospective in that the process involves the reliance on pre-
existing beliefs and experiences to make sense of a current situation. Third, sensemaking
is a social activity because conversations and narratives are the main modes of the
sensemaking process. Fourth, sensemaking is ongoing without a formal beginning or end.
Fifth, sensemaking relies on enactment theory, the belief that people produce part of their
environment. That is, people have agency because their interpretations shape a situation,
which then influences the trajectory of the future. Sixth, people make meaning by
selecting information (i.e., "extracting cues"). They pay attention to salient cues, linking
them to pre-existing knowledge that helps clarify the meaning of the cue. Finally,
sensemaking is driven by the notion of plausibility (i.e., what is credible or coherent)
rather than accuracy. Thus, sensemaking is a process theory in which perceptions,
interpretations, and actions build on each other (Weber & Glynn, 2006).
Sensemaking theory underscores the complex inter-relationship between meaning
and action. By nature, sensemaking is incremental, fluid, and recursive. The call for
more evidence-based and data-driven decision making is built on the premise that human
beings function in a rational manner, actively making decisions once all facts are
assessed. However, sensemaking precedes and frames decision making; it is the
mediating process through which meaning and action are constructed. The
assumption underlying data use is a rational-scientific one in which educators can engage in
continuous assessment of their practices, which in turn would lead to the maintenance of
effective practices or discontinuation of ineffective ones. While the rational theory of
action sounds relatively straightforward, the reality of implementation belies its simple
logic because the mere presence of data is insufficient to create the impetus to bring
about meaningful change. As fallible beings with "bounded rationality," individuals are
limited in their information processing ability as well as access to time and resources
(House, 1996; March, 1994; Rhine, 1998). Like all decision makers (March, 1994),
teachers interpret and integrate other forms of information into their existing knowledge
base when deciding upon a course of action. Given the multi-faceted nature of the reform
process, it is to be expected that teachers and other school-based practitioners mediate
implementation of DDDM by filtering the policy messages through their pre-existing
beliefs and experiences.
Studies utilizing sensemaking theory have produced critical insights on how
organizational change occurs because the framework provides a lens to understand how human agency
alters environments. This line of inquiry investigates the roles teachers (and other
practitioners) play in interpreting, adapting, or transforming policy (Coburn, 2001;
Spillane, Reiser, & Reimer, 2002). Because sensemaking occurs in a social context and is
shaped by interactions at various levels and groups, there can be different interpretations
of the same message even within a single organization such as a school. Siskin's (1994)
study of secondary schools indicates that teachers' responses to reform were dependent
on their departmental context and their professional identities. Within the culture of each
school, departments such as math and English manifested sub-cultures that were distinct
from one another. Additionally, subject-matter areas influenced teachers‘ attitudes and
orientation towards instructional practices. In terms of pedagogy and instructional
practices, there was more intra-organizational variability among departments than
inter-organizational variability between schools. More recently, Coburn's (2001) study of elementary schools
implementing a new reading policy found that the interpretation and practical application
of the policy message were mediated by both formal (e.g., grade level) and informal
teacher groupings. As teachers attend to information from the environment, they
construct understandings within the confines of their current cognitive frameworks (e.g.,
knowledge and beliefs) and enact their interpretations in a way that creates new ways of
thinking, relational patterns, and practices.
With sensemaking theory, policy failure is framed as a learning problem
because change agents are asked to implement programs in which they lack experience or
knowledge. Spillane et al. (2002) and Coburn (2001) show how policies often fail to be
implemented not because of a lack of motivation or capacity but due to miscommunication
and misinterpretation arising out of the policy sensemaking process. Given the constraints
and demands on teachers' time, they have limited opportunities or support to engage in
deliberate organizational sensemaking processes. Policies that require deeper cognitive
changes in addition to changes in practice are particularly susceptible to being
misinterpreted. In these cases, teachers and other practitioners are more apt to
mistake new ideas for familiar ones, hindering change altogether. They may also focus
on the superficial elements of implementation, missing deeper relationships between
pedagogical assumptions and practice. Sensemaking theory highlights how policy
implementation is not just a problem of will and organizational structure but also one of
social learning and cognitive capacity. Thus, researchers have suggested that in order to
improve organizational capacity, educational systems need to
restructure learning for teachers (Firestone et al., 1999; Spillane et al., 2002).
Many policies and programs directed at changing
instruction at the classroom level seem to skip teacher learning altogether (Firestone, Fitz,
& Broadfoot, 1999). For instance, several new programs developed around the principles
of constructivist and culturally relevant pedagogy ironically expect teachers to acquire
these new methods and approaches without providing the necessary support systems or
learning opportunities (Firestone, Fitz, & Broadfoot, 1999; Spillane et al., 2002).
Teachers are exposed to the programs through sporadic training and then are expected to
incorporate the new knowledge into their classrooms. These new instructional
innovations have failed because they do not simply require teachers to implement new
procedures or structures for learning. They fundamentally require teachers to rethink
their roles as teachers and learners while simultaneously changing how they relate to their
students. Without such learning and support, teachers tend to incorporate superficial
aspects of reform or continue to use their existing practices (Spillane et al., 2002).
―Drive-by‖ training sessions or sporadic professional development (Elmore, 2002) that
do little to address the specific needs of schools and teachers cannot support the
continuous learning that is required for capacity-building (Darling-Hammond &
McLaughlin, 1995). Building a collaborative learning culture is thus an essential condition
for data to be used to improve instructional strategies.
In review, because teachers mediate reform, their beliefs and experiences
influence the implementation of any change efforts, including data-driven decision
making. Whether different sources of data are perceived as valid and relevant
representations of student learning, and whether educators believe that student progress
can be improved through effort, both interact with the process of interpreting data for
school improvement (Diamond & Spillane, 2004; Mason, 2000; Petrides & Nodine, 2005).
Furthermore, differing values and assumptions about the purpose of teachers' roles have
an important impact on how data are viewed and for what purposes. Consequently,
teachers mediate reform by actively and tacitly incorporating their understandings of
what constitutes valid knowledge to inform their practices (Hemsley-Brown & Sharp,
2003; Ingram, Louis, & Schroeder, 2004; Labaree, 2004).
Limitations of Sensemaking Theory
Although sensemaking theory provides an important lens to understand micro-
level changes, it alone provides a partial theory of the policy implementation process.
Reminiscent of theories on symbolic interactionism (e.g., Goffman, 1967), sensemaking
theory overly emphasizes immediate, micro-level interactions at the expense
of institutionalized patterns (Weber & Glynn, 2006). It neglects to examine the degree to
which an organization‘s history, its institutional pressures, existing social structures and
practices might frame the manner in which individuals and groups encounter and
understand policy messages. As institutional theorists suggest, organizations cannot be
understood apart from their larger cultural, social, and political context (Scott, 2001).
Focusing on the immediate process by which groups interpret and enact policy messages
obscures the ways in which the larger context, such as the organizational field, enables or
constrains these processes. Without situating sensemaking in historical time and space,
the theory holds little analytical power to explain how humans, as social actors, make
meaning of their world.
Sensemaking theory also tends to stress cognitive processes and to ignore other
aspects of human relations, including the dynamics of power and ideology (Weick et al.,
2005). Differential access to decision making positions, as well as resources, means that
some social actors have more power to shape social reality. Those in power often have
more opportunities and leverage to regulate behavior by shaping what is valued or
discounted and what is privileged or suppressed (Firestone et al., 1999; Scott, 2001).
Research suggests that one of the most significant factors affecting practice in
education is legislation, because schools adopt policies whether or not they are evidence-
based (Hemsley-Brown & Sharp, 2003). In schools, leaders have strong voices in the
construction of policy messages: they can shape where and how sensemaking happens,
they can frame policy messages and their interpretation, and they can provide material
support (Coburn, 2001; Datnow et al., 2007). The ways in which policymakers and other
decision-makers generate confidence in a policy maintain its legitimacy. The amount
of resources invested in a program signals to teachers and other school personnel the
level of commitment devoted to the reform thereby influencing their behavior. Activities
and attention at multiple levels work together to indicate the legitimacy and the
sustainability of reforms.
Multiple studies also point to the importance of ideology. Teachers are most
likely to support and implement reforms that are ideologically aligned with their pre-
existing beliefs (Datnow & Castellano, 2000; Spillane, Reiser, & Reimer, 2002; Coburn,
2002). In her study of a school attempting to restructure through increased teacher
participation, Lipman (1997) found that without critical dialogue offering alternative
views, the dominant deficit-model construction of students' capabilities was reinforced
and reproduced. Oakes and colleagues (1997) reached a similar conclusion in their study
of high school de-tracking. In sum, the complexities of any given school context,
including existing capacities, culture, ideology, and power, have consistently shaped the
implementation process of any new education program or policy (Firestone, Fitz, &
Broadfoot, 1999; Fullan & Pomfret, 1977; McLaughlin, 1990). Thus, these factors need
to be considered when examining how teachers make sense of and enact DDDM.
Bridging the Macro and Micro
Both sensemaking and institutional theories have offered important insights in
policy implementation research. However, one remains micro-level in nature while the
other remains macro-level. The micro-level focus of sensemaking theory tends to project
the notion that because meaning making occurs at the individual level, change can be
idiosyncratic and that agents have unlimited power to affect their environment. Macro-
level theories such as institutional theory tend to offer a deterministic focus on larger
forces outside of the school system, sometimes neglecting to consider how agency does
matter. Similarly, sensemaking theory needs to be connected to larger contextual forces if it
intends to move beyond micro-interactionism. To ignore one over the other means that
one loses an important aspect of how change occurs in education.
Weber and Glynn (2006) believe that micro-level theory of sensemaking and the
macro-level tendency of institutional theory can be complementary. The issue is not
choosing one or the other but developing theories that build on the connections between
the two. They suggest that researchers focus on the mechanisms that connect
sensemaking to institutions. First, there needs to be an examination of how the
institutional context primes, edits, or triggers sensemaking (Weber & Glynn, 2006). They
recommend that researchers investigate the ways in which the institutional context enters
into the sensemaking process and focus on these three connections: first, how institutions
serve as the building blocks or substance for sensemaking; second, how institutions
dynamically guide and edit action formation; and third, how institutionalized practices
are continually enacted and accomplished in the ongoing sensemaking processes. By
studying the relationships between these dimensions of institutionalization and examining
how they prime, trigger, or edit the sensemaking process, the researcher can begin to
build a theory that examines their interaction (Weber & Glynn, 2006). With this line of
inquiry, researchers come closer to bridging the micro-macro divide by focusing on the
inter-relationships between organizational learning and the larger context. I now turn to
frame analysis to understand how educational organizations may guide teachers‘
sensemaking of DDDM.
Frame Analysis
Scholars in the sociological tradition studying collective action framing offer
useful analytical tools to bridge institutional and sensemaking theories. Frame analysts
focus on the conditions in which people agree on a shared sense of reality and provide
analytical tools to examine how meanings are constructed and how organizations may
prime, trigger, or edit sensemaking. Like sensemaking theory, framing theory assumes
that meaning making is an active and dynamic process. Benford (1997) articulates the
basic assumption of frame analysts:
Meanings are derived (and transformed) via social interaction and are subject to
differential interpretations. Hence, meaning is problematic: it does not spring
from the object of attention into the actor‘s head, because objects have no intrinsic
meaning. Rather, meaning is negotiated, contested, modified, articulated, and
rearticulated. In short, meaning is socially constructed, deconstructed, and
reconstructed (p.410).
Similar to theories on sensemaking and institutional theories, frame analysts concentrate
on how meaning is shaped because they view this as an important driver of action.
Framing functions to organize experiences by giving them meaning, motivating
actors, and coordinating action. However, frame analysts also assume agency on the part
of actors and tend to focus on the strategic ways in which actors construct meaning for
specific purposes. Frames are defined as "schemata of interpretation" that enable
people to make sense of their experiences (Goffman, 1974). The degree to which frames
motivate action or shifts in beliefs is based on the significance and salience of the
frames to the targeted audience. That is, the extent to which frames are relevant to actors
and compatible with existing experiences and narratives determines their
consequences. Frame analysts argue that when frames become credible and salient, they
become compelling enough to motivate groups of people to act or shift their beliefs (i.e.,
frame resonance) (Benford & Snow, 2000). This degree of resonance determines the
extent to which a frame mobilizes actors for a cause, garners support, or demobilizes opponents
and resisters (Benford & Snow, 2000). Generally, frame analysts have analyzed the
process by which large social and political movements garner support for their causes,
such as the civil rights and feminist movements. Framing, however, can also illuminate
how change occurs within organizations such as high schools. I explain in further detail
how I applied frame analysis to understand the implementation of DDDM in my
conceptual framework.
Applied to the implementation of data-driven decision making, Benford and Snow's
(2000) three core framing tasks determine the degree of participant
mobilization (buy-in and implementation). First is diagnostic framing. When involved in
diagnostic framing, framers focus on defining the problem, assigning blame and/or
responsibility, and suggesting attributions (good vs. bad). For example, within the realm of
school reform, accountability policies have identified the achievement gap and low
quality of schooling as a major problem. Responsibility rests mainly on the shoulders of
educators who are expected to maintain a high quality of education. The second framing
task is prognostic framing. This type of framing involves an articulation of how the
problem may be solved. Because the frame is solution-oriented, it articulates the types of
strategies and tactics that may be useful. With NCLB, the prognostic framing has
centered on regular collection and monitoring of student achievement data as well as the
implementation of evidence-based practices and targeted sanctions at under-performing
schools. Finally, the third task is motivating framing in which the rationale for action is
articulated. NCLB articulates the motivation for accountability as the twin goals of equity
and excellence. While the larger institutional context has framed new policies around
equity and accountability, how DDDM is framed at the local level and how this framing
shapes teachers‘ sensemaking is left unexplored. As stated previously, by using frame
analysis, researchers can examine how the larger institutional environment is connected
to teachers‘ work and how they make meaning of new reform strategies such as DDDM.
Summary
In sum, institutional theory, sensemaking theory, and frame analysis are
complementary frameworks that can illuminate how teachers enact DDDM. These
theories rest on similar premises for school reform: that people actively construct
knowledge; learning is an interactive process situated in specific contexts; and how
people make sense of and actively construct interpretations of school improvement
matters. Institutional theory has examined the relationship between organizations and their
larger social and cultural environment to understand the nature of institutional legitimacy,
persistence, and change. Sensemaking theory has contributed to understandings on how
individuals and groups within organizations process, interpret, and act based on their
experiences. Frame analysis provides the tools to analyze how context may structure
meaning-making activities. Table 2.3 presents the major sociological theories reviewed.
Table 2.3: Summary of Major Theories

Institutional Theory
  Level of Analysis: Relationships between organizations and their larger social and cultural environments
  Key Concepts & Assumptions: Legitimacy, persistence, and change in institutions; logic, field, actors, and governance structures
  Limitations: Simplifies the persistence and change process by focusing on a top-down view and ignoring agency; does not acknowledge conflicting institutional pressures; tends to focus on the structure and culture of institutionalization rather than the mechanisms

Sensemaking Theory
  Level of Analysis: Individual and group meaning-making processes
  Key Concepts & Assumptions: Examines how perceptions, interpretations, and actions are dynamically related; attends to the cognitive processes of meaning-making; focuses on variation and patterned responses
  Limitations: Neglects to consider power and authority in shaping sensemaking; limited studies on how sensemaking relates to larger organizational and institutional structures

Frame Analysis
  Level of Analysis: Conditions and processes that shape meaning-making activities
  Key Concepts & Assumptions: Examines how social actors strategically organize meaning-making activities in social and political contexts; key concepts include framing tasks, resonance, credibility, and salience
  Limitations: Largely applied to political and social movements rather than organizations
Alone, each theory provides only a partial view of how urban high school teachers make sense of DDDM.
Incorporating insights from each sociological tradition enables the researcher to develop
a more robust and nuanced portrait of DDDM implementation.
A Theoretical Framework for the Study of DDDM
The research framework that informs this dissertation builds from institutional
theory, frame analysis, and sensemaking. Figure 1 depicts the major analytic domains
suggested by the theories. In this section, I explain how these elements inform the
study's methodology and analysis.
Figure 1: Theoretical Framework
[Figure 1 depicts three interacting analytic domains that shape how high school teachers make sense of and use data for decision making: (1) the school context, that is, the organization and culture of urban high schools; (2) the district framing of DDDM; and (3) sensemaking factors, namely sensemaking opportunities, access to data, membership in subcultures, pre-existing beliefs about students and learning, and capacity for data use.]
First, as suggested by institutional theories, the organization and culture of high
schools within the larger organizational field are likely to shape how teachers respond to
DDDM. The manner in which classes are categorized by departments, the assumption
that high school teachers are content specialists, and the function of high schools as
gateways to work and college are part of the institutional logic that affects all high
schools. As some neo-institutional theorists have argued, though, not all high schools are
likely to respond in the same way to institutional pressures. Each school may have its own
unique set of assumptions about teaching and learning as well as structures that may
mediate how teachers make sense of DDDM. The first domain directs attention to how
the schools within this study fit within the institutional environment.
The second domain focuses on the district framing of DDDM. Much of the
substantive literature on DDDM indicates that districts are taking the lead in making data
use relevant for teachers. In the organizational context, district leaders and principals also
have the power to structure sensemaking opportunities for teachers and to direct
resources to DDDM implementation. Thus, how districts frame the use of data and the
tools they provide are likely to be important in how teachers construct DDDM. The ways
in which district leaders engage in the framing tasks—diagnostic, prognostic, and
motivating—will likely shape the degree to which teachers buy in to DDDM as a
meaningful strategy for decision making.
The third domain indicates the variables that may further shape teachers‘
sensemaking of data use. As suggested by the literature, sensemaking is a process by
which social actors construct policy messages or reform efforts based on their pre-
existing beliefs and capacities. Engagement in activities that enable teachers to discuss
their understandings about data and data use are likely to be important factors in how
teachers use data for decision making. The literature also suggests that teacher
subcultures and the degree of collaboration and joint work are also going to shape the
actual use of data. By considering how these three analytical domains work together, I attempt
to understand how teacher sensemaking of DDDM is influenced by multiple layers of
context while also recognizing that various contexts may play different roles in shaping
the process.
Conclusion
In chapter 1, I proposed a study to explore how teachers make sense of DDDM. In
the first half of the review of the literature, I examined the empirical findings on DDDM
and highlighted the major themes that may shape teachers‘ responses to DDDM. Because
the research base for DDDM implementation is still emerging, much of the work remains
largely descriptive and atheoretical. There is also a dearth of literature on high schools
and how their organizational contexts may influence teachers‘ understanding and
enactment of DDDM. To begin to fill this research gap, I turned to sociological theories
to examine the interaction between structure and agency, employing institutional theory,
sensemaking, and frame analysis to inform my framework. Taken together, the domains
that make up the theoretical framework provide a guideline for data collection and data
analysis. In the next chapter, I detail my rationale for the methodology, review the data
sources, and explain the data analysis process.
Chapter 3:
Methodology
Debates over the value and limitations of different methods have raged amongst
scholars, with some preferring quantitative and others qualitative methodology (Lincoln & Guba,
1985; Merriam, 1998). Although these types of discussions play an important role in
furthering the research field, my purpose in this section is not to review the debates or
argue for one against the other; rather, my point is to clarify my assumptions and how
they shaped the study design. The pragmatic paradigm informs my methodological
position. Pragmatists make epistemological claims on practical grounds and view both
qualitative and quantitative methods as valid strategies of inquiry (Creswell & Plano
Clark, 2007). Researchers from this standpoint have a pluralistic attitude towards
methods and focus on the problem being examined or the outcome being investigated.
Consequently, the starting question for justifying a study design is not what type of data
are more credible or valid, but what method is best suited to address the research
questions.
In the previous chapters, I introduced the purpose of the study and drew upon the
literature in sociology and education to develop a conceptual framework of how urban
high school teachers may make sense of data use. To investigate how urban high school
teachers conceptualize DDDM, I employed case study methods, relying primarily on
qualitative data along with survey data gathered in two high schools. In this chapter, I
provide an overview of the research methodology by describing the rationale for the
study design. I then outline the data sources, analysis techniques, and procedures to
ensure trustworthiness.
The Case Study Tradition and Mixed Methods
Given the emerging knowledge base of DDDM and my intent to build on
sensemaking theories, a case study was the most appropriate method. My choice of
research design was guided by the following sub-questions:
1. What types of data are high school teachers relying upon and for what
purposes?
2. How do the institutional and district contexts shape teachers‘ sensemaking of
data-driven decision making?
3. How do high school teachers‘ sensemaking processes interplay with the
structure (e.g., department affiliation) and culture of the high school? What
types of conditions act as barriers to or facilitators of data use?
I employed a mixed-methods case study using a concurrent nested strategy
because different data sources are likely to address different aspects of the larger research
question. Creswell (2003) summarizes mixed-methods as an approach in which the
researcher makes epistemological claims on pragmatic grounds (i.e., pluralistic, problem-
centered, and outcome-oriented). Methods are strategies of inquiry and the research
project determines which are best suited to help understand the problem to be studied.
When conducting a mixed-methods study, the researcher must account for the ways in
which both the qualitative and quantitative data will be employed. With the concurrent
nested strategy, a project has a predominant method that guides the data collection, with
the secondary method embedded within the study. The secondary method is not
necessarily used to triangulate the findings but addresses a different research problem,
informs a sub-question or seeks information on a different level of analysis (e.g.,
individual vs. organizational). For this study, the primary data are qualitative (interviews,
observations, and document review) with quantitative data (survey) being utilized as a
secondary source. The survey was intended to assess the teachers‘ beliefs about the
usefulness of various types of data—attitudes that were not explicitly addressed by the
interview questions.
To reiterate, my overarching research question sought to understand how teachers
interpret DDDM. I chose qualitative methodology as my primary method for two main
reasons: (1) a desire to investigate the process of sensemaking rather than DDDM
outcomes; and (2) an interest in understanding how teachers construct meaning around
DDDM. Qualitative methods are well-suited to address this focus on interpretation and
meaning-making activities (Merriam, 1998) while survey data provided specific
information on teachers‘ attitudes towards a wide array of data types. Researchers
employing qualitative strategies generally operate under three main assumptions. First,
qualitative research focuses on process, meaning, and understanding (Merriam, 1998, p.
8). Studies, thus, tend to be rich in description and highlight participants‘ perspectives.
Second, because the goal is to understand how a phenomenon unfolds in real-life contexts
rather than controlled environments, researchers conduct fieldwork. This opportunity
allows the researcher to interact with participants, observe behavior, and gain first-hand
knowledge about the context. Third, qualitative methods are intentionally designed to be
flexible. Although researchers may develop theoretical propositions informed by existing
theories and empirical studies, the aim is to allow iterative data collection and analysis
(Creswell, 2003). Since the goal is not to test existing theories or make predictive claims,
this flexibility enables the researcher to consider a multitude of factors that may arise
while embedded in the research setting.
Teachers’ Sensemaking of DDDM as the Case
The case study tradition focuses on developing an in-depth analysis of a single
case or multiple cases. Because the boundaries of the phenomenon may not be clear, the
central task for the researcher is to identify the case. Defined generally as a bounded
system, the case should have a finite quality in terms of time, space, or elements
comprising the case (Stake, 2000; Yin, 2003). In other words, the case is a single entity or
unit to be analyzed. The case can be a program, a process (e.g., sensemaking), an event,
or an individual. Besides the nature of the bounded system, elements of the case include
the historical background, the physical setting, and political, legal, or cultural contexts
(Yin, 2003). Stake (2000) presents two main types of case studies: intrinsic and
instrumental. Intrinsic case studies focus on a particular or unique case and attempt to
build thick descriptions. The purpose of instrumental case studies is to inform theory.
The case for this study is teachers‘ sensemaking of data-driven decision making
and is bounded by teachers‘ interpretations and uses of data. I employed the instrumental
case study method, since my goal was to develop theory on teachers' sensemaking of
data use. Although schools may be implementing other types of reform and these may
interact with teachers‘ uses of data, the unit of study was narrowly focused on how
teachers make sense of DDDM. Thus, the school itself does not constitute the case.
Instead, I examined this phenomenon using a multi-site case design by choosing two
different urban high schools where teachers‘ sensemaking on DDDM was occurring. In
both high schools, staff members identified themselves as being data-driven, thus
ensuring that data use was a salient aspect of their context. In the remaining sections of
the chapter, I detail the methods used for this study including sampling guidelines,
participant selection, summary of data sources, and the data analysis plan.
Purposive Sampling
My research drew on data gathered from a larger multi-site case study of data-
driven decision making in four urban high schools. Led by USC Professor Amanda
Datnow, the study examined how teachers implemented DDDM for
instructional decision making. The goal was to identify common themes and practices as
well as to pinpoint implementation differences.[6]
As a research associate, I was part of the
team that conceptualized and designed the overall study. In addition to the principal
investigator and myself, our research team included three other graduate students who
served as additional research associates. As project manager, I led the development of the
data collection instruments and conducted data collection visits to the schools and district
offices.
[6] The study was funded by NewSchools Venture Fund, Gates, and Hewlett.
Purposive sampling, rather than probability sampling, was used to identify
schools and participants: in other words, sites were chosen intentionally to answer the
research questions and to understand the role of context. Before the data collection
occurred, the research team established a list of criteria for site selection. Previous
research suggests that schools focused on DDDM develop support structures and invest
in resources to facilitate the implementation process (Datnow et al., 2007; Supovitz &
Klein, 2003; Togneri & Anderson, 2003). Subsequently, we chose urban systems and
high schools that had evidence of implementing DDDM to learn from their experiences
with using data. We also sought out schools that showed records of improving
achievement over time. Specifically, we looked at these six main criteria as indicators of
a focus on DDDM:
1. Did they explicitly, either through their vision or mission statement, stress DDDM
as a school improvement strategy?
2. Did the school systems invest in data management systems and were they
available to educators?
3. Did they collect and use various types of data beyond annual state test results
(e.g., presence of interim/benchmark assessments, tracking of other types of
student achievement data, teacher growth, etc.)?
4. Did they invest in professional development around data use?
5. Were they located in an urban area?
6. Did they serve a majority of low-income minority students?
Site Selection
The research team narrowed down the list of possible sites after reviewing school
and system websites across the nation and speaking with experts in the field. The initial
list included 50 schools that were recommended as fitting our criteria. We then conducted
phone interviews with principals and school system leaders to determine their eligibility.
The larger study included four high schools: two district high schools and two charter
management organization (CMO) high schools.[7]
All schools were located in urban areas
serving a majority of low-income minority students. They were located in various regions
of the country, which enabled us to understand the implementation of DDDM in different
geographical contexts. The schools were also studied within their district contexts, as we
were interested in understanding what system level supports needed to be in place to
facilitate data use at the school and classroom levels.
For the purposes of this dissertation study, I chose to focus on the two high
schools from traditional districts because of their structural similarity and established
histories. Adams High School is located in a metropolitan area of Arizona and belongs to
Arlington Unified High School District. The second high school, Mesa is part of Costa
Unified School District, located in an urban area of Orange County, California. Table 3.1
presents a general profile of each school.
[7] Our rationale for including both regular public school districts and charter management organizations was
based upon research suggesting that both types of school systems are engaging in innovative practices in
DDDM and because the funder, NewSchools Venture Fund, was particularly interested in CMOs. Charter
management organizations (CMOs) are networks of charter schools that operate "home offices." They
function similarly to school districts' central offices, providing oversight in accounting, curriculum,
governance, and organization.
Table 3.1: High School Profiles

Profile                         Adams H.S. (Arizona)    Mesa H.S. (California)
Total Student Population        1600                    1800
Student Demographics:
  Latino                        53%                     39%
  White/other non-minority      28%                     7.5%
  Black                         12%                     1.0%
  Asian & Pacific Islander      3.7%                    52%
  Native American               4.0%                    <1%
Free/Reduced Lunch Program      59%                     70%
English Language Learners       10%                     34%
District Student Population     15,000                  49,600
Total Number of Teachers        77                      70

*Data are for the 2007 school year.
Adams High School
Located in the western region of Arizona, Adams High School is a comprehensive
high school belonging to Arlington Unified High School District. Arlington comprises
nine comprehensive high schools and two alternative schools. The current
Superintendent previously served within the district as a teacher and a principal. Besides
the Superintendent, the district is led by the Associate Superintendent for Curriculum
and Instruction, who oversees the 10 curriculum coordinators.
Given the district's relatively small size, the district office and school administrations
jointly lead the schools. At Adams High School, the administrative team consists of
the principal, three assistant principals and seven counseling staff. The school houses 10
academic departments that are led by department chairs. The department chairs primarily
work with the school's administrative team but also report directly to the district office
staff and the curriculum coordinators. Additionally, the school-level leadership body
includes committee members who meet once a month and are responsible for guiding and
monitoring specific school goals. For example, one committee examines the parent
satisfaction and involvement survey to plan any necessary outreach efforts for the
duration of the year. The school employs 77 full-time teachers with 12 belonging to the
Language Arts department, 10 to math, and 9 to science. Of these 77 teachers, 47 percent
have more than 10 years teaching experience, 19 percent have seven to nine years of
experience, 17 percent have four to six years of experience, and 17 percent have less than
three years of experience.
The district has undergone major demographic changes in student enrollment over
the years. In the last decade, the racial and ethnic composition of the district has shifted
from a 75 percent white majority to a split majority between white and Latino students.
Adams is one of the high schools in the district that has a majority of students of color,
with Latino students being the predominant subgroup. Adams' faculty members believe
that all children respond well to a safe learning environment and pride themselves on
creating a nurturing and welcoming atmosphere for all types of students. Although the
campus is located off a major freeway ramp in a crowded section of the city, the school
has a similar layout to Mesa High School. Classrooms are single-floor buildings with
doors opening directly to the outside. Teeming with school spirit, Adams's purple and
white colors decorate the bulletin boards and buildings. In an unusual feature of the
school, snippets of music ranging from Frank Sinatra to Sesame Street to contemporary
pop rock fill the air in lieu of tardy bells.
Adams High School is considered a more diverse and urban school within the
district and is academically on par with the other comprehensive high schools. Based on
its 2006-07 accountability evaluation, the school was recognized as Excelling on the
Arizona LEARNS profile, the highest designation possible. This is an improvement from
the past two years‘ designation of Highly Performing. Similar to California‘s API system,
the state of Arizona has instituted Arizona LEARNS to rank each of its schools. The
accountability system labels schools on a six-tier scale: Excelling, Highly Performing,
Performing Plus, Performing, Underperforming, and Failing to Meet Academic Standards.
The state evaluates high school students using two assessments: the Terra Nova, a norm-
referenced test, and Arizona's Instrument to Measure Standards (AIMS).[8]
However,
Arizona is notably different from California in that evaluations of schools are based on a
variety of indicators beyond student performance on assessments. Each high school is
ranked according to state standardized test results, AYP, dropout rate, and graduation
rate. Although the school consistently met AYP in previous years, Adams High School
failed to do so based on its 2006-07 results and is under warning status for the first time.
The school failed to make AYP because insufficient numbers of enrolled students
were assessed.[9]
[8] The Terra Nova is a norm-referenced test taken by ninth graders in March. The instrument provides a tool
for comparing schools throughout the state of Arizona. The AIMS test is given to all 10th grade students in
the spring of each academic year. It assesses student learning in the areas of reading, writing, and
mathematics. Once a student demonstrates mastery in each area, the student has passed the test and does
not have to re-test. Students who do not demonstrate mastery in each area have opportunities to re-test
twice yearly and are placed in remediation classes that help them to prepare.
[9] AYP is determined by three criteria in Arizona. For high schools, 95 percent of students must be assessed,
students must meet all Annual Measurable Objectives, and the school must demonstrate adequate gains on
graduation rates. According to Arizona Department of Education, Adams met only two of the three criteria.
It met its goals on test objectives and graduation rates but did not have 95 percent of its students tested.
Mesa High School
Mesa High School is part of the Costa Unified School District. The district is
considered quite large, as it is ranked 11th out of 1,000 school districts in California in
terms of size. With its reputation for fiscal responsibility and strong organization, the
district has little staff turnover, which is highlighted by the fact that the majority of the
leadership team has worked in Costa for over two decades. Similar to Arlington, the
district has stable leadership with many staff members having attended the schools as
students and working up to administrative positions. The Superintendent began as a
student teacher in the district before taking on administrative positions and the Assistant
Superintendent of Curriculum and Instruction has been with Costa for over 26 years. For
teachers, the average time working in the district is 11.5 years.
District staff consider Mesa, one of Costa's seven high schools, a leader in
education reform, especially for DDDM implementation. One Mesa High School teacher
jokingly referred to the faculty members as ―snobs‖ because of their reputation as
innovators and change leaders within the district. Described by a teacher as ―immigrant
central,‖ Mesa has served students from pre-dominantly Asian and Latino backgrounds
for over a decade. The majority of students are of Vietnamese and Latino ancestry with
low socio-economic status, which enables Mesa High to qualify for the School-wide Title
I designation. Located in a semi-residential urban fringe area in southern California, the
campus at Mesa has an open, clean, and cheery atmosphere, despite the fact that the school is
filled to capacity. Transitional bungalows have been placed at the back of the campus and
a handful of teachers have to rove between rooms. Mostly, the school consists of multiple
stand-alone, single floor buildings with bright white and blue trim. All classroom doors
open to the outside, with many leading directly to courtyards framed by patches of grass
and dotted with benches and tables.
As in many high schools, administrators and teachers at Mesa High serve in formal
leadership roles. Two main leadership bodies direct the school: the administrative staff
and the leadership team. Together they establish school-level goals and make day-to-day
decisions. The administrative team is composed of the principal, two assistant principals,
four counselors, 10 department chairs representing each academic subject area, and a
Title I Coordinator who oversees professional development. The leadership team is
comprised of teacher representatives from various departments. During the 2007-08
school year, the school employed 70 full-time teachers with an average of 13.5 years of
teaching experience. Of these teachers, 12 belonged to the English department, 10 to the Math
department, and 9 to the Science department.
Academically, Mesa High School ranks as a top school within its state.
California‘s accountability system is built on the Standardized Testing and Reporting
Program (STAR), which measures student learning from grades 2 through 11.[10] Besides
the standards-based assessments named the California Standards Tests (CST), the program
includes a national norm-referenced test (the California Achievement Test, 6th Edition), and
the California High School Exit Examination (CAHSEE). Based on the results of the
STAR program, California assigns each school and district an Academic Performance
Index (API) score—a metric summarizing a school's test results.[11] The purpose of the
API is to measure the academic growth and performance of schools. Each school is
expected to have an annual increase in the API equal to five percent of the difference
between the previous year's score and the statewide goal of 800. Using the API, schools
are then ranked against other schools in the state on a one-to-ten scale. Schools are also
designated from underperforming to highly performing, which determines sanctions or
awards. For 2007, Mesa received an API score of 763 and exceeded its growth target by
seven points. The school has an API School Ranking of eight and a Similar School
Ranking of nine for the 2007-08 school year and has consistently met AYP requirements.
[10] The STAR program was first implemented in 1998.
[11] API is a fundamental part of the Public Schools Accountability Act (PSAA) passed in 1999 and overlaps
with NCLB. The API ranges from 200 to 1000; the state's goal is for all schools to be above 800.
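To make the growth-target rule concrete, it can be written as a simple formula (the figures below are hypothetical and serve only to illustrate the calculation described above, not to reproduce any school's actual scores):

    Target_{t+1} = API_t + 0.05 (800 - API_t)

For example, a school with a prior-year API of 760 would face a growth target of 760 + 0.05 (800 - 760) = 762 the following year; the closer a school's API sits to the statewide goal of 800, the smaller its required annual gain.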
Participant Selection
Once the schools were chosen, the research team contacted each high school
principal and asked for the school‘s participation in the study. Because the project was
designed to examine teachers‘ sensemaking of DDDM and its interaction with the school
structures, the principal was a critical informant who provided us access to the school site
and teachers. We requested permission and time to conduct:
(1) interviews with the principal and any other key administrators who dealt with
DDDM at the site;
(2) individual interviews with 8 to 12 teachers (from two departments) lasting
about 45 minutes each (the particular departments were chosen by the principal
because they were frontrunners in the schools' DDDM efforts);
(3) classroom observations with the same teachers we interviewed;
(4) two focus groups with 6-8 teachers in other departments;
(5) observations of staff meetings, if there happened to be any meetings that dealt
with data (e.g., a grade-level team meeting where teachers might be looking at
assessments); and
(6) administration of surveys on teachers' beliefs about data to the whole teaching staff.
Our data collection focused on the school level, but we also interviewed district
administrators to understand the background and history of DDDM implementation.
Tables 3.2 and 3.3 list the participants from each school. Following the tables, I review the
information garnered from each data source.
Table 3.2: Adams Participant List
Name / Position & Department / Yrs at Site / Yrs Teaching / Data Source
William District Superintendent 30 5 Interview
Laura District Testing Coordinator 16 10 Interview
Carol District Director of Research & Evaluation 2 Interview
Ben Assistant Principal-Operations & Resources <1 15 Interview
Jay Assistant Principal-Student Services 1 11 Interview
Samuel Counselor & Advanced Placement Coordinator 5 5 Interview
Carly Principal 2 16 Interview
Risa Teacher-Career & Technical 7 13 Focus Group
Cecile Teacher-English 1-2 Team Leader 13 13 Interview
Catherine Teacher-English 9 15 Interview
Nate Teacher-English 5 1 Focus Group
Belinda Teacher-English, Title I Coordinator & Tutoring Coordinator for State Assessments 25 27 Interview
Peter Teacher-English 3-4 Team Leader 6 8 Interview
Crystal Teacher-English 10 10 Interview
Carla Teacher-English Department Chair 30 35 Interview
Katya Teacher-Fine Arts 8 8 Focus Group
James Teacher-Fine Arts Department Chair 15 15 Focus Group
Bob Teacher-Math 11 11 Interview
Abe Teacher-Math 15 15 Interview
Darcy Teacher-Math 11 15 Interview
Janice Teacher-Math Department Chair Interview
Martha Teacher-Math 5 7 Interview
Winnie Teacher-P.E. 10 10 Focus Group
Jennie Teacher-Science 4 6 Focus Group
Laney Teacher-Science 2 8 Focus Group
Julian Teacher-Science Department Chair 5 5 Focus Group
Dennis Teacher-Social Studies Department Chair 11 11 Focus Group
Ron Teacher-Social Studies 4 3 Focus Group
Ella Teacher-Social Studies 10 3 Focus Group
Annette Teacher-Social Studies 9 9 Focus Group
Arnold Teacher-Special Education 3 3 Focus Group
Teresa Teacher-Special Education 8 8 Focus Group
Rhina Teacher-Special Education 2 5 Focus Group
Total: 33
Table 3.3: Mesa Participant List
Name / Position & Department / Yrs at Site / Yrs Teaching / Data Source
Gina District Director of 7-12 Instruction Interview
Lynn District Superintendent >25 Interview
Donna Principal 7 Interview
Tina Staff Development Facilitator; Title I and AVID Coordinator; teaches part-time Science 13 13 Interview
Anne Teacher-Art Department Chair 36 36 Focus Group
Deanna Teacher-English 6 11 Interview
Luke Teacher-English; Union Representative 11 11 Interview
Jim Teacher-English 3 4 Interview
Pamela Teacher-English 14 14 Interview
Hayley Teacher-English 11 11 Interview
Nancy Teacher-English, School Leadership Team 6 6 Interview
Cole Teacher-Math 40 Focus Group
Dean Teacher-Math 12 12 Focus Group
Kate Teacher-Math & English Language Development Department Chair 9 9 Focus Group
Jane Teacher-Math, School Leadership Team 5 11 Focus Group
Richard Teacher-Music 22 28 Focus Group
Daniel Teacher-Science 3 7 Interview
Frank Teacher-Science 3 14 Interview
Kimberly Teacher-Science Department Chair 4 10 Interview
Candice Teacher-Science, Coach for Direct Instruction 4 14 Interview
Mason Teacher-Science, School Leadership Team 6 16 Interview
Craig Teacher-Social Studies 28 28 Focus Group
Tim Teacher-Social Studies Peer Coach 25 27 Focus Group
Linda Teacher-Special Education Department Chair 11 11 Focus Group
Lee Teacher-World Language 5 5 Focus Group
Total: 25
Data Sources
The research team collected data from August to December 2007. For each
school, we made two to three multi-day site visits in the fall semester, covering both the
high schools and their corresponding district offices. The central
office and school leadership interviews were primarily conducted on the first site visits,
and the teacher interviews and observations were conducted on the second visits. At each
school, we conducted observations of the school, classrooms, and relevant meetings. We
observed eight to ten classrooms of teachers we had interviewed at each school to
examine how they tied data-driven decision making to their classroom practices. We also
observed "data discussions" among teachers whenever possible so that we could capture
data-informed instructional decision making in action. Finally, we gathered a plethora of
documents at the school and system levels that were pertinent to our study.
Table 3.4: Overview of Data Sources

Data Type                    Data Source(s)              Participants at Adams   Participants at Mesa
Semi-structured Interviews   Teachers & Administrators   18                      15
Focus Group                  Teachers                    12                      10
Observations--Meetings       Teachers                    0                       4
Observations--Classrooms     Teachers & Students         12                      10
Survey                       Teachers                    46 (59.7%)              57 (81.4%)
In-Depth Interviews
The research team interviewed administrators from the central office, including
the superintendent and other personnel in charge of assessment and evaluation. At each
school, we interviewed the principal, often an assistant principal or equivalent, and at
least half of the teachers from the two academic departments that the school identified as
using data in the most cohesive and effective manner. We also interviewed department
chairs where possible. The majority of interviews lasted from 45 minutes to 1.5 hours.
Semi-structured interview protocols were developed around themes with some
probing questions. Interviews focused on the following topics: the types of data used by
teachers, how data influenced instructional decisions, the factors teachers felt were
critical to their success as effective data users, and the challenges of using data
(Appendix A). We chose to employ semi-structured interviews with the informants
because they produce focused data while also allowing space for organic development of
topics based on issues or concerns that are more salient for the participants (Merriam,
1998). I found that using themes as frames of reference, rather than a structured list of
questions, helped to balance focused queries with enabling
participants to direct the flow of conversation. All interviews were audio-taped and
researchers also took notes. At Adams, we conducted 18 in-depth interviews. The
interview participants included three district level administrators, four school level
administrators, six English teachers, and five math teachers. At Mesa, we conducted 15
interviews. These included two district level administrators, two school level
administrators, six English teachers, and five science teachers.
Focus Groups
In addition to the in-depth interviews, two teacher focus groups were held at each
school. Semi-structured protocols were also developed for the focus group sessions
(Appendix B). Each focus group consisted of five to six participants to garner information
about DDDM from teachers in other departments (i.e., outside of those investigated for
the interview portion), giving a broader sense of DDDM in each site. Probes included
questions on the types of data used, how teachers define data, how data affected their
teaching, and how data affected teachers' relationships with their colleagues and their
students. For each focus group, two researchers were present: the first researcher led the
group while the second researcher took observational notes. The focus groups were also
audio-taped and lasted an hour each.
In prior studies, focus groups have been cited as a useful technique to gather
information when interaction among the participants is likely to yield the most useful
information, when participants are familiar with one another, and when single participants
might be reluctant to provide an individual interview (Creswell, 1998). Given that part of
the theory I am using to inform my study builds on sensemaking, it is doubly important to
get a sense of how teachers, as a group, make sense of data. Although this is partially
possible through observations of faculty meetings, focus groups provide an opportunity to
explore the sensemaking process as teachers interact with one another while also giving the
researcher the opportunity to probe participants about their interpretative process.
At Adams, 12 teachers from the social studies, special education, and various
elective departments participated in the focus groups. Each session consisted of six
teachers. At Mesa, 10 teachers from the math, social studies, and elective
departments participated.
Observations and Field Notes
Even before the researcher begins the physical act of examining one‘s
surroundings, observation entails making choices. Whom I study, where I do so, and the
method I choose are intertwined with who I am as the instrument of data collection
(Creswell, 1998). Perhaps for some the difference between recording and producing is
simply one of semantics; however, the word produce captures the notion that observing is
an active form of sensemaking. The verb produce, while also having multiple meanings,
emphasizes the act of creating (e.g., to offer to view or notice, to give being, form or
shape to, to make, to compose, create, or bring out by intellectual or physical effort).
Because it requires the researcher to make decisions about who to study, when, where,
and why, observing is not merely the act of recording events but the process of producing
them. Sanger (1996) points out that what we deem significant becomes significant.
Sanger‘s comment points to the notion that as the instrument of observation, who one is,
the experiences that have shaped an individual, and the ways in which one views the world
influence how one records the world. Observation is not merely about gathering data
but also entails interpreting events and interactions as they are filtered through the lens of
the researcher. However, observing is not a completely subjective and unstructured
activity. As Adler and Adler (1994) note:
What differentiates the observations of social scientists from those of everyday-
life actors is the formers‘ systematic and purposive nature. Social science
researchers study their surroundings regularly and repeatedly, with a curiosity
spurred by theoretical questions about the nature of human action, interaction, and
society. (p. 377)
Observation, in the context of conducting research, is a systematic and purposive
endeavor that is guided by theory, questions, and perspectives.
To that end, two types of observational protocols were developed: one for data-
related meetings and one for classrooms (see Appendices C and D). As supplementary
data, the research team collected observations of teachers engaged in data discussions,
professional training on data use, and other structured meetings, as well as classroom
observations of teachers who participated in the in-depth interviews. All of the
observations included descriptions of settings, events, and activities as well as the
researcher's initial reactions and impressions. When observing classrooms and meetings,
the research team also wrote descriptive notes, reflective comments, and sketched
pictures of the environment. These types of detail enabled us to develop thick
descriptions of each context.
Document Review
Document analysis and records provide important contextual and confirmatory
information. Hodder (2000) points out that material culture (e.g., documents) is useful
when "explor[ing] multiple and conflicting voices, and differing and interacting
interpretations" (p. 705). For example, artifacts such as school regulations convey both
official interpretations of accountability policy and their underlying assumptions; such
artifacts can offer up contrasts between official views and the personal interpretations
held by teachers. Material culture can also work as a "probe" for further insights about
tacit cultural norms or beliefs, since participants may have become desensitized to certain
routines or the presence of certain objects. At each site, we collected various documents
related to the implementation of DDDM. These included data discussion protocols,
teachers' analyses and reports, lesson plans using data, and school improvement plans.
Survey
A teacher survey was developed to assess the degree to which teachers find
different types of data useful. The survey instrument was developed using the
Construct Mapping Program and the Rasch Model, which is based on Item Response
Theory (Wilson, 2005). The Rasch Model has three main assumptions. First, the
construct is assumed to have a single dimension. Second, the variable is assumed to be
hierarchical; that is, high levels of the construct should result in high scores on the
instrument. Third, each item is assumed to be independent of all the others (i.e.,
responses to later items do not depend on responses to earlier items).
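For reference, the dichotomous Rasch model underlying this family of instruments
can be written out explicitly. The formulation below is the standard one from the Rasch
literature rather than an equation reproduced from the survey materials; it gives the
probability that person p endorses item i as a function of the person's location on the
construct, theta_p, and the item's difficulty, delta_i:

\[
P(X_{pi} = 1 \mid \theta_p, \delta_i) = \frac{\exp(\theta_p - \delta_i)}{1 + \exp(\theta_p - \delta_i)}
\]

The hierarchy assumption follows directly from this form: the further theta_p sits above
delta_i, the higher the expected score. Items with ordered response categories, such as
the four-category TPDU items described below, are handled by the model's polytomous
(rating scale) extensions.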
Based on the Rasch model, the construct "teachers' perceptions of data
usefulness" (TPDU) was formulated and went through several iterations. The instrument
was designed in a standardized fixed-response format whereby the participant chooses
from a set of responses. All questions for the construct have an outcome space of not
useful, minimally useful, moderately useful, very useful, and not available. Items were
scored from 1-4 (not useful to very useful), with not available responses assigned a score
of zero. Additional questions about the school's context and participant background
information were also included (see Appendix E for the final survey).
The item paneling process and pilot study were critical in determining the face
validity of the TPDU instrument. The panel was comprised of four participants. Two
were faculty members with research backgrounds in data-driven decision making; the
remaining two were Ph.D. students enrolled in a measurement theory course. Participants
were solicited for feedback on the survey items, including the relevance of the types of
questions, the wording of the stems and answer choice options, and the formatting. Based
on their feedback, the survey instrument was revised and then piloted with a group of
doctoral students (n=17) enrolled in a course titled "Evaluation of Education Programs."
Of the 17 students who completed the pilot, 15 were full-time teachers or school-based
administrators working in K-12 settings. The remaining two students were educators
working in other contexts (i.e., colleges). Based on their feedback, the survey was further
refined before being administered to teachers in the schools that are a part of this study.
For the survey administration, I solicited participants from a morning faculty
meeting during our second set of visits to Adams. I introduced myself and the purpose of
the study before asking volunteers to complete the survey. I asked teachers to return the
survey in an envelope left at the main office by the end of our visit. At Mesa, due to
scheduling conflicts, the principal offered to administer the survey during an all-day
professional development session for the staff. Teachers were asked to fill out the survey
during a break and return it to the principal.
Mesa High School survey participant background. Of the total survey
participants (n=63), six responses were excluded: three of the respondents were
counseling staff and the other three did not indicate their title or school role. The
analysis thus comprised 57 respondents. With 57 respondents out of 70 possible, the
response rate was 81.4 percent. The department affiliations of these respondents were:
8 (14 percent) English/Language Arts, 8 (14 percent) Math, 8 (14 percent) Science,
9 (15.8 percent) Social Studies, 3 (5.3 percent) Fine Arts, 2 (3.5 percent) Special
Education, 2 (3.5 percent) ELD, 10 (17.5 percent) other (e.g., P.E. and other electives),
and 7 (12.3 percent) who did not indicate their department.
Adams High School survey participant background. Of the total survey
participants (n=49), three responses were excluded because the participants identified
themselves as non-teaching staff. The analysis thus comprised 46 respondents. With 46
out of 77 total possible respondents, the response rate was 59.7 percent. The department
affiliations of these respondents were: 9 (19.6 percent) Language Arts, 5 (10.9 percent)
Math, 3 (6.5 percent) Science, 6 (13 percent) Social Studies, 3 (6.5 percent) Fine Arts,
3 (6.5 percent) English Language Development, 10 (21.7 percent) other, and 2 (4.3
percent) who did not indicate their department.
Data Analysis Plan
In qualitative research, analysis methods are often iterative. It is helpful to think
about the process as a spiral in which "the researcher engages in a process of moving in
analytic circles rather than using a fixed linear approach" (Creswell, 1998, p. 142). After
each interview, observation, and document review, I jotted down notes in the margins of
the text, summarized the data, and wrote reflective notes, including speculations about
possible themes, relationships, and noteworthy leads for future data collection. These
memos and annotations then became sources for further analysis.
Once all the data were collected, I proceeded according to the case study methods
of Yin (2003) and Miles and Huberman (1994) by following the conceptual framework
presented in chapter two and the research questions that led to the study. Using the
conceptual framework, I developed a preliminary descriptive coding list as an organizing
tool for the data (Appendix F). The codes focused on the major conceptual domains:
district framing of DDDM, school context, teacher background, and sensemaking of
DDDM. These preliminary codes mostly served to identify salient components of
teachers' work context and their perceptions of data. To assist in qualitative analysis and
data management, I used the Hyperresearch 2.0 software, a program that enables data
reduction, coding, and management of interviews and fieldnotes (available through
www.researchware.com).
Once the salient components were identified, I looked for emerging themes and
patterns and examined how and when teachers' perceptions were similar or different. I
then reduced the data within these codes to more refined thematic categories to aid in
within-site and cross-site analyses. After completing a case report for each school, I
conducted cross-site analyses to identify similar patterns as well as convergent ideas on
teachers' perceptions of DDDM. Throughout the process, categories were continually
refined and checked against the conceptual framework (Miles & Huberman, 1994).
With the quantitative data derived from the survey instrument, I analyzed the data
using SPSS. I developed a codebook to facilitate the data inputting process. The survey
was originally intended to be used to examine whether teachers' perceptions of data
usefulness correlated with their departmental affiliation (e.g., did math teachers across
sites have similar beliefs about data usefulness?) and other background information (e.g.,
years of teaching). However, given the low response rates and the missing data on
departmental affiliations and backgrounds, the survey data were instead used
descriptively to provide additional information about teachers' attitudes at each school
site. I also used the survey data to rank the types of data teachers found most useful or
not useful.
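To make the descriptive treatment concrete, the sketch below illustrates the scoring
and ranking logic just described. It is written in Python purely for illustration (the actual
analysis was conducted in SPSS), and the function and variable names are hypothetical
rather than drawn from the study's codebook.

# Illustrative sketch only; the dissertation's analysis used SPSS.
# Response labels follow the TPDU outcome space described earlier;
# function and variable names are hypothetical.

SCORES = {
    "not useful": 1,
    "minimally useful": 2,
    "moderately useful": 3,
    "very useful": 4,
    "not available": 0,  # scored zero, per the scoring scheme above
}

def percent_useful(responses):
    """Share of raters (excluding 'not available') at moderately/very useful."""
    rated = [SCORES[r] for r in responses if SCORES[r] > 0]
    return 100.0 * sum(s >= 3 for s in rated) / len(rated) if rated else 0.0

def rank_data_types(survey):
    """Rank data types by perceived usefulness, most useful first.
    `survey` maps each data type to a list of teachers' responses."""
    return sorted(survey, key=lambda name: percent_useful(survey[name]),
                  reverse=True)

# Example with made-up responses:
demo = {
    "state test, item analysis": ["very useful", "moderately useful", "not useful"],
    "school-wide data": ["minimally useful", "not useful", "not available"],
}
print(rank_data_types(demo))  # the state test item ranks above school-wide data

A percentage-of-useful-ratings calculation of this kind is the logic behind rankings
such as those reported in Tables 4.3 and 4.4 in the next chapter.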
Ethical Considerations:
Criteria for Establishing Research Trustworthiness
Quality control is an important aspect of any research endeavor. Merriam (1998)
states, "All research is concerned with producing valid and reliable knowledge in an
ethical manner" (p. 198). However, validity and reliability may be constructed differently
based on the philosophical assumptions and methodology being employed. In qualitative
research, the researcher is the main instrument of data collection and thus mediates the
types of data collected and how they are analyzed. Researcher subjectivity, assumptions,
and biases need to be acknowledged and reflected upon to ensure the trustworthiness of
the results. Throughout data collection and analysis, I embedded safeguards to ensure
that I carried out the research process in an ethical and thoughtful manner. Here it is
useful to employ the terms created by Lincoln and Guba (cited in Creswell, 1998, p. 197)
as criteria for judging the quality of a research study: credibility, transferability,
dependability, and confirmability.
First, since I conducted a study of why and how high school teachers use data for
instructional improvement, the credibility of my account depends on the length of
engagement in the field and on the degree to which I gathered a diverse array of data to
support the claims, assumptions, and descriptions I made. Merely interviewing teachers
would be insufficient; spending time in their classrooms, observing their interactions
(e.g., faculty meetings), and gaining perspectives from other groups (e.g., administrators)
are pivotal in triangulating the data as well as highlighting discrepancies. Since I was
interested in uncovering the beliefs about data held by teachers, the credibility of my
interpretations is strengthened by utilizing multiple sources of data.
Trustworthiness can be further bolstered through triangulation. There are two
major categories of triangulation: source corroboration and method corroboration. Source
corroboration involves comparing accounts across informants. As in a trial, different
witnesses give their accounts of an incident, which can then be checked against one
another. If district officials, school administrators, and teachers relate similar stories
about how a data meeting is structured, this strengthens the credibility of the account.
The second type of triangulation consists of having multiple types of data to support a
conclusion. This requires analysis of interviews, observations, and documents to examine
how the data support or challenge a conclusion. While triangulation does not eliminate
misrepresentation, it lessens the degree to which findings may be misinterpreted and
underscores the necessity of having different pieces of evidence converge to support a
conclusion.
Next, in order to ensure that findings are transferable from the participants to the
researcher and from the researcher to the reader, thick description was necessary. Rather
than simply telling the reader about an incident or event, the researcher's responsibility is
to "show" the setting, actions, and interactions in a manner that helps the audience feel
present. This required that I construct detailed field notes and analyses throughout the
process. Preliminary drafts of the data reports were reviewed by participants (i.e.,
member checking) to mitigate the chances that the research team had made factual
errors, taken actions out of context, or misinterpreted a participant's viewpoint.
Peer review sessions were also employed to establish confirmability and
dependability. One of the dangers in conducting qualitative research is becoming so
immersed within the culture being studied that the larger context within which the culture
is located is neglected. Another danger is that unacknowledged biases, values, and
beliefs can create situations in which the researcher unconsciously ends up focusing on
data that support these orientations. While I agree that all research is both theory- and
value-laden, standards of trustworthiness require that I deal with these biases explicitly
and remain willing to be challenged. Peer reviewers, who played devil's advocate and
challenged the assumptions and interpretations I made along the way, helped ensure that I
justified the decisions I made about methods and analysis. Besides clarifying my own
assumptions and position regarding the study through notes, the research team engaged in
debriefing sessions on each case.
Audit trails are also useful in establishing the trustworthiness of a research
study. Maintaining a "chain of evidence" increases the reliability of the information in a
case study (Yin, 2003, p. 105). By setting up a database with the study design, data
sources, and detailed plans for data analysis, I enable an external evaluator to trace the
conclusions presented in this study from the evidence to the protocols to the research
questions.
Finally, the research enterprise should never supersede the safety, rights, or needs
of study participants. Since the data collected for this investigation were drawn from a
larger study on the implementation of DDDM in high schools, the research design and
protocols were reviewed and approved by the Institutional Review Board (IRB) at the
University of Southern California, which ensured that investigators protected the rights of
research participants. During the study, each participant was asked to review the consent
form and then fill out the permission agreement. Participants were also given a copy of
the study information sheet for future reference, and they were reminded during the study
that if they felt uncomfortable discussing specific topics, changed their minds about being
recorded, or decided to opt out of the study altogether, they were free to do so at any
time. All of the participants we approached agreed to participate and none opted out of
the study.
Study Limitations
The use of the case study method presents several limitations. First, the findings
are context specific and intended to build theory. The purpose of case study research is to
generalize to theoretical propositions rather than to populations (Yin, 2003). The study is
intended to build theory on teachers' sensemaking of DDDM and does not presume that
the data presented here represent all iterations of DDDM. Rather, the intent is to
examine how teachers in these specific contexts construct and enact DDDM. Second, as
the main instrument of data collection, researchers have to be conscious of several
pitfalls. One of the dangers in conducting qualitative research is neglecting to consider
factors outside the bounded case that may contribute to shaping the phenomenon being
studied. All researchers have to make choices about which elements of the study to
foreground while leaving other elements in the background (Datnow, Hubbard, &
Mehan, 2002). However, such limitations need to be acknowledged and considered
during analysis. As I outlined in the Ethical Considerations section, some of these
limitations may be mitigated by the precautions I have taken to maintain trustworthiness.
Conclusion
While the literature on DDDM has documented the implementation process,
studies of how high school teachers construct and interpret DDDM remain largely
underdeveloped. Thus, high school teachers and their beliefs are privileged in this
dissertation. In this chapter, I outlined the research methodology used to answer the
question of how high school teachers make sense of DDDM. I employed a case study
design primarily utilizing qualitative data, which included interviews, focus groups,
observations, and document review. Additionally, I relied on data gathered from a
teacher survey to rank teachers' beliefs about the usefulness of different types of data.
The next chapter examines teachers' general attitudes and orientations towards DDDM,
with subsequent chapters investigating the influence of the district and school contexts.
Chapter 4:
Teachers’ Sensemaking of Data-Driven Decision Making
The main purpose of this dissertation project was to examine how urban high
school teachers make sense of DDDM. Relying on sensemaking theory, frame analysis,
and institutional theory, I developed a theoretical framework that guided my analysis of
the data. By employing these theories, my intent was to illuminate how different levels of
context may influence teachers' interpretation and enactment of DDDM. In this chapter,
I begin by highlighting general patterns in the attitudes toward DDDM held by teachers
across both high school contexts. I first present a typology of teachers' orientations
towards DDDM, providing examples of each. Then, I discuss how the state and district
accountability contexts shape the prioritization of data types. Finally, I present the four
main criteria that facilitate and hinder teachers' data use.
Teachers’ Orientations towards DDDM
As noted by various scholars, data have multiple dimensions (e.g., evaluative,
descriptive) and can be used to inform different types of decisions (e.g., at the
organizational structure, program, instruction, or student levels) (Earl & Katz, 2006;
Supovitz & Klein, 2003). Some teachers may use data to comply with accountability
demands (Firestone & Gonzalez, 2007), and some may take an inquiry approach
(Feldman & Tung, 2001). Building on these two perspectives and sensemaking theory, I
developed a typology of teachers' orientations towards data use. I define the concepts
specifically by focusing on teachers' motivations for engaging in DDDM and on how
teachers actually used data in their decision making processes. In chapter two, I
discussed the multiple structural, cultural, and technical factors that contribute to the
implementation of DDDM. Sensemaking theory implies that in addition to these
variables, DDDM is mediated by how teachers interpret data use. Despite working under
similar conditions, teachers within a single school may respond differently to DDDM
based on their pre-existing knowledge, beliefs, and experiences. Teachers may view data
not as the instrumental or conceptual tools intended by the principles underlying DDDM,
but as symbolic and political devices meant to produce compliance. How teachers
conceptualize DDDM shapes how data are processed and used for school improvement.
With sensemaking theory and the different roles played by data in mind, I
developed a typology of teachers' approaches to DDDM. The orientations were
developed by assessing three elements of teachers' attitudes: their beliefs about the main
purpose of data use, their motivations for data use, and their expected outcomes of
DDDM. In both high schools, teachers' approaches towards data use fell into four
distinct categories: inquiry-centered orientation, solution-centered orientation,
bureaucratic-centered orientation, and compliance-centered orientation. These categories
are not mutually exclusive, and teachers may respond to data use in all four ways or in
different combinations thereof; generally, however, teachers tended to exhibit one
orientation over the others. In this section, I define each orientation and provide
examples.
Inquiry-Centered Orientation
First, the inquiry-centered orientation refers to a mindset that treats data as a
source for examination leading to an understanding of a concern, issue, or question.
Teachers relying on this approach take an investigative stance with a focus on reflecting
on data. Data's main purpose is conceptual: data help teachers generate new ideas or
inform their perception of a problem. Take, for example, Mason, a science teacher at
Mesa, who started with a student's low test score as a point of reference. Instead of
immediately using the data to change instruction or make assumptions about the
student's ability, the initial data (the benchmark test results) led him to investigate
further why a student might be getting low scores. He explained his thinking process:
When we take our tests-, we can go back to the kids' transcripts from whenever,
which I love to do. I go back to junior high and I go; Dang! You were doing
really well in junior high, what the heck happened to you? You know when the
time comes, like not right now because it's too early, but as I see things going on
like they're smart but they're not doing anything, then I'll bring them up and I'll
just show them on the computer…So I use it like that. I know that oh, that kid
really has some smarts in him, but there's something else going on. That's the
way I really like to use it… You can look and see is it their math skills; is it their
English that's low? Surprisingly I find kids that are really good in science and
horrible in English like I was.
Mason began to gather more data about the student by reviewing previous achievement
data and transcripts to find an explanation for the student's performance. Based on the
discrepancy between classroom observations, test scores, and previous performance
levels, he concluded that the student was able but under-performing. This then led
Mason to speak with the student to garner additional insights and to tease out whether the
problem lay in motivation, lack of basic skills, or lack of content-related skills. Mason
treated data as a conceptual tool, one that enabled him to learn more about his student's
history, strengths, and needs.
Similarly, Darcy, a math teacher at Adams High School, used homework return
rates as one gauge to understand student learning and engagement:
So in that class I'm looking at homework, how's the homework rate turn in? Do I
see a lot of missing homework? If I do then I'm saying ooh is there something
going on here? Why aren't they turning in stuff? Are they not understanding it or
do they have too much going on? So I'm asking them questions; what's going on
with homework?
Darcy did not immediately make assumptions about why students might not be
turning in their homework. Instead, the homework return rate led to questions about
where the problem might lie: was it the classroom instruction or something related to a
student's personal life? This questioning then prompted Darcy to speak to the student to
garner more data about the situation. Overall, with an inquiry-centered orientation,
teachers treat data primarily as a conceptual tool and are motivated by their desire to
improve their instructional practices. Through the DDDM process, these teachers expect
to learn about their instruction or their students.
Solution-Centered Orientation
In the second approach, teachers exhibit a solution-centered orientation. Within
this framework, teachers focus on "fixing" a problem, and data use tends to be
prescriptive: data highlight problems that need to be solved, and teachers develop plans to
address them. In contrast to the inquiry-centered orientation, where an initial
examination of data leads to more questioning and more data collection before the
development of a possible action plan, the solution-centered mindset quickly leads to a
prognosis of a problem and an action plan. The main purpose of data is instrumental,
with teachers treating data as barometers that indicate relative strengths and weaknesses
in instruction and student learning. Candice, a science teacher at Mesa High School,
described the purpose of data this way: "So the data's going to tell me whether they
[students] jumped or not. The teaching strategies are going to give me a variety of ways
to try and get them to jump." For this teacher, data indicated whether students had
reached performance goals. If students did not meet them, her solution required a
readjustment of instructional practices. Two other teachers further illustrate this
framework for using data:
framework for using data:
So I use it to determine strength and weakness in my students and in my own
teaching. Also just to determine are we ready to move on? Do we need to keep
reviewing this skill or is it done? Are we ready to go on to the next skills? So I
use it to determine my scaffolding techniques quite a bit as well.
[Cecile, English, Adams]
It‘s [data] going to tell you who your weak students are, how they‘re weak, who
your strong students are and how they‘re strong. It helps me just from a very
basic level of setting up the tables, setting up my groups for labs, setting up who‘s
going to work on what project together.
[Kimberly, Science, Mesa]
By engaging in DDDM, teachers with the solution-centered orientation viewed data as an
instrumental tool that enabled problem-solving. These teachers believed in the
importance of DDDM as a continuous improvement strategy and treated data as such.
They reflected on their practices based on the data, but little time was spent on making
meaning of the data itself or on triangulating sources of data. Whereas teachers with an
inquiry-centered orientation used data to develop questions and tentative hypotheses to
further refine their perception of a problem, solution-centered teachers emphasized their
action plans. Attention was directed at fixing the problem, not at examining or
understanding the data itself. Solution-centered teachers stressed the importance of using
data, not just learning from data.
Bureaucratic-Centered Orientation
With the bureaucratic-centered orientation, teachers tend to concentrate on
following directions and fulfilling tasks. DDDM is believed to be relevant and
important to improving practice, but teachers handle data use as an administrative
activity. Observations of the English department meeting at Mesa illuminate this
approach. During one data discussion meeting among the three teachers of the 9th grade
English group (Deanna, Anne, and Betty), conversations jumped from one topic to the
other. The main point was to ensure that the parts of the discussion protocol were
completed and all the questions were answered (see Appendix H for a copy of the
protocol). The following vignette demonstrates the bureaucratic-centered orientation to
data use:
English teachers at Mesa convene as a department twice a month from
7:30am to 9am to talk about a variety of issues including student benchmark
results and instructional practices. By 7:45am, teachers break up into their grade
level teams to discuss the results of the district's benchmark exam. The 9th grade
team consists of three teachers: Deanna (the leader of the team), Anne, and Betty.
The teachers jump right into their discussion and share their individual analyses,
which is the first portion of the protocol. At 8am, Deanna moves on to the second
section of the protocol, which focuses on general team-level trends, and asks the
group, "So what was unsuccessful?" Betty replies, "It's hard to tell." "Yeah,"
Deanna adds, "I was just going to say I don't know." She then poses the next
question, "What was the level of cognition?" Betty answers with, "The level of
cognition was pretty low." Deanna notes, "That's also partly the nature of the
tests. It's a multiple choice." Betty begins to talk about the standards and what
they require before reading one standard out loud, "Identify figurative language
but to literally do this it seems like it is higher level of cognition because it asks
more than identifying and understanding." She goes on to add, "In order to do
this, it's more than knowing." Reading again from the protocol, Deanna focuses
on the next question, "Did the strategies align with the level of cognition of the
standards? Hmmm I don't know…well I think what we are doing in class is higher
level than this. I don't know. I hate this. Our class test aligns better than this
test."
Approximately 35 minutes into the discussion, the group is on section G of
the protocol. After reading another question from the protocol aloud, Deanna
announces, "Well we already did this. We said comprehension and use AVID and
SDAIE strategies." (AVID stands for Advancement Via Individual Determination,
a supplementary college preparatory program aimed at helping low-income
minority students; Specially Designed Academic Instruction in English, or SDAIE,
refers to teaching approaches developed to scaffold English language learners
within academic content areas.) Since this portion of the protocol requires
teachers to brainstorm possible teaching strategies, Betty asks, "Do you let your
kids write on the board?" Anne replies, "Not really." Betty says, "My kids love it,
except for the Twilight period." Deanna continues with the protocol and reads,
"The other standard that was low-." At this point, a teacher from another grade
level team
asks Deanna for the assessment calendar. The conversation between the two
remaining teachers drifts to classroom management and general instructional
issues while Deanna talks to the other teacher. Afterwards, Deanna leaves the
room for a bathroom break. Ten minutes later, she returns, and then another
teacher, from the 12th grade team, asks for the curriculum mapping. (Since the
meeting is taking place in Deanna's classroom, she has access to all the
materials.) After handing it over, Deanna returns to her grade team, breaks up the
conversation between the other two teachers, and announces, "Okay for H, not to
be (makes whipping motion and sound) but just to get it done." Betty poses
another question, "So do we just do part D and then turn it in?" Deanna replies,
"In the past, I've turned this in" (pointing to the last section on the instructional
action plan). Anne chimes in with, "Yours is really nice. Mine is messy." Betty
also murmurs that hers is not as well organized. Deanna suggests, "So we'll write
'we'll revise the standards.'"
Betty asks a question about the standards and the difficulty of using them for
planning and assessment. Deanna gets her language arts textbook and shows her
the standards that are listed and embedded throughout each unit. Then Betty asks
another question about the test and how they are going to administer the
benchmark. The conversation continues for a couple of minutes.

Five minutes or so after rejoining her group, Deanna fills in the last part
of the protocol and announces, "Yay, we're done. What time is it…?" Anne
replies, "I think we have until 9am." The group continues to chat about students
and classroom management issues.

For the 9th grade English team at Mesa, instead of authentic engagement in
understanding how data may inform decision making, DDDM consisted of a checklist of
activities that needed to be completed. These teachers generally believed that data use
was relevant and worthwhile. However, attention was paid to superficial aspects of data
use, such as completing data-related tasks (i.e., implementation fidelity). This resulted
from inexperience and a lack of expertise with interpreting and using data rather than
from a lack of motivation to engage in DDDM. As I further examine in chapter six,
sensemaking opportunities for teachers to understand data, as well as the availability of a
data coach to assist in data interpretation, shape the degree to which teachers have the
capacity to move beyond surface-level engagement with DDDM.
Compliance-Centered Orientation
Finally, for teachers with a compliance-centered orientation, the overall use
of data is considered irrelevant to improving their instructional practice or student
learning. Data are viewed as symbolic or political tools used to evaluate teachers.
Teachers with this mindset are mainly motivated to use data by regulative and
normative pressures. For example, when asked about the purpose of benchmark
assessments at her school, Hayley, an English teacher at Mesa, replied that the point was
to hold teachers accountable and to enable comparisons between teachers. During
meetings, she complied with the protocol and engaged in discussion with other teachers
about benchmark analysis. She also used the benchmark assessment results to a certain
degree. However, when asked whether she used data or whether data use was relevant to
her, Hayley explained:
I still go and do my own thing. I just know what I need to focus on or what I need
to change from what-, but I don't look at other people's data and I'm not
interested in other people's data. I'm not interested in the bigger data. I'm not
interested in all these other things and APIs and you know I could care less about
anything like that. So I guess if you were really into it you can answer that
question a little bit more, but I don't look at any-, I'm not interested, sorry.
Hayley went on to say that she believed good teaching required a combination of
passion and experience and that the focus on data did not add much to her instructional
practice. Unlike teachers with the other three orientations, compliance-centered teachers
like Hayley examine data and use them to meet goals set by the district or the state
because it is required, not because they believe that DDDM is an inherently worthwhile
process.
The compliance-centered orientation was also visible at Adams High School.
Belinda, an English teacher, shared that some teachers were primarily motivated by the
merit pay that accompanied school performance data. She explained:
I think within the English department, we have learned, whether we have wanted
to or not, learned to embrace the PBAs, the whole district level-. I know in
previous years there's been some who, you know they don't care how their
students do. I think once it was tied to our performance based pay-, I think once
the money followed the performance, it's sad to say we're here for the money, but
people started thinking this is kind of becoming public and everybody knows how
I did.
Confirming this viewpoint, another English teacher added that the only incentive for him
to use data was the oversight from the school and district. For this teacher, accountability
pressures came from the district and state levels, as well as from other teachers because
of the school-wide merit pay. Teachers with this mindset towards data use viewed
DDDM as another task required by the system, one that did little to help their actual
teaching or student learning. When asked why he used data, he stated:
We had the benefit of having the accountability built into it. Otherwise data by
itself I think is just data and I've got other things to do on my plate. It's like
another e-mail that's sent to me. Cool but I've got other stuff. Being data-driven
works for us because there's accountability built into it.
[Peter, English, Adams High School]
Peter felt a primary sense of responsibility to his colleagues rather than to his students.
He used state assessment and district-required data to monitor student performance and
to target his instructional practices. He went on to explain his use of data:
Honestly, I'm so busy I don't care if they're passing and doing well, it's
just--. We have a phrase, "Above the mark." You have to get a four to
pass in each category so honestly as a--, what is unfortunate is I really
don't care about excelling students. I don't have time to. I'm always
teaching to the middle and just that--, not even the middle but just that
passing mark and bringing kids above the passing mark.
Teachers such as Peter tended to feel burned out, overwhelmed by their professional
duties, and generally expressed negative attitudes about the possibility of helping all
students learn. Given the limited human and material resources and the lack of belief in
data as a useful tool for improvement, regulative and normative pressures became the
overriding focus rather than student learning. Echoing other studies (Booher-Jennings,
2005; Diamond & Spillane, 2004), teachers with a compliance-centered orientation
towards data concentrated on avoiding accountability sanctions by making sure that
targeted students met standardized test goals.
The Possibility of Evolving Orientations
In sum, teachers tended to view data use in four distinct ways. Table 4.1
summarizes the four orientations.
Table 4.1: Teachers' Orientations on DDDM

Orientation | Main Purpose of Data | Motivation for Data Use | Expected Outcome of DDDM
Inquiry-centered | Conceptual | Continuous improvement | Learning
Solution-centered | Instrumental | Continuous improvement | Problem-solving
Bureaucratic-centered | Instrumental | Continuous improvement | Implementation fidelity & completing data-related tasks
Compliance-centered | Symbolic/political | Regulative & normative pressures | Avoiding sanctions
Except for compliance-centered teachers, all perceived data use as important
to improving teaching practices, although the approaches differed in how teachers
defined the purpose of data and the expected outcomes of data use. The inquiry-centered
approach revolved around understanding a problem or an issue, while the solution-
centered approach concentrated on finding a prescription for a problem. Teachers with a
bureaucratic-centered orientation narrowly viewed data use as a series of steps that
needed to be completed, and compliance-centered teachers merely fulfilled accountability
demands.
The organizational context and framing of DDDM shaped the extent to which
teachers bought in to the relevance of data use and influenced how they actually used
data to inform their decision making. Overall, three teachers demonstrated an inquiry-
centered orientation and only two exhibited a compliance-centered orientation; the
majority fell into the solution-centered and bureaucratic-centered orientations. Teachers
at Adams High School tended to have solution-centered approaches to data use, while
teachers at Mesa High School revealed bureaucratic-centered orientations. That teachers
primarily held a solution or bureaucratic orientation to data also harkens back to Shirley
and Hargreaves' (2006) warning that the high-stakes accountability context may lead to
simplistic solutions derived from test scores. However, the inquiry orientation can also be
problematic if all teachers do is investigate a problem with little follow-up or few
proposed solutions to student needs. As Earl and Katz (2006) have suggested, schools
may be at different stages of developing the capacity for data use. Mesa, for example,
only recently made time for common planning and data discussions; it is therefore
understandable that, overall, its model of DDDM is a basic one (Ikemoto & Marsh, 2007).
Adams, in contrast, has been working with data for decades and thus utilized a wider
array of data. As a result, its data use was more sophisticated by comparison.
Some teachers' reflections also indicate that their orientations may shift over
time. One English faculty member, who had been teaching in the district for six years,
described how her view of data use had evolved:
When I came to this district, when I saw oh my God the data, the data, the data,
making sure they [students] did what they were supposed to do was like you have
to become little robots, you know you have to do this, this, this; they were a
means to an end to get that score. Now I can look at them as more like they're not
little robots anymore that they have to do exactly what I want when I want; like a
whip master you know. Now it's like ok you didn't get it today, you can get it
tomorrow; maybe you can get it Wednesday, as long as by the end of the project
or the end of the process you're where you need to be. I can look at each student
now as an individual based on their own improvement based on what I see on my
grades, what I see on their assessments.
[Christy, English, Adams]
Christy initially perceived data with a compliance orientation. Having "good" data was
the goal, and the emphasis was on making sure that her students performed to meet
accountability demands. At first students were a means to an end (i.e., getting better
scores), but now she viewed them as individuals with specific needs. Data use had become
the means rather than the end itself. This teacher's comment suggests that with practice,
and with professional norms that embrace data use and data sharing, teachers find DDDM
more meaningful. Thus, the organizational context may play a pivotal role in the extent to
which teachers buy in to the notion that DDDM is a continuous improvement tool.
The Prioritization of Data Types
In this portion of the chapter, I describe teachers' attitudes towards different types
of data and, relying on survey and interview responses, examine how the institutional and
organizational contexts shape such attitudes. There are multiple types of data used for a
myriad of reasons, and teachers often judge what is useful based on the accountability
system and district context. In particular, the types of data made available and stressed
by the accountability policies affected what types of data were considered most useful.
Table 4.2 lists the different types of data used for school evaluation.
Table 4.2: Sources of Data Used to Evaluate Performance

Source | Mesa High School | Adams High School
State | California Standards Test (CST) | Arizona's Instrument to Measure Standards (AIMS)
State | California Achievement Test, 6th Edition | TerraNova
State | California English Language Development Test (CELDT) | Arizona English Language Learner Assessment (AZELLA)
State | California High School Exit Exam (CAHSEE) | Graduation & Dropout Rates
State | California Alternative Performance Assessment (CAPA) | Measure of Academic Progress
State | Academic Performance Index (API), a formula based on test scores in the STAR program | Arizona Learns Profile, determined by AIMS, graduation rates, dropout rates, and AYP
District | District Quarterly Benchmarks (core subjects only) | District Assessments (all subject areas)
District | Implementation Data (Action Walks; voluntary involvement) | Student Achievement Index (the percentage of students successful on district assessments)
District | Principal's Dashboard (Goal Reports) | Principal's Continuous Improvement Reports (longitudinal data for the past four years)
District | Student Course Enrollment & Completion | Student Enrollment in AP Courses
District | | Graduate Survey (administered to graduates to track their educational status)
District | | Student Participation Surveys (administered to determine student participation in school activities)
District | | Satisfaction Surveys (administered to students, parents, teachers, and administrators)
School | Mid-Quarter Benchmarks | Content Assessments
School | Data Director Reports (generated by the teachers themselves) | Program Improvement Plans (PIPs), developed by each content team
School | Curriculum Embedded Tests | Formative Assessments (dependent on department and content area)
School | Advanced Placement (AP) Test Results | Advanced Placement (AP) Results

Note: State and local education agencies may require the collection of other types of data; the table lists only data used to measure performance and improvement.
California's emphasis on using standardized test scores alone to rank schools
seems to have filtered down to the district and school levels. Currently, Mesa focuses on
benchmark and school-created assessments aligned to the CSTs to determine progress or
areas needing improvement. Although the school and the district do collect other
types of data, the primary evaluation indicators are the standardized assessments. In
contrast, Arizona's ranking system relies on other indicators besides the state's
assessments. Arizona's assessments heavily influence a school's improvement
evaluation, but other factors such as graduation and dropout rates are also considered.
At the same time, the district plays an important role in prioritizing other types of
data. Mesa's district is beginning to gather and examine student placement data to ensure
that students have access to AP courses and are able to meet university requirements.
Proficiency levels on the CELDT are also emphasized at the district level to ensure that
English language learners are making steady progress. This has meant that teachers at
Mesa High School use these types of student background data to differentiate instruction
or make decisions about student enrollment in specific courses. This is all relatively new,
and it is unclear how this type of data will affect evaluations of high schools such as
Mesa. At Adams, the district's incorporation of stakeholder data as an evaluation measure
and its standardization of instructional practices meant that data were considered in many
forms. Teachers at Adams are responsible not only for assessment results but also for
continuous improvement in students' extra-curricular participation, parent involvement,
and formative assessments.
According to survey data at Mesa, teachers believed that their school's focus on
data helped most teachers to search for effective teaching strategies (79 percent) and to
monitor student learning (91 percent). Similarly, at Adams, the majority of teachers
surveyed believed that the school's focus on data helped most teachers to search for
effective teaching strategies (96 percent) and to monitor student learning (66 percent).
During interviews, teachers further explained why data use was helpful:
interviews, teachers further explained why data use was helpful:
I think you have to use data or else you won‘t know how you‘re doing and then
you can‘t get better. How are you going to get better if you don‘t know how you
did?
[Carla, English, Adams]
I just don‘t know how I could figure out where I needed to help them if I didn‘t
use [data]. I mean if I wasn‘t looking back at that stuff, even in a very informal
way, like just looking back at the quiz and saying something‘s going on here. If I
wasn‘t looking back at that data, how would I know what specifically they needed
help on?
[Darcy, Math, Adams]
Overall, most teachers felt that the use of data was beneficial to their growth, although
they also referred to its limitations and challenges (see Appendix G for a full report of the
survey results).
Data Usefulness at Mesa High School
Table 4.3 below shows the top five types of data teachers found most useful for
instructional decision making at Mesa. The analysis indicates that teachers generally
believed that assessments disaggregated by student proficiency levels and by item
analysis, as well as teacher-created tests, were moderately or very useful. Of the 27
different types of data listed, respondents tended to view the state tests, disaggregated by
individual proficiency levels and by item analysis, as useful. Department-created tests
disaggregated by student proficiency levels, individual teacher-created tests, and item
analysis of district-developed tests were also considered useful. However, school-wide
data and data disaggregated by student sub-groups were considered not useful or only
minimally useful.
Table 4.3: Most Useful Data at Mesa High School

Types of Data | Percentage
1. Student performance results on state test(s) disaggregated by individual student proficiency levels | 82.4
2. Student performance results on common department-created tests disaggregated by individual student proficiency levels | 77.2
3. Individual teacher-created assessments | 75.5
4. Student performance results on state test(s) disaggregated by item analysis (e.g., by standards, topic, skills) | 75.4
5. Student performance results on district test(s) disaggregated by item analysis (e.g., by standards, topic, skills) | 73.7
When probed further during interviews, science teachers such as Candice and Daniel also
mentioned lab assignments and teacher-created assessments as important data. When
asked what data she used most often, Candice replied:
Most frequently would be their grades. How they are performing individually in
terms of their test grades and their lab grades because the test directly reflect what
they are thinking, because for my honors class I write my own test. That is going
to reflect to me what exactly they were able to process. In their labs, I have them
do written conclusion. Where they have to pull the lab together, express the
concept, the purpose and kind of re-teach and kind of teach me that they
understand it. They have to all express that. I also use the benchmarks and the
mid quarter exams and the information that is on the data director from previous
years from the other classes.
English teachers such as Hayley and Nancy also mentioned students' language
backgrounds, quizzes, essays, and chapter tests as data used for instructional decision
making. Nancy shared: "We have vocabulary quizzes that I collect. We have chapter
tests that I give. I collect their graphic organizers and grade them and go over them.
Everything we do in here gets looked at by me." However, when asked about the types of
data that were most useful for making decisions about instruction and curriculum, she
added, "I look at their, their benchmarks or their CSTs. Are they below basic or far below
basic and most the majority of kids are basic or below." In general, teachers at Mesa
tended to define data narrowly, solely in terms of state and district test results, and had to
be pushed to consider data other than benchmark assessments. Take, for example, this
snippet of conversation between the interviewer and Deanna, an English teacher:
Interviewer: So it seems like you're definitely looking at the benchmark
assessments and standards in terms of thinking about how to have instruction that
is more effective. Are there other types of data that you also rely on or that's
really helpful for you in terms of lesson planning or thinking about how to
approach your instructional practice?

Deanna: Hum. You know there's the Data Director and the interim test, but we
can talk about our results as a group-, I'm enjoying [that]. That's really where our
biggest push has been in terms of collecting data. It's thorough. I printed up some
stuff for you to look at from last year's thing. Not only does it go by standards,
not only does it go by the school, it goes by the student.
A conversation with science teacher Frank also highlights this trend. When asked
how he used data, the teacher spoke about test results, and the interviewer probed to find
out whether he considered other types of data useful:

Interviewer: I mean when you say data, it sounds like you're talking primarily
about standardized test data or benchmark assessment data, quarterly
assessments; is that all-, yes?

Frank: That is correct.

Interviewer: So when you're thinking data it's not-, I mean would you think
about let's say a student comes to tell you what this ecosystem looks like and how
it works; is that data to you?

Frank: Yes.

Interviewer: It is for you?

Frank: To me it is.

Interviewer: But it's not one of those-?

Frank: It's hard to measure.
Thus, the district and school contexts shaped teachers' conceptualizations of what counted
as data. Because Mesa High School and its district focused primarily on teachers' use of
student performance results from benchmark assessments and state tests, teachers tended
to define data narrowly.
Data Usefulness at Adams High School
In contrast to Mesa, where teachers were situated in an organizational and
accountability context that primarily emphasized student test score data, teachers at
Adams High School tended to have a broader notion of what counted as data. Based on
the results from the survey and interviews, various types of data were rated as useful for
informing instruction. Although only the top six are listed below in Table 4.4, more than
16 different types of data were rated moderately or very useful by an overwhelming
majority (70 percent or higher). Besides assessment results disaggregated by student
proficiency levels and item analysis, lesson plans and school-wide improvement plans
were considered important data for informing instruction. Furthermore, other types of
data, such as student attendance and student retention rates, were also considered
moderately or very useful by at least 70 percent of the teachers surveyed. This may be
partially explained by the fact that the district and state accountability systems require the
collection of these types of information to evaluate school improvement. Additionally,
the district heavily emphasized the development and use of Program Improvement Plans
and ultimately disbursed teacher merit pay based on school-wide improvement.
Table 4.4: Most Useful Data at Adams High School

Type of Data | Percentage
1. Student performance results on common school department-created tests disaggregated by individual student proficiency levels | 91.3
2. Student performance results on district test(s) disaggregated by item analysis (e.g., by standards, topic, skills) | 87
3. Student performance results on state test(s) disaggregated by item analysis (e.g., by standards, topic, skills) | 86.4
4. Lesson plans developed from analysis of student performance data | 82.6
5. School improvement plans developed from analysis of student performance data | 82.6
6. Student attendance rates | 82.6
During interviews, teachers further expanded on the types of data they considered
most useful for instructional decision making. The most salient data affecting
instruction at Adams High School were the formative assessment data gathered on a daily
basis by teachers. As Carla, an English teacher, stated:
I guess monitor and adjust is one of my Madeline Hunter mantras. You know I
look right away and if I see that the students are not engaged in the activity I try to
do anything I can to make sure that they're on task and engaged.
When asked by an interviewer what he considered the most useful data for informing
instruction, Abe, a math teacher, replied:
Well I think the comments from the students, not data that you do from the test
results. I think the comments from the students-, the students are honest. They'll
basically tell you hey I didn't understand what you did, or I'm clueless… Also
during the lessons, I can evaluate my data based on how many questions I get
asked during the lesson. I'm like that was too many questions, am I doing
something wrong? Is there another way I can teach it? I teach one problem
several different ways.
Teachers also mentioned data such as homework return rates, informal course
evaluations, daily assignments, and bell work at the beginning of class as the most
useful types for instructional decision making. Again, this emphasis on gathering
formative assessments and monitoring student engagement was a reflection of the
district's standardization of instructional practices built on Madeline Hunter's work.
Additionally, only the math department had common unit tests that were given
frequently, and the members of this department were likely to mention both their daily
classroom observations and the unit test results as important sources for instructional
planning. Other departments administered district assessments once or twice a year,
making standardized assessments less salient throughout the year. Consequently, the
majority of the teachers listed their daily or weekly formative assessments as important
indicators of student learning and progress.
Criteria for Data Use
The literature on DDDM suggests that a host of structural and cultural
factors affect how and why educators use data. Research tends to focus on structural
and technological limitations (see Table 2.2) as well as teachers' ideological biases
against data use (Ingram et al., 2004). Across both high schools in this study, teachers
indicated similar beliefs about the factors that enabled or hindered DDDM, confirming
much of the existing empirical evidence. However, the main barrier was the lack of
capacity for data interpretation. In determining what types of data were useful and how
they were useful, teachers listed four main criteria: accessibility of data, ease of use for
decision making, the quality of data, and the relevance of data. In the next section, I
discuss the definition of each criterion and present teachers' beliefs about the role each
element plays in their use of data.
Accessibility of Data
Teachers indicated that accessibility of data, particularly those that were
disaggregated by standards and proficiency levels, was an important criterion for data
use. Accessibility of data refers to how easily data are available to teachers. This
accessibility may be facilitated or impeded by factors such as technology and data
formatting. Confirming findings from Wayman and Stringfield (2006), the accessibility
of data encouraged or hindered its uses. At Mesa, for example, having access to prior
student performance data or English proficiency level made differentiating instruction
much easier for teachers such as Hayley. These types of data were available through the
district‘s data system and could be accessed by the teacher at anytime. Deanna, an
English teacher, further remarked:
Data Director‘s pretty user friendly. So just personally, I haven‘t heard of too
many complaints, but I‘m just with the English department and it‘s very
conducive to our subject, this kind of breakdown, so you know we‘re on it. We‘re
enjoying the extra tool.
Most teachers did not feel that data access was a problem at Mesa, where the data system was widely considered user-friendly. The system was also considered
comprehensive in that it allowed teachers to disaggregate student performance data in a
myriad of ways. Considering that teachers have to juggle a multitude of tasks and
responsibilities, data use can be hindered if teachers have to grapple with tasks like
figuring out how to access or format the data.
Paradoxically, the presence of a data management system also highlighted
technological problems. Of the 69 teachers who responded to the open-ended survey
question about challenges to data use, seven cited access issues as a barrier to data use. At Mesa in particular, where the school had an online data warehouse called the
Data Director, problems with logging in to the site, missing student data, and fear of
using technology were listed as barriers. Candice, a science teacher, shared her frustration with trying to access data: "Last year I was struggling logging on. I finally got on and
now this year I‘m having that same concern; so my accessibility to Data Director. So
there are those things-, just the mechanical aspect of it." During observations of
department meetings on data, both the English and science department team members discussed trouble with accessing the Data Director. Thus, minor technological problems minimized the extent to which teachers relied on the data management system despite its being a user-friendly program.
At Adams High School, access to technology was not an immediate concern
because the district had only recently invested in a data system focused on instructional
decision making. The district and the school‘s administrative staff provided teachers with
data reports rather than teachers seeking or accessing them through a system. However,
Christy, an English teacher, mentioned that not all data were readily available. She stated:
I wish that instead of me requesting the AIMS scores-, I understand that there are
privacy issues and I totally respect that, but I do wish sometimes they could make
it available to us in a certain way. I‘m sure it probably is if I looked for it and
hunted for it, I could probably find it online.
While the district collected a wide variety of data, the information was disaggregated at
the school and classroom level rather than the individual student level. Individual student
test results on the state test were not provided by the district, although they were accessible
through the Infinite Campus system. Teachers like Christy desired to use the AIMS
scores, but the task of tracking down the data was a hurdle she was unwilling to tackle.
Teachers only utilized data that were readily available to them and preferred access to
their own classroom and student data disaggregated by standards, proficiency levels, or
goals.
Quality of Data
The second criterion for data use is dependent on the degree to which teachers
find a data source valid or reliable. This speaks to the credibility of data. In general,
teachers at Adams were less likely to question the validity of data because the district
encouraged decision making based on a wide array of data. Teachers at Adams were also
less likely to question the validity or reliability of their district assessments, perhaps because all teachers were involved in the development and refinement of the tests with
the aid of the curriculum coordinators and the Director of Research and Evaluation. In
contrast, teachers at Mesa were more likely to challenge the validity of data, especially
district created tests. This was largely a consequence of the district‘s decision to change
its assessments and pacing plans. Both science and English teachers at Mesa High School consistently pointed to the format and content of district-created tests as problematic. They viewed their own internally created tests as more valid and useful because they were aligned to their instructional planning.
Relevance of Data
The third criterion for data use by teachers is whether data are considered salient
for instructional decision making. Teachers' sensemaking of what counted as relevant data was rooted in their pre-existing beliefs about learning and student growth, particularly their attitudes toward what counts as learning. For
these teachers, while assessment data may be deemed informative, data "does not tell the entire picture." Hayley, an English teacher at Mesa, described her belief about DDDM:
I don‘t think it makes any of us better teachers. Like I said, and I‘ll keep saying it
over and over, you‘re not teaching to a bunch of numbers, you‘re teaching to
students, you‘re teaching to human beings and you have to know how to deal with
them and what works for them as individuals. I mean you can look at data all you
want but in the end, you know they‘re just numbers to me. They‘re just numbers.
If anything is going to change it's got to start with the teacher, it's got to start with
the students and how you teach them. If you have a passion to teach, I mean your
passion in what you teach rubs off on the students and they‘ll learn.
Because data had been largely defined in terms of student performance results on district
and state assessments, some teachers such as Frank believed that there was too much
focus on data. To him, data were synonymous with testing and he believed that the type
of learning that he valued could not be measured. He remarked:
I think we feel, as science teachers, there‘s an overemphasis on the data. I feel
like we are testing our kids beyond reason. It leaves very little for a child to just
explore. How do you grade a child when I can send them out to the horticulture
field and ask him or her to go find me an ecosystem and describe an ecosystem
for me? I‘m sorry it‘s not on the test. It‘s not in the bubbles. But it‘s learning.
When student learning was narrowly defined in terms of performance on multiple-choice
tests, results were contested by teachers as "not telling the whole story." The tests were often considered to be geared towards the lower-order thinking skills of Bloom's Taxonomy. Furthermore, teachers weighed emotional and contextual factors (e.g., student test anxiety, difficulty at home) as important information when making judgments about student performance. Teachers viewed observations of student
engagement as well as student attitudes and emotional well-being as vital sources of
information.
Teachers also made sense of what counts as relevant data based on their class
compositions and their students' backgrounds. Hayley, who taught AP English, talked about how collaborating around data did not work for her:
I mean when we have our collaboration meetings, and we‘re looking at our class
results, then we'll look at it and I'll compare my results with another junior
teacher‘s results and see, ok my students scored really high, yours didn‘t score
really high, what do I do to teach them? But it‘s not even a fair comparison
because I teach AP and honors kids. My kids are going to outscore the other
college prep students. So it‘s not really a fair comparison. It seems a little bit
ridiculous for me. Then with the other data, I don‘t deal with ELD students so
you know, I‘ll go onto our Data Director and I‘ll look it up and then just to see
that I don‘t have any ELD students. So I don‘t have to change anything.
Hayley‘s comment suggests how difficult it may be for teachers to collaboratively use
data for the development of best practices when classes are grouped by ability levels.
Ease of Use for Decision Making
The fourth and primary criterion for data utilization hinges on the ease of use for decision making. Teachers' use of data was facilitated or hindered by the degree to which the data lent themselves to easy decision making, as well as the extent to which teachers had the capacity and time to use data. Several teachers remarked that certain types of data enabled them to make instructional decisions easily and thus they were more prone to use those types. Data that were disaggregated by student proficiency levels or
standards enabled teachers to pinpoint which students needed remediation or which areas of the standards needed re-teaching. For example, the math department at Adams found its departmental unit assessments, arranged by competencies (i.e., skills and outcomes), very easy to use because the numbers were considered straightforward and could easily be compared with the target outcome:
It makes me happy to know ok sixty percent of my kids did that on there. I say
well there‘s something I did wrong for that group of kids. Because if sixty
percent of my kids didn‘t get it there‘s something that I didn‘t do with monitoring
or checking or remediation or I didn‘t notice that the kids couldn‘t reduce these
fractions. That‘s how I reflect it too. When I see they do badly I‘m like I need to
make a change when I re-teach.
[Martha, Math, Adams]
Data has made it so you can focus on one thing. Back in the old days like you
took a test to like what you‘re saying, forty questions, and all of the questions
were mixed up. I mean it doesn‘t matter, they can mix all these up too. It made it
easier to grade. It made it easier to figure out where they‘re having a low spot.
[Bob, Math, Adams]
For the same department, however, the state test results were less useful because the format did not lend itself to easy interpretation. Darcy, another math teacher, shared:
Boy, I would have to say sometimes the organization of some of the data that we
get [is lacking]. Sometimes it‘s just so hard to figure out what to do with it that
its like aaahh I‘m just overwhelmed with it, there‘s just no way I can go
anywhere. I would say a lot of that has to do with the AIMS [Arizona‘s
Instrument for Measuring Standards] stuff that we get. A lot of the data that we
get from the state department with AIMS it‘s just like if I wanted to use it would
take me hours to go through it and figure out how to use it.
As Herman and Gribbons (2001) suggested, the organization and formatting of data are important because different types of data displays communicate or highlight different information. Given teachers' lack of capacity or practice with interpreting data, data displays need to be intuitive and self-explanatory.
Similarly, teachers at Mesa remarked that the lack of capacity for data analysis was a major obstacle. The lack of data literacy meant that teachers had to spend much of their own time making sense of the information, and even when time was given, they had difficulty interpreting data because they had little or no training. Nancy, an English teacher, expressed this sentiment:
I‘m not a numbers girl. I would like to use it more but I mean we were looking at
our day to day and we didn‘t even know what some of it meant numbers wise.
How do I better use it in my classroom if I don‘t even understand the numbers
that are being generated for me? I can‘t. It doesn‘t mean anything. None of us in
the tenth grade department today could understand that-, anyway from the interim
one. The district one is much easier.
The science department was not immune to these issues either. Although its teachers did not declare that they were not "numbers people," they referred to a lack of data literacy when it came to interpreting and using data. When asked what hindered data use, Katie, a
science teacher, remarked:
I don‘t think that everybody‘s as comfortable understanding the data. I mean just
like when you have a bunch of kids in a class and you give them data to interpret,
not everybody can do it. So I think just understanding what it means and how to
interpret it. I think most people are pretty good at it once they understand. If they
understand what it means they will use it.
For teachers at both schools, however, having time to manage and interpret the data was considered the major barrier because the organizational structure of large comprehensive high schools makes it difficult for teachers to tailor instruction to each individual student. Unlike teachers at elementary schools, who work with the same group of 25 to 35 children throughout the year, high school teachers typically deal with five class periods filled with 30 to 40 students each. This makes the use of data daunting for teachers, as Jim, an English teacher at Mesa, explained:
The other challenge would be, and it‘s a big one, is that when you have a hundred
and eighty-five students it‘s hard to look at all their data effectively. You can
scan it, but to look and say here‘s where this student was a year ago, here‘s where
the student is now, here‘s where I want the students to be, how can I get them to
keep up leveling or even just growing within that band or whatever? That‘s
difficult when you have-, when your eyes are burning because you‘ve been
looking at data all day.
Lamenting that he was unable to get to know all of his students, Frank, a science teacher, remarked:
Well you know, you just don‘t-, you‘re going to know their first and last name but
you‘ll know very little about them other than she got a C. Ask me who her
boyfriend is, I don‘t know. Ask me if she has a dog? I don‘t know. You know I
didn‘t have time to talk to her. I was too busy talking to the thin edges of the bell
curve. And that‘s a shame.
Taken together, the ease of data use for decision making seems to be hindered by three
interrelated barriers: data overload, data paralysis, and lack of time. Data overload refers
to having too much data (i.e., quantity) and data paralysis refers to the lack of data
literacy (i.e., the skills to interpret or process data). The lack of time refers specifically to insufficient opportunities for teachers to manage and interpret data.
Based on the open-ended survey responses, I have summarized the main barriers
to data use in Table 4.5.
Table 4.5: Barriers to Data Use

Type of Barrier                   Mesa Teachers (n = 36)   Adams Teachers (n = 33)   Total (n = 69)
Accessibility                     6                        1                         7
Quality                           5                        8                         13
Relevance                         5                        5                         10
Other—External Factors [15]       3                        6                         9
Ease of Use:                      24                       19                        43
   Data Overload (management)     7                        3                         10
   Data Paralysis (capacity)      6                        4                         10
   Time                           11                       12                        23

[15] This category included a broad range of responses in which teachers pinpointed external and personal barriers that hindered their data usage. For example, one teacher wrote that there were too many external factors, such as illness and hunger, that make data use difficult. Another teacher noted the emotional challenge to her sense of self-efficacy as a teacher. Both of these examples are pertinent to how teachers make sense of DDDM and are discussed further in subsequent chapters. Since the focus of Table 4.5 and this section is to highlight patterns, I used "Other—External Factors" as a category for responses that were unique in the survey.
The table highlights teachers' belief that the main barriers to data use were time and capacity (data overload and data paralysis). Time loomed large, as teachers believed there was insufficient time to interpret and use data, especially given that high school teachers dealt with large groups of students. Despite the monthly and bi-weekly collaboration times instituted at their schools, serving 180 students per semester made it unlikely that teachers had adequate time to use data to differentiate instruction effectively.
In review, an analysis of both survey and interview data suggests four main reasons why teachers use or do not use data. The criteria of accessibility and ease of use deal with teachers' ability to make sense of and use data for decision making. As sensemaking theorists have
noted, when teachers encounter new reform ideas, they construct their beliefs based on
pre-existing assumptions and capacity (Coburn, 2001; Spillane et al., 2002). With little
data literacy or technological skills, teachers are likely to believe that data use is too
cumbersome. Additionally, teachers‘ beliefs about data use, its relevancy and its
limitations, are also shaped by their pre-existing beliefs about what counts as learning and
teaching.
Conclusion
Teachers are not a monolithic group and as studies on sensemaking (Coburn,
2001; Spillane et al., 2002) and high school reform (Siskin, 1994) suggest, their
responses to DDDM vary depending on their background, experiences, and pre-existing
beliefs. Using examples from the data, I developed a typology of teachers‘ orientation on
DDDM. I noted distinctions amongst teachers who viewed DDDM through an inquiry-centered, solution-centered, bureaucratic-centered, or compliance-centered
orientation. Teachers‘ sensemaking of DDDM was filtered by how they conceptualized
data, their motivations for data use, and the outcomes they expected for using data.
Although teachers representing each orientation category were present in both high schools, the majority of teachers at each school fell into the solution-centered and bureaucratic-centered orientations.
Teachers at both locations suggested that what counts as data is defined by regulative and normative pressures produced at the state and district levels. Specifically, the state and district accountability systems influenced how teachers prioritized data and what counted as data. Teachers at Mesa were more likely to rely mainly on standardized assessment data to inform their decision making because both California and their district evaluated school improvement based on these measurements. In contrast, teachers at Adams were more likely to view a wide array of data types, including state assessment results, formative classroom assessments, and perception data, as useful because their continuous improvement evaluation included these indicators.
I also presented four main criteria that affected how teachers used data.
Confirming other studies on DDDM, these criteria revolved around concerns with the accessibility of data, the validity of data-gathering instruments, the relevance of the data itself, and the ease of using data for decision making. In contrast to studies that highlight how
teachers are resistant to data use because of questions of validity or relevancy (Ingram et
al., 2004), the majority of teachers in this study believed that DDDM was critical to
making improvements in teaching and learning. Of these four criteria, the ease of using data for decision making, which highlighted the need for data literacy and time, was the primary criterion for whether teachers used data. Barriers such as a lack of data literacy, resources, or structural supports meant that despite teachers' willingness or desire to use data to inform their instructional decision making, competing pressures on time and attention made it unlikely that data use occurred.
In this chapter, I focused my analysis on the general patterns of teachers‘
responses to DDDM and highlighted how their sensemaking orientations shaped their
uses of data. I also demonstrated how the state and district contexts influenced the ways in which teachers prioritized different types of data. Throughout, I alluded to the importance
of the district and state in structuring teachers‘ sensemaking of DDDM. In the subsequent
chapters, I delve deeper into the district and school contexts to analyze how structures
and cultures influenced teachers‘ sensemaking processes. In chapter 5, I examine how
leaders at district and school levels strategically framed the meaning of DDDM in order
to persuade teachers of the utility and relevance of using data to inform decision making.
In chapter 6, I focus on sensemaking opportunities at the school level and examine how
departmental subcultures mediate teachers‘ orientation towards DDDM.
Chapter 5:
Power and Ideology in District Framing of Data-Driven Decision Making
In chapter 2, I argued that studies on teachers' sensemaking tend to neglect the
ways in which power and ideology work to shape teachers‘ construction of reform. I
pointed out that differential access to decision making positions and resources means that
some social actors have more power to shape social reality. Leaders, in particular, are in
positions of authority to prime, trigger, or edit sensemaking processes. One way they do
so is by engaging in deliberate framing of DDDM as an essential and useful strategy for
school improvement. The act of framing is primarily a cognitive one whereby social
actors strategically create shared understandings of a problem or a cause in order to
mobilize people toward a common goal (Benford & Snow, 2000; Campbell, 2005).
Frame analysis suggests that the degree to which teachers buy in to the credibility and
salience of DDDM is dependent on the extent to which leaders take up the three core
framing tasks: diagnostic, prognostic, and motivating framing (Benford & Snow, 2000).
Diagnostic framing refers to the act of defining a problem and making attributions for a
social movement. Prognostic framing occurs when leaders lay out strategies to address
the problem or develop solutions for achieving goals. Movement leaders engage in
motivating framing when they provide intellectual and emotional rationale for action.
These three tasks provide the immediate context from which teachers define and use data.
To establish the need for DDDM, district leaders have to identify a common problem or
a cause. To legitimize DDDM, they also have to develop strategies and motivating
rationale to garner support from teachers.
Creating a Shared Construction of DDDM
Beginning with Costa Unified School District, I examine how leaders strategically
engaged in framing tasks to produce a shared conceptualization of DDDM. At Costa,
leaders utilized sensemaking frames that centered on student equity, system alignment,
and collective responsibility. Arlington, which already had a long history of using data,
concentrated on assisting teachers in shifting from a program-level focus to an individual-student focus and on teacher empowerment.
Diagnostic Framing at Costa: Confronting Inequality
For Costa leaders, the diagnostic framing task required an acknowledgement of
the district‘s low academic rigor. As a result, conceptualizing the need for DDDM went
hand in hand with the need to improve student achievement. Costa and Mesa High
School‘s move toward DDDM resulted from a confluence of factors but was propelled
largely by state and federal accountability policies. According to the superintendent and
assistant superintendent, California‘s standards movement and the establishment of a new
accountability system in 1999 presented an opening for the district to plan curricular
alignment. The use of data to inform decisions was certainly not new at this stage:
District leaders had always collected and analyzed data to make decisions about
organizational planning. However, district leaders admitted that they rarely attempted to
look at data in a systematic way or in a collaborative context with teachers and
administrators. By the time the Academic Performance Index (API), Adequate Yearly Progress (AYP), and school improvement designations rolled
out, district leaders consciously decided to view accountability policies as challenges that
needed to be tackled by the district as a whole. Overall, the district leaders believed that
the accountability policies were a positive turn in education reform despite some
reservations about how the system held schools responsible for student results.
While district personnel embraced the accountability system, its leaders grappled
with presenting their diagnosis of the district‘s problem to staff members. Part of this
process required that educators at all levels of the system confront the culture of low
student expectations that pervaded their secondary schools. This necessitated an honest
assessment of their obstacles. The district leadership had to ask why some students failed
if educators believed that all students were capable of achieving high standards. To
highlight this concern, the superintendent shared some startling data with her staff. As
she noted:
If you were an Asian student, and you scored at basic, your chances of being in a
regular college prep A-G course are about 80 percent. If you‘re a Hispanic
student, and your score is in basic, your chances of being in [a college prep course]
were probably less than 40 percent.
Rather than accusing anyone in the system of intentional harm or dereliction of duty, the
focus was on using the data as information to ensure that student placement was more
equal. The superintendent added:
Now, [it‘s] not mean spirited people, not a horrible system designed to
discriminate, but regardless of your intentions, the results were the same. And it‘s
all about expectations and perception, and that‘s where data has become so
valuable, it‘s the leveler to say, if you are a basic or above, you should be put in
A-G classes.
Data highlighting unequal access to college preparation classes were a launching pad for discussions among staff members about their responsibilities and expectations for students. These conversations were considered valuable because staff were able to look at student data and discuss "the big elephant in the room": the groups of students whose needs were not being met by the district.
Around the time the district was focusing on DDDM and implementing
accountability policies, Mesa High School was simultaneously undergoing a pivotal
leadership and cultural shift. While the school was described by teachers and
administrators as having a long history of cultivating a nurturing environment, high
academic and behavioral expectations were not necessarily emphasized. Luke, a veteran
English teacher, described the previous atmosphere:
I saw a lot of kids going through here being encouraged to join the military or to
go to community college because after all they certainly wouldn‘t have the grades
to go to a college or a university, particularly a university right away.
The principal acknowledged that teachers tended to "feel sorry" for the students because
of their backgrounds and this usually translated into low academic expectations.
Furthermore, although the professional environment was described as collegial, teachers
were not known for engaging in instruction-related discussions with peers or promoting
high professional standards.
Rather than seeking blame, Costa's diagnostic framing, which cast the problem as one of low academic expectations, appealed to the idealistic and moral values of educators. Because of the positive social capital that had accrued within the district and between the district and its community, the leaders believed that they had all the necessary tools to become a top-ranking district. The problem was not a lack of compassion or obligation to students. There was already a sense that teachers cared deeply about students' welfare, but high academic standards were often missing, especially for low-income minority children. By engaging in diagnostic framing that appealed to ideals of equity (i.e., equal opportunity to learn), the district clarified the nature of the problem as a need for academic rigor and attempted to create a shared sense of purpose around this issue. At Mesa, this framing was having some success. The school's culture progressed in such a way that several veteran teachers mentioned an increasing emphasis on high academic standards. Over the past several years, the principal noted, "I've seen the staff evolve in that now we see we can also be very caring and challenge our students." However, as I explain in the upcoming section, establishing high academic standards alone does not produce equitable outcomes.
Prognostic Framing: Ensuring Equity
After taking stock of their status concerning student achievement and low
academic expectations, the district leaders and Mesa High School teachers and
administrators had to maneuver strategically to develop solutions to improve student learning. Frame analysis suggests that for reforms to be salient and credible, leaders must pay attention to developing strategies that address problems in a substantive way. In other words, in addition to attending to diagnostic framing, leaders
have to offer a prognosis to solve the problem. For Costa, this prognostic framing
centered on system alignment and the use of data to continually assess strengths,
weaknesses, and needs. Although the accountability system was becoming increasingly
clear about the definitions of success, failure, and growth, the process of continuous
improvement was a black box that schools and educators had to unpack. The district
wanted to establish some sort of criteria that held different groups of stakeholders (i.e.,
district leaders, administrators, teachers, students, and parents) accountable. However,
because it was concerned with sustainability, the district leadership tended to roll out changes slowly and was careful to avoid layering one reform on top of another. Their overall approach was to have a strong vision and "a big idea but starting real small to get there," the superintendent explained. This was viewed as critical to DDDM implementation, as leaders noted that some inequities existed in the API and that it was possible for schools merely to "chase the numbers" without doing what was right for the
students. With an incremental approach to change, they focused on looking beyond test
results and decided to develop multiple measures of accountability.
The strategies laid out through prognostic framing at Costa centered on the
continuous improvement model with an emphasis on using data to inform decisions. With
the possible threat of low-performing status looming, the school leadership at Mesa High
School also decided to implement standards-based instruction and develop benchmarks to
assess their progress towards goals. This movement towards curriculum alignment and
benchmarking coincided with the principal's arrival eight years ago; several teachers credited her with having led the school into data use.
With the district and Mesa High School moving in the same direction, the
strategies for DDDM implementation were condensed into four vital elements. The major
components of their DDDM system included developing district-wide goals, aligning
curriculum to standards and assessments, monitoring student performance with
benchmark assessments, and providing data management support. These components of
the system were considered the building blocks that would enable continuous
improvement efforts. I provide further details below, examining the district‘s approach to
each element.
Developing goals. As part of its strategy for continuous improvement, the
district‘s leadership team decided that having a formalized, district-wide goal would
ensure that everyone had a common purpose. Having goals is an important aspect of
prognostic framing because goals direct attention and resources to a shared vision.
District leadership decided early on to focus on developing reasonable, objectively
measurable goals because they believed that generalized goals such as "We want all students to become lifelong learners" would not enable the district to assess whether or not the goals were being met. During this process came the realization that the district leadership team was ill-equipped to write strong goals. Deciding to be proactive about
seeking assistance, the district‘s leaders soon thereafter collaborated with an external
professional development provider to help them improve on both DDDM and standards-
based education. With the aid of WestEd, an external partner, the district leadership
underwent a multi-year process of developing and refining its goals. The process took
three years before the goals were converted into printable format that could be shared
with schools and teachers. Despite the length of time it took to finalize their plans, the
goal-setting process was considered invaluable because administrators felt that much of the growth in understanding how to look at data, as well as the plan for the future, arose out of the overall process. Thus, not only was the establishment of the goals
an important tactic to produce a shared sense of purpose within the district but the
process of developing the goals was a pivotal strategy that enabled staff to build the
capacity to achieve their goals.
Emerging out of the goal-setting process was a set of measurable goals and annual
benchmarks closely tied to California's accountability system and NCLB. The district
established two general goals that were intended to encourage progressive improvement
each year. First, all students were expected to make annual progress through the performance bands on CST scores (e.g., if a student was far below basic, she/he would become basic within a year). Within five years of being in the district, all students were expected to reach at least the proficient level, and no student was to drop out of the proficient/advanced levels. Second, all English language learners were expected to progress through the
California English Language Development Test (CELDT) levels each year (e.g., from
beginning to early intermediate). On the one hand, these goals were specific enough to
be tied to the state‘s measures of student progress. On the other hand, the goals were
general to the extent that they applied to all teachers regardless of grade level. These
objectives, in essence, encapsulated the district leadership team‘s desire to create a shared
sense of purpose within all levels of the system.
Costa‘s district-wide goals have remained the same for several years now, while
individual schools have also established their own measures of progress. The basic
district goals remain the focus with schools and teachers adding or refining them
depending on their local context. At Mesa High School, the principal and her leadership
team have developed two main implementation goals on the use of data. First, the goal is
to have teachers share their data and work collaboratively to develop best practices. The
school has recently instituted regular departmental and content team meetings to foster
these professional norms. I return to the discussion of teachers‘ sensemaking of data later
in chapter 6.
The second goal at Mesa High School is that all teachers use both academic
performance and student demographic data to make decisions on differentiating
instruction or classroom placement. At the high school level, there is an increasing focus
on having students meet college entry requirements. This goal is directly tied to the
prognostic framing of equity and equal opportunity to learn. One method was to re-design
courses to prepare all students for college. Instead of basic, college prep, and honors
courses, classes were re-configured to college prep, honors, and Advanced Placement
(AP). Pamela, an English teacher, suggested that restructuring class levels to promote
college acceptance and providing open access to AP classes stopped some teachers from
using deficit views of students to rationalize poor performance scores. Because low-
performing students were dispersed across different types of classes, in theory, teachers
should be able to compare the effectiveness of instructional practices. She elaborated on
this issue and discussed the change in teachers‘ behaviors:
There were a couple of teachers who would say, "Well I always have the low kids so of course I'm going to do worse." But now it's sort of been all of the low kids
have been spread across the board because there aren‘t the Basic courses
anymore, except for one class that‘s a read/write aid, but they don‘t take the test.
They take it within all the same classes in English and in history and in math or
whatever. Now it's more like, "ok but we have the same kids so the scores should be approximately the same right," so not too much animosity or whatever
anymore. Then the people who used to complain they really look dumb, you
know what I mean? So they really stopped.
However, it is important to point out that Pamela taught honors-level courses with students who outperformed those in classes designated as college prep. Thus, while the school may have restructured its courses to promote high academic expectations, teachers did not experience much change in their student composition. Other teachers' comments belied Pamela's belief that class restructuring had led to changed attitudes about students and revealed the difficulty of changing teachers' beliefs about student ability. Nancy, an English teacher in the same grade level as Pamela, shared her frustration about using and sharing data:
To be honest with you we don‘t have any basic classes. We all have basic kids in
our college prep classes so that makes it difficult as well. Unless you‘re dealing
strictly with all honors classes, you‘re going to have your kids that are your far
below basic in your college prep classes. Some of them they just don‘t care.
Another English teacher viewed equity and excellence as competing rather than
complementary goals and had a negative view of class restructuring:
You know when I started teaching we had basic English, college prep and honors.
Now we have AP, honors and college prep. There‘s no more basic English. So
the college prep, you know I mean you have to put college prep in quotes there
because it‘s not really college prep. I mean these kids are probably not even
going to go to college.
For some teachers, differentiating students by ability shaped whether they believed that high academic standards were possible for every student.
Research on de-tracking and other equity-oriented reforms indicates that without
addressing teachers‘ implicit assumptions about student ability levels and their capacity
for learning, structural changes will do very little to actually produce equitable outcomes
(Oakes et al., 1997; Lipman, 1997). In the case of Mesa High School, the re-labeling of
classes amounted to structural tinkering in which the names of the courses changed to indicate high goals (e.g., college prep) but students were still clustered by ability level.
Although the leadership and some teachers bought into the equity framing, pre-existing
beliefs about student ability hindered others‘ capacity to believe that all students were
capable of achieving high standards. Because the re-structuring of classes did not
fundamentally de-track students, data did not challenge teachers‘ implicit deficit frames
of students. Teachers‘ comments suggest that goals of high academic standards and equal
access are likely to be hindered by the structural organization of high schools such as
ability grouping. Thus, while developing common goals was a useful district strategy to mobilize teachers and administrators to focus on student performance, the underlying belief that not all students are capable of meeting high expectations made the use of data somewhat problematic, as some teachers believed that student performance was pre-determined.
Curricular alignment. In addition to establishing measurable district-wide goals,
the foundation of the continuous improvement efforts was to ensure that the curriculum
and assessments were aligned to the state standards. This strategy was tied to the notion
that measuring progress necessitated that teachers and schools be on "the same page."
Knowing that state standards were to be the basis for instruction and should guide
curricular plans, Costa‘s leaders took the stance that materials and instructional practices
may vary but the curriculum and standards were to be "absolutely non-negotiable." At
Mesa High School, teachers had already developed their own curriculum guides. Rather
than forcing Mesa teachers to create completely new curricula, the district leadership
relied on the teachers to help inform the district‘s efforts. When the district started its
own curriculum guide initiative, teachers at Mesa were also brought into the process at all
stages. Teachers were given previews and piloted the guides and instructional materials.
For Mesa High School teachers, this lent credibility to the district‘s work and garnered
additional support. Tina, the Title I Coordinator at Mesa, commented, "So often you're given a test and somebody else chose what the standard was and what the questions should be. I think the teachers really bought into it largely because they got to make some of those decisions." The reliance on Mesa High School teachers further solidified
the image of the school as a frontrunner in reform.
In conjunction with the curriculum guides, Costa created pacing plans for all
academic subjects, derived from the CST and benchmark assessments. Teachers were
expected to cover the materials for each benchmark assessment but did not have to
rigidly follow the plan as long as they met the deadlines. Again, teachers at Mesa were
consulted on the development of the plan.
Monitoring student performance. A large part of the district‘s prognostic framing
of using data as a continuous improvement tool required a method of frequently
evaluating progress. To monitor student performance throughout the year rather than
depend on the end-of-the-year state tests to inform decisions, the district created
benchmark assessments aligned to the curriculum guides and pacing plans.
Benchmarking was considered critical to the DDDM system because it would enable
teachers to make instructional adjustments or provide student remediation throughout the year.
All teachers in core subjects, such as English, math, and science, were required to
administer the quarterly benchmarks except in AP courses, which were exempted.
English teachers were also obligated to administer a writing sample test twice a year. In
deciding how frequently to administer the assessments, the district made a strategic
decision to go with four benchmarks per year rather than six-week interim assessments.
The district‘s rationale was to have quarterly assessments that would keep the district on
track but would also give schools the flexibility to develop their own school or teacher
created assessments.
In addition to the district-required assessments, the teachers at Mesa High School
administered their own department-generated mid-quarter benchmark assessments.
These, too, were given in all core subjects and in some elective courses such as art.
Additionally, teachers frequently used other assessments provided by textbook or
curriculum packages. At the time of this study, Costa was also relying on the PSAT as a source of
data. The leadership required all tenth graders to take the PSAT, paid for by the district,
so that the data could be used for placement into AP classes. As schools become more
sophisticated in their data use and decision making, the district‘s leadership expects open
access or at least expanded opportunities for all students to enroll in AP courses.
Furthermore, because the district was increasingly focusing on making sure that students
were eligible for the state‘s university system, course enrollment was perceived to be an
important tool to monitor student performance.
Due to the frequent changing of assessments and pacing plans, teachers at Mesa
High School questioned the validity of the district-developed tests. Instead, they preferred
their own commonly developed mid-quarter benchmark assessments. The pacing plan
was seen as a bit of a mixed blessing for some teachers, particularly with respect to creating assessments aligned to it. This had important implications for how teachers viewed the validity of the district assessments, since teachers dismissed test scores when they felt rushed to cover materials or failed to teach all the standards. Pamela explained
the dilemma:
We‘ve been so focused on trying to get our tests to align, because they changed
the benchmark standards for us because now what they‘re trying to do is to get all
of the standards in prior to CSTs. So this year the pacing guides are different
from last year. So then the tests don‘t align. So that‘s the frustration. If you have
to revamp your tests every year…you have to start over. That part is frustrating
because then we spend more time creating new tests and not actually getting to
look at them and figure out what our kids learned and what our kids didn‘t learn.
The constant readjustment of the pacing plan and benchmark assessments created a level of dissatisfaction about the validity and reliability of using data to make instructional decisions. The pacing plan, in particular, made it difficult to buy in to the notion that benchmarks indicate actual student learning rather than superficial coverage of the standards. Thus, while common curriculum guides and pacing plans may help teachers "stay on the same page," they can also be counterproductive when constantly readjusted. When benchmark assessments are considered unreliable or invalid indicators
of teaching practices, data derived from these measurements will not be used to inform
instruction.
Managing data. The district believed that the development of common goals,
curriculum, and assessments was an important foundational strategy for building a culture of continuous improvement. There was also an acknowledgement that teachers might not
have the capacity for DDDM. To encourage data use, the district invested in a data
management system to enable quick and easy access to a wide array of data. Named the
Data Director, the data system includes reports from state assessments, benchmarks, and
other teacher-created tests. Testing clerks were responsible for distributing, picking up,
and scanning tests from school sites at the district's scan site, relieving school personnel of some of the administrative burden. Users could then access the data and create a
myriad of disaggregated data reports. Typically, district test results were scanned and
uploaded to the system within 24 to 48 hours to ensure that results were used to inform
decisions in a timely manner. Through Data Director, teachers at Mesa also had access to
students‘ CST scores, CELDT scores, and grades over the past several years. Data
Director held most of the "quantifiable" data on students, but counselors also had access
to students‘ cumulative records and teachers could approach them for that data as well.
To further help manage the data, both the district and schools had personnel
devoted to such tasks. Instead of being experts in statistics or psychometrics, these staff
members tended to be veteran educators who had some background in using data or
working with technology. At the district office, two administrators were dedicated to
maintaining the information management system and running analyses for district and
school staff. The Director of Evaluation and Research, who had been in his current
position for the previous four years, had worked within Costa for over 38 years in various
roles as a teacher, principal, and administrator. His main function was to work with the
Data Director. He did not have any formal background in data or statistics; however, he
was very knowledgeable about Data Director, and his previous experience as an administrator
provided him with sufficient knowledge to make data comprehensible to other staff
members. Another district staff member with extensive technological expertise directly
oversaw the development of benchmarks, training, and logistics associated with the
testing. The district also encouraged all schools to develop their own data teams to
oversee the implementation of data use.
At Mesa High School, two staff members assisted teachers with data. Tina, the
Title I Coordinator and a science teacher, was the school‘s primary professional resource.
She supported all of the special programs on campus and assisted teachers with
implementing instructional strategies and with data use. Tina had spent 10 years as a scientist before becoming a teacher and was comfortable analyzing and interpreting data. Another staff
member, John, who worked in the school‘s library, was the point person for technology
related issues. Members of the leadership team were also considered important resources
for teachers who needed assistance with data.
In sum, the district viewed these four strategies as important elements to help
make sense of DDDM as a continuous improvement tool. The district specifically
concentrated on the process of establishing goals, monitoring progress, aligning
curriculum, and reevaluating performance continuously. These strategies were meant to
build capacity for DDDM and to ensure that continuous improvement became sustainable
within schools. The district encouraged schools to use DDDM practices to make
substantive changes in their curriculum, instruction, and policies in an effort to improve
student learning. Still, much of this was a work in progress as the district further refined its goals and developed increasingly sophisticated approaches to improvement. Overall, the prognostic framing on continuous improvement helped
to re-direct focus away from merely using data to meet accountability demands, but
strategies such as pacing plans and the readjustment of benchmarks also lessened the
relevance of data use for teachers.
Motivating Framing: Extending Collective Responsibility
When first introduced to data use, many teachers and administrators feared that performance results would be used for political purposes or to judge schools unfairly.
Tackling such fears was important given that one of the qualities of effective DDDM
schools is the ability to work collaboratively and transparently (Ingram et al., 2004;
Marsh et al., 2005; Mason, 2000; Petrides & Nodine, 2005). Motivating frames about
data use needed to be explicitly addressed given that teachers perceived many of the
accountability mechanisms in NCLB, such as meeting Adequate Yearly Progress (AYP) targets, to
be high-stakes oriented. School leaders believed that it was important to think about how
to steer conversations about students, learning, and instruction so that teachers felt
empowered toward proactive improvement rather than a fatalistic lack of agency. As
frame analysis suggests, diagnosing needs and creating strategies to address problems are
not likely to be sufficient to promote buy-in: Leaders have to create a meaningful
rationale for the movement so that teachers and others have a stake in the implementation
of DDDM.
A large part of creating a school culture that valued data use necessitated new
ways of thinking about and using data. The accountability system provided district and
school leaders with the political leverage to focus on DDDM as a continuous
improvement method but this did not automatically translate into staff buy-in.
From system-level personnel to school-based leaders, there was agreement that
sharing and use of data needed to be presented in a non-threatening manner to nurture a
culture of learning and improvement. The district leadership deliberately set out to
present DDDM not just as an accountability mechanism but also as a necessary strategy for
teaching and learning. To gain support for data use and alleviate fears of evaluation, the
district leaders presented data as a diagnostic tool.
In addition to establishing system support structures for the implementation of
DDDM, leaders took on the task of defining the purpose and motivation for data use
around higher objectives. The answer was not immediately obvious: The district and
school leaders grappled with explaining how data use would ensure equitable and
rigorous academic opportunities for all students. Merely fulfilling accountability demands
or meeting the district's established goals was not going to be a sufficient motivating
frame to encourage teachers and principals to actually believe in the relevance of data for
decision making. Realizing the importance of motivating her staff to move beyond
chasing numbers, the superintendent described the tenor of a conversation she had with
her staff:
All of a sudden, once we got through [the goal-setting process, we had to ask]
what‘s the purpose of these goals anyway, just so that we meet proficiency?
Well, why do you want to meet proficiency, just so we stay out of AYP and API
jail? No, we really want to do this because this is what it‘s really going to take for
kids to be able to be successful in truly rigorous college prep courses, you know,
and to go on.
To reinforce the point that the district wanted to move past meeting accountability
mandates, recent conversations had turned to understanding the difference between
compliance and commitment. The superintendent argued, "Compliance is doing things right, and commitment is doing the right thing. Compliance is you doing the right thing in your classroom [and] commitment is making sure the right thing is going on in every classroom for every kid." By attending to these motivating themes of equity and
collective responsibility, the district was attempting to foster a culture where teachers
took responsibility for the whole school rather than their individual classrooms.
A large part of the motivational framing entailed reshaping perceptions about data
analysis and beliefs about who "owns" data. To encourage collaboration and a sense of
collective responsibility, school leaders emphasized the need for data sharing. Sharing
data across all levels of the educational system was a paradigm shift for educators. For districts like Costa, there was little precedent within their own schools for how data should be shared and used. Citing another district as an example, the superintendent argued that
data have to be safe in order to make people feel comfortable with them and "then over time, you can start ramping it up." Teachers needed to feel safe as they delved into data
analysis so that they could share instructional practices or ask for help. To do so, school
leaders invoked medical analogies to promote a sense that data was an instructional tool, no more and no less. They framed data as "your friend, just like for doctors, lab reports are not a bad thing." Instead of blaming a teacher or a school for poor performance on the tests, the district leaders focused on examining the data (e.g., "Perhaps something is wrong with the benchmark, so let's look at the data district-wide"). Part of the rationale
for "depersonalizing" data was to assist teachers in making objective assessments about their instructional practices while simultaneously protecting their sense of self-efficacy. These types of motivating frames helped teachers view DDDM practices in a favorable light.
Believing that data use could not be mandated, leaders strategically focused on building a critical mass of staff members who viewed DDDM as valuable and necessary. In turn, leaders argued that this core group of early adopters would further motivate other teachers to use data. They concentrated on "building the thirst" for data use in order for teachers to value data and to minimize "push back." The superintendent shared her analogy for creating and sustaining reform:
You‘ve got to till the ground and that‘s real hard, and it creates a lot of dust that
comes up, and then you‘ve got to plant some seeds strategically and put enough
water on them for enough time to grow, and then you go plant something else.
This approach enabled district leaders to focus on successful implementation of a single
reform initiative rather than attempting to make multiple changes at once. Additionally,
gaps evidenced by tests were addressed in a manner that invited help from the district
leaders such as the superintendent: ―Tell us what tools we‘ve given you. They may not be
adequate tools. We need to shore them up.‖ By deliberately creating a shared rationale
for DDDM, the district was increasingly positioning itself as an important support
provider to the schools. At the same time, the district sent clear messages about holding
schools accountable for their results. The superintendent stated, ―If you‘re not thirsty
for it, we‘re still holding you accountable for the results.‖ Thus, the district coupled
accountability for results with a large degree of support.
The principal and other school leaders at Mesa High School had similar attitudes
about motivating teachers to use data. Data sharing was encouraged and framed as a non-
evaluative tool. Teachers were particularly persuaded to use data within a collaborative
learning environment with their peers. Intended to be learning opportunities, these
meetings had minimal oversight from administrators. Katie, a science teacher, shared
how quickly she adjusted to sharing her data with colleagues:
Well in the beginning when I first got here it was really strange to sit there and go
to these meetings and take what I thought was something that was kind of
personal, my kids‘ data, and lay it out there in front of everybody to see. I think it
only takes once, like one meeting, one collaboration time to understand that really
nobody cares. I think that everybody really wanted to do what was better for the
kids and most of them realized that we can sit there and say well little Johnny‘s so
sweet, little Johnny‘s so nice, well that doesn‘t tell us squat about little Johnny
and how he performs.
The principal, in particular, was viewed as an important figure in fostering this evolving
attitude towards DDDM. Several teachers described the principal‘s role in cultivating a
learning attitude towards data:
I think Donna [the principal] has done a good job of presenting it in a positive
light even though teachers have been resistant in the beginning. I mean she‘s
offered opportunities that if you struggle, here‘s your safety net. If you‘ve failed,
try again. You know she‘s done it in a very non-judgmental way and let people
get to their levels.
[Candice, Science, Mesa]
People are at different levels and Donna is really good about going here‘s where
we‘re going but kind of letting the teachers have a little more ownership of it than
just going, you will be! You know she‘s really doing the baby steps and letting
people first kind of volunteer, I want to be a part of this. Then it kind of starts to
infuse throughout the whole campus and then even those naysayers start to go, oh
well you know, check it out.
[Deanna, English, Mesa]
Again, learning from data and from one‘s errors was a way to make data use non-
threatening. As a firm believer in using leverage points to persuade and move her staff
towards implementation, the principal also acknowledged that different teachers were at
different stages of understanding and using data. Tina, the Title I Coordinator, affirmed
that this attitude was born of experience with implementing change at the teacher level:
Unfortunately, when you jump on that bandwagon and you‘re requiring
everybody to get to the same place, they‘re not there. So you‘re almost creating
resistance because either fear, you know you have a teacher that says, ―Oh my
gosh that‘s not my teaching style, you want me to interact with the kids?‖
Immediately that fear comes in. It might be the best strategy in the world but they
won‘t even give it a try.
Instead, the leadership provided multiple opportunities to receive training or multiple
points of entry to use data.
In sum, the Mesa High School and Costa leadership had a slow and steady
approach to the implementation of DDDM. Because of concerns about sustainability,
the definition of DDDM, its purpose, and its support systems had been carefully built.
This alignment between tools, beliefs, and capacity was intended to foster
buy-in at all levels of the system and to ensure that staff members strived for successful
implementation of DDDM. The core framing tasks undertaken by the system leaders
suggest that framing will be an ongoing effort as the district attempts to deepen
engagement with DDDM.
When leaders pay attention to the motivating frame of DDDM implementation,
they acknowledge that change does not simply evolve out of good ideas or practices. The
human and relational aspect of change must also be addressed. In the case of teachers at
Mesa High School, the principal and other leaders at Costa Unified School District
recognized that emotional reactions such as fear of evaluation or learning something new
might create resistance to DDDM. Without directly confronting this aspect of reform, the
credibility and the salience of data use may be undermined. Ultimately, teachers have to
come to the conclusion that an idea is not only good but that its benefits outweigh the
costs to themselves and their students.
Diagnostic Framing at Adams and Arlington: Adding to the System
The leaders in Costa were explicit about how they conceptualized data use
because DDDM was a new strategy for the district. In contrast to Costa, DDDM had been
part of Arlington‘s district culture for over two decades and conscious justification for
data use was no longer explicitly needed. When asked, teachers and administrators shared
that the collection and use of data had been embedded within the district for a long time
and people could not recall a time when data use was not important. Specifically, data
had often been used to evaluate programs and gauge student learning since the 1990s
when district-wide assessments were implemented. The diagnostic framing instead
focused on the need for collecting and using different types of data to ensure that students
graduated. Thus, the diagnostic framing was connected to the need to produce successful
student outcomes overall.
More specifically, the problem was diagnosed as the need for student-level data
that would enhance learning for each student. As in Costa, the accountability movement
created awareness of this issue. Laura, the district‘s testing coordinator, explained:
The whole accountability, you know that whole movement that has now made us
look—ok we need to look at the student level. It‘s always concentrated, for
twenty-eight years, on improving instruction, helping teachers, staff development,
because our belief was that that pays off for students—ultimately pays off. But
because we need now to have information on specific students, we‘re now
enhancing that other [aspects of the system].
When asked about the influence of the accountability system on their new direction, the
superintendent suggested that it heightened expectation levels:
You know five or six years ago we began to realize we have no choice but to
forget the old paradigms and become much more flexible and much more rapid in
our change…It ratcheted up our awareness level when we realized that every one
of our students was expected—we had an expectation for every one of our
students to be successful not only on our tests but on the state mandated tests as
well. Graduation diplomas were at stake.
The district‘s attitude towards the importance of using data had not changed, but it had
shifted its focus to the types of data that were important because of the accountability
context.
In contrast to the explicit equity framing present in Costa, Arlington staff
members were reluctant to categorize students by group. Both the superintendent and
Adams‘s principal stated that they did not believe that the disaggregation of data by
student subgroups made any difference for improvement purposes, as they had always
tried to maintain high standards for all students. The resistance reflected the belief that
students should be considered as individuals, not as members of special groups. With
NCLB mandating student subgroup data disaggregation, district personnel expressed
discomfort about categorizing students based on race/ethnicity, socio-economic status,
and special needs. As expressed by the superintendent and the testing coordinator, the
district‘s goal was to have all students achieve regardless of subgroup identification. The
testing coordinator explained:
We didn‘t even look at the demographics of kids. Because of our
philosophy we never looked at our data. There wasn‘t the state system so
obviously there was no data to look at there. But our belief was such that
the expectations were for all students and it wasn‘t until about six, seven,
eight years ago that we even began looking at the different subgroup
populations. That was all because of how we believed. Really, we fought
having to break that data down and look at kids that way.
This comment illuminates how diagnostic framing shaped which strategies would be
utilized and which solutions were considered possible. District officials and school leaders did not feel
that disaggregated data provided important information when making decisions about
teaching and learning. This was despite the fact that the district was undergoing rapid
demographic changes. The superintendent spoke of the influx of newly arrived immigrant
students:
The reality with our changing demographics is that along with that our English
language learner population is growing rather quickly. Five or six years ago we
probably had fewer than three hundred English language learners in our district,
today we have nearly fourteen or fifteen hundred English language learners.
That‘s approximately ten percent of our entire student population of
approximately fifteen thousand.
The purpose of expanding their collection, analysis, and use of data stemmed from the desire
to improve the system and to ensure that teachers had access to a variety of resources
to enhance student learning, regardless of the students‘ backgrounds. The problem, in other
words, was diagnosed as a lack of individualized student-level data rather than framed in
terms of equity. It remains to be seen whether the district‘s resistance to using
subgroup-disaggregated data will make it difficult to uncover systemic patterns of inequality.
Prognostic Framing: Shifting Gears
The prognostic framing at Arlington on how to improve achievement was
presented as a three-part system, which consisted of curriculum, instruction, and
assessment. Like Costa, slow, steady, and deliberate was the modus operandi for how
Arlington introduced and implemented change. The district relied heavily on teachers‘
professional expertise and devoted much of their resources to building instructional
capacity.
Unlike Costa, the Arlington District did not have system-wide annual benchmark
goals for student achievement. The superintendent broadly defined their approach as,
―Our goal is: Better tomorrow than we are today.‖ The district utilized a seven-part
continuous improvement cycle, which was the basis of their decision making: (1) take
stock of existing practice, (2) identify gaps between existing and desired practice, (3)
generate and study strategies to adopt, (4) develop consensus for adopting strategies, (5)
devise implementation plan, (6) develop plan to monitor implementation, and (7)
implement plan. This process was used to make decisions and to plan for further district
development.
The continuous improvement cycle approach was operationalized in how they
evaluated annual school performance. Instead of having specified improvement targets
for each year, the district utilized a baseline performance outcome for each school on a
variety of measures and indicators. These indicators were closely tied to Arizona‘s
accountability system. For each school in Arlington, continuous improvement progress
was based on the following measures: district assessments, state tests, Advanced
Placement enrollment and results, parent satisfaction surveys, post-graduate surveys,
student participation surveys, the Arizona LEARNS Achievement Profile, graduation
rates, dropout rates, and AYP. For example, on district tests, schools were expected to
have improved scores from previous years or score at 80 percent or higher. Other targets
included having at least 20 percent of seniors enrolled in Advanced Placement classes or
an increase from previous years; 91 percent or higher parent satisfaction rating; and at
least 65 percent student participation in school activities.
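To make the arithmetic of these targets concrete, the sketch below (written in Python) encodes the example thresholds named above as a simple rule check. It is an illustrative reconstruction only, not district software: the function, field names, and sample values are hypothetical, and the full determination drew on additional indicators (state tests, the Arizona LEARNS Achievement Profile, graduation and dropout rates, and AYP) that are not modeled here.

def meets_targets(current: dict, prior: dict) -> dict:
    """Check a school's results against the example continuous-improvement targets."""
    return {
        # District tests: score 80 percent or higher, OR improve on the prior year.
        "district_tests": (current["district_test_pct"] >= 80
                           or current["district_test_pct"] > prior["district_test_pct"]),
        # Advanced Placement: at least 20 percent of seniors enrolled, OR an increase.
        "ap_enrollment": (current["ap_senior_pct"] >= 20
                          or current["ap_senior_pct"] > prior["ap_senior_pct"]),
        # Parent satisfaction: 91 percent or higher.
        "parent_satisfaction": current["parent_satisfaction_pct"] >= 91,
        # Student participation in school activities: at least 65 percent.
        "student_participation": current["participation_pct"] >= 65,
    }

# Hypothetical school: improved test scores and AP enrollment, but short on
# parent satisfaction.
prior = {"district_test_pct": 74, "ap_senior_pct": 18}
current = {"district_test_pct": 78, "ap_senior_pct": 21,
           "parent_satisfaction_pct": 89, "participation_pct": 70}
print(meets_targets(current, prior))
# {'district_tests': True, 'ap_enrollment': True,
#  'parent_satisfaction': False, 'student_participation': True}

The sketch makes visible the design choice embedded in the targets: most indicators pair an absolute threshold with an improvement-over-baseline alternative, consistent with the district‘s continuous improvement orientation rather than fixed annual goals.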
Curricular, assessment, and instructional alignment. Arlington was also similar
to Costa in that system alignment was perceived to be a foundational element of student
achievement. Teachers were considered to be curricular and instructional experts who
worked in conjunction with the curriculum coordinators to continually develop the three
parts of the system. Part of their duties was to ensure that the district developed a
curriculum that aligned to the state standards. For example, the Language Arts
coordinator worked with English teachers at the freshman, sophomore, junior and senior
level across all nine campuses to provide ongoing professional development workshops
so that teachers had a clear understanding of state standards and expectations.
Curriculum coordinators were also responsible for working with teachers to develop
materials and align formative and summative assessments. District-wide, each department
developed a scope and sequence plan. Further details regarding content were decided at
the content team and individual teacher levels.
An integral element of the district‘s continuous improvement efforts was to
monitor academic programs. Almost two decades ago, Arlington instituted annual
assessments in all departments, including Physical Education and Fine Arts, to gauge
students‘ abilities to apply their learning. Because standardization of specific
practices varied by department and was not mandated by school level administration,
each department had different testing calendars and curriculum maps. District
assessments were semiannual for mathematics, biology, and semester-long courses and
came in a variety of formats depending on the department and/or the course content.
Courses that contained content most accurately measured through open-ended activities
were assessed using performance-based assessments (PBA), while Mathematics,
Thinking Science, Biology, ELL Reading, Economics, and career and technical classes
were assessed using multiple-choice formats sometimes referred to as Criterion-
Referenced Tests (CRT). The PBAs took multiple class sessions to complete and
provided an opportunity for students to apply their content knowledge. For example, an
English PBA may have required students to produce an essay and demonstrate their
ability to engage in all stages of the writing process while a business course PBA may
have required students to produce an Excel spreadsheet and charts. In math, questions
were organized by specific outcomes or skills and student scores were reported according
to their levels of mastery on each outcome.
The district assessments were intended to inform district-level programs
rather than to gauge individual student performance. In other words, the district
tests were summative assessments, given at the end of the school year, that guided the
overall organization, curriculum, and instructional practices. Maintaining the district-
wide goal of continuous improvement meant that tests were continually redeveloped,
piloted, and administered on a three- or four-year cycle. Consistently high performance
across the district indicated test familiarity rather than accurate assessment of student
learning. Therefore, the district curriculum coordinators reevaluated the curriculum and
assessments and then created new testing models. Teachers were vital partners in this
process as they were the ones to write and pilot the new assessments. The collection and
use of formative assessments were expected in every classroom, but
this had not been formalized in a systematic manner for most departments.
For teachers, the summer grading of district assessments directed most of their
program improvement and professional development plans. The district tests were graded
during the summer by a cross-section of district teachers using common rubrics. The
grading workshops served to get PBAs graded and functioned as a professional
development opportunity since teachers discussed the results and shared best practices.
At the end of the school year, teachers from each content area gathered to grade all of the
PBAs from the entire district. Thirty students‘ assessments from a teacher‘s entire roster
were randomly selected for grading at the district level. PBAs were graded anonymously
and attention was paid to establishing inter-rater reliability across graders. Each teacher‘s
sample of thirty papers then comprised that teacher‘s overall scores, which were reported
at the individual and school levels and distributed across the district. The testing
coordinator explained the rationale for this method:
What we ask them to do is then generalize that because it‘s a random sample and
that they should know then that, ―Ok, I had three percent of my kids that were
outstanding in this area.‖ Well the natural inclination is to go, ―Well, yeah, that
was because they must have picked these kids,‖ you know, but that‘s not the idea.
We want to generalize that to your whole population that you had last year.
Teachers, however, questioned the random sampling. Carla, the English department
chair, believed that it was an unfair assessment of teachers and did not account for the
ways in which external factors may affect a teacher‘s performance. She instead stressed
the importance of examining patterns of performance:
The selecting of 30 district tests as a sample for one teacher is questioned,
especially by the English department. So your sample, so what they‘re doing is
they‘re trashing your sample. So you have to really watch that and … step back
and look at the whole picture. Sometimes people have personal problems going
on that could be affecting their efficacy in front of kids. You don‘t know what‘s
going on so you really have to look at the whole picture. So we do and then we
say, oh wow this is really big. Then if you see something abhorrent and it‘s only
once and then you look back and it‘s nothing, but if there‘s a problem and it
reoccurs, then I would think it would be worth a conversation at least.
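The statistical intuition behind this skepticism can be illustrated with a brief simulation. The Python sketch below is not part of the district‘s procedure; under hypothetical numbers, it shows how much a random sample of thirty papers can deviate from a teacher‘s roster-wide results in any single year.

import random

random.seed(0)
TRUE_PASS_RATE = 0.70   # hypothetical: 70 percent of this teacher's students score well
ROSTER_SIZE = 150       # hypothetical roster across all sections
SAMPLE_SIZE = 30        # the district's per-teacher sample

# Simulate one teacher's roster of high/low outcomes on the district test.
roster = [random.random() < TRUE_PASS_RATE for _ in range(ROSTER_SIZE)]

# Draw many thirty-paper samples and record the rate observed in each draw.
observed = [sum(random.sample(roster, SAMPLE_SIZE)) / SAMPLE_SIZE
            for _ in range(1000)]

print(f"roster-wide rate: {sum(roster) / ROSTER_SIZE:.2f}")
print(f"sample estimates ranged from {min(observed):.2f} to {max(observed):.2f}")

Because a single thirty-paper draw can land well above or below the roster-wide rate, Carla‘s emphasis on looking for patterns that recur across years, rather than reacting to any one sample, is statistically well founded.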
At Adams High School, different content teams were working on developing
team-wide assessments, but this was not a school-wide goal. The math and English
departments were considered to have the most aligned curriculum, especially at the
sophomore level, due to the pressure of the AIMS test. As the Social Studies department
chair mentioned, different content areas may lend themselves differently to
standardization, as history teachers might have much more difficulty agreeing on how to
teach their course content than do math teachers. The teams that demonstrated progress in
developing common assessments included all levels of mathematics, Thinking Science,
Biology, and English 1-2.
While the curriculum guides and assessments were dependent on departmental
and content affiliation, standardized classroom management and teaching practices were
promoted by the district. All teachers in the district were evaluated using Madeline
Hunter‘s Essential Elements of Instruction, which stresses the importance of monitoring
classroom learning. Their mantra was ―monitor the students and adjust teaching,‖ and
formative assessments were expected to be present in each lesson plan. Positive
classroom management techniques were also emphasized at the district level. In order to
inculcate district values, including data use, every teacher who was new to the district—
regardless of prior years of experience—participated in the mentoring program for their
first three years with the district. These teachers began the school year ten days earlier to
participate in a workshop with the mentors to learn about instructional and curricular
alignment, assessments, classroom discipline, and record keeping. Overall, the district
directed its efforts at not only aligning curriculum and assessments but also building
common instructional practices and pedagogy.
Utilizing expert knowledge. District personnel bore the main responsibility for
compiling and analyzing data and had increasingly built up expertise in both data
analysis and data use. In 2006, Arlington leaders hired a director of research and
assessment who formerly worked for the state of Arizona developing standardized tests.
She was a psychometrician with a PhD in educational psychology, and was able to create
and pilot district tests. She conducted item analysis and worked with the curriculum
coordinators to improve problematic test questions. In addition, she assisted schools in
designing local assessments for specialized programs.
The curriculum coordinators were responsible for the development of tests in each
content area. Each coordinator generated tests with teachers and with the aid of the
Director of Evaluation and Research. Besides the principal, who received a data CD with
four-year longitudinal performance data for all teachers, the curriculum coordinators also
received a printout of the data for their content area in order to inform professional development
plans. The coordinators analyzed the data and then presented it back to the teachers in a
workshop.
Although the district had decades of experience collecting and analyzing data
pertaining to its continuous improvement efforts, it was only recently that it had begun
building a technical infrastructure to support DDDM. All teachers had
access to Infinite Campus, the district-wide database that contained statewide assessment
data that could be filtered down to the individual student level. This data management
program gave teachers access to students‘ AIMS and other standardized test scores,
attendance, eighth grade test scores, grades, and personal student information. Teachers
could also use the program as a grade book, which allowed them to track student
performance on classroom-level assessments. Additionally, the school administered the
Scholastic Reading Inventory (SRI) to all freshmen, which gauged students‘ reading
levels.
While continuing to use data to make programmatic decisions, the district was
grappling with building a comprehensive data system that included both formative and
summative assessments that enabled decisions to be made at the building, classroom, and
individual student levels. The district leaders were still in the midst of figuring out the
implications of such changes. As Laura, the testing coordinator, admitted:
We love data. We collect it, we use it at the program level so we‘re extending
ourselves here and teachers are not sure what to expect, administrators are not
sure what to expect, curriculum coordinators—in my opinion, the heart and soul
of our district, the heart and soul of this learning system—are not sure what to
expect. You know, what does this mean for them? Are we giving something up
at the district level to provide this information in a readily accessible format at the
building level? All of this is new.
This year, the district was rolling out a new and more user-friendly data management
system called Galileo. Galileo is an assessment tracking system that allows teachers to
have instant access to student test results and other teacher-created tools across the
district. The existing system, Infinite Campus, already provided individual student data
to teachers, but Galileo would add the ability to track formative
assessments, an area that the district had not yet been able to formalize. Test items
contained in Galileo were uploaded at the district level, so the tests that teachers had
created would supply the item bank for each assessment. Teachers would
then be able to generate assessments on demand and have them scored by the software in
their classrooms without having to go through the curriculum coordinator for test
generation and scoring. Furthermore, Galileo generates reports at the district, school,
class, teacher, and student levels, allowing teachers to look at data in a variety of formats.
Besides Galileo, which would be directly available to teachers, much of the
collection and analysis of data was done at the district level. At the beginning of the
school year, all teachers received printouts of school level results on district assessments,
AIMS, and Terra Nova. Scores were compiled to be reviewed at the district, school,
department, and content team levels. Results were broken down by teacher and provided
a vehicle for comparison between teachers. Similar data was shared across the district by
content area at district-wide meetings that also occurred at the beginning of the school year.
Principals received the data on a CD along with four years‘ worth of longitudinal data so
that they could place current scores in a historical perspective more readily. Principals
then guided the staff through an examination of school-level data during professional
development meetings before the first day of school. Based on the results, each content
area team wrote a Program Improvement Plan (PIP) for the year.
Within schools, each department chair and the content team leaders were responsible
for assisting teachers in understanding data printouts and in implementing improvements.
Department chairs discussed results with team leaders, who represented all of the teachers
who taught the same courses. Each curricular team then created a program improvement
plan (PIP) targeting specific goals for improvement throughout the year. The prior year‘s PIP
helped to inform the team of their progress, as well as the areas that still needed to be
targeted. The school principal met with each curricular team and asked them specific
questions about their plans for improvement. PIPs were expected to be revisited at the
beginning of the second semester.
Other school site administrators were also responsible for helping teachers to
utilize the information. For example, the Assistant Principal of Operations and Resources
received data on which students were not involved in extracurricular activities. He then
disaggregated that information by grade level and asked teachers to target several
students on the list in order to encourage students to become more involved. The
assistant principals of student services also worked with the curriculum coordinators at
the district level to facilitate testing administration.
Overall, the district‘s prognostic framing activities revolved around building
instructional capacity and program development. Teachers, with the aid of curriculum
coordinators, were viewed as experts who were chiefly responsible for refining the
curriculum, instruction, and assessments. The district also emphasized common
pedagogical knowledge and classroom management skills, which was evident in their
teacher training and mentoring. With teachers being the acknowledged experts, the
district focused on building a common base of teaching knowledge and skills as a
strategy for professional development and collaboration.
Motivating Framing: Empowering Teachers
Given its history with DDDM, Arlington did not necessarily have to make
concerted efforts to make data use salient to teachers and administrators. Motivating
teachers to use data was not an explicit task that the district leaders had to confront, and it
was taken for granted that data would be used to inform decision making. Their
motivating strategy was to encourage buy-in and ownership of continuous improvement
efforts. As the superintendent summarized:
Our approach is consistently applied and here‘s what I would say to our
principals, ―We all have the same final destination in mind. Here‘s where we
want to be. We want to decrease our dropout rate, we want to increase our
graduation rate, we want to do the following five things. How you go about doing
it is up to you as long as it‘s shared and supported by our colleagues sitting at the
table.‖
The district prided itself on its experience using data and having a consistent approach to
program improvement efforts. Much like Costa, Arlington took a learning and continuous
improvement orientation towards motivating data use. The leadership team made
concerted efforts to create buy-in at multiple levels of the system because each person
was viewed as playing an important role in the district. When asked about the
implementation of DDDM, the superintendent described his philosophy:
So it must be implemented where there‘s full ownership so that everyone
understands their role within the system. They each, metaphorically, are one gear
in the system and every one of those gears must work efficiently together or the
whole system stops functioning. I would say that it must be phased in
thoughtfully. It must be phased in so that when it is implemented on day one it
comes as no surprise to anyone within the system, anyone. Once it‘s implemented
from day one, you must understand that it will evolve and change rapidly over
time.
However, the focus on sharing individual teacher data was a relatively new emphasis, and
both school- and district-level leaders talked about the importance of making teachers
comfortable with sharing data. Leaders fostered a culture where data was mainly treated
in a non-punitive manner for the individual teacher. The superintendent highlighted the
importance of empowering teachers to make improvements without a high-stakes or
punitive atmosphere:
We do not refer to a teacher‘s students‘ academic performance on our district
assessments in any evaluation. That will not happen because it‘s not meant to be
a gotcha. It‘s meant to be, you know, ―Work with us, empower them, let them
take ownership in it, and have no fear. Meaning work, we expect you to attend to
our content and our curriculum because it‘s aligned with state standards and we‘re
not going to have any kind of punitive measures if you decrease one point, two
points, three points, five points. We don‘t do it that way. We don‘t operate that
way.‖
Confirming this approach and the need to make data use constructive rather than
destructive, the principal of Adams High School explained:
I think as people become more understanding that it‘s not an evaluative process.
I think that was a big key. There‘s less hesitation to share data and to do more
with it and also as we give them more tools to use it. I think that helps.
This focus on continuous improvement and learning seemed to help teachers feel
comfortable with data use. Janice, a math teacher, remarked:
They never use data as a negative to anybody, like oh my gosh you have eighty
percent failure, which never happens, but I‘ve never heard negative comments
from data. I appreciate that. It makes everybody feel more comfortable too, like I
know they‘re not going to [evaluate us] on this, this is just to help us. So that
makes teachers more comfortable with that, I know.
This is not to say that teachers did not feel any pressure to improve or maintain
satisfactory performance. At least four teachers, across different departments, mentioned
the unspoken pressure from both their peers and administrators to make sure that their
data ―look good.‖ Peter, an English teacher, speculated that the school worked well with
data because of peer accountability in particular:
No one is supposed to know each other‘s scores, but it ends up leaking out and
you kind of know which teacher is not pulling weight. So because of that it puts
pressure and onus and responsibility on the teacher to perform.
This may be largely explained by the system of merit pay within the district. Another
notable difference between Arizona‘s and California‘s accountability systems was the
existence of performance awards in Arizona.[16] In Arlington, teacher merit pay was tied to
AIMS, PBAs, CRTs, extracurricular surveys, parent satisfaction surveys, and
postgraduate surveys.[17] The bonus was awarded at the school level, so either all or none
of the teachers at a school received it. In the event that teachers did not earn all of the
merit pay available, the money was placed into a fund to cover additional professional
development workshops. Several teachers mentioned merit pay as an additional incentive
to produce results—not so much because they themselves wanted the money, but in order
not to disappoint their colleagues.
16. In the state of Arizona, Proposition 301, passed in 2006, raised taxes to fund a merit pay system for
teachers. Proposition 301 money is given directly to school districts in large chunks, and districts can
choose how to distribute it.
17. The specific formula for merit pay is negotiated with the union each year and depends on the funds
allocated through Proposition 301. While all students‘ work is graded by at least the individual teacher,
only a random sample of 30 students per teacher is graded by the full team of teachers and counts for merit
pay. Seventy-five percent of the determination for merit pay has to do directly with student academic
performance.
Overall, Arlington utilized motivating frames similar to those of Costa. It
emphasized the non-judgmental use of data, sharing of data among teachers, and
deliberate buy-in at all levels. The presence of merit pay at the institutional level,
however, provided another motivating tool. Because the district awarded merit pay based
on school-wide performance, teachers were not only accountable to the district but also
felt a sense of responsibility to one another.
Conclusion
In summary, the use of frame analysis sheds light on how leaders took up the
charge of constructing a movement towards DDDM. The tasks of diagnosis, prognosis,
and motivation indicate the complicated nature of reform and suggest that leaders need to
attend to these related facets of reform efforts. The resonance of a reform (i.e., its
credibility and salience) to teachers is dependent on how well these elements are framed.
The high schools and their respective districts were starting at different points and
focused on one type of task over another. Mesa High School‘s movement towards
DDDM was largely framed by California‘s accountability system and the need to
improve student achievement and teacher expectations. In terms of diagnostic framing,
district leaders had a clear but difficult task of confronting low expectations for their
students. Unlike in Costa, where a great deal of time was spent discussing such problems, a
clear problem that needed to be addressed by data was not evident at Arlington. Rather,
the district continued to focus on how its programs, and thereby student achievement,
could be improved. On strategies for implementing DDDM, the schools and their district
were remarkably similar in their focus on system alignment and capacity building. The
difference lay in the details. Schools like Mesa were still in the process of building
capacity at all levels while, as a more mature DDDM system, Adams High School
heavily relied on experts at the district to make data use meaningful. Finally, the
motivating frames highlighted by both leadership teams suggest that the main rationale
for data use needs to be continuous improvement rather than meeting accountability
demands. This requires a paradigm shift in who ―owns‖ data, how data are shared, and
how data are used.
The research on policy implementation indicates that how teachers make sense of
reform mediates how they go about enacting policies (Coburn, 2001). The three levels of
the accountability system—state, district, and school—play important roles in shaping
teachers‘ beliefs and attitudes toward DDDM. The state context in particular affects what
counts or what data are prioritized by both the district and high schools. The district
context—how district personnel frame DDDM and the structures they develop to support
teachers‘ sensemaking—shapes the actual data use as well as what counts as data worth
collecting and analyzing. In this chapter, I detailed the context in which teachers are
situated by employing frame analysis, focusing on how leaders conceptualized DDDM.
This discussion highlighted how leaders choose to define their problems, their strategies
for improvement, and their rationale for encouraging widespread commitment to their
visions. In the next chapter, I analyze in detail how teachers‘ sensemaking processes are
mediated by the organizational context of high schools and their departmental affiliation.
Chapter 6:
Structuring Sensemaking on Data
In the last chapter, I examined how the district and its leaders attempted to create
a shared conceptualization around the meaning of data use. Research on high schools
(e.g., Siskin, 1994) and sensemaking processes (Coburn, 2004) suggest that departmental
structure and teacher subculture also shape teachers‘ approaches to policy
implementation. Confirming these studies, the data presented here demonstrate that
teachers‘ sensemaking of DDDM is also mediated by their structural location in the high
school organization. In this section, I detail the sensemaking opportunities provided by
the districts and high schools and then discuss how they interacted with department
subcultures to shape teachers‘ uses of data. I first examine the sensemaking structures at
Mesa High School before analyzing how teachers construct data use at Adams High
School.
Sensemaking Structures and Opportunities at Mesa High School
Sensemaking opportunities provided at Mesa High School and the district focused
on developing skills to utilize data and building instructional capacity. District
sensemaking structures were organized according to leadership roles or departmental
affiliations. The district offered a professional development workshop called the Strategy
Academy (formerly called Leadership Academy). The meetings initially were used to
help participants gain expertise in data interpretation, analysis, and usage. Now,
department chairs come together with assistant principals, principals, and counselors 3-4
times a year for a full day. They discuss strategies for writing across the content areas,
looking at students reading below grade level, assessing the needs of English language
learners and determining student placement, among other issues. The focus has also turned to
fostering common instructional strategies and approaches across departments. Tina, the Title
I Coordinator at Mesa, estimated that 10 members of their faculty have thus far
participated in such training sessions. In addition to the Strategy Academy, the district
holds a system-wide professional development session referred to as ―Super Week‖
before the school year begins. Typically, schools are given one full day to set goals for
the school year. Faculty and administrators use that time to analyze the previous CST
scores, review benchmark testing results, and set goals as a large group. Principals also
have the option of providing additional professional development after school or on
Saturdays.
A main tool for teacher analysis of data is the data reflection protocol, which
encourages both individual and group reflection on data. Early on, the leadership
mistakenly assumed that principals and teachers would already know how to talk about
and use data. During leadership meetings, they discovered that conversations and
trainings about how to introduce the data were necessary. In partnership with an external
company, Action Learning Systems, the district developed a series of protocols to
structure data analysis and discussions at the school-site level (see Appendix G for
protocol).
School Level Sensemaking
At the school level, sensemaking opportunities for teachers predominantly occur
at the department and content level unless teachers belong to the leadership team. Leaders
such as the principal and the Title I Coordinator engaged in agenda-setting activities to
promote data use. Starting this year at Mesa High School, grade level teams meet by
subject every two weeks for data discussions. Having the ability to bank time to meet this
often was considered a major coup by school staff, as in the past teachers
had to meet on their own time at lunch or after school. Teachers such as Deanna, from the
English department, appreciated the structured time:
I love our collaboration time. It was just the time to work out how many times…
you know there can be overkill and not enough to make it meaningful. So I think
right now with the amount of collaboration time we have, I think we‘ve hit on the
right amount of time.
Katie, from the science department, also welcomed the structured time to meet with other
members of her team:
Before we had to meet during lunch or after school. The collaboration time; we
started that last year and it has been a Godsend… I mean to have time to sit down,
because you can actually have the conversation. Beforehand it really was a quick,
oh my gosh my kids did awful on number 3, how about yours? Yes, this was my
percentage. Then oh we‘ll make sure… let‘s get back together sometime to talk
about why. There was never really a time and now you can carry that
conversation all the way through which is fantastic.
The meetings take place on Wednesdays from 7:30-9:00 a.m. Students arrive at school at
9:30, allowing the teachers 30 minutes from the end of the meeting to the beginning of
instruction. Building in this transition time was seen as important by school
administrators who did not want teachers to feel rushed and use data discussion times for
class preparation.
Two meetings each month have been established for collaboration. Teachers meet
as a department for the first 10 minutes or so and then break up into grade-level or
content groups for these data discussions (e.g., 9th grade English teachers meet together).
The data discussion meetings are used for the examination of data arising from the
benchmark or quarterly assessments, but also for rewriting test items and joint
instructional planning. One teacher explicitly stated that the expected outcome of the data
discussion was ―modification of instruction.‖ In principle, every other meeting is devoted
to data analysis (i.e., data reflection meeting) and the alternate meetings are for action
planning. Tina, the Title I Coordinator, described the need for the meetings in this way:
The way that it‘s structured here there really is no excuse. There‘s that reflection
meeting and then immediately two weeks later you have that follow up working
meeting. I think what that does is it really, not forces in a negative way, but it
forces the teachers to be looking at ok if this is what we say we‘re going to be
doing or what we‘re working on, this is a great time for me to actually implement
in the class to see, almost to experiment with it and see if this is something that…
and if this isn‘t going to work the way we think it is then that gives us some time
to actually make some modifications.
Generally, the reflection meetings are structured around the data discussion
protocols, which are later turned in to the administrative office. Teachers are
expected to reflect on their assessment data before the meetings. The protocol questions
include which standards were taught and assessed, which strategies were used to teach
those standards, which questions all/most of the students answered correctly and
incorrectly, and on which standards all/most students were or were not proficient.
Once the whole department convenes for the collaboration meetings, they break
up into their grade level or content teams to review the rest of the protocol. These
questions ask teachers to find patterns across their classes and to discuss possible causes
of high or low student performance. Luke, from the English department, described the
meeting this way:
Well we have printouts and of course, you can go on Data Director. What we do
is we get together with what other subject areas call course alike. In English, we
call it grade level. We get together and look at the data from the last mid-quarter
or quarter assessment that was given. We have a copy of the test itself with us.
For example, I met with other twelfth grade English teachers this morning. All
right, what did the majority of your students do well on? Do you have an
explanation for that? What strategies did you use? What level of Bloom‘s
Taxonomy is that? All kinds of stuff. What did they not do well on? Why? Was
there a discrepancy?
The last section of the protocol is devoted to developing solutions or modifying
instructional strategies. Overall, at Mesa High School, the structured protocols and the
expectations of the leadership, who required specific outcomes from each meeting, set the
stage for teachers to believe in the salience of data use.
The English Department: What Do the Data Mean?
While the district framing of DDDM and school level sensemaking structures
provide the parameters for teachers‘ sensemaking of data, how high school teachers
actually interpret and use data is very much mediated by the departmental context.
Unlike district meetings, which may be held a couple of times a year, or school-wide
meetings, which are held once a month for administrative purposes, the majority of data
interpretation and data use occurred during formal and informal departmental meetings.
Because Mesa High School has recently instituted regular collaboration and data
discussion meetings, much of the data analysis and reflection revolves around district
benchmarks or the school‘s mid-quarter exam results. Although these meetings are new
for the year, the majority of teachers expressed comfort at sharing data. Jim, a novice English
teacher, explained why these collaboration meetings are helpful:
You know when I meet with the other senior level teachers, we meet and talk
about things that we couldn‘t really talk about before, like why a majority of our
students missed question 22, which can be I think not only does it help you
become a better teacher but it‘s fairer to the students as well.
English teachers typically described these meetings as important opportunities to delve
into data. They expressed little hesitation about sharing their data with one another.
Thus far, however, much of the discussion has centered on
examining the test, its validity, and understanding the data reports. Jim summed up the
meetings:
Ok so then we go through and before we actually go to a meeting we‘ll look at
where our students are strong and where they were weak. Basically that‘s all
you‘re doing. Basically you‘re answering the questions and figuring out what you
could do differently.
In the English team, there was no formal or informal ―data expert‖ who facilitated the
interpretation of the data. During one of the 9th grade team meetings, one teacher asked
about the individual student report. Deanna (the leader of the team) chimed in with an
explanation of the report layout: ―The percent tells you how she [a student] did on the
test—so she got 95%—and the top tells you the total number of items she got correct.‖
Her group spent over five minutes discussing whether the percentile scores were based on
the overall 9th grade within the school or within their district. They then compared their
percentile results and noticed that two skills with the same number of questions correctly
answered were ranked differently on the percentile. They began to wonder if the timing of
the test scanning affected the percentile outcomes. After a couple more minutes trying
to interpret the numbers, the group decided that they should follow up directly with the
principal and moved on to the next section of the protocol. This difficulty with
interpreting and sometimes accessing the data was a problem that the English department
as a whole confronted regularly. Pamela, a 10th grade English teacher, explained:
This [data report] goes to the person who‘s in charge of that level, the grade level
coordinator, and then they disseminate it to us. They‘re supposed to be experts in
it and they‘re supposed to look at it and then kind of explain what that means
overall to us. But because it‘s been like three or four years since we‘ve been
trained in it and our person has changed, who‘s in charge, we‘ve kind of
forgotten. So we spent a lot of time this morning going, ok, what does this even mean,
this disaggregated data? It‘s like I didn‘t even get it. Sometimes it‘s too much
data.
Due to the lack of expertise or expert guidance, the teachers in the English department
spent the majority of their time either questioning the meaning of the data or answering
the protocol questions in a perfunctory manner. The aim centered on finishing the
protocols as rapidly as possible.
The Science Department: Perfunctory or Meaningful Engagement?
Similar to the English department, the science department‘s discussions tended to
focus on examining test results and formulating and refining assessments. The science
department faced similar issues with technology and understanding data. Many of the
challenges came from data management and interpretation difficulties. A science teacher,
Katie, noted, ―I do think that there are always going to be some who will fight you tooth
and nail on it. I‘m lucky; at least within our department we don‘t have those. We do
have some that struggle but I think primarily we‘re all pretty likeminded.‖ Unlike the
English department, where some teachers were resistant to data use, the science department
tended to view data as useful.
However, there were important differences in data discussions depending on the
content team. The science department was divided into three groups: physical sciences,
biology, and chemistry. Although data access and interpretation were at times issues
within the physical science team, the presence of Tina (the Title I Coordinator/science
teacher) provided the team members with immediate and ongoing assistance. During one
meeting, a science teacher in the physical science team explained to Tina that she had
trouble getting the data she needed out of the program. Tina took the lead in interpreting
the data for the team and at one point she said, ―You haven‘t seen the comparison data.
She‘s (pointing to another physical science teacher) about 10% higher than we are on the
quarterly assessment.‖ As three of the teachers looked at the comparison data sheet, Tina
continued, ―We‘re pretty comparable with the district. Costa was higher but they‘re
always higher. Otherwise, we‘re pretty comparable. I don‘t want to be so quick to blame
they‘re not doing homework.‖ With this data interpreted, the team went on to examine
their instructional practices. The presence of Tina, an acknowledged school leader and
data expert, was likely an important factor in directing the substance of the conversations.
The teachers in the physical science team were focused on reflecting on the protocol
questions and all members contributed to the discussion throughout the collaborative
time.
In contrast to the physical science team, the biology and chemistry teams
exhibited similar patterns of attitudes and behaviors to those of the English teams. All
teams were completing the protocol but the intent was to finish rather than engage in in-
depth discussions. For example, after a conversation about the outdated technology in
their classrooms, the biology team reviewed their individual notes from part one of the
protocol. Their discussions focused on the validity of the data results. Some of the
rationale for poor student performance ranged from insufficient time to teach the material
in class to invalid test questions on some standards. Throughout the meeting, the
discussions did not always remain focused on the mid-quarter assessment data, but did
relate to ideas on students and teaching. Later in the interview, Frank (a biology teacher)
described these meetings as ―productive‖ and their review of the data allowed the team to
be ―all in the same boat rowing in the same direction.‖ However, he also described the
meetings as ―perfunctory‖ and wished that collaboration ―be done more thoughtfully with
more time.‖ The chemistry team, which consisted of two teachers, also deviated from the
protocol. They finished very quickly with the task of completing the questions, but
continued to talk about their classes and their individual teaching methods.
The interviews and observations from the two departments of Mesa High School
confirm Siskin‘s (1994) finding that teachers‘ responses to reform depend on the departmental
context. However, a consistent subject-matter influence was not discernible. Content
teams in both the English and science departments exhibited a tendency to go through the
motions of answering the questions on the protocol. As researchers have noted (Spillane
et al., 2002; Young, 2006), this is because teacher collaboration by itself does not
improve capacity if all teachers are novices. Sensemaking studies suggest that teachers
will superficially implement reform strategies such as DDDM without making deep
connections or transforming practices, even though they may believe that they are deeply
engaged. During interviews, teachers expressed comfort with using data to figure out their
instructional strengths and weaknesses. Yet, observations of meetings demonstrate that
when it came to drawing implications or conclusions from the data, the responses did not
move beyond the general assessment that ―my class got this number and it is low so I
need to work on improving.‖ What actually made a difference was the presence of a ―data
expert‖ or coach such as Tina who helped teachers make sense of the data and engaged
them in a discussion about instructional improvement.
Sensemaking Structures and Opportunities at Adams High School
I now turn to an examination of how teachers‘ sensemaking was structured at Adams
High School, focusing on the district sensemaking opportunities and the math and
English departments. Unlike Mesa High School, much of the data interpretation and data
use were encouraged and supported at the district level. Curriculum coordinators
organized teacher workshops or individual meetings to share data and to discuss
improvement practices. Coordinators also facilitated content-focused workshops: the
major two-week summer workshop and ongoing professional development throughout
the year. During the summer, coordinators may organize a weeklong workshop for
content levels to grade district assessments and to analyze the results (e.g., English 1-2).
Based on the results, the second week is used for professional development and sharing
of best practices. Certain teachers are then asked to demonstrate their practices to the rest
of the group. Throughout the year, curriculum coordinators organize approximately four
best practice workshops or general professional development meetings per semester. For
example, the Math Coordinator holds a best practice workshop to assist teachers who
might have questions or difficulty with teaching a particular math concept. Teachers are
asked to demonstrate their lessons and explain how they target instruction for their
students. Darcy, a math teacher, described the process:
What we do now, which we didn‘t do in the beginning, is we‘ll look at those
worst problems on the final. We take the ten worst problems on the final and we
find the teachers that did the best on those questions, with regular kids not
accelerated kids, regular kids who did the best in their classroom on that question
and we have that teacher then highlight those problems. How did you teach? So
we have a big group of teachers sitting there; how did you teach that concept that
got you to be at a seventy percent where everybody else in the district is sitting at
a thirty? What did you do different for your kids? We do what we call the best
practice.
Because one method may not work for all students, three or four best practices are
presented for teaching the concept behind a single test item.
Data use and implementing best practices were also reinforced through the three-
year mentoring program. As the Director of Evaluation and Research noted, ―So it‘s not
like the data sits and we just hold people accountable to data. We give them an
intervention and definitely bring them along.‖ The district cabinet holds monthly
meetings with staff members depending on their roles, such as assistant principals,
counselors, or deans. The district leadership team meets with principals every two weeks.
However, the sharing of data within schools and between teachers was dependent on the
departmental context.
School Level Structures
In Arlington, schools have late starts on Wednesdays that provide time for the staff to meet. One week is devoted to a faculty meeting, the second to departments, the third (usually) to content teams, and the last to committee meetings. Each school is also allotted at least 60 workshop days that may be used for local purposes on campus. These may be held on Saturdays, and teachers are paid for these professional development sessions. Typically, each department requests these workshop days to gain additional time to plan curriculum, examine data, or learn new instructional techniques.
At Adams High School, the sharing of data and data use for instructional decision
making were very much mediated by the departmental context. Every other week, there
are department meetings at the school sites conducted by the department chair. During
these meetings, department chairs act as communication liaisons and disseminate
information on scheduling or upcoming events. The department chairs have a direct link
to the curriculum coordinators. The schools at Arlington also operate on a grade-level or content team approach, with a team leader and a grade-level content team leader within each of those groupings. The grade-level team leader is responsible for ensuring that implementation of programs or practices occurs at their content level.
In contrast to the strong agenda setting by the principal at Mesa, departments at Adams were expected to set their own norms and rules. The principal relied on the department chairs to understand the data and to help teachers digest it, which meant that each department had its own patterns of data sharing and usage. The department
chairs were the ones responsible for overseeing the implementation of program
improvements based on data. There were no special training requirements to become a department chair. However, after assuming the position, chairs must attend a district-wide meeting with other subject-area department chairs and the curriculum coordinators. Additionally, they meet with all of the other department chairs at the school once a month.
Math Department: We Team All the Time
The Math department had a reputation for having a cohesive and collaborative
culture at Adams. Darcy, a math teacher, stated, ―We team all the time. We team through
our levels.‖ Abe, another math teacher, described the math department as very
collaborative and explained how they worked together:
A lot of groups develop their material together. So maybe you‘re responsible for
chapter one and I‘m responsible for chapter two so that way you don‘t have to
develop the material for the whole year. You develop pieces of it and then if I‘m
done with chapter two I give it to you and go here‘s chapter two, can you give me
chapter one? So there‘s a lot of sharing.
Content collaboration was not confined to the campus; it also occurred across school sites. As Darcy noted, this provided her with additional colleagues to reach out to:
We do so many meetings together so I know so many of the other 3-4 teachers
throughout the different schools, and if I‘m struggling with something I might call
them up and say hey how‘d you do this? And they say well if we go on the server
and you go here you‘ll find my lesson.
The math department was also unique in that its algebra and geometry classes throughout
the district administered common unit tests based on district content outcomes. Teachers
developed approximately 14 unit tests for the year, to be administered in two- to three-week cycles. These tests are intended for formative purposes—to provide
opportunities for students to master a math concept and to guide instructional re-
adjustment. Questions tend to be open-ended and are organized around content outcomes
(i.e. concepts and skills). These tests were hand-graded by teachers and analyzed as a
team. Based on the success of the math department, the 9th grade science course
(Thinking Science) also developed unit tests, which were administered last year. The
principal noted that teachers find it beneficial to meet after every assessment to discuss
their instructional practices. Building on this success, Biology and English 1-2 teams
were also in the process of developing similar assessments.
The math department members also spoke more consistently about their comfort level with sharing their individual data. This may be because they used the same assessments and discussed results more frequently. They talked to one another after each
unit test to discuss best practices regarding specific outcomes that each teacher‘s students
missed. Math department members explained their attitude:
Well, data wise, I think overall the math department would be the number one
data-driven curriculum on campus for sure if that‘s what you‘re asking. I mean
we definitely—there‘s no other curriculum that does what we‘re doing.
[Janice, Math chair, Adams]
Some teachers do not like getting bad data back. For me if I‘m getting bad data
back I‘m going what am I doing wrong and what do I need to fix? Some people
think it‘s oh, I‘m a bad teacher. They just kind of do that emotional shut down. I
think with our team they go oh man my data sucks. What are you doing? There‘s
automatically what are you doing that‘s right and let‘s do that.
[Martha, Math, Adams]
Because she previously worked on the special education team, Martha was cognizant of the different levels of collaboration that existed across departments. She related her experience:
Well I‘ve worked in two departments at this school. I‘ve worked on one team that
did not work together at all. Granted, now that‘s the special ed team and they-, I
mean that‘s a curricular area that is so fraught with just everything. I mean it‘s
just everything is there so it‘s very difficult. They were a group of people who
did not work together, did not really trust each other. I know if something goes
wrong in any one of the math teachers‘ classes they could call me and say dude
we‘re two days behind you and my baby is in the hospital, and one of us would go
and say-, they could do pre-algebra with us because you‘re right here, and Xerox
it and get a lesson together for each other. There‘s just that sense of-, we‘re all
going to either teach the kids or you‘re going to get my kids next year and it‘s our
school. So I think that‘s been the mentality of this math department.
This level of collaboration, trust, and joint work was facilitated by the common unit
assessments and a mutual sense of responsibility to one another and their students.
The English Department: Taking it Personally
In contrast to the math department, the English department was considerably less
cohesive and collaborative. In particular, English teachers more frequently mentioned reluctance to share their results. Cecile, a 10th grade English teacher, described her
team‘s process of using data during the summer workshops:
As a team, we take a look at how we scored on the PBA, the expository essay, and
again we look at our strengths and weaknesses and then we will actually write our
performance goal. Then we‘ll begin to determine what steps we‘re going to take
in order to improve next year. So I think this year our goal is to improve in
conventions and then sentence fluency. We grade according to the six traits
rubric. At the end of the year for their PBA, they do the expository essay. We
give back those six traits. We immediately go to the numbers. That‘s the area we
need to improve in, that‘s our strengths. Everything we do is really driven by the
data.
The English department chair, Carla, explained her role in helping teachers understand and apply the data to the content area:
So you don‘t really want them [to say] twenty-eight percent, forty-eight percent,
fifty-two percent. You want them to say, well I think if we worked on
organization a little bit more, if we had more examples—maybe we need to get
some more examples and show them some more models. Well, I have some. I‘ll
bring some to the next meeting. That‘s what I‘m trying to lead them to do—my
team leaders. I want them to have that kind of discussion, not just [try] to
interpret the data.
Rather than interpreting the data or critiquing existing practices, the chair emphasized the sharing of lessons and other practices.
Although the department meets monthly, the bulk of the collaboration is supposed to occur within content levels. Much of the use of data remained informal throughout the school year and was highly dependent on the members of each content team. Peter, who taught both sophomore and junior grade levels, discussed the lack of collaboration within his own content teams and attributed the lack of cohesion to leadership changes:
If you‘re doing meetings to do meetings or if you have a group of people that are
really willing to roll up their sleeves and affect change or not. Again, our
previous administrator really emphasized teaming. We would meet weekly. We
would make sure that every single teacher is on the same page and that we‘re
teaching the same things almost lock step and barrel. Right now we don‘t have
that.
Unlike other teachers, Peter belonged to both the English 3-4 and 5-6 content teams, which may have colored his view of collaboration. Because of his teaching load, he felt he had double the responsibility and remarked on the uneven distribution of pressure across content areas. Belonging to two content teams also made it difficult for him to participate consistently in either one.
In general, when asked about collaboration within their department or content
teams, English teachers tended to note the lack of sharing of data or planning. In
contrast to the math content teams, the English department chair and teachers mentioned
how sensitive teachers were about sharing data. Carla, the department chair, explained,
―It‘s very sensitive… That‘s kind of the culture I think here. You know we really want
kids-, we take it personally. We really want them to do well.‖ While appreciating the
ways in which the department chair was sensitive to teachers‘ reactions to data sharing,
English teachers such as Cecile believed that people were too protective of their data.
Cecile remarked that sharing of data would strengthen their department: ―Quite frankly I
think it should be a little more open. I understand people not wanting to share and that,
but I also think there‘s power in sharing and creating a support system in that manner.‖
The department, with the push of the district, was just beginning to actively encourage
data sharing between teachers. Recently, English teams were asked to share their student
data, although all the teachers‘ names were left off. Belinda, another English teacher,
believed the examination of each teacher‘s data had provided opportunities for further
sharing of best practices. The lack of collaboration and the differences in how data should
be shared, however, are likely to pose significant barriers to teachers‘ capacity to
honestly assess their strengths and weaknesses through data.
Conclusion
In this chapter, I demonstrated how high school teachers‘ sensemaking of DDDM
is mediated by the departmental context. The multiple sensemaking opportunities
provided at the district, school, and departmental levels suggest that each plays a different
role in how teachers make sense of data use. Districts can institute consistent
sensemaking opportunities so that teachers will have time and space to collaborate on a
regular basis. This sets the stage for teachers to use data but does not guarantee it. The
sensemaking structures at the school further signal to teachers the saliency of DDDM. A
tool such as a discussion a protocol provides teachers with questions and prompts to
begin analyzing their data. Thus, structures enable DDDM to occur while protocols and
data reports are props that teachers can use to analyze and interpret data. Furthermore,
strong agenda setting by leaders can establish professional norms on disclosing data,
which makes it more likely that teachers will share data. However, teachers lack the capacity to critically examine and use data to inform their decision making, as evidenced by the departments at Mesa High School. The lack of data literacy further suggests that the presence of data coaches will likely be necessary, especially in the beginning, in order for teachers to thoughtfully interpret data. Without strong agenda-setting efforts by school leaders or the support of data coaches, the departmental context holds greater weight in mediating the degree to which teachers buy in to DDDM and share data. Finally,
although professional norms on data use and structuring of data conversations can be
promoted at the district and school level, ultimately the use of data itself is heavily
shaped by the degree of collaboration, an emphasis on joint work, and trust that exists
within each departmental culture.
Each context adds a different layer of meaning that may align or compete with the intended construction of DDDM. The accountability system can enable DDDM by
prioritizing the use of data but it can also hinder the focus on using data for continuous
improvement when there is an over-reliance on test scores. Framing at the district level
also shapes teachers‘ construction of DDDM because leaders often have the power to
legitimize a reform strategy. They do so by creating a shared sense of purpose, devoting
time and energy to capacity building efforts, and by investing resources. These activities
create the conditions for DDDM to be relevant for teachers but alone do not guarantee
that teachers will actually use data. Departmental structures and norms on sharing data, as well as the degree of joint work, determined the extent to which teachers use data to inform
instructional practices, monitor student learning, and determine future goals. Finally,
even when these contextual factors align to promote a robust version of DDDM practices,
ignoring teacher learning and professional development will likely lead to a surface level
use of data to inform decision making processes.
Chapter 7:
Conclusion
I began this dissertation assuming that context is a multi-layered and multi-
dimensional construct that shapes how teachers conceptualize data and data use. Policy
implementation research has traditionally favored a technical approach to understanding
how policy is executed (Sabatier & Mazmanian, 1981), or focused on an adaptation
model that emphasized the local construction of policy messages (Berman &
McLaughlin, 1979; Lipsky, 1980). Realizing the limitations of such dichotomies,
researchers acknowledged the need to examine the interconnections between policy
design and implementation (Cohen, Moffitt, & Goldin, 2007; Matland, 1995;
McLaughlin, 1990). As Cohen et al. (2007) point out, there is a ―mutual dependence‖
between policy and practice; policy relies on implementers to realize goals while practice
depends on policy to frame action and offer resources. This process is dynamic; the
relationship between policy and practice is not static or linear but may vary depending
on the clarity of goals, means, and the content of relational norms (Matland, 1995). With
this knowledge in mind, I assumed that teachers‘ enactment of DDDM was likely to be
influenced by the institutional, organizational, and social contexts. Thus, what I set out to do in this dissertation was to explain how the different levels of context shaped
teachers‘ responses to DDDM.
The theoretical framework developed at the end of chapter 2 drew from
sociological theory, policy implementation research, and the literature on DDDM. This
framework guided the study and helped focus attention on the ways the different types of
context may constrain or enable teachers‘ use of data. By examining these elements, I
found that policy implementation is a distributive process where multiple types of
contexts play different roles in determining how teachers make sense of data and
ultimately, how they use data.
Teachers‘ sensemaking of data is influenced by the institutional, organizational,
and social contexts. Each layer of context shaped DDDM conceptualization differently.
The institutional context in which accountability systems are located deems data use
an important strategy for improving schools and ensuring equitable student outcomes.
The regulative pressures of accountability policies guarantee that schools will at least
comply with using student achievement data to make decisions. As institutional theory
further points out, in addition to regulative pressures, organizations such as schools are
subject to multiple, overlapping forces, including normative and cognitive pressures.
Sensemaking theory then implies that while schools may comply with policy
expectations, data use will likely remain superficial or disengaged from actual teaching
and learning processes without an attempt to address the norms and capacity for data use.
Building on sensemaking and human agency, I introduced the notion of power and
ideology as powerful forces in shaping teachers‘ construction of DDDM. Educators,
especially those in leadership positions, have the authority to shape the sensemaking
process. Research using frame analysis illuminates how meaning-making activities
undertaken by leaders can mobilize social actors around common purposes and shared
ideals. Together, this theoretical framework suggests that teachers‘ sensemaking of
DDDM is dynamically shaped by structure, agency, and power. With these assumptions
in mind, I now turn to a summary of the major findings from this dissertation.
A Distributive Theory of Policy Implementation
First, I examined teachers‘ conceptualization of DDDM, the types of data teachers
found most useful, and the criteria they applied for data use. Teachers fell into four distinct categories in
their approaches to DDDM: inquiry-centered, solution-centered, bureaucratic-centered, or
compliance-centered orientation. Most teachers at Adams High School had a solution-
centered orientation while teachers at Mesa High School exhibited a bureaucratic-
centered orientation. This affirms that approaches to DDDM may also depend on the school and district context and that the push for DDDM will depend on the existing
capacities of teachers. How teachers prioritized different types of data depended on the
accountability system. When state systems and districts defined data narrowly, teachers
were more likely to view standardized test scores as the main data source. The
broadening of data sources to evaluate school performance meant that teachers were more
likely to view different types of data as important. Across the state and district contexts,
however, teachers relied on four general criteria to determine whether data would be used
to inform instruction and learning. Of these, time was a major factor. Other factors such
as data paralysis, data overload, and questions about the quality and relevance of data
produced by the accountability system contributed to how teachers decided to use data.
Next, I examined how district leaders conceptualized DDDM using frame
analysis, focusing on the ways in which districts constructed shared understandings of
DDDM for teachers. At both school districts, leaders recognized that a compliance
orientation towards data use would not lead to authentic or sustained engagement with
data. Given that accountability systems present data as a high-stakes measuring tool,
leadership at all levels had to present data use as a learning strategy. However, the unique
history of each district meant that leaders took nuanced approaches to framing DDDM.
Mesa High School was situated in a district where leaders decided to diagnose their
problem as one of unequal student opportunity and made a moral and ethical appeal to
teachers. Their prognostic framing relied on district-wide alignment of curriculum and
assessment and frequent monitoring of student performance. The development of
common goals operationalized their vision while the use of tools such as structured data
reflection meetings encouraged collaboration on data use. To motivate staff to support
DDDM, the district presented data use as a continuous improvement tool intended to help
support teaching and learning activities. With its long history of using program level data,
Adams High School‘s district did not explicitly engage in diagnostic framing. Rather,
their emphasis on the use of individual student level data was presented as deeper
engagement with a continuous improvement cycle that pre-dated accountability policies.
The district‘s solutions revolved around system-wide alignment centered on common
curricular, assessment, and instructional practices built on expert knowledge of teachers
and curriculum coordinators. Teachers in this district did not need additional motivation
to use data for individual program improvement but needed a rationale for publicly
sharing data. This motivating frame helped teachers make paradigm shifts about who
owns data. Overall, frame analysis provided the interpretive tools to make sense of
leadership activities that helped to produce shared constructions of DDDM. These
framing tasks set the conditions for teachers to define the purpose and utility of data use.
Framing is essentially a persuading strategy intended to garner and maintain support for
causes.
Finally, I analyzed how teachers‘ sensemaking was mediated by the organization
of high schools. While leaders‘ attempts to structure sensemaking proved to be a critical
factor in how teachers bought into data use, teachers‘ definitions of what counted as data
and conceptualization of data usage were shaped by overlapping pressures, constraints,
and enablers in their immediate social context as well as the institutional field.
While teachers had similar attitudes about the importance of data use, the actual use of
data as well as sharing of data for collaboration often depended on the departmental
subculture. In the English department at Adams High School, the belief that data should
be private was an entrenched perspective fostered by the department chair. In contrast,
math department members tended to believe that data was a diagnostic tool that could
benefit the math program as a whole. Despite the fact that they belonged to the same
school, departments exhibited different expectations for collaboration and sharing of data.
At Mesa High School, the English and Science departments demonstrated similar
attitudes about the importance and use of data. With consistent collaboration meetings
devoted to data reflection and the use of a data discussion protocol, most teachers held
similar expectations for data sharing and collaboration. However, at both schools, the
capacity to analyze and apply data to improve instruction was influenced by the
assistance of experts who could coach teachers through the interpretation process.
Connections to Prior Research and Theory
Most studies investigating DDDM have focused on one element of the reform or on a single level of implementation (Kerr et al., 2006; Supovitz & Klein, 2003;
Wayman & Stringfield, 2006). These studies have drawn attention to the various factors
that constrain or facilitate data use at the district and school levels but neglect to consider
how these elements work together or conflict with one another. I extended the DDDM
literature by studying the interconnections amongst the district, school, and larger
accountability contexts. Rather than viewing change as a top-down or bottom-up dichotomy, I argued that reform is constructed and mediated at multiple levels. Therefore,
policy implementation is a distributive process where various levels need to align to
create consistent policy messages and similar means of producing intended outcomes.
This process is not self-explanatory but takes intentional framing efforts to engender
shared constructs of policy messages.
The majority of research on DDDM also takes for granted the meaning and
processes embedded within the model, rarely investigating how implementers define
DDDM. The research presented here suggests that educators, from the district level to the
teacher level, strategically and implicitly interpret the meaning of DDDM in a way that
orients their actions. DDDM can be implicitly defined as an external accountability
mechanism where educators are simply expected to gather and report data.
Conceptualizing DDDM as a continuous improvement strategy that directly informs
teaching and learning processes was a strategic sensemaking endeavor that leaders
undertook to persuade teachers that data use was relevant. Data use is also an amorphous
process fraught with slow starts and errors as educators attempt to figure out how to apply data despite having little history of professional practice to draw on. Using data is not a self-evident
action. Teachers and others have to explicitly define data use and lay out the processes
that would produce concrete actions.
The conceptualization of policy implementation in this study also adds to the
growing body of work that views the process as a function of implementers‘ capacity to
learn new ideas and practices (Coburn, 2001; Firestone et al., 1999; Spillane et al., 2002).
It complements studies that examine the construction of policy as a social activity
(Coburn, 2001; Spillane et al., 2002) where patterns of formal and informal interactions
affect the outcomes of teachers‘ sensemaking. To date, these works portray teachers‘
sensemaking as an intra-organizational function, neglecting to examine how leaders (e.g.,
district officials and policy makers) prime, trigger, or edit the sensemaking process. The
same studies also tend to present sensemaking primarily as an unconscious activity where
individuals enact policy after negotiating meaning. The research framework presented
here extends the theoretical approaches to policy implementation that concentrate on
meaning-making activities by introducing the notion of power and ideology. Policy
implementation is not just a learning process, but it is also a political act where various
actors are positioned to structure, direct, or transform teachers‘ sensemaking of reform.
Implications for Further Research
This dissertation has explored how multiple contexts influence how high school
teachers make sense of DDDM. Given the case study methodology and a focus on
teachers‘ interpretation of DDDM, the study leaves many questions on DDDM
unanswered. The data used for this study captured a snapshot of the schools and
teachers. Thus, it may be useful to further study how teachers‘ sensemaking is shaped
over time by different contexts and whether their orientations towards data use evolve.
Moreover, this study is limited because the selected schools self-identified as
data-driven and were considered high performing based on their state accountability
systems. As Diamond and Spillane (2004) found, schools considered low-performing find
data use demoralizing rather than empowering. Comparison studies between low-
performing and high-performing urban high schools may further shed light on how the
organizational and political context shapes teachers‘ perceptions and use of data.
Additionally, comparative studies will enable researchers to examine the influence of
leadership framing on teachers‘ attitudes towards data and whether such attitudes differ between
high-performing and low-performing schools.
Finally, this study also highlighted the importance of developing data literacy and
instructional decision making knowledge. However, it remains unclear what types of
professional development programs aimed at teachers would build these skills. The
research on formative assessments suggests that data gathered from day-to-day student-
teacher interactions are most likely to lead to improved student learning (Black et al.,
2005; Black & Wiliam, 2006; Heritage, 2007; Herman et al., 2006; Leahy et al., 2005;
Ruiz-Primo & Furtak, 2007). However, the use of formative assessment is not as simple
as using teacher-created tests, asking students questions, or involving students in self-
reflection. It requires a blend of expert knowledge in pedagogy and subject-matter
content to ensure that questions and feedback push student thinking. Studies focusing on
observations of teachers‘ formative practices and interviews to probe their thinking can
illuminate how teachers rely on these data to make decisions.
Implications for Education Policy
In addition to research implications, the findings from this dissertation offer
insights for education policy. Specifically, I discuss how accountability models,
professional development, and leadership development are important elements of DDDM
implementation. The section concludes with a discussion of the role of student
engagement.
Expanding Accountability Models
Districts, schools, and teachers prioritize different types of data based on how
accountability models define and use data. California, for instance, relies on the STAR
system to determine school improvement. This system is composed solely of student
achievement on standardized test scores. Given that performance evaluation is based on
test results alone, it is possible for a school to do well on tests without actually improving
teaching or student engagement. States like Arizona have a broader system of
accountability measures because dropout data and graduation rates are tied to school
performance evaluations, making it more likely that schools will attend to these
indicators. However, neither state collects value-added measures that policy makers and
researchers suggest are important to accurately assess student learning and school
progress (Dembosky et al., 2005; Loeb & Plank, 2007). Value-added accountability
models may not only enable states to keep track of improvement but also provide
the data to start identifying instructional practices and classroom strategies that improve
student learning. These are key to ensuring that schools focus on improving learning
rather than chasing numbers.
Considering that the emphasis on DDDM is not likely to abate, state systems can
also take the lead in developing a user-friendly data management infrastructure.
Currently, districts have to devote a great deal of money to buying programs or tailoring
existing programs to meet their needs. If the state can take the financial and technological
burdens off the districts, these resources can be allocated to focus on capacity building
efforts such as data literacy development. This also ensures that districts and schools are
on an even playing field when it comes to data access and management.
Finally, states can also take the lead in developing reliable and valid benchmark
assessments that are used for formative rather than evaluative purposes. School systems
are creating benchmarks that are intended to monitor students‘ progress towards state
standards. Some districts such as Arlington may hire their own psychometrician to ensure
that tests are reliable and valid. However, given the lack of assessment knowledge,
districts buy externally developed tests that may or may not be correlated to the state
assessment. Systems that cannot afford to buy tests end up relying on banked items from
data management programs that may not be aligned to the state‘s standards. To enable
schools to begin to monitor student progress throughout the year, states can provide
formative benchmarking tools to all districts.
The Need for Professional Development
Because of the focus on DDDM, teachers are asked to interpret and analyze data.
As the literature has indicated, some teachers have difficulty knowing what questions to
ask, how to read the numbers, or how to understand the limitations of different data sources. With
this lack of expertise, teachers may end up making faulty assumptions about their
students or instructional practices. It is also not clear what capacities district and school
leaders have for examining and using data. They have access to a complex array of data
but access does not guarantee that thoughtful conclusions will be drawn from data.
Therefore, professional development that emphasizes data literacy is a new requirement for educators in the age of accountability.
Research on professional development indicates that sustained opportunities to
understand and apply new knowledge lead to deeper engagement and expertise (Garet et
al., 2001; Wilson & Berne, 1999). Providing consistently structured time for teachers to
develop new skills on data literacy is a necessary condition for such learning to occur but
it alone does not guarantee that teachers will gain such knowledge. Throwing a group of
novices together without guidance is likely to create surface level use of data. In this type
of situation, teacher subcultures are likely to play a greater role in how teachers make
sense of data. Data coaches with instructional knowledge are important for assisting teachers. Additionally, teachers‘ use of data should be broadly defined to include
formative assessments. As Heritage (2007) stated, formative assessment encompasses not only teacher-created tests or quizzes but also student responses to teacher
questions and student-student discussions. While much of this is likely occurring,
teachers need opportunities to reflect on these types of data.
Teacher capacity for DDDM can also be developed beyond the school walls.
Teacher education programs are important avenues of learning that could provide
teachers with the basic tools to critically interpret and use data, especially assessment
knowledge. Moreover, they could also be sites of simulated inquiry activities that help pre-service teachers formulate questions, draw conclusions from a wide array of data, and
question assumptions. The focus would not just be on understanding measurement
concepts or the limitations of different types of data, but also on institutionalizing habits of
inquiry.
Developing a Culture of Continuous Improvement
How DDDM is defined shapes how DDDM is implemented. If the use of data is
presented as an accountability mandate, then individuals and groups are likely to engage
in DDDM as a strategy to avoid sanctions. The conceptualization of DDDM as a
continuous improvement tool that benefits teachers and students requires a conscious
framing of the strategy. Leaders need to be deliberate about how they frame reform, as
this shapes how teachers determine whether new ideas are salient and credible. Leading is
not simply an act of directing and managing. As frame analysis indicates, leading is also
a political position that requires attention to motivating people to act towards a common
goal. District and school leaders have the authority to make policy decisions and allocate
resources to promote DDDM as a relevant strategy. More importantly, their position
affords them the opportunity to help teachers make sense of data as a learning tool.
District officials and school leadership teams thus have a great deal of power in shaping
the perception of DDDM.
Reform leaders also have to acknowledge that teachers‘ responses to reform are
not monolithic and that different levers motivate people. Teachers may resist DDDM for a wide array of reasons. Some may simply be dismissive of the importance of using data, while others may be fearful of change. A teacher who
believes that DDDM is a waste of time is not likely to respond to persuasive means but
may react to accountability incentives or pressures such as merit pay. Other teachers, who
are overwhelmed with trying something new, require assistance and an emphasis on
incremental adjustment. Finally, teachers may be highly supportive of reform efforts but
may lack the tools and expertise to effectively transform practices. These teachers also
need opportunities to learn. The challenge for leaders is to use incentives, pressures, or
persuasive tactics accordingly.
High schools should also consider their course restructuring efforts. Having high
expectations and academic rigor is only the first step in ensuring that students have access
to a quality education. However, when students are tracked into courses by ability levels,
this sends a mixed message to teachers about the usefulness of data and can undermine
teacher collaboration. Large comprehensive high schools have the difficult task of trying
to figure out a way to meet the wide array of needs that students bring with them while
making the most out of their finite set of resources. Many target their resources by
clustering same ability groups of students throughout their high school career. Doing so
reaffirms deficit frames of students and can also send a message that ability is immutable.
While de-tracking classes does not guarantee that teachers will believe that all students
are capable of meeting high standards, it puts teachers on common ground and may
promote sharing of practices within and across content areas.
The Need for Student Engagement
At the core of any educational reform is the desire to improve learning outcomes
for students. Yet, much of the literature on DDDM neglects to investigate how students
may take part in the DDDM process. As the literature on formative assessments suggests,
students have an important role to play in DDDM. Certainly, their engagement in
classroom discussions and peer learning is part of helping teachers understand what and
how students are learning. In high schools, this is particularly relevant as students are
cognitively sophisticated enough to engage in data use and self-reflection. Additionally,
data such as course evaluations and student participation surveys provide teachers with
direct information on what students are thinking. Accountability is ultimately about being
responsive to student needs; thus, it is important for educators to engage students in their
own growth process.
Final Remarks:
The Need for Learning-Driven Policy Making
Like most reforms, the implementation of DDDM is a dynamic process mediated
by how educators at various levels make sense of and enact data use. For reforms to
penetrate classroom walls, a complicated mixture of resources, capacity building, and
paradigm shifts may need to be emphasized. In pursuit of educational excellence and
equity, policy makers must not forget that schools are ultimately social systems where
people‘s interactions, their pre-existing knowledge, and assumptions come into play
when new policies are introduced. For new reforms to impact student learning, sustained
teacher learning and professional development must drive policy formulation and
implementation. Finally, educators must remember that they are ultimately accountable to
students and not to a system of mandates.
References
Adler, P.A. & Adler, P. (1994). Observational techniques. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp.377-392). Thousand Oaks: Sage.
AERA/APA/NCME (1999). Standards for educational and psychological testing.
Washington D.C.: Author.
American Federation of Teachers, National Council on Measurement in Education, and
National Education Association. (1990). Standards for teacher competence in
educational assessment of students. Educational Measurement: Issues and Practice, 9(4), 30-32.
Armstrong, J., & Anthes, K. (2001). Identifying the factors, conditions, and
policies that support schools‟ use of data for decision making and school
improvement: Summary of Findings. Denver, CO: Education Commission
of the States.
Bailey, B. (2002). The impact of mandated change on teachers. In N. Basica and A.
Hargreaves (Eds.), The sharp edge of change: Teaching, leading, and the realities
of reform (pp.112-128). London: Falmer Press.
Bascia, N. & Hargreaves, A. (2000). Teaching and leading on the sharp edge of change.
In N. Bascia and A. Hargreaves (Eds.), The sharp edge of change: Teaching,
leading, and the realities of reform (pp. 3-26). London: Falmer Press.
Benford, R.D. & Snow, D.A. (2000). Framing processes and social movements: An overview and assessment. Annual Review of Sociology, 26, 11-39.
Benford, R.D. (1997). An insider's critique of the social movement framing perspective. Sociological Inquiry, 67(4), 409-430.
Bensimon, E.M. (2005). Equality as a fact, equality as a result: A matter of institutional
accountability. Occasional Paper. Washington, DC: American Council on
Education, Center for Advancement of Racial and Ethnic Equity.
Berman, P. & McLaughlin, M.W. (1978). Federal programs supporting educational change,
Vol. VIII. Santa Monica, CA: RAND Corporation.
Bernhardt, V.L. (1998). Multiple measures. Invited Monograph No. 4. CA: California
Association for Supervision and Curriculum Development (CASCD).
Bernhardt, V.L. (2004). Data analysis for continuous improvements, 2nd edition. NY: Eye on Education.
Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through
classroom assessment. Phi Delta Kappan 80(2), 139-144.
Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J. R.
Gardner,(Ed.), Assessment and learning. London: Sage Publications.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for
learning. England: Open University Press.
Booher-Jennings, J. (2005). Below the bubble: ―Educational triage‖ and the Texas
accountability system. American Educational Research Journal, 42(2), 231-268.
Brunner, C., Fasca, C., Heinze, J., Hone, M., Light, D., Mandinach, E., & Wexler, D.
(2005). Linking data and learning: The Grow Network study. Special Issue on
Transforming data into knowledge: Applications of data-based decision making to
improve instructional practice. Journal of Education for Students Placed at Risk, 10(3).
Campbell, J.L. (2005). Where do we stand? Common mechanisms in organizations and social movement research. In G.F. Davis, D. McAdam, W.R. Scott, & M.N. Zald (Eds.), Social movements and organization theory (pp.41-72). NY: Cambridge University Press.
Celio, M.B. & Harvey, J. (2005). Buried treasure: Developing a management guide from
mountains of school data. Seattle, WA: Center for Reinventing Public Education.
Chen, E., Heritage, M. & Lee, J. (2005). Identifying and monitoring students‘ learning
needs with technology. Journal of Education for Students Placed at Risk, 10, 309-
332.
Coburn, C. E., Honig, M. I., & Stein, M. K. (in press). What is the evidence on districts‘
use of evidence? In J. Bransford, L. Gomez, D. Lam, & N. Vye (Eds.), Research
and practice: Towards a reconciliation. Cambridge: Harvard Educational Press.
Coburn, C.E. & Talbert, J.E. (2006). Conceptions of evidence use in school districts:
Mapping the terrain. American Journal of Education, 112 (4), XX-XX.
Coburn, C.E. (2001). Collective sense-making about reading: How teachers mediate
reading policy in their professional communities. Educational Evaluation and
Policy Analysis, 23(2), 145-170.
Coburn, C.E. (2004). Beyond decoupling: Rethinking the relationship between the
institutional environment and the classroom. Sociology of Education, 77(3), 211-
244.
Cohen, D.K., Moffitt, S.L., & Goldin, S. (2007). Policy and practice. In S. Fuhrman, D.
Cohen, & F. Mosher (Eds.), The state of education policy research (pp.63-85).
Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Colby, S., Smith, K., & Shelton, J. (2005). Expanding the supply of high quality public
schools. San Francisco, CA: The Bridgespan group.
Creswell, J.W. & Plano Clark, V.L. (2007). Designing and conducting mixed
methods research. Thousand Oaks, CA: Sage Publications, Inc.
Creswell, J.W. (1998). Qualitative inquiry and research design: Choosing among five
traditions. Thousand Oaks, CA: Sage.
Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches, 2nd edition. Thousand Oaks, CA: Sage Publications.
Cromey, A. (2000). Using student assessment data: What can we learn from schools? Oak Brook, IL: North Central Regional Educational Laboratory.
Crotty, M. (1998). The foundations of social science research. Thousand Oaks, CA:
Sage.
Darling-Hammond, L. & McLaughlin, M.W. (1995). Policies that support professional
development in an era of reform. Phi Delta Kappan, 76(8), 597-604.
Datnow, A. & Castellano, M. (2000). Teachers‘ responses to Success for All: How
beliefs, experiences, and adaptations shape implementation. American
Educational Research Journal, 37(3), 775-799.
Datnow, A. (1995). Making sense of teacher agency: Linking theory to school reform
policy. Unpublished doctoral dissertation, University of California, Los Angeles.
Datnow, A., Hubbard, L. & Mehan, H. (2002). Extending educational reform: From one
to many. New York & London: Routledge Falmer.
Datnow, A., Park, V., Wohlstetter, P. (2007). Achieving with data: How high
performance driven school systems use data to improve instruction for elementary
school students. Report for NewSchools Venture Fund. Los Angeles: Center on
Educational Governance.
DeBray, E.H., McDermott, K.A., Wohlstetter, P. (2005). Introduction to the
special issues on federalism reconsidered: The case of the No Child Left
Behind Act. Peabody Journal of Education, 80(2), 1-18.
Dembosky, J. W., Pane, J. F., Barney, H., Christina, R. (2005). Data driven
decisionmaking in Southwestern Pennsylvania school districts. RAND
Corporation. Retrieved November 4, 2007 from
http://www.rand.org/pubs/working_papers/2006/RAND_WR326.pdf
Diamond, J.B. & Cooper, K. (2007). The uses of testing data in urban elementary
schools: Some lessons from Chicago. In P.A. Moss (Ed.), Evidence and decision
making (National Society for the Study of Education Yearbook, Vol. 106, Issue 1,
pp. 241-263). Chicago: National Society for the Study of Education. Distributed
by Blackwell Publishing.
Diamond, J.B. & Spillane, J.P. (2004). High-stakes accountability in urban elementary
schools: Challenging or reproducing inequality? Teachers College Record,
106(6), 1145-1176.
DiMaggio, P. & Powell, W.W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160.
Earl, L., & Katz, S. (2002). Leading schools in a data rich world. In K. Leithwood, P. Hallinger, G. Furman, P. Gronn, J. MacBeath, B. Mulford & K. Riley (Eds.), The second international handbook of educational leadership and administration. Dordrecht, Netherlands: Kluwer.
Earl, L., & Katz, S. (2006). Leading schools in a data-rich world: Harnessing data for
school improvement. Thousand Oaks, CA: Corwin Press.
Editorial Projects in Education. (2007). Diplomas count 2007. Ready for what? Preparing
students for college, careers, and life after high school. Education Week, 26(40),
40-41.
EdSource (2005, May). The movement to transform high school, forum report. CA:
Author.
Elmore, R. (2003, November). A plea for strong practice. Educational Leadership,
62(3), 6-10. Retrieved on May, 2006 from http://www.oaesa.org/k-8/library.
Elmore, R.F. (1979-1980). Backward mapping: Implementation research and policy
decisions. Political Science Quarterly 94(4), 601-616.
Feldman, J. & Tung, R. (2001). Whole school reform: How schools use the data-based
inquiry and decision making process. Paper presented at the 82nd annual meeting of the American Educational Research Association in Seattle, WA.
Finn, C.E. & Hess, F.M. (2004). On leaving no child behind. Public Interest, 157, 35-36.
Firestone, W. & Louis, K.S. (1999). Schools as cultures. In J. Murphy & K.S. Louis
(Eds.), Handbook of research on educational administration (2nd ed., pp.297-322). San Francisco, CA: Jossey-Bass.
Firestone, W. A., & Gonzalez, R. (2007). Culture and processes affecting data use in
school districts. In P.A. Moss (Ed.), Evidence and decision making ( pp.132-154).
Chicago: National Society for the Study of Education. Distributed by Blackwell
Publishing.
Firestone, W.A., Fitz, J. & Broadfoot, P. (1999). Power, learning, and legitimation:
Assessment implementation across levels in the United States and the United
Kingdom. American Educational Research Journal, 36(4), 759-793.
Firestone, W.A., Fuhrman, S.H., Kirst, M.W. (1991). State educational reform since
1983: Appraisal and the future. Educational Policy, 3(3), 233-250.
Fitz, J. (1994). Implementation research and education policy: Practice and prospects.
British Journal of Educational Studies, 42(1), 53-69.
Friedland, R. & Alford, R.R. (1991). Bringing society back in: Symbols, practices, and
institutional contradictions. In W.W. Powell and P.J. DiMaggio (Eds.), The new
institutionalism in organizational analysis, pp.232-263. Chicago: University of
Chicago Press.
Fuhrman, S.H., Goertz, M.E., & Weinbaum, E.H. (2007). Educational governance in the
United States: Where are we? How did we get here? Why should we care? In
S. Fuhrman, D. Cohen, & F. Mosher (Eds.), The state of education policy research
(pp.41-61). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Fullan, M. & Pomfret, A. (1977). Research on curriculum and instruction
implementation. Review of Educational Research, 47(1), 335-397.
Fullan, M. (2000). The return of large-scale reform. Journal of Educational Change,
1(1), 5-28.
Fullan, M., Rolheiser, C., Mascall, B. & Edge, K. (2001). Accomplishing large scale
reform: A tri-level proposition. Toronto: Ontario Institute for Studies in
Education.
Fusarelli, L.D. (2004). The potential impact of the No Child Left Behind Act on equity
and diversity in American education. Educational Policy, 18(1), 71-94.
Garet, M.S., Porter, A.C., Desimone, L., Birman, B.F., Yoon, K.S. (2001). What makes
professional development effective? Results from a national sample of teachers.
American Educational Research Journal, 38 (4), 915-945.
Garfield, J. (2003). Assessing statistical reasoning. Statistics Education Research
Journal, 2(1), 22-38.
Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we know about using
research findings: Implications for improving special education practice.
Journal of Learning Disabilities, 30(5), 466-478.
Goffman, E. (1967). The presentation of self in everyday life. NY: Doubleday.
Goffman, E. (1974). Frame analysis: An essay on the organization of experience.
Cambridge: Harvard University Press.
Grossman, P.L. & Stodolsky, S.S. (1995). Content as context: The role of school subjects
in secondary school teaching. Educational Researcher, 24(8), pp.5-11.
Halverson, R., Grigg, J., Prichett, R., & Thomas, C. (2005). The new instructional
leadership: Creating data-driven instructional systems in schools. Wisconsin
Center for Education Research Working Paper No. 2005-9. Madison, WI:
Authors.
Hamilton, L.S., Stecher, B.M., Marsh, J.A., McCombs, J.S., Robyn, A., Russell, J.L.,
Naftel, S., Barney, H. (2007). Standards-based accountability under No Child
Left Behind: Experiences of teachers and administrators in three states. Santa
Monica, CA: The RAND Corporation.
Hargreaves, A. (1994). Changing teachers, changing times: Teachers‟ work and culture
in the Postmodern age. New York: Teachers College Press.
Harvey, J. & Housman, N. (2004). Crisis or possibility? Conversations about the
American high school. Washington, DC: National High School Alliance.
Hatch, M.J. (1997). Organization theory: Modern, symbolic and postmodern
perspectives. NY: Oxford University Press Inc.
Hemsley-Brown, J. & Sharp, C. (2003). The use of research to improve professional
practice: A systematic review of the literature. Oxford Review of Education,
29(4), 449-470.
Heritage, M. (2007, October). Formative assessment: What do teachers need to know and
do? Phi Delta Kappan, 140-146.
Herman, J. & Gribbons, B. (2001). Lessons learned in using data to support school
inquiry and continuous improvement: Final report to the Stuart Foundation. LA,
CA: Center for the Study of Evaluation, University of California, Los Angeles.
Herman, J.L., Osmundson, E., Ayala, C., Schneider, S., Timms, M. (2006). The nature
and impact of teachers‟ formative assessment practices, technical Report 703. Los
Angeles, CA: Center for the Study of Evaluation.
Hodder, I. (2000). The interpretation of documents and material culture. In N. K. Denzin
& Y. S. Lincoln (Eds.), Handbook of qualitative research, 2nd edition (pp. 703-716). Thousand Oaks, CA: Sage.
Honig, M. (2006). Complexity and policy implementation: Challenges and opportunities
for the field. In M. Honig (Ed.), New directions in education policy
implementation: Confronting complexity (pp.1-23). Albany, NY: State University
of New York Press.
House, E. (1996). A framework for appraising educational reforms. Educational
Researcher, 25(7), 6-14.
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the ―data driven‖ mantra:
Different conceptions of data-driven decision making. In P.A. Moss (Ed.),
Evidence and decision making (National Society for the Study of Education
Yearbook, Vol. 106, Issue 1, pp.105-131). Chicago: National Society for the
Study of Education. Distributed by Blackwell Publishing.
Ingram, D. Louis, K.S., Schroeder, R.G. (2004). Accountability policies and teacher
decision-making: Barriers to the use of data to improve practice. Teachers
College Record, 106(6), 1258-1287.
Jepperson, R.L. (1991). Institutions, institutional effects, and institutionalization. In
W.W. Powell and P.J. DiMaggio (Eds.), The new institutionalism in
organizational analysis, (pp.143-163). Chicago: University of Chicago.
Johnson, J. H. (2000). Data-driven school improvement. Journal of School Improvement,
1(1), XX.
Kerr, K.A., Marsh, J.A., Ikemoto, G.S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons
from three urban districts. American Journal of Education, 112(3), 496-520.
Labaree, D. (2004). The trouble with ed schools. New Haven, CT: Yale University Press.
Lachat, M.A. (2001). Data-driven high school reform: The Breaking Ranks model. Providence, RI: Northeast and Islands Regional Educational Laboratory at Brown
University.
Lachat, M.A. & Smith, S. (2005). Practices that support data use in urban high schools.
Special Issue on Transforming data into knowledge: Applications of data-based
decision making to improve instructional practice. Journal of Education for Students Placed at Risk, 10(3).
Lafee, S. (2002). Data-driven districts. School Administrator, 59(11), 6-7, 9-10, 12, 14-
15.
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment:
Minute by minute, day by day. Educational Leadership, 63(3), 18-24. Retrieved
September 12, 2007 from http://www2.esu3.org/esu3/workshopDocs/Article.pdf
Light, D., Honey, M., Heinze, J., Brunner, C., Wexler, D., Mandinach, E., & Fasca, C.
(2005). Linking data and learning: The Grow Network study. New York:
Educational Development Center, Inc.
Lincoln, Y.S. (2002). On the nature of qualitative evidence. A paper presented for the
Annual Meeting of the Association for the Study of Higher Education.
Sacramento, CA.
Lincoln, Y.S., & Guba, E.G. (1995). Naturalistic inquiry. Thousand Oaks: Sage
Publications.
Lipman, P. (1997). Restructuring in context: A case study of teacher participation and the
dynamics of ideology, race, and power. American Educational Research Journal,
34(1), 3-37.
Little, J.W. (1999). Teachers‟ professional development in the context of high school
reform: Findings from a three-year study of restructuring schools. Washington,
DC: National Partnership for Excellence and Accountability in Teaching.
Loeb, S., & Plank, D. (2007). Continuous improvement in California education: Data
systems and policy learning. Berkeley, CA: Policy Analysis for California
Education (PACE). Retrieved October 18, 2007 from
http://pace.berkeley.edu/reports/Data_Systems_October_2007.pdf
Louis, K.S. (1999). Making meaning of the relationship between research and policy: An
epilogue. Educational Policy, 13(1), 199-212.
Louis, K.S., & Dentler, R.(1988). Knowledge use and school improvement. Curriculum
Enquiry, 18(1), 33-62.
Malen, B. (2006). Revisiting policy implementation as a political phenomenon: The case
of reconstitution policies. In M. Honig (Ed.), New directions in education policy
implementation: Confronting complexity (pp.83-104). Albany, NY: State
University of New York Press.
Mandinach, E., Honey, M. Light, D., Heinze, J., & Rivas, L. (2005). Technology-based
tools that facilitate data-driven decision making. Paper presented at the annual
meeting of the American Educational Research Association, Montreal.
Mandinach, E.B. Honey, M. & Light, D. (2006). A theoretical framework for data-driven
decision making. Paper presented at the annual meeting of American Educational
Research Association, San Francisco.
March, J. (1994). A primer on decision making. How decisions happen. NY: Free Press.
Marsh, J.A., Kerr, K.A., Ikemoto, G.S., Darilek, H., Suttorp, M., Zimmer, R.W., &
Barney, H. (2005). The role of districts in fostering instructional improvement:
Lessons from three urban districts partnered with the Institute for Learning.
Santa Monica, CA: RAND Corporation.
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision
making in education. Santa Monica, CA: RAND Corporation.
Martinez, M., & Bray, J. (2002). All over the map: State policies to improve the high
school. Washington, DC: The Institute for Educational Leadership.
Mason, S. (2002). Turning data into knowledge: Lessons from six Milwaukee public
schools. A paper presented at the annual conference of AERA, New Orleans,
April 2002.
Matland, R.E. (1995). Synthesizing the implementation literature: The ambiguity-conflict
model of policy implementation. Journal of Public Administration Research and
Theory, 5(2), 145-174.
McDonnell, L.M. & Elmore, R.F. (1987). Getting the job done: Alternative policy
instruments. Educational Evaluation and Policy Analysis, 9(2), 133-152.
McLaughlin, M.W. (1987). Learning from experience: Lessons from policy
implementation. Educational Evaluation and Policy Analysis, 9(2), 171-178.
McLaughlin, M.W. (1990). The Rand Change Agent Study revisited: Macro perspectives
and micro realities. Educational Researcher, 19(9), 11-16.
McNeil, L. (1986). Contradictions of control: School structure and school knowledge.
New York: Routledge.
Merriam, S.B. (1998). Qualitative research and case study applications in education.
San Francisco, CA: Jossey-Bass.
Meyer, J. & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth
and ceremony. American Journal of Sociology, 83, 340-363.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis (2nd ed.).
Thousand Oaks, CA: Sage Publications, Inc.
Moore, M., Goertz, M., & Hartle, T. (1983). Interaction of federal and state programs.
Education and Urban Society, 15(3), 453-478.
Murphy, J., & Adams, J.E. (1998). Reforming America's schools 1980-2000. Journal of
Educational Administration, 30(5), 426-444.
National Association of Secondary School Principals (2004). Breaking ranks II:
Strategies for leading high school reform. Reston, VA: National Association of
Secondary School Principals.
National Center for Education Statistics. (2007). Findings from the condition of
education. Washington, DC: U.S. Department of Education.
National Center for Education Statistics. (2007). Mapping 2005 proficiency standards
onto the NAEP scales (NCES-2007-482). U.S. Department of Education.
Washington, DC: Author.
National Commission on Excellence in Education (1983). A nation at risk: The
imperative for educational reform. Washington, DC: Government Printing Office.
Newmann, F.M. (1992). Student engagement and achievement in American secondary
schools. New York, NY: Teachers College Press.
No Child Left Behind Act of 2001. Public Law 107-110.
O'Day, J. (2002). Complexity, accountability, and school improvement. Harvard
Educational Review, 72(3), 293-329.
Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven: Yale
University Press.
Oakes, J., Wells, A. S., Jones, M. & Datnow, A. (1997). Detracking: The social
construction of ability, cultural politics, and resistance to reform. Teachers
College Record, 98(3), 482-510.
Oliver, C. (1991). Strategic responses to institutional pressures. Academy of Management
Review, 16, 145-179.
Olsen, B. & Kirtman, L. (2002). Teacher as mediator of reform: An examination of
teacher practice in 36 restructuring schools. Teachers College Record, 104(2),
301-324.
Petrides, L., & Nodine, T. (2005). Anatomy of school system improvement: Performance-
driven practices in urban school districts. San Francisco, CA: NewSchools
Venture Fund.
Powell, W.W., & DiMaggio, P.J. (Eds.). (1991). The new institutionalism in
organizational analysis. Chicago: University of Chicago Press.
Rhine, S. (1998). The role of research and teachers' knowledge base in professional
development. Educational Researcher, 27(5), 27-31.
Rowan, B., & Miskel, C.C. (1999). Institutional theory and the study of educational
organizations. In J. Murphy and K.S. Louis (Eds.), Handbook of research on
educational administration (2nd ed., pp. 359-384). San Francisco: Jossey-Bass.
Ruiz-Primo, M.A., & Furtak, E.M. (2007). Exploring teachers' informal formative
assessment practices and students' understanding in the context of scientific
inquiry. Journal of Research in Science Teaching, 44(1), 57-84. Retrieved
September 12, 2007 from http://www3.interscience.wiley.com/cgi-
bin/fulltext/113510306/PDFSTART
Rumberger, R.W. (2004). Why students drop out of school. In G. Orfield (Ed.),
Dropouts in America: Confronting the graduation rate crisis (pp. 131-155).
Cambridge, MA: Harvard Education Press.
Sabatier, P., & Mazmanian, D. (1981). The implementation of public policy: A
framework of analysis. In D. Mazmanian & P. Sabatier (Eds.), Effective policy
implementation (pp. 3-36). Lexington, MA: Lexington Books.
Sanger, J. (1996). The compleat observer? A field guide to observation (pp. 1-38).
London: Falmer Press.
Schield, M. (2005). Information literacy, statistical literacy, and data literacy. Retrieved
from the International Association for Social Science Information Services and
Technology at www.iassistdata.org.
Scott, W.R. (2001). Institutions and organizations (2nd ed.). Thousand Oaks, CA:
Sage Publications, Inc.
Shirley, D., & Hargreaves, A. (2006, October). Data-driven to distraction: Why American
educators need a reform alternative—where they might look to find it. Education
Week. Retrieved September 17, 2007 from
http://www.edweek.org/ew/articles/2006/10/04/06/hargreaves.h26.html
Siskin, L.S. (1994). Realms of knowledge: Academic departments in secondary schools.
Washington, D.C.: Falmer Press.
Skrla, L., & Scheurich, J. (2001). Displacing deficit thinking. Education and Urban
Society, 33(3), 235-259.
Slavin, R.E. (2002). Evidence-based education policies: Transforming educational
practice and research. Educational Researcher, 31(7), 15-21.
Smith, P. (1998). The new American cultural sociology: An introduction (pp. 1-14).
Cambridge: Cambridge University Press.
Smith, M.S., & O'Day, J. (1993). Systemic school reform. In S. Fuhrman & B. Malen
(Eds.), The politics of curriculum and testing: Politics of Education Association
yearbook (pp. 233-267). London: Taylor & Francis.
Snyder, J., Bolin, F., & Zumwalt, K. (1992). Curriculum implementation. In P. Jackson (Ed.),
Handbook of research on curriculum (pp. 402-435). New York: Macmillan.
Spillane, J.P. (1998). State policy and the non-monolithic nature of the local school
district: Organizational and professional considerations. American Educational
Research Journal, 35(2), 33-63.
Spillane, J.P., Reiser, B.J. & Reimer, T. (2002). Policy implementation and cognition:
Reframing and refocusing implementation research. Review of Educational
Research, 72(3), 387-431.
Stake, R. (2000). Case studies. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of
qualitative research (2nd ed., pp. 435-454). Thousand Oaks, CA: Sage
Publications.
Steen, L.A. (2001). Mathematics and democracy: The case for quantitative literacy.
Washington, DC: National Council on Education and the Disciplines.
Stoll, L., Bolam, R., McMahon, A., Wallace, M., & Thomas, S. (2006). Professional
learning communities: A review of the literature. Journal of Educational Change,
7, 221-258.
Supovitz, J.A. & Klein, V. (2003). Mapping a course for improved student learning:
How innovative schools systematically use student performance data to guide
improvement. Philadelphia, PA: Consortium for Policy Research in Education
(CPRE).
Talbert, J.E., & McLaughlin, M.W. (1996). Teacher professionalism in local school
contexts. In I.F. Goodson & A. Hargreaves (Eds.), Teachers' professional lives.
London: Falmer Press.
Thornton, B., & Perreault, G. (2002). Becoming a data-based leader: An introduction.
National Association of Secondary School Principals Bulletin, 86(630), 86-96.
Togneri, W., & Anderson, S. (2003). Beyond islands of excellence: What districts can
do to improve instruction and achievement in all schools. Washington, DC:
Learning First Alliance.
Tyack, D., & Cuban, L. (1995). Tinkering toward utopia: A century of public school
reform. Cambridge, MA: Harvard University Press.
Tyack, D.B. (1974). The one best system: A history of American urban education.
Cambridge, MA: Harvard University Press.
U.S. Department of Education. (2005). Retrieved November 1, 2005 from
http://www.ed.gov/nclb/accountability/achieve/nclb-hisp.html
Valencia, R. (1997). Conceptualizing the notion of deficit thinking. In R.R. Valencia
(Ed.), The evolution of deficit thinking. London: Falmer Press.
Walker, E.M. (2004). The impact of state policies and actions on local implementation
efforts: A study of whole school reform in New Jersey. Educational Policy,
18(2), 338-363.
Watson, J., & Callingham, R. (2003). Statistical literacy: A complex hierarchical
construct. Statistics Education Research Journal, 2(2), 3-46.
Wayman, J., & Stringfield, S. (2006). Using computer systems to involve teachers in data
use for instructional improvement. American Journal of Education, 112(4),
463-468.
Wayman, J., Stringfield, S., & Yakimowski, M. (2004). Software enabling school
improvement through analysis of student data (Report No. 67). Baltimore, MD:
Center for Research on the Education of Students Placed At Risk.
Wayman, J.C., Cho, V., & Johnston, M.T. (2007). The data-informed district: A district-
wide evaluation of data use in the Natrona County School District. Austin: The
University of Texas.
Webb, N.L. (2002). Assessment literacy in a standards-based education setting
(Working Paper No. 2002-4). Madison, WI: Wisconsin Center for Education
Research.
Weber, K., & Glynn, M.A. (2006). Making sense with institutions: Context, thought and
action in Karl Weick's theory. Organization Studies, 27(11), 1639-1660.
Weick, K. (1995). Sensemaking in organizations. Thousand Oaks: Sage Publications.
Weick, K.E., Sutcliffe, K.M., & Obstfeld, D. (2005). Organizing and the process of
sensemaking. Organization Science, 16(4), 409-421.
Wilkins, J.L. (2000). Preparing for the 21st century: The status of quantitative literacy in
the United States. School Science and Mathematics, 100(8), 405-418.
Wilson, M. (2005). Constructing measures: An item response modeling approach.
Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Wilson, S. & Berne, J. (1999). Teacher learning and the acquisition of professional
knowledge: An examination of research on contemporary professional
development. Review of Research in Education, 24, 173-209.
Woody, E.L. (2004). Voices from the field: Educators respond to accountability.
Berkeley, CA: Policy Analysis for California Education.
Yin, R. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA:
Sage Publications.
Young, V.M. (2006). Teachers' use of data: Loose coupling, agenda setting, and team
norms. American Journal of Education, 112(4), 521-548.
Appendix A: Interview Protocol
Participant's Name: __________________________ Date: __________________
Position: _______________________________________________________________
[Introduction: Begin with a few minutes of explaining the study, who you are, and the purpose of the study.
Explain that while the interview will be taped, their responses are strictly confidential. Let them know if
there is something they would like to say off tape, they can inform you and the recorder will be shut off for
their comment. Also, let them know the approximate length of the interview and ask if they have any
specific questions before beginning.]
I. Background/Context
1. What subject & grade do you teach? How long have you been at this school? What is your prior
experience and training?
2. Could you tell us a little about the students and community that you serve? How would you
describe the general culture of your school/system?
3. When/why did the use of data become an important part of your school improvement process?
II. Data-Driven Decision Making Structures
A. Goals
1. School/Classroom Level-
Does your school have performance goals? Do teachers establish their own goals for their
classes?
How were these goals established?
How do you know when the goals have been met? Probe: benchmarks, indicators.
How frequently do you re-assess the goals?
B. Data Sources and Management
1. What types of data do you collect and examine most frequently? How often?
2. What types of data do you find most useful when making decisions about instruction and
curriculum? Why?
3. What kinds of data do classroom teachers have access to and collect? (e.g., state/district
assessments; school, classroom, standardized, teacher-created, etc.)
4. When do teachers receive student achievement data (e.g. from interim assessments, state
assessments)? In what format (e.g., spreadsheets, handouts, online)?
5. How often do you get data from the district? What types of data do they provide?
C. Structural Support
1. What types of support are most important in helping teachers look at and use data for instructional
improvement?
2. What meetings take place around data? What data do teachers bring to meetings and what
outcomes are expected from them or what actions are they responsible for?
3. Does your school have informal and/or formal grade/dept. groups, small learning communities, or
other collaboration opportunities for teachers to talk about instruction and student achievement?
4. Is time allocated for collaboration among teachers, with respect to analyzing data and developing
action plans? How often?
5. Is time allocated for implementing action plans (e.g. re-teaching particular standards, working
with struggling students)? How do you know if they are successful? Is any additional support
provided in developing and implementing action plans?
6. Have you attended any training on data-driven decision making? Has your district/system
sponsored professional development for schools that focus on using data to make decisions? In
what specific areas? Is the professional development offered voluntary or mandatory? School or
grade level specific training?
7. If you have attended trainings on data driven decision making, which ones did you find useful? If
you think they were not helpful, why? Who determined which PD to provide and who would
attend?
8. Have you had access to other opportunities to learn about the use of data to make decisions?
(Probe: other possible resources such as books, classes at the University, etc.)
9. Do you have a person on staff to support you in the use of data? If so, what type of support is
provided to you?
10. Does your school recognize or reward teachers for using performance data to inform decisions?
Are schools recognized publicly or rewarded in other ways for using performance data to inform
decisions?
11. What further types of training/support do you think will be needed to help teachers use data for
instructional improvement?
III. Uses of Data
1. How do you use data? Why do you use data?
2. Do you have the authority to make changes in the education program as you see fit, if they are
based on data? If not, who makes these decisions (lead, principal, system)?
3. Can you think of an instance when you used student performance data to change your instruction?
How frequently does this happen? Example?
4. Do you share student achievement data between teachers/classrooms? Between schools? In what
formats? If so, do you find this helpful?
5. Do you share data with your students? Why? How? Example?
6. Is there a teacher who you have observed to be a particularly effective user of student performance
data? Could you describe that teacher and his/her approach to data to enhance student learning and
classroom instruction?
7. Do you have teachers that struggle with data use? Why? How would you describe a teacher who
struggles with data use?
IV. Culture of Data Use
1. What are your beliefs about the importance of using data for decision making?
2. What are the expectations of the district/system/school about how you should use data (i.e., are
there specific practices and behaviors that you expect from administrators, teachers, and students
such as developing a school report card, bulletin to parents, etc.)?
3. Why do you think teachers use data?
4. Do teachers share data with each other? In what types of settings? How comfortable are they with
this?
5. What do you see as strengths in how your school uses data for decision making?
6. What challenges has your school run into trying to use data for decision making? How have you
dealt with these challenges at the school level? Examples?
7. Have you noticed a shift in beliefs about the use of data?
8. To what extent have beliefs about students changed/shifted as a result of focusing on data?
V. Results, Outstanding Needs, and Sustainability
1. Reflecting upon your data-driven decision making practices at the school level overall…
What works?
What doesn't? Why?
What do you wish you knew that you cannot find out today?
What do you wish you were able to do (if you had the information, tools, practices,
decision making authority, etc.)?
2. What do you see as the next steps for the district to support schools' use of data? What are the next
steps for your role as principal/leader in supporting teachers and other school staff? What are the
next steps for teachers in using data?
3. Is there anything aside from data-driven decision making that you think might be making a
difference or helping to raise test scores in this school?
[Concluding Remarks/Questions: Is there anything else we should know? Thank them for their cooperation
and time. Inform them we will share our report with them once it is done and that we might need to contact
them for follow-ups]
Document Request:
School schedule (meetings, etc), assessment calendar, meeting minutes
Professional development agendas and calendar
Sample reports from the district, sample reports given to teachers/leads/coaches, reports you use
and find most helpful
Sample action plans based on data – school and/or classroom level
Anything else you think would be helpful for us to look at to get a sense of how you use data.
Appendix B: Focus Group Protocol
Purpose of FGD: To understand how teachers make sense of and use data.
Role of the Facilitator is to:
1. Ask questions
2. Probe for further information
3. Keep the discussion on track
4. Maintain good and respectful relationships among participants
5. Encourage some to participate and limit those who dominate
Role of the Recorder: Our sessions will be audio taped and later transcribed. However, the recorder
should take accurate notes about the general topic of discussion paying close attention to the dynamics of
interaction, the participants' reactions, and other visual details that cannot be captured on tape.
Routine:
1. Everyone introduces themselves
2. Facilitator should explain the purpose of the study (e.g., Thank you for agreeing to participate
in the focus group discussion on how teachers think about and use data).
3. Housekeeping
Briefly review consent form (esp. ensuring confidentiality & need for audio-taping)
Ask them to fill out the Participant Demographic Info Sheet
Ask them to wear name tags
4. Ground Rules:
Everyone participates
All ideas are equally valid—no right or wrong answers
Respect each other's views
Ask each participant to begin their first comments by stating their first name and their
department affiliation.
The Questions:
1. What types of data do you collect and use?
2. What types of data do you find most useful for informing your teaching and student learning?
3. What do you think are some of the benefits of looking at data?
4. What do you think are some of the challenges of looking at data?
5. Has looking at data changed the way you think about teaching?
6. Has looking at data changed the ways you think about students?
7. What do you consider to be data? (Is there a difference between good and bad data?)
Appendix C: Data Meeting Observation Protocol
I. Meeting Overview
Date:
Type of data used for discussion:
Time:
Location:
Materials used/handed out (Describe):
Topic:
Purpose/Objective:
Formal Agenda (Y/N):
Format of data presented (Describe):
Participants:
II. Guideline Questions for Observation:
1. What was the nature of the discussion?
2. What were the prompts used to analyze data?
3. How freely is data discussed? Were weaknesses/areas of need openly shared?
4. Was there evidence of joint problem solving or sharing of strategies for the analysis and use of data?
5. Were there any decisions made based on these analyses? What type (instructional,
organizational)?
6. Were there short- or long-term goals or action plans established as a result of these analyses?
7. Is there a plan for follow-up on implementation or effectiveness of action plans?
Appendix D: Classroom Observation Protocol
Summary Sheet
Observation Date: _____________ Start time: _______ End Time: _______
School: ______________________ District: __________________________
Teacher: _____________________ Grade Level & Subject: ______________
Lesson Topics: ________________________________________________________________
Student Characteristics:
Total in class _____ Students designated as ELL ______
Students designated as GATE _____ Students with special needs ______
Other (Specify) _________________
Instructional Snapshot Checklist:
1. Is the Learning Objective:
___ Evident to students ___ Not evident to students ___Unable to determine
2. General Instructional Strategies (circle all that apply)
a. Centers/ Stations
b. Cooperative learning
c. Discussion
d. Homework
e. Lecture
f. Modeling
g. Objectives/ Feedback
h. Practice
i. Presentation
j. Project/lab
k. Providing directions
l. Re-teaching
m. Review/ Closure
n. Student-led
o. Teacher directed Q&A
p. Teacher-led
q. Testing
3. Student Action:
a. Reading
b. Writing
c. Listening
d. Talking
e. Working with hands-on materials
4. Student Grouping:
a. Whole Group
b. Small Group
c. Paired
d. Individual
e. Other
5. Bloom's Taxonomy (circle all that apply)
a. Evaluation: Make judgments and justify answers/positions
b. Synthesis: Put information together in new ways
c. Analysis: Break down information in new ways
d. Application: Use information in new ways
e. Comprehension: Understand information
f. Knowledge: Recall information
General Guideline Questions for Observation:
1. What is the general pattern in which the teacher interacts with students?
2. What types of formal data (e.g., test scores) does the teacher refer to or use?
3. What types of informal data (e.g., observational, anecdotal, check-ins) does the teacher refer to or use?
4. How frequently does the teacher check in with students to check for understanding?
5. What evidence do you see of data influencing/directing the classroom lesson?
6. If the teacher discusses data with students, how does the teacher present it? What do the teacher and
students do with it?
7. What evidence do you see of students taking ownership for their learning?
Observation Pre/Follow-Up Interview
I appreciate you letting me observe your class. I have some questions I'd like to ask you related to this
lesson. (Record)
If applicable ask:
1. What is the name/title of this course?
2. What class period is this?
3. Can I have a copy of the instructional materials you used for this lesson?
Learning Goals:
1. I'd like to know a bit more about the students in this class.
a. Tell me about the ability levels of students in this class.
b. How do they compare to students in the school as a whole?
c. Are there any students with special needs in this class?
2. What was the specific purpose of today's lesson?
3. How do you feel about how the lesson played out?
4. What do you think students gained from today's lesson?
5. Is there anything about this particular group of students that led you to plan this lesson this way?
Observational Survey of Classroom:
Dimensions of Formative Assessments
A. Teacher Questioning Techniques Frequency
1. How frequently does the teacher elicit recall answers?
2. How often does the teacher ask "range-finding" questions to reveal what students know?
(Example: What are some of the causes of the Civil War? One response is
"slavery.")
3. How frequently does the teacher ask students to justify their responses?
4. How frequently does the teacher direct questions at individual students?
5. How frequently does the teacher ask questions of the general class?
6. Total frequency of teacher eliciting student responses
B. Teacher Feedback Techniques Frequency
1. Teacher makes comments about what has been done well.
2. Teacher makes comments about what needs improvement.
3. Teacher provides guidance for improvement.
4. Teacher provides explicit criteria and/or rubric for student work.
5. Teacher provides students opportunities to follow up on comments.
C. Student Peer Assessment Frequency
1. Students brainstorming
2. Students use a rubric to check each other's work
3. Students ask each other questions
4. Students give each other constructive feedback
5. Other
D. Student Self Assessment Frequency
1. Students use a rubric to check their work
2. Students conduct self-reflection
3. Students generate and answer their own questions
4. Students ask questions
5. Other
Appendix E: Survey
Teachers’ Beliefs about Data Usefulness
(*Formatting Adjusted for Dissertation Submission Purposes)
A. Were the following types of information useful for guiding instruction in your classroom/school(s)?
(Please mark one box in each row).
Types of Information: Not Available / Not Useful / Minimally Useful / Moderately Useful / Very Useful
1. Student performance results on state
test(s) disaggregated by…
_______ _____ _____ _____ _____
a. Individual student proficiency levels
b. Classrooms (i.e., class periods)
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
d. Item analysis (e.g., by standards,
topic, skills)
e. School-wide
2. Student performance results on district
test(s) disaggregated by..
a. Individual student proficiency levels _______ _____ _____ _____ _____
b. Classrooms (i.e., class periods)
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
d. Item analysis (e.g., by standards,
topic, skills)
e. School-wide
3. Student performance results on common
school department-created tests
disaggregated by…
_______ _____ _____ _____ _____
a. Individual student proficiency levels
b. Classrooms (i.e., class periods)
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
d. Item analysis (e.g., by standards,
topic, skills)
e. School-wide
4. Student performance results from the
previous year
5. Student portfolios
6. Individual teacher-created homework
7. Individual teacher-created assessments
8. Lesson plans developed from analysis of
student performance data
9. School improvement plans developed
from analysis of student performance data
10. Stakeholder surveys (e.g., student, parent,
teachers' perceptions)
11. Student attendance rates
12. Student mobility rates
13. Student retention rates
14. Student dropout rates
15. Student discipline data (e.g. office
referrals)
16. Please write in any other types of data you find useful for instructional planning that is not listed above:
______________________________ ___________________________________
______________________________ ____________________________________
______________________________ ____________________________________
Please indicate the extent to which you agree or disagree with the following statements.
Statements: Strongly Disagree / Disagree / Undecided / Agree / Strongly Agree
1. My school has a strong focus on using data
to make decisions.
2. Because of my school's focus on using
data…
----------- --------- ----------- ------ ----------
a. Most teachers search for effective
teaching strategies
b. Most teachers rely on multiple-
choice tests in their classroom
assessments
c. Most teachers rely on open-ended
tests (e.g., essays, portfolios) in
their classroom assessments
d. Most teachers pay attention to achievement differences among
different student subgroups
e. Most teachers monitor student
learning
f. Most teachers focus on test
preparation
3. Most teachers look at students' results on
standardized tests because…
_______ _______ _______ _____ ______
a. It helps to identify learning gaps
b. It is required by the school
administration
c. It is required by the state‘s
accountability system
d. It is required by our district
e. It helps teachers to identify
instructional needs
f. It guides school improvement
plans
g. It guides department improvement
plans
4. Most teachers in this school have the
knowledge to interpret the data given to
them.
5. Most teachers in this school have the skills
to use data to improve their instruction.
6. Most teachers feel that they have so much
data they don't know what to do with it.
7. Most teachers in this school are comfortable
sharing their student performance data with
one another.
8. Most teachers think that there is too much
data.
School Context: During the school year, how many times did you engage in each of the following activities with
other teachers and administrators?
Activities: Never / 1-3 Times a Year / 1-2 a Month / 1-2 a Week / Daily or Almost Daily
1. Creating lesson plans resulting from data analysis
2. Analyzing instructional practices because of student
data
3. Reviewing test score results
4. Observing another teacher for at least 30 minutes at
a time
5. Receiving feedback from another teacher who
observed your instruction
6. Participating in teacher collaboratives, networks, or
other professional groups
7. Developing an instructional improvement plan after
examining data
8. Administering re-assessments to find out what
students know about a topic
9. Keeping track of the effectiveness of instructional
strategies
10. Discussing student performance data for each class
11. Reviewing assessment results to identify individual
students who need supplemental instruction
12. Reviewing assessment results to identify curricular
topics requiring more or less emphasis
B. What are the benefits of using student performance data to inform your instruction?
What are the challenges to using data?
C. Demographic Background
1. Number of years teaching at this school: _______________
2. Total years of teaching: ____________________________
3. Title/Professional Role: _____________________________________________________
4. If you are a teacher, please list the classes that you teach:
Subject(s): __________________
__________________
__________________
__________________
__________________
Grade level(s): __________________
__________________
__________________
________________
5. Please list any other duties or leadership positions you hold (e.g. department chair, mentor)
6. Educational Level Attained (Please check all that apply):
BA _____ BS _____
MA _____ MS _____
M.Ed. _____
Ed.D. _____
Ph.D. _____
Credential(s): _____________________ Credential Subject: ____________________
Advanced Certification: _________________________________________________________
Appendix F: Coding List
Domain / Factor / Code
District Framing of
DDDM
Core Tasks Diagnostic Framing
Prognostic Framing
Motivating Framing
Resonance:
Credibility
Resonance:
Salience
School Context Structural Organizational Chart
Class size
Student demographics
Departmental Layout
Training on DDDM
Access to data
Decision making structure
Teacher Background Demographic Length of time at school, race, & gender
Professional Total number of years taught
Number of years at site
Department affiliation
Other roles/duties
Orientation towards data use—knowledge &
comfort
Pedagogy Beliefs about learning
Beliefs about students
Beliefs about the purpose of teaching
Sensemaking Structured opportunities Teacher collaboration time
Faculty meetings
Beliefs about DDDM Beliefs about the purpose of data use
Beliefs about what counts as data
Beliefs about DDDM successes
Beliefs about DDDM challenges
Logic of practice—overall justification/rationale
for DDDM
Nature of data discussions Who participates, what is said/not said, what
conclusions are drawn
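Read as a data structure, this coding list is a two-level hierarchy: domain, then factor, then codes. The minimal Python sketch below is illustrative only — it transcribes just the first two domains, and the function name is hypothetical rather than part of the study — but it shows one way an analyst might represent the scheme when tagging transcript excerpts.
```python
# Illustrative only: Appendix F's coding list as a nested mapping,
# domain -> factor -> list of codes. Only the first two domains are
# transcribed here; identifiers are hypothetical, not from the study.
CODING_SCHEME = {
    "District Framing of DDDM": {
        "Core Tasks": [
            "Diagnostic Framing", "Prognostic Framing", "Motivating Framing",
            "Resonance: Credibility", "Resonance: Salience",
        ],
    },
    "School Context": {
        "Structural": [
            "Organizational Chart", "Class size", "Student demographics",
            "Departmental Layout", "Training on DDDM", "Access to data",
            "Decision making structure",
        ],
    },
}

def codes_for(domain: str, factor: str) -> list[str]:
    """Return the codes filed under a given domain and factor."""
    return CODING_SCHEME.get(domain, {}).get(factor, [])

# Example: the framing codes applied to district documents.
print(codes_for("District Framing of DDDM", "Core Tasks"))
```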
Appendix G: Survey Results
Teachers' Beliefs on Data Usefulness (represented by percentages)
Types of Information: for each school (Mesa first, then Adams), the three columns report NA / Not Useful or Minimally Useful / Moderately Useful or Very Useful
1. Student performance results on state test(s) disaggregated by…
a. Individual student proficiency
levels
3.5 14 82.4 2.2 21.8 76.1
b. Classrooms (i.e., class periods) 5.3 29.8 64.9 10.9 15.2 73.9
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
5.3 35.1 57.9 6.5 45.6 47.8
d. Item analysis (e.g., by
standards, topic, skills)
3.5 21 75.4 2.3 11.3 86.4
e. School-wide 7.0 49.1 40.3 2.3 27.3 70.5
2. Student performance results on district test(s) disaggregated by…
a. Individual student proficiency
levels
3.5 10.5 50.9 10.9 19.5 50
b. Classrooms (i.e., class periods) 5.3 24.6 66.7 10.9 17.4 71.8
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
12.3 45.6 38.6 21.7 30.4 47.9
d. Item analysis (e.g., by
standards, topic, skills)
5.3 17.5 73.7 4.3 6.5 87
e. School-wide 12.3 43.8 42.1 4.3 17.0 78.3
3. Student performance results on common school department-created tests disaggregated by…
a. Individual student proficiency
levels
1.8 17.5 77.2 4.3 4.3 91.3
b. Classrooms (i.e., class periods) 3.5 29.8 63.2 6.5 10.9 82.6
c. Student subgroups (e.g., race,
gender, ELL, special ed.)
10.5 45.6 40.4 21.7 26.1 55.4
d. Item analysis (e.g., by
standards, topic, skills)
3.5 24.6 70.2 8.7 10.9 78.2
e. School-wide 8.8 45.6 42.1 19.6 30.4 50
4. Student performance results from the
previous year
7 24.6 59.6 4.3 8.7 82.7
5. Student portfolios 26.3 33.3 36.9 30.4 28.2 41.3
6. Individual teacher-created
homework
10.5 24.6 61.4 2.2 30.4 67.4
7. Individual teacher-created
assessments
8.8 14.1 75.5 8.7 13.1 78.3
8. Lesson plans developed from
analysis of student performance data
7.0 26.3 61.4 6.5 10.9 82.6
9. School improvement plans
developed from analysis of student
performance data
7.0 35.1 54.4 2.2 15.2 82.6
10. Stakeholder surveys (e.g., student,
parent, teachers' perceptions)
12.3 43.8 40.3 10.9 37.0 52.2
11. Student attendance rates 19.3 28.1 49.1 2.2 15.2 82.6
12. Student mobility rates 26.3 36.8 35.1 8.7 28.2 60.9
13. Student retention rates 26.3 36.9 35.1 4.3 19.5 76.1
14. Student dropout rates 22.8 38.6 35.1 2.2 30.5 67.4
15. Student discipline data (e.g. office
referrals)
22.8 38.6 35.1 4.3 30.5 65.2
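The collapsed columns above combine adjacent options from the Appendix E survey (e.g., "Not Useful" and "Minimally Useful" are reported together). A minimal sketch of that collapsing computation — assuming hypothetical raw responses rather than the actual study data — might look like this in Python with pandas:
```python
# Illustrative only: computing collapsed percentage categories from raw
# survey responses. The sample responses are hypothetical, not study data.
import pandas as pd

responses = pd.Series([
    "Very Useful", "Moderately Useful", "Not Useful", "Minimally Useful",
    "Very Useful", "Not Available", "Moderately Useful", "Very Useful",
])

# Map the survey's five options onto the three reported columns.
collapse = {
    "Not Available": "NA",
    "Not Useful": "Not/Minimally Useful",
    "Minimally Useful": "Not/Minimally Useful",
    "Moderately Useful": "Moderately/Very Useful",
    "Very Useful": "Moderately/Very Useful",
}
pcts = responses.map(collapse).value_counts(normalize=True).mul(100).round(1)
print(pcts)  # percentage of teachers in each collapsed category
```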
Appendix H: Mesa’s Data Discussion Protocol
Part 1 – To be completed before coming to reflection meeting
Teacher: ______________
Course: ______________
Assessment or Unit of Study: ________________
Date: ____________
A. Reflection on Curriculum, Assessment, and Instruction
1. What standards were taught and assessed?
2. What level of cognition do these standards require students to use? (Look at the verb stated in the
standard, then refer to Bloom's Taxonomy on the back of this sheet).
□ Knowledge……………...Lower
□ Comprehension
□ Application
□ Analysis
□ Synthesis
□ Evaluation………………Higher
3. What strategies were used to teach these standards?
4. What opportunities, besides the benchmark, were students given to demonstrate mastery of these
standards (graded homework can be considered here)?
B. On-the-Surface Benchmark Analysis
5. Which question(s) did all/most of my students answer correctly?
6. Which question(s) did all/most of my students NOT answer correctly?
7. Which standard(s) were all/most of my students proficient in?
8. Which standard(s) were all/most of my students NOT proficient in?
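The tallies asked for in Part B amount to per-question percent-correct computations over a student-by-question results matrix. The sketch below is illustrative only: the matrix and the 75%/40% cutoffs for "all/most" are hypothetical assumptions, not part of the original protocol.
```python
# Illustrative only: percent-correct tally implied by Part B. The data
# and the 75%/40% cutoffs are hypothetical, not from the protocol.
import numpy as np

results = np.array([  # rows = students, columns = benchmark questions
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
])

percent_correct = results.mean(axis=0) * 100  # per-question % correct
for q, pct in enumerate(percent_correct, start=1):
    if pct >= 75:
        verdict = "most students answered correctly"
    elif pct <= 40:
        verdict = "most students did NOT answer correctly"
    else:
        verdict = "mixed results"
    print(f"Question {q}: {pct:.0f}% correct ({verdict})")
```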
C. Under-the-Surface Benchmark Analysis
9. Did certain class periods of mine outperform others? Possible causes?
□ 0 per
□ 1 per
□ 2 per
□ 3 per
□ 4 per
□ 5 per
□ 6 per
Part 2 – To be completed as a course-alike group
Teacher: ______________
Course: ______________
Assessment or Unit of Study: ________________
Date: ____________
D. Under-the-Surface Benchmark Analysis
10. Which question(s)/standard(s) did all or most of our students do well on?
11. Which question(s)/standard(s) did all or most of our students NOT do well on?
12. Did certain classrooms outperform others? If so, which ones?
E. Exploring Root Causes (successful items):
13. Which strategies proved to be effective across all classrooms?
F. Exploring Root Causes (unsuccessful items):
14. Which strategies did not yield the expected or desired results?
15. Did the strategies align with the level of cognition of the standard? If not, what could be changed in
order to align them?
□ Yes
□ No
G. Data-Driven Decision Making
16. Which standard(s) do students still need support in mastering, which future lessons can these standards
be incorporated into, what strategies will provide the best opportunity for students to be successful, and
how will they be assessed?
Standard / Future Lesson / Strategies / Assessment(s)
H. Solutions
17. Who will develop new or modified activities/strategies or assessments (Mid-Qtr only)?
What needs revision? Who will revise it? When will revisions be made?
□ Test question #s
□ Add questions
□ Activity
□ Strategy
□ Other
□ Test question #s
□ Add questions
□ Activity
□ Strategy
□ Other
□ Test question #s
□ Add questions
□ Activity
□ Strategy
□ Other
□ Test question #s
□ Add questions
□ Activity
□ Strategy
□ Other
Abstract
The growing body of research on data-driven decision making (DDDM) details the implementation and positive results at the district and elementary school levels (Armstrong & Anthes, 2001