TEACHER EDUCATION PROGRAMS AND DATA DRIVEN DECISION MAKING:
ARE WE PREPARING OUR FUTURE TEACHERS TO BE
DATA AND ASSESSMENT LITERATE?
by
Jenniffer Michelle Killion
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2009
Copyright 2009 Jenniffer Michelle Killion
DEDICATION
This dissertation is dedicated to the loving memory of my grandfather, Ray
Killion. My grandpa was my biggest fan in all areas of my life, and it was he who
instilled in me a love for all things Trojan. I wish I could share this
accomplishment with him now, yet I know that I will be able to tell him one day when I
see him again in Heaven. Fight on!
ACKNOWLEDGEMENTS
This dissertation would not have been possible without the expertise, guidance,
and support of my outstanding chair, Dr. Amanda Datnow. Her knowledge and
research in the areas of data-driven decision making and school reform are second to
none. Many, many thanks, Dr. Datnow! This dissertation also reaped the benefits of
critical review and evaluation by my other committee members, Dr. Gisele Ragusa and
Dr. Jamy Stillman. Your feedback helped me broaden my thinking on teacher education,
and made my focus a bit sharper. Thank you to all three committee members for the time
you all have invested in this process. I would also like to thank Jennifer Stover for the
transcription of all of the data collected for this study.
Much gratitude, admiration, and respect go to my former professor and current
mentor, Dr. Claire V. Sibold at Biola University. Claire has inspired and encouraged me
in my educational endeavors for the last twenty years, and was extremely helpful
throughout the dissertation process. Thank you, Claire, for your help and inspiration, and
for also being a friend.
I would also like to acknowledge my former principal in the Dallas Independent
School District for introducing me to data-driven decision making (DDDM). Mr. Rodney
Cooksy is by far the most outstanding administrator I have ever worked for, and his
expertise and willingness to jump into the DDDM process and train his teachers how
to use data in the classroom are among the reasons I chose this topic for my study. Thank
you, Mr. Cooksy!
A world of thanks goes to all of the wonderful women in my His Alone Sunday
School class at Church of the Open Door. Without all of your prayers and support, the
accomplishment of this task would have been a lot more difficult and strenuous. I would
also like to thank my small group in the Friday morning Women’s Bible Study at COD
for their unceasing prayers, especially when things got difficult and frustrating. All of
you have played a role in the completion of this dissertation.
I would also like to thank my amazing friends for their support and
encouragement throughout this process. Thank you to David and Tira Young, Trista
Hoffman, Jennifer Hicks, and Amy Grubb for being there through the triumphs and the
tears, and for understanding when I had to say “no” because I had to work on this project.
Finally, my deepest love and gratitude goes to my family, Dad, Mom, Jill, Kay,
Wanda, and Linda for all of their love, support, and encouragement through the three
years of this program, and especially through the last year with the dissertation process.
Thank you for your prayers, for keeping me accountable, and for providing never-ending
support when times were tough. I love you all and could not have done this without you.
TABLE OF CONTENTS

Dedication
Acknowledgements
List of Tables
Abstract

Chapter 1: Overview of the Study
Age of Accountability
No Child Left Behind
Teacher Education
Preservice Coursework
Key Terms and Definitions
Research Questions
Significance of the Study

Chapter 2: Literature Review
Introduction
Background on Educational Reform Context
Accountability and Standards-Based Reform
No Child Left Behind
Highly Qualified Teachers
P-16 Initiative
Data-Driven Decision Making
Background Information
Positive Outcomes and Facilitating Data Use
Developing a Culture of Data Use
Barriers to Data Use
Classroom Assessment and High-Stakes Testing Data
Differentiated Instruction
Teacher Education
Traditional Coursework
Assessment Coursework
Effective Teacher Preparation Programs
Conclusion

Chapter 3: Methodology
Introduction
Sample and Population
Data Collection Procedures
Data Analysis Procedures
Ethical Considerations
Limitations of the Study
Researcher’s Subjectivity
Summary

Chapter 4: Data Analysis and Interpretation of Findings
Introduction
Education Faculties’ Beliefs About the Need to Learn How to Use Data
General Faculty Beliefs
TPA and PACT
Data Literacy and Analysis not Intentionally Taught
What Programs are Doing to Provide Basic Assessment Literacy
Changes in Program Due to TPA and PACT
Reading Assessments
Modeling Other Forms of Assessment
How Preservice Teachers are Being Taught to Use Data to Differentiate Instruction
Focus on English Language Learners and Special Populations
The Missing Link: Most are not Using Data to Teach Differentiation
Conclusion

Chapter 5: Summary and Implications of Findings
Introduction
Connections to Prior Research
Research Sub-Question 1
Research Sub-Question 2
Research Sub-Question 3
Summary of Findings in Relation to Existing Literature
Implications for Future Research
Implications for Policy and Practice
Conclusion

References
Appendix A
Appendix B
Appendix C
Appendix D

LIST OF TABLES

Table 1: Teacher Education Faculty’s Beliefs about the Need for Preservice Teachers to Learn How to Use Data
Table 2: What Programs are Doing to Provide Basic Assessment Literacy to Preservice Teachers So They Are Data-Literate
Table 3: How Preservice Teachers are Being Taught to Use Data to Differentiate Instruction
ABSTRACT
Accountability and standards-based reform are buzzwords in educational settings
today. Much of this is due to the passage of the No Child Left Behind Act of 2001. As a
result of this increased accountability, school districts, administrators, and teachers are
utilizing data to drive instructional decisions and practices. This focus on data-driven
decision making has had a far-reaching impact, from the top level of the federal
government, which essentially mandated data use, down to the classroom where students
learn every day.
Because data-driven decision making is so widely used, it is expected that
teachers have quantitative knowledge and understand how to use the data in their
classrooms. However, there appears to be a gap in the knowledge of our teachers where
assessment and data literacy are concerned. Experts cite a lack of training in sound
assessment practices as a major problem in schools today, and many are looking to
teacher preparation programs to help close this gap.
This research study looked at three university teacher preparation programs in
order to find out what they are doing to prepare teachers for data and assessment
practices in schools. This study was qualitative in nature, and relied on interviews with
teacher educators, a focus group, and student surveys for data collection purposes. The
goal was to add new knowledge and begin a research base on data literacy in teacher
preparation programs.
Findings in this study show that teacher preparation programs have an increased focus
on teaching assessment within coursework, particularly within reading methods courses.
Data collection is also a focus, especially with regard to student demographic data. This
study also shows that teacher preparation programs teach about differentiated instruction
for special student populations, including English Language Learners, Gifted and
Talented students, and special needs students. However, utilizing data to differentiate
instruction appears to be an area that needs greater focus.
CHAPTER ONE
Overview of the Study
Age of Accountability
For nearly two decades, the K-12 education system in the United States has been
immersed in a system of accountability unlike any it has seen before, and from all
indications, this system of accountability is going to be around for quite some
time (McTighe & Brown, 2005). In 1990, then-President George H.W. Bush and
governors from each state around the country came together and adopted six national
education goals aimed at educational improvement for all states (Goertz, 2001). At this
point, the focus on education shifted from inputs to outcomes, which in turn led to
educational accountability. States were asked to provide challenging content and
performance standards for all of their students. Additionally, federal and state education
policies were asked to align both vertically and horizontally in order to provide for more
effective policy guidance (Goertz, 2001).
With the passage of these new education goals, federal and state policy makers
have essentially established the following for standards-based reform: 1) high academic
standards; 2) accountability for student outcomes; 3) the inclusion of all students in
reform initiatives; and 4) flexibility to foster instructional change (Goertz, 2001). After
the initial implementation of standards-based reform, and following in the footsteps of his
father, President George W. Bush amended the Elementary and Secondary Education Act
of 1965, and reauthorized it as the No Child Left Behind Act of 2001 (NCLB), or Public
Law 107-110 (Linn, Baker, & Betebenner, 2002).
Because accountability and data-driven decision making are relatively new, there
is a limited research base from which to pool resources and information. However, the
deeper we get into No Child Left Behind and standards-based reform, the more prevalent
research in this area will become. Schools and districts are already finding that they can
use student achievement data for myriad purposes, including evaluating progress
toward state and district standards, monitoring student performance, and judging the
efficacy of curriculum and instructional practices (Datnow, Park, & Wohlstetter, 2007).
No Child Left Behind. There are several requirements that schools, districts, and
states must adhere to as a result of the changes brought about by NCLB. These include changes
in testing and accountability systems, content standards, timely reporting of student test
data, and ensuring highly qualified teachers are in every classroom (Linn et al., 2002).
Prior to the passage of NCLB, several states already had testing programs based
on state content standards. With NCLB, all states were required to have tests aligned to
content standards. These tests were put into place to hold schools and districts
accountable for all student learning outcomes. Schools must demonstrate steady gains in
student achievement and close the achievement gap that exists among various subgroups
of student populations (Linn et al., 2002). If schools fail to show student growth and do
not meet their improvement targets, they must adopt “scientifically based” instructional
approaches or programs (Linn et al., 2002; Suskind, 2007). Interestingly enough, the
term “scientifically based” is used throughout the NCLB law 111 times (Linn et al., 2002;
Suskind, 2007), yet there seems to be a lack of consensus on the part of educators and
researchers as to an exact definition of “scientifically based”. Nevertheless, it is clear
that decisions about educational programs are expected to be based on evidence.
One of the components of NCLB that comes out of high-stakes testing is
Adequate Yearly Progress (AYP), which is a way for schools and districts to demonstrate
growth, or lack thereof, from year to year. These data, along with other testing data, are
made available to the public. According to NCLB, states are allowed to aggregate their
data for up to three years when determining whether or not they have met the AYP
requirement (Linn et al., 2002). However, many argue that AYP is difficult to
measure, as school conditions change from year to year, including student cohorts
and teacher and administrative turnover (Linn et al., 2002). This certainly presents a
challenge for schools as they try to meet the requirement of having all of their students at
the proficient level.
Perhaps one of the most challenging components of the law is the requirement of
having a highly qualified teacher in every classroom. This portion of the law went into
effect at the end of the 2006-2007 school year (Berry, Hoke, & Hirsch, 2004). A highly
qualified teacher is defined as one who has: 1) fulfilled the state’s certification and
licensing requirements; 2) obtained at least a bachelor’s degree; and 3) demonstrated
subject matter expertise (Berry et al., 2004). With these stringent requirements, schools
may be forced to let go of teaching staff who have not met all of the aforementioned
criteria, which puts additional strain on the education system.
Additionally, state departments of education are required to publicly report what they are
doing to improve teacher quality, as well as identify the distribution of highly qualified
teachers across low- and high-poverty schools (Berry et al., 2004).
Teacher Education
With all of the above-mentioned components and requirements of NCLB, it is
only natural that teacher education programs would be affected, at least to some degree.
Schools and colleges of education have an increased burden in assuring the public, as
well as state and federal governments, that they are sending out highly qualified
graduates to teach the students of America. This highly qualified component has “fueled
the fires of the already-politicized debate on teacher preparation” (Kaplan & Owings,
2003, p. 687).
Moreover, while data literacy is not explicitly mentioned as part of NCLB’s
“highly qualified” definition, it seems that teachers must have these skills in order to help
their students make the required gains in achievement. According to Datnow et al.
(2007), “The theory of action underlying NCLB requires that educators know how to
analyze, interpret, and use data so that they can make informed decisions in all areas of
education, ranging from professional development to student learning” (p. 10). In other
words, teachers are required to prepare their students to do well on these high-stakes
tests, and in the process, they must be able to interpret and apply the data produced by
these tests to classroom instruction. However, high-stakes tests are not the only means
of determining student achievement and learning; therefore, teachers also use other forms
of assessment in the classroom, including formative assessment practices and authentic
assessment tasks, and must be able to interpret and apply the data these assessments yield
in order to drive instructional practices. Naturally, this leads back to our teacher
education programs and their preparation of our future teachers. Are the teacher
education programs preparing their preservice teachers to use data in the classroom?
Currently, there is not much research available in the area of data literacy and preservice
teachers.
Preservice Coursework. Given all that has been discussed in regard to highly
qualified teachers and student achievement, it would be helpful to have a general
understanding of the coursework preservice teachers typically engage in. Traditional
teacher preparation coursework generally consists of pedagogical preparation and
subject-matter coursework, although these are oftentimes learned independently of one
another (Kaplan & Owings, 2003). Often, students who pursue teaching major in Liberal
Studies, which gives them a broad overview of several subjects that would be in line with
a traditional liberal arts education. This is especially prominent in small or private
universities. Other courses may include children’s literature, teaching the fine arts,
and physical education coursework. Because schools of education differ in course
offerings, programs, curricula, and quality of both faculty and students, it can often be
difficult to gauge how well-prepared teachers are as a whole. However, research does
show that both pedagogical training and subject-matter knowledge impact student
achievement, especially where math and reading are concerned (Kaplan & Owings,
2003).
For the purposes of this study, the area of particular concern in teacher training is
data and assessment literacy. As noted above, assessment and data literacy are important
qualities to possess in our current era of standards-based reform and accountability.
Student assessment is now part of a daily routine for teachers. In fact, researchers
suggest that teachers spend between one-third and one-half of their teaching career
assessing their students (Stiggins, 1999; Wise, Lukin, & Roos, 1991). However,
research shows that many teachers, including preservice teachers, lack the skills needed
to assess and evaluate their students (Volante & Fazio, 2007). In fact, Volante and Fazio
(2007) suggest that “…proficiency with appropriate assessment and evaluation practices
would appear to be a requisite skill for improving the quality of the teaching and learning,
particularly within these highly accountable educational contexts” (p. 750). Interestingly
enough, preservice teachers recognize this disparity in their coursework and
“overwhelmingly endorsed the development of a specific course(s) focused on classroom
assessment and evaluation” (Volante & Fazio, 2007, p. 759). Research also shows that
even students who did participate in a course where assessment and evaluation were
taught did not fully comprehend the material that was covered (Volante & Fazio, 2007).
Research conducted by Richard Stiggins (2002) suggests that both teachers and
administrators have not been properly trained or prepared in assessment and how to use it
as a teaching and a learning tool. Furthermore, he reveals that only approximately twelve
states require any form of assessment competency for licensing, and that there are no
exams at the state or federal levels that test the assessment competency of teacher
candidates.
Moreover, undergraduates, especially those in four-year bachelor's-plus-credential
programs, are not required to take any research methods coursework or coursework on
interpreting and using data (Campbell, Murphy & Holt, 2002, cited in Volante & Fazio,
2007). This is generally reserved for graduate students. Given that schools are
required to use student test data in various ways under the NCLB requirements, perhaps
teacher educators should be training their students in how to read and interpret these
various types of student data. If they are not taught how to use and interpret assessments
and other forms of student data, they will be thrown into uncomfortable and
potentially frightening situations as new teachers, which will contribute to already-low
self-efficacy when it comes to assessment and student data. However, if schools of
education provided a course that would, at minimum, introduce them to basic data and
assessment literacy, they would most likely be able to transition into the classroom
feeling a bit more confident in working with high-stakes testing data (Volante &
Fazio, 2007), as well as other forms of student data that are pertinent to classroom
instruction. Furthermore, new teachers also need to be trained in how to adjust their
classroom instruction based on a variety of data; thus, skills and techniques for
differentiating instruction are also essential.
In light of this research, it is all the more imperative that we better understand
how and whether teacher educators are attempting to provide their preservice candidates
with appropriate instruction and methods in data literacy and assessment.
Key Terms and Definitions
The key terms used throughout this study must be defined in order
for the reader to fully comprehend the significance of the study, as well as its outcomes.
There are three key terms that should be defined at the outset and are found throughout
this study. These terms are data, assessment, and differentiated instruction. For the
purpose of this study, data are pieces of factual information such as grades, test scores,
attendance rates, graduation rates, socio-economic status, and other pieces of information
collected by a teacher used in determining student progress, learning, and/or achievement
in the classroom. Assessment is defined as measuring student knowledge, progress, or
improvement as related to classroom instruction, content, and/or standards through tests,
projects, homework, teacher observation, portfolios, and other similar means.
Differentiated instruction, or differentiation, is a way of thinking about teaching and
learning. Differentiation can be defined as “a conceptual approach to teaching and
learning that involves careful analysis of learning goals, continual assessment of student
needs, and instructional modifications in response to data about readiness levels,
interests, learning profiles, and affects” (Tomlinson, 1999, 2003, cited in Brimijoin, 2005,
p. 254).
Assessment itself is a very broad term as used by teachers and educators. Within
the broad definition of assessment, there are a variety of types including formative and
summative assessment, formal and informal assessment, and authentic and alternative
assessment. Formative assessment, or evaluation, can be defined as “Evaluation
conducted before or during instruction to facilitate instructional planning and enhance
students’ learning” (Ormrod, 2006, p. 527). Conversely, summative assessment or
evaluation is conducted after teaching and instruction in order to assess students’ final
achievement, and is typically done at the end of a unit of study, grading period, or even at
the end of the school year (Ormrod, 2006). These types of assessments or evaluations of
student learning are very broad in coverage, and can encompass some of the other forms
of assessment that follow.
More specific forms of assessment include formal versus informal, and authentic
and alternative assessment. Formal assessment typically consists of tests and quizzes,
those items typically associated with assessment in the traditional sense of the word.
Informal assessments include assessing student learning through observations, group
activities and interaction, and other ways of gauging student learning that do not involve
traditional “graded” pieces of work (Ormrod, 2006). Authentic assessment typically
involves students using higher-order thinking skills to “perform,
create, or solve a real-life problem, not just choose one of several designated responses as
on a multiple-choice test item” (Parkay & Stanford, 2007, p. 388). Finally, alternative
assessment is oftentimes used to distinguish other forms of assessment from traditional
“paper and pencil” tests and quizzes, and typically refers to authentic assessment (see
previous definition), portfolio assessment, performance-based assessment, alternate
assessment, and project-based learning (Parkay & Stanford, 2006). While this is
certainly not an exhaustive list of assessment types, variations, and components, the most
important terms for the purpose of this study have been presented and defined here.
Research Questions
This study used qualitative research methods to examine how teacher education
programs are preparing preservice teachers for data use in the classroom. Faculty and
students from three university teacher preparation programs participated in this study.
Specifically, this study addressed the overarching question: How do different
teacher education programs prepare preservice teachers to use data to inform instruction?
The study also addressed the following sub-questions:
1. What are the education faculties’ beliefs about the need for preservice
teachers to learn how to use data?
2. What are programs doing to provide basic assessment literacy to preservice
teachers so that they are data-literate?
3. How are preservice teachers being taught to use data to differentiate
instruction?
Significance of the Study
As mentioned previously, there is not a large research base on data-driven
decision making, and there is an even more limited research base on teacher preparation
in regard to data and assessment literacy. This study aims to add to that research base,
particularly in the area of data and assessment in teacher preparation programs. With
standards-based reform and accountability through high-stakes testing, it is imperative
that new teachers come into the profession with a basic literacy in data and
assessment that will help them analyze and use a variety of essential student data,
not just in relation to high-stakes testing, but especially for daily instructional
decisions in the classroom.
CHAPTER TWO
Literature Review
Introduction
Data-driven decision making and accountability have been at the forefront of
education since the advent of NCLB in 2001 (Goertz, 2001). With this law, there were
many changes that had to be implemented in various aspects of our education system,
including the alignment of standards and curriculum, high stakes testing and assessment,
and the assurance of a highly qualified teacher in every classroom (Linn et al., 2002).
Because of these requirements, schools and colleges of education are feeling increased
pressure to produce teachers who have the requisite skills to be successful in this age of
accountability (Darling-Hammond & Baratz-Snowden, 2005). Additionally, there are
implications that these schools and colleges of education need to prepare their teacher
candidates to be data and assessment literate, as teachers are essentially required to use
student achievement data, and other forms of data, to drive their classroom instruction
(Datnow et al., 2007).
Given that we are in an age of accountability unlike any previously seen,
changes in our teacher education system are a natural response. The purpose of this
literature review is to explore areas that appear to affect data and assessment literacy both
in new and veteran teachers, with an ultimate focus on teacher education programs. The
review is organized into the following categories:
1. Background information that includes accountability and standards-based
reform efforts, including the No Child Left Behind Act, the “highly qualified”
component of NCLB, and the P-16 Initiative, including its potential effects on
teacher education.
2. Data-driven decision making, including classroom assessment, teachers’
assessment literacy, and differentiated instruction.
3. Teacher education, including traditional and assessment coursework, and
effective teacher preparation programs.
The review concludes with implications for studying teacher education programs in the
context of the aforementioned categories.
Background on Educational Reform Context
Accountability and Standards-Based Reform
Accountability and standards-based reform stem directly from legislation for the
improvement of the American education system. Over the last two decades, numerous
reform efforts have been passed in order to try to improve America’s schools and close
the achievement gap, especially where traditionally underserved students are concerned.
However, not everyone is thrilled with the direction our nation has gone in regard to
public education, and some feel the federal government has done more harm than good
through these new accountability standards (Harvey, 2003). Conversely, many
believe that the government has good intentions, and that the premise of standards-based
reform measures and accountability has the potential to bring about effective change in
our school system (Linn, 2003).
According to Robert Linn (2003), the federal and state legislation regarding
accountability is “intended to improve the quality of education for all students” (p. 3).
While this is a noble course of action, there appears to be some disconnect as to
who was to be involved in this new accountability legislation. Linn (2003), along
with Porter and Chester (2001, cited in Linn, 2003), argue for a shared responsibility
regarding accountability. Essentially, shared responsibility includes students, teachers,
administrators, parents, and policy makers working together toward greater student
achievement. They further argue that, with the way the system is currently set up, the
primary focus for accountability falls in the laps of educators and students (Linn, 2003).
While educators and students should be held accountable for teaching and learning in the
classroom, there are other parties involved who are often let off the hook, which keeps
the onus on educators and students. In addition, researchers also have a
shared responsibility when it comes to accountability and they must provide “solid
information about strengths and weaknesses of alternative approaches and interventions,
one of which is the accountability system itself” (Linn, 2003, p. 3).
Schmoker and Marzano’s (1999) discussion of standards-based education and
accountability essentially aligns with the shared responsibility advocated by Linn (2003)
and by Porter and Chester (2001), though they discuss it in terms of collective purpose.
Everyone must be on the same page and be working toward the same goals if
standards-based education is truly going to work. Numerous examples of schools with
teams of teachers working together are given throughout their article, demonstrating how
effective standards-based reform can be if everyone works collaboratively, with that
sense of shared responsibility or collective purpose. However, none of these examples
includes anyone outside the academy, such as policy makers and parents. If teachers
are going the extra mile to work together, it seems only logical that policy makers and
parents should be involved and do their part to ensure that standards are being met. This
includes taking responsibility for student learning that goes beyond what teachers have
the time and resources to do in the classroom on a daily basis. Support, collaboration,
and active participation appear to be essential among all parties in order for standards-
based education to be deemed successful.
The discussion of standards in the Schmoker and Marzano (1999) article revolves
around two essential questions: 1) Do we have sufficiently clear standards? and 2) Are
state and professional standards documents truly helping us achieve the focus and the
coherence that are vital to success? The authors present answers to the very questions
they pose. They discuss issues with district curriculum, scope, and sequence being far
too broad in coverage, and not particularly user-friendly, which essentially encourages
teachers to teach whatever they want, not what the curriculum states (Schmoker &
Marzano, 1999). In other words, many district standards-based reform efforts are really
operating under chaos, not coherence.
Furthermore, the expectation is for school improvement efforts to rely on
standards, yet we are trying to cover far too many topics in order for our standards to be
helpful and accurately assessed. Schmoker and Marzano (1999) compare U.S. school
textbooks with those of Germany and Japan, finding an overabundance of topics covered
in the U.S. textbooks, while the textbooks that students use in Germany and Japan cover
a much smaller base of knowledge in more
depth. When looking at test scores, students in Germany and Japan consistently
outperform students in the United States, especially in math and science (Schmoker &
Marzano, 1999). This information helps support the position that we have far more
standards than can reasonably be covered. As a result, a large burden is placed on
teachers to try to cover all of the curricula, and to cover them in enough depth for their
students to perform well on the high-stakes tests. It would seem that, based on the
research cited, this is nearly impossible. Perhaps the standards and curricula could be
examined with a more critical eye, rather than immediately placing blame on the
teachers for not being able to teach all of what has been deemed “required”.
Because standards-based education is intended to provide alignment and
coherence to what children are learning, efforts need to be made to make the use of standards more
effective. Schmoker and Marzano (1999) provide some suggestions for how standards-
based education could be improved. First, we should start with the standards that are
actually assessed. If teachers focus on those standards that will be covered on state
assessments, they can assure stakeholders that students are being taught what they need
to learn. Second, they suggest that we “add judiciously to the list of standards you will
teach and assess” (Schmoker & Marzano, 1999, p. 20). Their suggestion here is for
districts to take a close look at their standards documents and only keep those that
contribute to becoming “reflective thinkers, competent workers, and responsible citizens”
(Schmoker & Marzano, 1999, p. 20). Finally, they suggest that we cut down on the
number of topics covered, keeping them to those that can be taught and assessed with
reasonable effectiveness. However, could this last suggestion by Schmoker and Marzano
(1999) lead to the phenomenon now known as “teaching to the test”? This is one
area of great concern to both educators and researchers (McTighe & Brown, 2005).
The “teaching to the test” syndrome is certainly a significant issue born of
high-stakes testing and standards-based reform, and may be a result of increased
pressure for student performance from policy makers and district officials. Teacher
educators, in particular, are trying to guard against this in their coursework with their
preservice teacher candidates, and rightly so (Darling-Hammond, 2004). In fact, Darling-
Hammond (2004) discusses standards-based reform and the accompanying assessment
system as “a tool for classroom-level instructional planning” (p. 1077). She also states
that “standards and assessments are used to support the existing professional
accountability structure by providing more information to guide collective as well as
individual teaching practice” (Darling-Hammond, 2004, p. 1078). Accountability for
standards in education should involve changes in both teachers and schools in order to
increase the chances of students actually meeting the standards set before them (Darling-
Hammond, 2004). In order to aid in this achievement, Darling-Hammond (2004)
suggests the following three areas in which to focus our attention:
1. Ensuring that teachers have the knowledge and skills they need to
teach to the standards;
2. Providing school structures that support high quality teaching and
learning; and
3. Creating processes for school assessment that can evaluate students’
opportunities to learn and can leverage continuous change and
improvement. (p. 1078)
Standards-based reform is here for the long haul, with both its strengths and
shortcomings, as we seek to ensure that all students are receiving the best education
possible, one that they are entitled to as citizens of the United States. Such is the premise
of the No Child Left Behind Act of 2001.
No Child Left Behind
In 2001, President George W. Bush amended the Elementary and Secondary
Education Act of 1965, and reauthorized it as the No Child Left Behind Act (Linn et al.,
2002). The premise of the law is to provide each child with the best education possible,
and in the process, hold states, school districts, and teachers accountable for student
achievement through standards-based education and testing.
As a result of this law, many states have had to make changes in their testing and
accountability systems in order to meet more stringent requirements. Some of these
requirements include annual reporting of student test data, the disaggregation and public
reporting of that testing data, meeting adequate yearly progress (AYP) goals, and
participation every other year in the National Assessment of Educational Progress
(NAEP) in reading and mathematics (Linn et al., 2002). The assessment portion of the
law is designed for schools to show steady gains in student achievement with evidence
showing that they are closing the achievement gap between various student subgroups.
As far as performance standards on these assessments, all students are to be at the
“proficient” level or higher within 12 years of implementation (Linn et al., 2002).
While the premise of No Child Left Behind may have been born out of good
intentions, many educators and researchers feel that it could actually end up doing more
harm than good. James Harvey (2003), in an opinion piece, suggests that “No Child Left
Behind will join a long history of failed federal promises to transform public schools”
(p. 18). His position is that this piece of legislation actually ignores strategies that were
previously used in attempting to improve public education, even though some of those
strategies fell short, and instead is focused completely on the equality of results (Harvey,
2003). Harvey (2003) believes that NCLB will ultimately collapse on itself because of
internal contradictions: it makes promises it cannot keep, it is woefully underfunded,
and it largely ignores best practice.
Highly Qualified Teachers. One of the major components of NCLB is the
requirement that a “highly qualified” teacher be in every classroom, even in traditionally
hard-to-staff schools. There is some debate among educators and researchers as to what
exactly “highly qualified” means (Berry et al., 2004). However, for the sake of NCLB,
the federal government defines a highly-qualified teacher as one who has: 1) fulfilled the
state’s certification and licensing requirements; 2) obtained at least a bachelor’s degree;
and 3) demonstrated subject matter expertise (Berry et al., 2004). This provision of
NCLB was supposed to go into effect for the 2005-2006 school year, but an extension
was granted to put the law into effect for the 2006-2007 school year. The highly
qualified provision was put into place because “teachers must have the preparation and skills to
teach students to the highest standards” (Darling-Hammond, 2005, p. 237). Even then,
districts have struggled to meet this requirement, especially in urban school areas. In
fact, Darling-Hammond (2005) suggests that various budget crises have led to poor
districts lowering their standards to hire teachers, thus contributing to a huge decline in
educational equality for these students, as they are not being taught by highly qualified
professionals.
The highly qualified teacher component has direct ties to teacher preparation
programs. Schools and colleges of education must now ensure that their graduates can
pass state licensing exams and have a strong knowledge base on subject matter they are
certified to teach. Additionally, Kaplan and Owings (2003) note that there is
already a heated debate on teacher preparation, and the “highly qualified” component has
only brought more attention to that situation.
Kaplan and Owings (2003) acknowledge that high-quality teaching does make a
difference in student achievement, as supported by research. There is sufficient evidence
to show that the quality of teaching strongly correlates with student achievement, even
more so than family dynamics and ethnicity. Teachers’ verbal ability and basic skills
also correlate with student achievement. However, Kaplan and
Owings (2003) point out that there is not sufficient evidence to prove that mere content
knowledge alone can affect student achievement in a positive manner. Thus, verbal skills
and content knowledge may be necessary for student achievement, but they do not
provide sufficient conditions for high quality teaching and learning (Kaplan & Owings,
2003).
Another issue that is discussed in the “highly qualified” debate is traditional
teacher preparation programs versus alternative routes to certification. This is a
contentious issue, and many favor the alternative programs, arguing that there is no evidence that
shows teacher education programs provide better teachers (Kaplan & Owings, 2003).
Thus, those in favor of alternative certification programs state that it cannot be shown
that teacher education graduates actually cause increases in student achievement.
However, Glass (2008) has recently published a report stating that research shows
alternative certification programs, such as Teach for America (TFA), cannot prove that
they work better than traditional teacher education programs. The research does show
that
…teachers participating in alternative programs such as TFA are clustered in
poor, urban schools; are no more likely—and possibly are less likely—to remain
in teaching after their initial commitment period; …and have yielded conflicting
data as to their effectiveness compared with their fully certified counterparts.
(Glass, 2008)
Moreover, there are many experts and researchers who state that teacher preparation does
increase student achievement. One of those experts is Stanford Professor Linda Darling-
Hammond. She states that “teacher preparation is a stronger correlate to student
achievement than class size, overall spending, or teacher salaries and accounts for 40% to
60% of the total achievement variance after taking students’ demographics into account”
(Darling-Hammond, 2000, cited in Kaplan & Owings, 2003, p. 689). Darling-Hammond
(2004) also states that poor and minority students are least likely to have highly qualified
and well-prepared teachers, and this affects not only student achievement, but grade
retention and graduation issues among these students as well, leaving them on an uneven
playing field. Others also discuss the strong effect of teacher education coursework
compared with that of extra subject-matter coursework. Studying learning processes
has also been shown to produce more effective teaching behaviors and increased
student achievement (Kaplan & Owings, 2003).
There is mixed research in the area of certification and quality, and some have
stated that there is not a consensus on “how to train good teachers or ensure that they
have mastered essential skills and knowledge” (Hess, cited in Kaplan & Owings, 2003, p.
690). There is also discussion of the dramatic difference in quality among teacher
preparation programs: while graduates of strong schools of education are well prepared,
others are not (Dean, Lauer, & Urquhart, 2005). Additionally, licensing
from state to state varies substantially, with some states being more rigorous in their
standards than others. Yet some states have pushed ahead and actually strengthened
licensing standards for educators in an effort to ensure their teachers are able to
effectively teach students from diverse backgrounds (Darling-Hammond, 2005). Fiscal
concerns have been raised as well, given that professional coursework costs students
considerable time and money, and they then enter the profession severely underpaid
compared to graduates from other majors. Kaplan and Owings (2003) state that “teacher
certification standards are too varied and in most cases too low to ensure teacher quality”
(p. 691).
Other issues that have come to the forefront that are directly related to the “highly
qualified” component of NCLB include studies that suggest the United States needs a
near-complete overhaul of teacher preparation (Darling-Hammond, 2005). While the
majority of U.S. teacher preparation occurs at the undergraduate level, many other
countries, including Germany, France, Japan, and Taiwan, have moved it almost
exclusively to the graduate level. Future teachers study other disciplines as
undergraduates, then apply to graduate teacher preparation programs where they engage
in intensive methods and pedagogical coursework, and complete year-long
internships (Darling-Hammond, 2005). These countries also have intensive professional
development training, and pay their teachers much higher salaries than those found in
the U.S. This allows them to attract and retain the best possible teachers in their
school systems, thus almost eliminating teacher shortages (Darling-Hammond, 2005).
Darling-Hammond (2005) suggests that perhaps the U.S. should look at some of these
programs in other countries, along with the implementation of professional development
schools, in order to improve teacher education and send out highly qualified teacher
candidates. This potential overhaul of teacher preparation programs may be closer than
we think, due to standards-based reform efforts that are now reaching higher education.
P-16 Initiative
Along with standards-based reform at the K-12 level, a relatively new initiative
has been put into place that includes higher education. This has come to be known as the
P-16 Initiative. The name itself refers to the grade levels: preschool through four years
of college (the “16”), and in some cases through “20”, which would then include master’s
degrees. Many states have already implemented P-16 initiatives; in fact, over 30 states
claim to have them. A few states with initiatives in place, including Florida and Oregon,
are already seeing significant results from the system (Chamberlain &
Plucker, 2008).
The P-16 concept links all levels of education together with state agencies,
legislatures, and businesses. The main purpose of a P-16 system is to “provide a
seamless education system from preschool through college graduation” (Harris, Cobb,
Pooler, & Perry, 2008, p. 493). Harris et al. (2008) argue that this stems from the belief
that our education system lacks coherence and connectedness. While many states have
had P-16 systems in place for a while, they are just now starting to consider adding a
data component designed to track student achievement from early
childhood into adulthood (Chamberlain & Plucker, 2008).
The P-16 systems are created and implemented in a variety of ways. Many are
agency-initiated, for example by the state department of education or an institution
of higher education. Others were created by governors or legislative mandate. Once the
initiatives are in place, members are typically placed on the P-16 committee, which
nearly always includes representatives from K-12 education and higher education, along
with other members appointed by the governor or agencies. Additionally, business
people, other community leaders, early childhood representatives and those from state
legislatures serve on the committee. Typically, those on the committee do not hold any
type of legislative power, but rather serve in an advisory capacity to make
recommendations to those in power. The responsibilities of this committee include
reviewing high school graduation requirements and college readiness, determining ways
to better align educational systems, and identifying ways to improve teacher education and
professional development (Chamberlain & Plucker, 2008).
Goals of the P-16 initiatives include reducing achievement gaps and doing a better
job of preparing students for all levels of education. As mentioned previously, they are
also looking at increasing accountability by adding a data component to track student
achievement and education effectiveness. One of the goals is to implement longitudinal
data systems in order to track students throughout their educational careers. Another
goal involves aligning curriculum and standards across all levels of education. All of
these, in turn, require extensive collaboration among the various levels of education
(Chamberlain & Plucker, 2008).
While many states claim to have a P-16 initiative in place, there appears to be a
disconnect in utilizing the system as it was intended, leaving a gap in achieving the
goals that have been set. Some states have had this reform effort in
place long before the No Child Left Behind Act was even thought of, yet they are not
actually using the system on a consistent basis (Chamberlain & Plucker, 2008). If reform
efforts are to truly continue, and the alignment of standards and curriculum is indeed a
goal, states need to get serious about using the P-16 committees they have put in place.
Because of this disconnect in some states, one component of the P-16 Initiative
that is starting to draw attention is the evaluation of the system itself, as very little
evaluation of the initiatives that are currently in place has been done. States should
consider evaluating areas such as the contributions of the participants, the expansion
possibilities where participants are concerned, and the quality they are seeing in student
outcomes (Chamberlain & Plucker, 2008). Chamberlain and Plucker (2008) also suggest
that in order to begin a productive evaluation system, the following must be considered:
“examining the system’s goals, the activities undertaken to achieve those goals, the
performance indicators to determine whether or not the goals have been achieved, and
how the goals and activities should be revised” (p. 478). They suggest that the evaluation
systems should “be broad enough to go beyond the student level” (Chamberlain &
Plucker, 2008). Additionally, performance indicators must be defined and agreed upon
for each goal by all members on the P-16 committee, as well as by those entities the
committee serves (Chamberlain & Plucker, 2008).
With the focus on aligning standards and curriculum, along with other important
factors as discussed above, it seems only natural that the P-16 Initiative would have some
impact on teacher education. Harris et al. (2008) suggest that this initiative will “require
educators at all levels to become more deeply involved in the full spectrum of student
learning” (p. 493). According to Harris et al. (2008), teacher education is sitting at the
“intersection” between the P-12 and higher education learning systems. As a result of the
position teacher education appears to be in, Harris et al. (2008) suggest the following
seven implications that the P-16 Initiative may have for teacher education. They state
that a P-16 system will:
• Increase the visibility and responsibilities of all partners in teacher
education;
• Require an expansion of existing partnerships between teacher education
programs and the P-12 system and a redefinition of governance structures
and the use of resources;
• Require higher education to share expertise and other resources for the
ongoing professional development of the P-12 teaching force;
• Offer opportunities to link P-12 and postsecondary data systems and use
longitudinal data for research on teaching and learning;
• Allow higher education to benefit from both the positive and negative
experiences that P-12 education has had with standards, assessment, and
the politics of accountability;
• Require educators at all levels to be knowledgeable about changes in
curriculum, pedagogy, and standards at all levels, to communicate across
levels, and to respond to these changes; and
• Increase the need for teacher education programs to prepare highly
qualified teachers, especially in areas where shortages exist.
(Harris et al., 2008, pp. 494-496)
As one can see, the list of potential implications for teacher education is long and
daunting. It appears that in the era of standards-based reform we are currently living in,
we must be prepared to take action in order to ensure the success of all students at all
levels. In order to do this effectively, teachers and teacher educators must have a firm
grasp on how to assess their students using a variety of forms and strategies, not just
“high-stakes” assessments, and interpret those results to ensure that these goals are
being met and that their students are indeed successful.
Data-Driven Decision Making
Background Information
As noted above, data-driven decision making (DDDM) has become the focus of
education policy and practice (Mandinach, Honey & Light, 2006). Data-driven decision
making in education refers to “teachers, principals, and administrators systematically
collecting and analyzing various types of data, including input, process, outcome and
satisfaction data, to guide a range of decisions and help improve the success of students
and schools” (Marsh, Pane, & Hamilton, 2006, p. 1).
As a result of NCLB, schools are required to disaggregate and analyze student test
scores and other forms of data, and report the results to the public. Because of this
requirement, administrators and other district officials have invested time and money in
both commercial and “home-grown” data-driven tools and support systems in hopes of
keeping better track of student performance. However, many suggest that these tools do
more to support administrators than actual classroom teachers (Mandinach et al., 2006).
Furthermore, with the standards and accountability movement, “district and school
administrators are being asked to think very differently about educational decision
making, and are being asked to use data to inform everything from resource allocation to
instructional practice” (Mandinach et al., 2006, p. 3). These same leaders are also
expected to “chart” the effectiveness of these strategies while using complex, and
oftentimes contradictory, assessments from state, district, and local sources in order to
monitor student progress (Mandinach et al., 2006). One of the problems that stems from
this is that these leaders are only considering one type of assessment, thus leaving out
formative and alternative assessments and the resulting data that are used by teachers in
their classrooms on a routine basis. However, there are ways to use the high-stakes
assessment data to impact schools in a positive manner.
Positive Outcomes and Facilitating Data Use
Research shows that there are positive outcomes from data use in school districts.
Diamond and Cooper (2007) studied how a sampling of Chicago elementary schools used
testing data for school improvement. This sample included schools that were deemed
high performing, as well as schools that were on probation. They found that across all
schools studied, the school personnel “paid attention to accountability messages
regarding students’ test scores” (Diamond & Cooper, 2007, p. 248). Another common
factor they found was that student testing data were used to make decisions regarding the
allocation of resources for instructional purposes. The third commonality was that school
personnel all engaged in test preparation activities, which was followed by the finding
that the schools focused more on mathematics and reading than other subjects (Diamond
& Cooper, 2007). The last two commonalities may invite some debate among
educators and researchers, particularly where “teaching to the test” (McTighe & Brown,
2005) and ignoring other forms of assessment and data are concerned.
Kerr, Marsh, Ikemoto, Darilek, and Barney (2006) studied three urban school
districts and found positive outcomes as well. They found that the teachers and
principals in these districts had access to multiple forms of data to use for instructional
planning and decision-making. Another positive finding was that teachers in two
of the three districts were more apt to use data to guide instruction in their classrooms
(Kerr et al., 2006). Additionally, the staff in these same two districts “reported more
extensive and frequent use of data to identify areas of weakness and to guide instructional
decisions” (Kerr et al., 2006, p. 511). Teachers in the two districts also stated that their
principals helped them with the analysis of the data on a regular basis. These same
principals were found to adjust instruction and teaching methods based on what the data showed (Kerr et al., 2006). All of these positive outcomes provide hope for schools looking to increase data use on their campuses to improve instruction for all students, and they also show that teachers are able to use data effectively.
However, facilitating data use in schools can, at first glance, appear to be a
daunting task. Datnow et al. (2007) identified six key strategies in their study of four
school systems that implemented DDDM effectively. These strategies are: “building a
firm foundation for DDDM; establishing a culture of data use and continuous
improvement; investing in an information management system; selecting the right data;
building school capacity for DDDM; and analyzing and acting on data to improve
performance” (Datnow et al., 2007, pp. 6-7). These key strategies can serve as a guide for
those looking to further develop data use in schools.
Developing a Culture of Data Use
In order for teachers and other staff members to feel comfortable using data, they
must have leaders, both in their local schools and districts, who model effective data use
and have the capacity to lead them in that direction. Earl and Katz (2002) suggest that
“having the capacity for leading schools in a data rich world requires that leaders develop
an inquiry habit of mind, become data literate, and create a culture of inquiry” (p. 13).
With these three characteristics developed in the leader of the school, the teachers have a
model to emulate in their growth and experiences with data use.
The first characteristic noted above is that of an “inquiry habit of mind” (Earl &
Katz, 2002). Earl and Katz (2002) describe this as “a way of thinking that is a dynamic
iterative system with feedback loops that organizes ideas towards clearer directions and
decisions and draws on or seeks out information as the participants move closer and
closer to understanding some phenomenon” (p. 14). Furthermore, they suggest that
leaders who have an inquiry habit of mind possess the following qualities: they value
deep understanding; reserve judgment and have a tolerance for ambiguity; and they take a
range of perspectives and systematically pose increasingly focused questions (Earl &
Katz, 2002). The first quality, valuing a deep understanding, requires that leaders take a
step back and recognize that they do not know something, but are determined to clarify
and understand the situation. The second quality, reserving judgment, requires that they
take their time thinking through and processing information, rather than accepting any information or decision that is “hasty” or “unsubstantiated.” In other words, they take the
time to stop and think, particularly in situations that are not clear-cut (Earl & Katz, 2002).
The last quality, taking a range of perspectives and systematically posing increasingly
focused questions, simply means that the leaders think about the issue from multiple
perspectives and “view the situation through a myriad of lenses and to narrow the
investigation” (Earl & Katz, 2002, p. 16).
The second characteristic is that of becoming data literate. With the
accountability and public reporting of school progress, data literacy is an essential skill
for both leaders and teachers. Earl and Katz (2002) suggest that data literate leaders:
think about purpose(s); recognize sound and unsound data; are knowledgeable about
statistical and measurement concepts; make interpretation paramount; and pay attention
to reporting and audiences. Leaders who think about different purposes are aware that they will need different types of data to answer different questions as they try to understand different phenomena. Data-literate leaders are also able to discern good
quality data from data that may be questionable. They are familiar with the “language of
data”, which means understanding the principles of measurement and statistics (Earl &
Katz, 2002). Along with understanding the language of data, leaders must also be able to
interpret the data. Earl and Katz (2002) state, “Interpretation, then, is thinking—
formulating possibilities, developing convincing arguments, locating logical flaws and
establishing a feasible and defensible notion of what the data represent. It requires a
blend of wisdom, logic, and inquiry mindedness” (p. 21). Finally, data literate leaders
pay attention to their audience and provide data in a form that audience can relate to.
They present the information clearly and use it to “explain and justify their decisions to
those who care to know” (Earl & Katz, 2002, p. 22).
Finally, Earl and Katz (2002) discuss how leaders develop a culture of inquiry. This includes involving others in interpreting and engaging with the data; stimulating an internal sense of “urgency”; making time; and using “critical friends” (Earl & Katz, 2002). Leaders who encourage others to engage with data provide a path to a shared purpose in order to attain the goals they have set. They also encourage their mentees to become data-literate through this process. The second quality, stimulating an internal sense of “urgency,” helps schools refocus on key issues and determine paths to success when they are not making the progress they thought they were. Leaders who foster a culture of inquiry also make time. Earl and Katz (2002) state:
Leaders and the people who work with them are going to need time, and lots of
it—to think about the important issues, to decide what data is relevant and make
sure they have it, to consider the data and try to make sense of it, to argue and
challenge and reflect, to get more information, to argue and challenge and reflect
again, to formulate and reformulate action plans, to prepare thoughtful and
accessible ways to share their learning with the community and to stand back to
consolidate what they have learned. (p. 25)
The last quality of a leader who develops a culture of inquiry is the use of “critical friends”: people who offer both support and criticism, rather than criticism alone. “Critical friends” are data-literate, are often outsiders, and provide information
and feedback on issues and circumstances that those on the inside often cannot see (Earl
& Katz, 2002).
Leaders in schools and districts who develop the characteristics and qualities
discussed above can provide a positive, data-rich environment to those under their
guidance. In addition to developing and passing on those characteristics, these leaders
should provide a systematic way of developing specific data skills in their
teachers and other staff members.
Mandinach et al. (2006) provide a framework for developing the skills needed in
data-driven decision making. This framework is built on three levels: data, information,
and knowledge. From each level come two cognitive skills that are necessary in the decision-making process: “At the data level, the two relevant skills are ‘collect’ and ‘organize’. The skills at the information level are ‘analyze’ and ‘summarize’. At the
knowledge level, ‘synthesize’ and ‘prioritize’ are the skills seen as relevant” (Mandinach
et al., 2006, p. 8). Each one of these plays an important role in the stakeholder’s use of
data and subsequent decisions made based on those data.
The levels described above follow a logical process that aids in effective data use.
At the first level, data, decisions must be made regarding both collection and organization
of specific data. Mandinach et al. (2006) discuss how each stakeholder needs different
types of data for different purposes, thus the choice of what data to collect is a key issue
for these stakeholders. After each stakeholder has collected data that are relevant for the
task at hand, those data must then be organized in a systematic way so that they make sense and meaning can be extracted from them (Mandinach et al., 2006).
The second level, information, requires stakeholders to analyze and summarize
the data for informational purposes (Mandinach et al., 2006). Again, different
stakeholders will analyze their data according to the purpose for which they were collected. Because each set of data was selected for a different purpose by the stakeholders, the
analyses could be broad or narrow. Once that analysis has taken place, the information
collected must be summarized (Mandinach et al., 2006). Mandinach et al. (2006) state,
“Educators are bombarded with information from all directions and from many sources.
It is therefore vital to have concise and targeted summaries of information that then can
be transformed into usable knowledge, …” (p. 8). This leads to the third level, which is
the knowledge level. In order to turn this information gained through the data into usable
knowledge, stakeholders must synthesize their information. Once they have done this,
the final step in the process is to prioritize that knowledge (Mandinach et al., 2006). In
other words, stakeholders must determine which issues or deficiencies must be addressed
first in order to solve the other issues brought to light by the data.
Finally, Mandinach et al. (2006) summarize this six-step process as culminating in a decision. In
other words, the stakeholder started with data they collected and organized. Then they
analyzed and summarized the information in order to synthesize and prioritize the
knowledge gained from the data. Now, they must make a decision based on what they
have learned through the process. The decision is then implemented and an impact or an
outcome results from that implementation (Mandinach et al., 2006). It is important to
note that this process is oftentimes a cycle that repeats until the desired result
is achieved (Mandinach et al., 2006). With correct training and support, perhaps the
process described above can aid new and veteran teachers in their development and use
of a wide variety of data in the classroom, including standardized test scores, student
work, and teacher-made classroom assessments.
Barriers to Data Use
Leaders and other district officials still face some tough barriers when it comes to
using data, even when they have created and fostered a culture of data-use. One of the
big issues is being able to process and disseminate these data to those who need them in a
timely manner (Mandinach et al., 2006). Another barrier to data use at the district level is
that of “technical challenges.” These include entering and storing data, analysis of the
data, and the presentation of the data (Mandinach et al., 2006). Within these issues come the quality and interpretation of data, and “the relationship between data and
instructional practices” (Cromey, 2000, cited in Mandinach et al., 2006). Finally, district
leaders and officials must be concerned with the lack of educators’ knowledge and
training with the use of both assessments and data (Mandinach et al., 2006). Most
teachers are not trained in data use or measurement and assessment literacy (Popham, 1999, cited in Mandinach et al., 2006). This lack of data and assessment literacy on the
part of many teachers is certainly a large barrier for districts to overcome.
Young (2006) reports that teachers also experience many barriers when it comes
to data use. These barriers include technical difficulties with the data systems, lack of
quantitative knowledge, policies and practices, and data that are not useful in their
classroom setting (Young, 2006). Furthermore, when teachers receive their students’ test
scores at the end of the year, they are already aware of what their students can and cannot
do, based on their performance in class. Thus, many teachers see data as untimely and
unnecessary (Young, 2006). Young (2006) also reports that teachers’ primary focus on data centers on student work done in the classroom. Teachers use this work
to gauge student progress and to determine the next steps in their instructional plans, and
do not simply rely on high-stakes testing data to determine the proficiency and
achievement levels of their students, nor do they rely only on high-stakes data to make
daily instructional decisions.
Other studies have also examined barriers and resistance to data use by educators. The RAND Corporation recently published a report titled
“Making Sense of Data-Driven Decision Making in Education: Evidence from Recent
RAND Research” (Marsh et al., 2006). This report provides a summary of some of the
factors influencing data use by educators. Some of the factors that are discussed include:
the accessibility of the data; the quality of the data that are provided; the timeliness and
motivation to use the data; and the support needed for implementing DDDM into the
classroom (Marsh et al., 2006). When considering many of these factors, it becomes a bit
clearer as to why many teachers struggle to use various forms of data in their classroom,
and rely heavily on student work instead.
In addition to the aforementioned studies, Mandinach et al. (2006) also state that
teachers hesitate to use any single data source to make decisions about the strengths and
weaknesses of their students, particularly in the case of data that come from high stakes
testing. Rather, teachers prefer to use multiple sources of data to judge student learning
and inform their instructional practices, including the use of homework assignments, in-
class tests, classroom performances, and anecdotal notes from observations (Mandinach
et al., 2006), thus corroborating the findings from other studies stating that teachers rely
on student work as their main source of data. Teachers also tend to look for classroom-
wide patterns in data, which results in their decision-making strategies lacking in
“systematicity, from student-to-student, class-to-class, and year-to-year” (Mandinach et
al., 2006). This results in unintentional bias and the oversight of key statistical concepts
such as distribution, validity, and reliability (Mandinach et al., 2006). Ingram, Louis, and
Schroeder (2004) also suggest that, even when teachers look at student work and other
forms of data, they do not necessarily make changes in their instructional approaches, but
rather continue using those same strategies to teach new topics, whether those strategies
proved effective or not.
While teachers may indeed use student work as a primary data source, Hoff
(2006) suggests that tools still need to be developed to help teachers use other forms of
data in the classroom. He suggests that, once these tools are developed or improved
upon, teachers need to be trained in using them in order for DDDM to take hold.
Data must also be made available to the teachers in a timely manner, and programs must
be easy to use so that teachers can plan lessons to help their students achieve where they
have fallen short. Furthermore, teachers must be trained to ensure their understanding of
the actual data themselves (Hoff, 2006).
Classroom Assessment and High-Stakes Testing Data
As discussed previously, all teachers must be prepared to use a variety of
assessments, including high-stakes tests, and the resulting data in their classrooms. This
includes both new and veteran teachers. Athanases and Achinstein (2003) conducted a
study of mentor and student teachers in California’s Beginning Teacher Support and Assessment (BTSA) program. Their study looked at what tools and/or skills mentor
teachers thought they needed most for mentoring new teachers in helping them focus on
student learning. When Athanases and Achinstein (2003) began their study, they used a
questionnaire for the mentors to gauge areas where they thought they needed the most
knowledge in order to support both teacher and student learning. By far, the largest
category was that of assessment (multiple domains of assessment). In other words, this
was the knowledge that mentors needed most in order to focus the new teacher on
individual student learning. The mentors discussed the need to have command of a wide
range of assessment tools and practices, to know formal and informal assessment strategies, to track students over time with the necessary evidence, and to assess
student learning during instruction. They also discussed the need to be able to examine
student work carefully (Athanases & Achinstein, 2003).
Other issues concerning assessment that were brought up as a result of the study
include the knowledge of standards and how to gauge curricular alignment with those
standards, as well as knowledge of formative assessment of the new teacher (Athanases
& Achinstein, 2003). The study showed transcripts and dialogue between the mentor
teacher and the new teacher, and how the mentor was able to coach the new teacher with regard to individual student learning through assessment. As a result, new teachers were
able to see how classroom data and assessment can inform instruction.
Athanases and Achinstein (2003) identified prominent themes and problems in their study. One of the problems mentioned is that both new and veteran teachers report
a lack of university preparation and professional development work as far as classroom
assessment is concerned. Compounding this problem is the fact that much of the assessment knowledge is discussed and learned under the broad umbrella of pedagogical
content knowledge (Athanases & Achinstein, 2003). This lack of preparation at the
university level leads to potential problems with new teachers being able to use test
results and the accompanying data, as well as other forms of assessment, to guide the
instruction of their students.
Many experts believe that there is too strong a focus on data based on
standardized tests. Stanford professor Linda Darling-Hammond and University of
California-Los Angeles Professor Emeritus W. James Popham are two of these experts.
Darling-Hammond (2004) argues for a “more limited and appropriate role for test data as
a component of accountability systems” (p. 1080), while Popham (2003) argues
that classroom instruction can improve if teachers use the correct data. In addition to her
argument for a more limited role regarding data, Darling-Hammond (2004) claims that
Assessment data are helpful for creating more accountable systems to the extent
that they provide relevant, valid, timely, and useful information about how
individual students are doing and how schools are serving them. However,
indicators such as test scores are information for the accountability system; they
are not the system itself. Accountability occurs only when a useful set of
processes exists for interpreting and acting on the information in educationally
productive ways. (p. 1081)
Both of these experts hold similar positions on the use of testing data, though Popham (2003) provides a bit more detail regarding the use of tests and assessments in the
classroom.
Popham (2003) believes that standardized testing data do not provide teachers
with the information they need in order to make instructional improvements, and that such data are the least helpful for this purpose, even though test data are the most widely used
in the U.S. today. Instead, Popham (2003) argues for instructionally beneficial data,
which come from instructionally useful tests, which can also be applied to large-scale
assessments. He proposes five attributes for these instructionally useful tests. The first
attribute is that of significance. In order for a test to be considered “significant”, in
Popham’s (2003) opinion, it must measure a “high-level cognitive skill or a substantial
body of important knowledge” (p. 48).
According to Popham (2003), another important attribute of instructionally useful
tests is teachability. A test must measure something that is “teachable”. The third
attribute is that of describability. This ensures that a test provides clear descriptions of
the skills and knowledge that it measures, which in turn helps teachers to properly design
their instructional activities (Popham, 2003). Reportability is the fourth attribute, which
entails providing the results of the assessment at a specific enough level to help inform
teachers about the effectiveness of their instruction. Finally, Popham (2003) suggests
that assessments should be non-intrusive. In other words, they should not take too long
to administer, as many tests today do. Longer tests lead to shorter instructional time for
teachers, which can cause frustration.
Popham (2003) also discusses “dismal” data—data that have diminished the
quality of education that we provide to our students. He brings up three specific types of
assessments that provide “dismal data”. The first is nationally standardized achievement
tests, as they lack describability, teachability and reportability (Popham, 2003). The
second is standards-based tests, which try to assess far too many standards and do not measure them correctly. These tests also do not tell teachers which part of their
instruction needs to be modified.
Finally, Popham (2003) addresses teachers’ classroom assessments. He has
provided a list of questions for teachers to ask themselves regarding the assessments they
give their students. These questions include:
• Do my classroom assessments measure genuinely worthwhile skills and
knowledge?
• Will I be able to promote my students’ mastery of what’s measured in my
classroom assessments?
• Can I describe what skills and knowledge my classroom tests measure in
language sufficiently clear for my own instructional planning?
• Do my classroom assessments yield results that allow me to tell which
parts of my instruction were effective or ineffective? and
• Do my classroom tests take too much time away from my instruction?
(Popham, 2003).
If teachers can answer these questions and use them as a guide to help inform the
assessments they create in their classroom, perhaps the quality of those assessments will
improve.
While Popham (2003) essentially argues against standardized testing, he does
provide suggestions as to what educators can do to improve the use of data. First, he says
educators should disregard data from any test that is not instructionally useful; and
second, they should push for the installation of instructionally useful tests so that the data
that those assessments produce will lead to better-taught students. Popham (2003) states, “To educators, the wrong data can often be seductively appealing. But the right data will, in fact, help teachers do a better job with students. Those are the data we need” (p. 51).
However, in order for any type of data to be useful and improve student achievement,
teachers must have a firm grasp on assessment.
Student assessment, whether formal or informal, is part of a daily routine for
teachers. In fact, researchers suggest that teachers spend between one-third and one-half
of their professional time assessing their students (Stiggins, 1999; Wise, Lukin, & Roos,
1991). With so much time devoted to assessment, it would be natural to draw the
conclusion that teachers know how to properly assess student work and use that
information to plan instruction accordingly. However, the research does not necessarily
agree.
In one study, Graham (2005) and fellow professors took a close look at the assessment assumptions and practices of teacher candidates in the University of Georgia Network of English Teachers and Students (UGA-NETS). They focused on
two major assumptions connected to preservice teachers’ learning about assessment in a
university/school collaboration. Their first major assumption was that “if teacher
candidates are to assess student learning in new, more reflective and powerful ways, their
university and school-based mentors must be willing to assess differently themselves”
(Graham, 2005, p. 608). The second assumption was that “teachers are torn between
external demands of high-stakes tests and their own intuitive pull to focus on their
classroom-based assessments” (Graham, 2005, p. 608). This has implications for new
teachers as they are caught in the battle over accountability and evidence of student
learning.
Graham (2005) believes that university professors and mentor teachers must also
be able to look at themselves and their assessment practices in order to positively
influence the teacher candidates. At the forefront of discussion was the fact that mentor teachers feel increasing pressure for accountability of student learning, and they worry about how teacher candidates will survive in this new era of accountability unless they have new kinds of experiences with assessment. As a result, the teachers and
professors in the Network focused on creating units of study with various criteria,
including the pre-assessment of student needs, matching learning goals and assessments,
and implementing “a variety of formative and summative assessments to document
student learning and performance, to make planning decisions, and to change directions
and strategies as needed” (Graham, 2005, p. 609). This was an entirely new approach to planning and assessment, one that would require changes in traditional planning habits. The Network would now implement a “backwards
The study itself took place over two years and included two cohorts of
student teachers and mentors, labeled G1 and G2. The first group of student teachers
worked with mentors who were not required to go through the “backwards planning”
training and unit creation, whereas the mentors for G2 were required to do so. This
appeared to have a very strong effect on the attitudes toward planning and assessment for
the first group—they thought “backwards planning” took too much time, and simply
relied on old, “traditional” forms of assessment they saw their mentor teachers use. This
was not the case for G2, as they had mentor teachers who were required to use this
process. Not only did this help the student teachers see the benefits of “backwards
planning” and student assessment for learning, but it also helped the mentor teachers see how they could assess their students more effectively in this age of accountability.
Graham’s (2005) research revealed a few new insights, including teacher
candidates’ lack of knowledge on how to assess student learning, as well as on
assessment practices in general. Additionally, she found that mentor teachers carry tremendous power in shaping the beliefs and teaching styles of student teachers.
The lack of assessment literacy on the part of both preservice and inservice
teachers naturally leads to a lack of data literacy. Mandinach et al. (2006) found that
teachers are reluctant to use data and the tools for data use because they have not been
properly trained. There is a need to train teachers on the use and understanding of data,
as well as on the tools that are available to help with data use (Mandinach et al., 2006). If
teachers are trained and have a stronger knowledge base on assessment and data, they
will be able to adjust instruction for their students, which will allow them to do a better
job of meeting individual learning needs.
Differentiated Instruction
Part of the purpose of teachers utilizing assessments and data is to aid in grouping students for instruction (Brimijoin, 2005). This is called differentiation.
Differentiation can be defined as “a conceptual approach to teaching and learning that
involves careful analysis of learning goals, continual assessment of student needs, and
instructional modifications in response to data about readiness levels, interests, learning
profiles, and affects” (Tomlinson, 1999, 2003, cited in Brimijoin, 2005, p. 254).
Brimijoin (2005) also states that teachers who engage in differentiation based on
assessment results and data see an increase in student engagement with content and better transfer of learning. However, studies show that many teachers lack
the pedagogy to differentiate instruction effectively (Brimijoin, 2005).
Based on a case study she conducted, Brimijoin (2005) provides a discussion of
the essential skills teachers must have in order to differentiate effectively. These skills
include: clarity of learning goals; ongoing assessment; informing instruction; respectful
tasks; appropriate strategies; flexible grouping; and classroom community (Brimijoin,
2005). Teachers who are competent in differentiation generate “explicit definitions of the
knowledge, understandings, and skills that students will gain from a learning experience”
(Brimijoin, 2005, p. 255). Ongoing assessment is also a key skill in differentiation.
Teachers must continually assess their students’ responses to the curriculum, instruction
and assessment. These assessment results are then used to adjust the instruction for the
students, which is seen as informing instruction. With regard to respectful tasks, teachers who
differentiate well provide instructional tasks that are interesting and important to their
students, yet also provide valuable learning experiences (Brimijoin, 2005). Teachers
must also use instructional strategies that are research-based and that keep their students
engaged in the content and lesson at hand. Another important differentiation skill is that
of flexible grouping. Brimijoin (2005) states that “Teachers who differentiate well
ensure that students interact with content and each other in a multitude of ways every
week of the school year” (p. 256). The last essential differentiation skill is that of
developing a community of learners. This community of learners celebrates differences
and belonging, among other things (Brimijoin, 2005). Brimijoin (2005) concludes her
article by suggesting that differentiation is all the more important today because of the ever-increasing diversity in our classrooms.
Corroborating Brimijoin’s (2005) concluding argument, we can look to George’s
(2005) philosophy on differentiated instruction in the regular classroom. George (2005)
states that “educators committed to public education must find ways of providing
excellence and challenge to all students” (p. 185). Moreover, George (2005) discusses
the benefits of students being in a heterogeneous classroom, while also being split into
appropriate learning groups where students can develop “important personal and social
knowledge, skills and attitudes essential to success in adult life, while simultaneously
providing opportunity for varied types and degrees of academic achievement” (p. 186).
He also states that by utilizing “classroom-level strategies” such as differentiated
curriculum and instruction, as well as appropriate assessment techniques, teachers will be
better equipped to support all of their students in this heterogeneous classroom setting,
regardless of their background (George, 2005).
Differentiated instruction is seen as one of the keys to meeting NCLB and
accountability measures. McTighe and Brown (2005) state that “standards and
differentiation can not only coexist, they must coexist if schools and districts are to
achieve the continuous improvement targets imposed on them by NCLB” (p. 242).
However, some educators’ efforts to meet these requirements have
actually resulted in the use of poor teaching strategies that are at odds with what
educational research has found (McTighe & Brown, 2005). These poor instructional
practices include: broad or overwritten curricula; the perception by teachers that they
must cover everything in the “mile wide, inch deep” curricula; worksheet-based teaching
practices for all students, regardless of ability level; and the proverbial “teaching to the
test” (McTighe & Brown, 2005). All of these “teaching” strategies are out of alignment
with what research has deemed as best practice, and they are also at odds with what
differentiated instruction advocates.
In response to these widely used teaching strategies, McTighe and Brown (2005) argue that research-based guidelines for student learning can be effectively
targeted through a backward design planning model combined with differentiated
instruction. They suggest that
curriculum standards need to be unpacked to identify…the big picture; …that
students learn best when they are engaged in purposeful, active, and inquiry-
driven teaching and learning activities...; assessments should require students’
demonstrations of understanding, not just recall of information or formulaic
modeling; …effective instruction accommodates differences in learners’ readiness
level(s), interest, and learning profiles. (McTighe & Brown, 2005, p. 236)
In other words, differentiated instruction and standards-based education can work
together to effectively bring about learning in the diverse student populations being
served in schools today. Evidence of this learning can be found through a variety of
assessments and their accompanying data. By utilizing tools provided by education experts like McTighe and Brown (2005), as well as best
practices like differentiated instruction, teachers can reach students at their level and help
them grow as learners who reach their potential. In the next section, I will review
literature on the role of teacher education in helping teachers develop such skills and
practices.
Teacher Education
Traditional Coursework
Teacher education has traditionally been composed of methods courses for
teaching specific subject matter. These courses generally include methods and pedagogy
for teaching reading, mathematics, science, and social studies (Kaplan & Owings, 2003).
Other coursework typically includes an Introduction to Teaching course that provides an
overview of the profession, as well as perhaps a curriculum course where students learn
to write lesson plans and develop units of study. In some cases, children’s literature,
music, art, and physical education courses are added to the requirements. For those at
smaller colleges and universities, preservice teachers are often Liberal Studies majors; thus, they receive a broad overview of several subjects that would be expected of a liberal arts education. However, coursework varies according to state requirements and the size of the institution; therefore, some teacher candidates may have had more courses in teaching than others, causing considerable variation in teacher education experiences.
Other factors involved in quality of teacher preparation programs and coursework include
curricula, quality and experience of faculty who teach in the program, and the quality of
students admitted to the program (Kaplan & Owings, 2003).
Assessment Coursework
One area in particular that is under scrutiny is that of assessment coursework in
teacher education. Typically, undergraduates, particularly those in four year bachelor
plus credential programs, are not required to take coursework on assessment (Volante &
Fazio, 2007; Wise, Lukin, & Roos, 1991). According to Wise, Lukin, and Roos (1991), this gap in assessment coursework has been recognized for the last thirty years,
yet not much has been done to resolve the issue. However, with the push towards high
stakes assessment and data literacy due to NCLB, there is a renewed interest in providing
preservice teachers with coursework in assessment (Stiggins, 2002). In fact, Stiggins
(2002) notes that school districts are spending thousands of dollars on professional development in order to train teachers in assessment and data use. Rather than
leave the burden on individual schools and districts, he recommends that teacher
preparation programs include this form of training in their programs, and that it be tied to
state teacher licensing requirements (Stiggins, 2002).
In fact, a few years prior to making the aforementioned suggestion, Stiggins (1999) offered a “road map” for teacher education
institutions to evaluate their programs to be certain that they were indeed implementing
assessment training somewhere in their coursework. He (and others) believe that change
can be made by advocating for “relevant classroom assessment training in all colleges of
education” (Stiggins, 1999, p. 23). Furthermore, he discusses how only half of the states
require any form of assessment literacy or coursework as part of their professional
licensing standards. Stiggins (1999) also points out that numerous professional
associations of educators have adopted standards that address assessment literacy, and
expect that teachers are competent in this area. Thus, mounting pressure is being placed
on colleges and schools of education to graduate assessment-literate students (Stiggins,
1999).
Additionally, Stiggins (1999) discusses his belief that “productive classroom
assessment training promotes an understanding of, an ability to apply, and a commitment
to meeting standards of valid and reliable classroom assessment” (p. 24). In an attempt to
promote this understanding of classroom assessment, Stiggins (1999) provides a list of
ways that assessment literacy can be developed. One of the suggestions he makes for
teacher educators is to include a unit on assessment methods in an educational
psychology course, an introduction to teaching course, or even within a subject methods
course. All of these courses are generally taken by preservice teachers in order to fulfill
the course requirements in schools and colleges of education. By including assessment
units here, each candidate is virtually assured of receiving some form of assessment training.
Another suggestion he makes is to include a separate course on assessment methods
(Stiggins, 1999). This would allow for an entire course to be devoted to assessment
concepts and literacy, and perhaps could include data literacy as well. Additionally,
Stiggins (1999) suggests modeling of appropriate assessment methods and techniques by
faculty members, and a course designed specifically around assessment would be an ideal
place to model methods and techniques. Finally, Stiggins (1999) suggests that student
teachers be placed with master teachers who are assessment-literate and can provide
hands-on instruction for them during that student teaching placement. By learning more
about assessment in a classroom setting, the student teachers can see how theory fits into
context.
Furthermore, Stiggins (1999) provides schools and colleges of education with
some suggestions for content requirements within their coursework. The first
requirement he suggests is having preservice teachers connect assessments to clear
purposes. This essentially means that there are a variety of assessment users in the school
system, and each user needs assessment data in a different form, at a different time, and
generally for a different purpose (Stiggins, 1999). Not everyone needs the exact same
data in most cases, and teachers need to be able to recognize the differences. They also
need to understand what types of data they need in order to make instructional decisions
in their classrooms.
Another requirement is clarifying achievement expectations. In other words,
teachers need to know what content is required of their students, and how to transform
those requirements into meaningful instructional activities and assessments. Applying
proper assessment methods is the third requirement posed by Stiggins (1999). His goal
here is to ensure that teachers understand that “no method is, by its nature, superior to
another” (Stiggins, 1999, p. 26). Teachers need to select appropriate assessment methods
for the task at hand. The fourth requirement suggested is that of developing quality
assessment exercises and scoring criteria and sampling appropriately. This competence
speaks for itself, in that teachers need to be able to write high-quality assessments, score
them accurately, and select an appropriate amount of information to include on the
assessments. The final three requirements proposed by Stiggins (1999) are self-explanatory. These include: avoiding bias in assessment; communicating effectively about student achievement; and using assessment as an instructional intervention. Each of these requirements is presented by Stiggins (1999) in the form of a self-study, where
he encourages the schools and colleges of education to pose self-study questions in order
to determine where, or if, they are training their preservice teachers for assessment
according to these competencies. Stiggins (1999) provides a list of questions for each
competency in order to facilitate the self-study.
In addition to providing various ways of including assessment training in
education coursework, multiple professional associations such as the American
Federation of Teachers (AFT), the National Education Association (NEA), and the
National Council on Measurement in Education (NCME) suggest that teachers be held
accountable for demonstrating knowledge in specific assessment competencies. These
competencies include the following: choosing and developing assessment methods that are appropriate for instructional decisions; administering, scoring, and interpreting the results of various types of tests; using those results to make decisions about instructional planning, curriculum, and school improvement; developing valid procedures for report card grades; communicating assessment results clearly to parents and other stakeholders; and recognizing inappropriate forms of assessment, including those that may be deemed illegal or unethical (Stiggins, 1999).
Additionally, the National Board Certification, provided by the National Board
for Professional Teaching Standards (NBPTS), is also focused on the improvement of
teachers’ assessment practices. Sato, Wei, and Darling-Hammond (2008) conducted a
study on the improvement of teachers’ assessment practices in light of participation in the
National Board Certification process. Teachers who participated in this study showed
marked improvements in their formative assessment practices in the classroom compared with a control group of teachers who did not participate in the national certification process. Specific improvements included a greater focus on student learning rather than solely on student grades; the integration of assessment practices into ongoing classroom instruction; the daily use of assessment practices, with less focus on tests and quizzes; and a greater emphasis on “teaching for conceptual understanding and aligning those assessments better with learning goals” (Sato et al., 2008, p. 691).
If teachers are well-trained in data and assessment literacy, implementing and
meeting the above criteria should not pose a problem. Stiggins (1999) notes, “If teachers
do not understand how to produce quality assessments and use them well, their students
are placed directly in harm’s way” (p. 27). By asking teacher education programs to
help with assessment literacy and training, school districts can cut down on the amount of
money they are spending on professional development in this area, and new teachers will
have a greater sense of self-efficacy regarding data and assessments
(Volante & Fazio, 2007).
As shown through the literature, teachers, both new and veteran, appear to be
lacking in some basic assessment knowledge. Teacher preparation programs can help lay
a solid foundation in order to close this knowledge gap.
Effective Teacher Education Programs
Teacher education expert and Stanford professor Linda Darling-Hammond
studied seven highly effective teacher preparation programs around the country. These
seven programs vary in size; some are private institutions while others are public. They consist of the following colleges and universities: Alverno College, Bank Street College, Trinity University, University of California-Berkeley, University of Southern Maine,
University of Virginia, and Wheelock College. While studying each of these programs,
Darling-Hammond (2006) found eight common threads among the programs regarding
the conceptualization of the knowledge base required for effective teaching. These eight
common threads include the following:
• They emphasize understanding learners and learning as central to making
sound teaching decisions;
• They understand that subject matters;
• They unite the study of subject matter and children in the analysis and
design of curriculum;
• They see learners, subject matter, and curriculum as existing in a
sociocultural context that influences what is valued and how learning
occurs;
• They seek to develop a repertoire of teaching strategies and an
understanding of their purposes and potential uses for diverse goals and
contexts;
• They place extraordinary emphasis on the processes of assessment and
feedback as essential to both student and teacher learning;
• They seek to develop teachers’ abilities as reflective decision makers who
can carefully observe, inquire, diagnose, design, and evaluate learning and
teaching so that it is continually revised to become more effective;
• They see teaching as a collaborative activity conducted within a
professional community that feeds ongoing teacher learning, problem
solving, and the development of ever more sophisticated practice.
(Darling-Hammond, 2006, pp. 81-83)
Each of these conceptualizations of knowledge can be sorted into three main categories
that Darling-Hammond (2006) has labeled as: “1) Knowledge of learners and how they
learn and develop within social contexts; 2) Conceptions of curriculum content and
goals—understanding of the subject matter and skills to be taught in light of the social
purposes of education; and 3) Understanding of teaching in light of the content and
learners to be taught as informed by assessment and supported by productive classroom
environments” (p. 83). The need for this type of knowledge to be taught in teacher
preparation programs can be summed up in the following statement by Darling-
Hammond (2006):
If teachers must ensure successful learning for students who learn in different
ways and may encounter a variety of difficulties, then teachers need to be
diagnosticians and planners who know a great deal about the learning process and
have a repertoire of tools at their disposal. In this view, teaching requires a
professional knowledge base that informs decisions about teaching in response to
learners. (p. 80)
In addition to Darling-Hammond’s (2006) study, Dean et al. (2005) report on effective
teacher preparation programs in light of what the U.S. Department of Education has
deemed “effective teacher preparation.” The only institution deemed “effective” by both studies is Alverno College.
In 2000, the U.S. Department of Education established the “National Awards
Program for Effective Teacher Preparation” (Dean et al., 2005). This award was
established to “recognize teacher preparation programs that have compelling evidence of
their effectiveness in readying teachers to help all students meet high academic
standards” (Dean et al., 2005, p. 284). The award was also established with the goal of
furthering discussion on the elements of effective teacher preparation. Dean et al. (2005)
took a close look at four programs that have won this award: Alverno College, East
Carolina University, Fordham University, and Samford University. They found five
common elements in each program that are considered pillars for establishing effective
teacher preparation programs. These elements are: licensure requirements; standards;
accreditation; P-12 partnerships; and continuous improvement (Dean et al., 2005).
Licensure requirements are essential to keep in mind as teacher education
programs look to improve their practice, and most programs, including the award winners, do so on a regular basis. Teacher education programs must monitor changes in state
licensing requirements in order to ensure their teachers meet those requirements and will
be eligible for certification. Each of the four winners of the Effective Teacher
Preparation Award pays very close attention to these requirements, and makes adjustments
in coursework and field experiences as needed (Dean et al., 2005).
With a national focus on standards in education, it seems only natural that good
teacher preparation programs would also have a focus on standards. This is the second
common element among the four award-winning focus schools. In each of these
programs, standards are an essential component in program design (Dean et al., 2005).
These standards include the corresponding state standards for teacher preparation, as well
as standards created by the National Council for Accreditation of Teacher Education
(NCATE). Additionally, Samford, East Carolina, and Alverno “align their program goals
with the standards of the Interstate New Teacher Assessment and Support Consortium
(INTASC) and with those of the National Board for Professional Teaching Standards
(NBPTS)” (Dean et al., 2005, p. 284). The program goals also aid in data collection
evaluation for these institutions. The state of North Carolina requires annual
performance reports from its teacher preparation programs, thus encouraging institutions
to evaluate their programs systematically each year (Dean et al., 2005).
Accreditation is the third common element. Each of these programs is accredited
by NCATE. One of the main focuses of NCATE is on evidence of subject-matter
knowledge and teaching effectiveness of the candidates coming out of its member
institutions (Dean et al., 2005). Dean et al. (2005) state that this focus has resulted in
most program leaders “thinking more carefully about how they collect and use data to
monitor and improve program effectiveness” (p. 285).
Forging strong P-12 partnerships is another common element. By partnering with
local schools and districts, teacher education programs can receive feedback about their
program and its effectiveness. As a result of this feedback, programs can improve and
enhance the field experiences of their teacher candidates. However, it can sometimes be
difficult to establish these relationships due to funding issues and a lack of understanding
of how the other setting functions (Dean et al., 2005).
Continuous improvement is the final common element among the award-winning
programs. The programs each view their work as ongoing, and they “take steps to help
faculty members embrace a culture of continuous improvement” (Dean et al., 2005, p.
285). Samford University specifically follows models and designs aimed at
improvement, including the use of the Essential Changes Model and the Quality
Principles of Change (Dean et al., 2005). Dean et al. (2005) also suggest that “Another
aspect of the culture of continuous improvement is connecting with other institutions to
learn about and share examples of best practices in teacher preparation, including
program evaluation” (p. 286). This can involve the use of professional publications,
websites, and presentations (Dean et al., 2005).
As a result of studying these award-winning institutions, Dean et al. (2005) have identified several strategies to improve the evaluation of all teacher preparation
programs, which is essential to improve teaching and learning. They suggest that
programs take four actions: 1) foster commitment to using data for improvement; 2) build
partnerships and connections; 3) model effective communication and collaboration; and
4) promote participation in the evaluation system (Dean et al., 2005). The goal is that
through the implementation of each of the common elements, in conjunction with the
actions just presented, teacher education programs can all produce highly effective, or
“highly qualified” teachers, a stringent requirement in our current age of accountability.
Conclusion
After a thorough examination of the literature, it can be concluded that standards-
based reform and legislative policy such as No Child Left Behind are here for the long
haul (McTighe & Brown, 2005), and that both have had a strong impact on the K-12
education system. Furthermore, standards-based reform is moving its way up to higher
education in the form of the P-16 Initiative (Chamberlain & Plucker, 2008), and this
Initiative is already having an impact on teacher preparation programs in the United
States, with many more implications on the horizon (Harris et al., 2008). Because of this
reform effort, school districts and teacher preparation programs are faced with the task of
ensuring that all teachers are highly qualified, per government regulations (Berry et al.,
2004). The highly qualified component has left a gap in traditionally hard-to-staff
schools in urban and rural areas, where test scores are typically lower and schools are
labeled as “needing improvement.” There is substantial evidence that a highly
qualified teacher does have an impact on student achievement (Kaplan & Owings, 2003;
Darling-Hammond, 2000).
As a result of this era of standards-based reform, schools are turning to data use in
order to measure student growth and academic progress, as required by state and federal
governments (Mandinach et al., 2006). Data driven decision making (DDDM) is a hot
topic in education, as schools are required to disaggregate and analyze student test scores
and report the information to the public. Studies have shown that districts and schools
are seeing positive outcomes in light of increasing their data use (Diamond & Cooper,
2007). However, barriers to data use exist for leaders and other district officials. Barriers at the district level include technical
challenges such as entering and storing data, analyzing the data, and the presentation of
the data (Mandinach et al., 2006). Evidence shows that barriers also exist for teachers in
the classroom. This includes the accessibility of the data, the timeliness of the data, and a
lack of quantitative knowledge (Young, 2006). These barriers can be somewhat removed
through developing a culture of data use, as studied and described by Earl and Katz
(2002).
The research on data use and barriers, particularly where teachers are concerned,
led me to look at assessment practices of teachers in general, where it was discovered that
many teachers are not prepared for any form of assessment in the classroom (Stiggins,
2002). Stiggins (2002) and others suggested that teacher education programs need to start preparing future teachers with a strong foundation in assessment practices in order to alleviate pressure on school districts, which are spending thousands of dollars and countless hours training teachers in these practices.
differentiated instruction and its implications for use in standards-based reform efforts
(McTighe & Brown, 2005).
Studying assessment practices of teachers in the classroom led me to examine
teacher education. I looked at traditional coursework in teacher preparation programs,
and found that the bulk of the courses were methods courses (Kaplan & Owings, 2003),
and that most programs ended with a one-semester student teaching experience. Finally,
I looked at assessment coursework in undergraduate teacher preparation programs, and
discovered that most programs do not implement a separate course on assessment
(Volante & Fazio, 2007; Wise, et al., 1991). Stiggins (1999) provided a road map of
sorts for what teacher preparation programs could do to implement assessment training in
their coursework.
Upon examination of the relevant literature and topics covered in this review, a
gap in the research begins to emerge. While some research and other informative articles have been written on overall classroom assessment, there is a huge gap when it comes to actual data literacy, among both new and veteran teachers. Moreover, while there
has been discussion of the need for assessment coursework in teacher education
programs, data literacy, in general, has not come up in these discussions. This is the gap
that my research starts to fill.
Given the current accountability system we are functioning under as a nation, data
literacy is becoming an extremely important skill for teachers to have. If teachers are
going to be held responsible for closing the achievement gap among their students, they
must be able to accurately, and confidently, interpret and use multiple forms of data. The
question is, are teacher education programs laying a solid foundation for data literacy,
and if so, how?
CHAPTER THREE
Methodology
Introduction
This chapter covers the methodology that was used in conducting the research for
this study. It describes the design, instrumentation, data collection procedures, and data
analysis. As mentioned previously, the purpose of this study was to examine data literacy
in preservice teachers, as data literacy is an important skill in our age of accountability
and standards-based reform. As a result, teacher education programs need to be
preparing their candidates to be assessment and data-literate, beginning the moment they
set foot in the classroom. This study sought to generate new knowledge in order to fill an
existing gap in the research. The gap, as mentioned previously, is that there is little, if
any, research on how teacher education programs are preparing their candidates to be
data-literate.
This study took place at three universities in southern California with various
sizes of teacher education programs in order to answer the following research questions:
• How do different teacher education programs prepare preservice teachers
to use data to inform instruction?
1. What are the education faculties’ beliefs about the need for preservice
teachers to learn how to use data?
2. What are programs doing to provide basic assessment literacy to
preservice teachers so that they are data-literate?
3. How are preservice teachers being taught to use data to differentiate
instruction?
This study was qualitative in nature. Qualitative methods worked best for this study because I sought to answer "how" and "why" questions in my research and examined real-life situations in their natural setting (Merriam, 1997). Furthermore, I sought to understand how the parts (courses) work together to form a whole (programs) to prepare future teachers for data and assessment in context (Merriam, 1997).
Interviews and surveys were my primary tools for collecting qualitative data,
although one focus group at one of the schools was also conducted. According to
Merriam (1998), interviews are "a common means for collecting qualitative data" (p. 71). In fact, she goes on to note that in all qualitative studies, at least some of the data are collected through the interview process. The main purpose of an interview is to obtain a special kind of information: that which we cannot directly observe. Patton (2002) explains it as follows:
We interview people to find out from them those things we cannot directly
observe….The fact is that we cannot observe everything. We cannot observe
feelings, thoughts, and intentions. We cannot observe behaviors that took place at
some previous point in time. We cannot observe situations that preclude the
presence of an observer….We have to ask people questions about those things. (p.
341)
Therefore, Patton (2002) concludes that the purpose of conducting an interview is to
allow us to “enter into the other person’s perspective” (p. 341). By conducting interviews
with a variety of faculty, I was able to obtain information from and gain the perspective
of those working directly with the preservice teachers. Through the focus group and surveys with teacher candidates, I gained information on their perceptions of their coursework, as well as their self-efficacy regarding their preparedness to use data in the classroom.
Sample and Population
This study focused on three universities located in southern California, all of
which have teacher education programs. The universities ranged in size and breadth of
programs, and included one large private institution (School A), one small private
institution (School B), and one public institution (School C). The universities were
selected based on a few criteria, including location, degrees offered, and the number of
students currently enrolled in teacher education coursework. School A, although it is a
large private institution, has a smaller teacher education program with approximately 75
students, and School C, though a large public institution, also has a smaller teacher
education program with approximately 80 students. School B, while a smaller private
institution, has approximately 90 students currently enrolled in teacher education
coursework. Each institution provided undergraduate coursework as well as graduate
coursework, and two of the three required a “fifth year” from their students in order to
obtain a teaching credential, although School A is currently phasing out its undergraduate program altogether and will offer only master's and doctoral degrees in the next year.
All offered both multiple and single subject credentials. Only the small private institution
offered a teaching credential as part of the initial bachelor’s degree. Each institution
offered either a Master of Arts in Teaching (MAT) or a Master of Education (M.Ed.), and
these programs included both preservice and a few inservice teachers. The students were
a mixture of multiple subject credential candidates and single subject credential
candidates at each institution that was studied. Most of the students were full-time students, while just a few were part-time students. The students who participated in
the focus group or via survey were currently enrolled in the program and either currently
had one of the professors who was interviewed, or had one of those professors in the
previous semester. Students were only allowed to participate under those circumstances.
Each of the ten professors interviewed held full-time positions in the teacher education
program, and all of them were women. Nine of the ten professors held doctorate degrees,
either a Ph.D. or an Ed.D. One of the professors was participating in a
“Teacher in Residence” program, and had been teaching at the institution in that role for
six years. The years of experience in teacher education for the professors who
participated in the study ranged from four and a half years to just over thirty years.
The goal was to study a variety of institutions that implement different
coursework in their teacher education programs, as allowed by the California
Commission on Teacher Credentialing and other state agencies. I contacted teacher
education directors at each institution via email and explained my study to them. I then
used their feedback to select the professors in their program whom I interviewed for my
research. I asked the professors whom I interviewed to share information about my online survey with their students so that I could collect data from them. This selection of institutions, professors, and students falls under purposeful sampling (Patton, 2002).
Data Collection Procedures
The primary source of data collection was semi-structured interviews of both
graduate and undergraduate teacher education faculty. These interviews focused on the
faculties’ beliefs on the need for preservice teachers to be data-literate, and how this is
being accomplished in their courses. I interviewed three full-time faculty at the public
institution, three full-time faculty at the large private institution, and four full-time faculty
at the small private institution. At each institution, the teacher education director was
included as one of the faculty interviews. Therefore, I had a total of ten faculty
interviews across three institutions. Each interview lasted approximately one hour, and
was digitally recorded. Upon completion, all interviews were fully transcribed by an
outside vendor. The protocol that was used for the interviews can be found in Appendix
A.
I also collected data through one focus group of three teacher candidates enrolled
in education coursework at the large private university. This focus group explored how well prepared these candidates felt to use data in the classroom for instructional purposes. My original goal was to conduct a focus group of teacher candidates from
each professor whom I interviewed. I created an email inviting the students of the
professors whom I interviewed to participate in the focus groups. I provided detailed
information and offered $10 gift cards for those who opted to participate. I then sent the
email to the directors of the teacher education programs at two of the schools, and they in
turn forwarded the email to their students via a listserv. Approximately eight students from one institution responded, and I scheduled a focus group with them. When I conducted the focus group, however, only three of the students were present. After this experience, I talked with the director of the teacher education program at the second institution, and she stated that the students were simply too busy at that time of year, and that recruiting would likely be just as difficult there. Recruiting for the focus groups thus proved to be a daunting task; as a result,
an online survey was created that contained the same questions as the focus group
protocol. Students from each professor interviewed at each institution participated in the
online survey, which used text boxes to allow students to type extended responses to the
questions, thus preserving the qualitative nature of the study. The protocol for the focus
group can be found in Appendix B, and the protocol for the online survey can be found in
Appendix C.
Data Analysis Procedures
Data analysis procedures followed Creswell's (2003) six steps for
qualitative data analysis. Creswell (2003) states that, “the process of data analysis
involves making sense out of text and image data. It involves preparing the data for
analysis, conducting different analyses, moving deeper and deeper into understanding the
data, representing the data, and making an interpretation of the larger meaning of the
data” (p. 190).
The first step in qualitative data analysis is to organize and prepare the data (Creswell, 2003). This included transcribing the interviews and focus group, as well as downloading and converting the survey data into a text document. In total, the interviews and focus group yielded approximately 180 pages of transcribed data. The
second step was to read through the data. This was done to get a “general sense” of what
the data included, as well as tone and overall impression (Creswell, 2003). Step three
involved the coding of the data. I organized my data into "chunks," put them into categories, and labeled them (Creswell, 2003). I used HyperRESEARCH™ software to facilitate the coding process. Because I collected a variety of data, I was
able to triangulate those data. Patton (2002) describes the purpose of triangulation not as
to necessarily show that all the different sources lead to the same results, but rather to
“test for such consistency” (p. 248). The fourth step involved the generation of “a
description of the setting or people as well as categories or themes for analysis”
(Creswell, 2003, p. 193). The coding process continued here as themes and categories were generated; the list of codes developed for this study can be found in Appendix D. Each theme and category was thoroughly analyzed. Step five involved representing the themes and descriptions in the qualitative narrative (Creswell, 2003). I detailed each of the themes individually, then connected them together as a whole, as the research allowed.
Finally, step six required making an interpretation of the data (Creswell, 2003). Here I explain what I found through my research, connect it to the literature, and identify additional areas in need of future research in chapter five of this study.
Ethical Considerations
Ethical considerations included such issues as informed consent, confidentiality,
and protection of participants’ anonymity. My study was deemed exempt by the USC
Institutional Review Board, as it found no risk to human subjects. Therefore,
informed consent was not needed at two of the universities where I conducted my
research. However, the small private institution required me to go through their
Protection of Human Subjects process as they did not accept my exempt status from
USC’s IRB like the other two institutions did. I was approved by their board, and I
obtained informed consent from each individual at the small private institution before I
conducted any interviews or collected any survey data. Because I was required to obtain
informed consent at one of the universities, I followed what Creswell (1998) suggested. He recommends that the consent form address issues such as:
• The disclosure of the purpose of the study;
• Data collection procedures;
• Researcher’s confidentiality commitment to participants;
• Voluntary withdrawal information;
• Possible benefits from participating in my study;
• A statement of possible risks from participating in my study; and
• Signature and date information from both the researcher and participant
Following Creswell’s (1998) suggestion ensured that all subjects who participated in my
research study understood the important details, and that they were indeed participating
on a voluntary basis. It also reiterated that they could withdraw from the study at any time.
The size of some of the universities that were studied could have created problems with anonymity. To minimize the risks to anonymity, I did not reveal participants' names to anyone. Data gathered during the research process were carefully guarded and kept confidential. In presenting my findings in chapter four, I did not use any names of institutions, professors, or teacher candidates, in order to help ensure their continued anonymity.
At all times during the data collection, analysis, and reporting process, I followed
the University of Southern California’s procedures, including the approval of the
Institutional Review Board (IRB), which was actually an exemption, as well as the
procedures of the other institution involved in my study. In particular, I adhered to their
guidelines for ethical conduct in research.
Limitations of the Study
This study is bounded by time, location, and transferability. The study occurred over a three-month period and therefore was not longitudinal. This potentially limits the findings, as the study cannot reveal trends over time in how teacher education programs change as data use for accountability increases.
This study was also limited by the fact that only universities in southern California were included in the research. Therefore, the study did not take into consideration institutions of higher education in any of the other 49 states. The goal was that the findings from this study would transfer to other institutions in other locations, but such transferability is not guaranteed; thus, it constitutes a limitation as well.
Researcher’s Subjectivity
Issues of bias may have arisen because I am a part-time teacher education faculty member at a small, private institution in southern California. Because I interviewed teacher education faculty, my ability to remain completely impartial could have been compromised as my own philosophies regarding teacher education surfaced during the research process.
Summary
In this chapter, I have given a description of my data collection methods and
analysis that I used in conducting my research. In chapter four, I present my research
findings and analysis, and in chapter five, I present a summary and implications of those
findings.
CHAPTER FOUR
Data Analysis and Interpretation of Findings
Introduction
Accountability in public education within the United States has received much greater focus over the last eight years because of the standards arising from the No Child Left Behind Act of 2001. Schools are being held to higher standards of student learning because of NCLB and are required to demonstrate proficiency in these standards through such means as high-stakes testing. One way to demonstrate this proficiency comes in the form of making what is called "adequate yearly progress" (AYP). In order to report AYP to both the state and federal governments, schools and districts are required to use student achievement and demographic data. This use of student data has trickled all the way down from the district office to the classroom teacher. Teachers are expected to be able to use and interpret multiple forms of student data to make routine instructional decisions in the classroom. As a result, school districts are spending tens of thousands of dollars annually to train their teachers in data and assessment skills. This not only affects operational and training budgets but also demands a significant commitment of time and resources. Arguably, one way to reduce costs while improving the efficiency and effectiveness of data use by classroom teachers is to begin preparing teachers to use student data and assessment information throughout the course of their teacher preparation programs. But is this occurring?
The purpose of this study is to expand the very limited research base regarding
how teacher preparation programs are addressing the need to prepare their candidates in
the interpretation and use of student data, which includes demographic information, test
scores, and other data collected in the classroom resulting from student work. This
chapter presents an analysis and interpretation of the research data collected in a qualitative study of three teacher preparation programs and their approaches to educating teacher candidates in the use of student data to inform and enhance instruction. The study looked closely at education programs at three universities in southern California, in particular to gain an understanding of their approaches to data and assessment literacy. It also examined how these programs are attempting to help their candidates become proficient at using student data to inform instructional decisions in the classroom, particularly where differentiated instruction is concerned.
The chapter is organized around the themes that emerged from the qualitative data analysis; these themes were identified by coding the data and searching for patterns within the codes. It is structured according to each of the sub-questions below and the major themes within each sub-question.
Research Question:
How do different teacher education programs prepare preservice teachers to use
data to inform instruction?
Sub-Questions and Themes:
• What are the education faculties’ beliefs about the need for preservice teachers
to learn how to use data?
1) General Faculty Beliefs about the Need to Learn how to Use Data
2) The Influence of the Teacher Performance Assessment (TPA) and
the Performance Assessment for California Teachers (PACT)
3) Data Literacy and Analysis not Intentionally Taught Program-Wide
• What are programs doing to provide basic assessment literacy to preservice
teachers so that they are data-literate?
1) Changes in Programs Due to TPA and PACT
2) Reading Assessments
3) Modeling Other Forms of Assessment
• How are preservice teachers being taught to use data to differentiate
instruction?
1) Focus on English Language Learners and Other Special Populations
2) The Missing Link: Most are not Using Data to Teach Differentiation
Education Faculties’ Beliefs about the Need to Learn How to Use Data
Three themes were prominent in this first sub-question regarding the need for preservice teachers to learn how to use student data. The first theme concerns teacher education faculty's beliefs regarding the need for their teacher candidates to learn how to use student data. The second theme that surfaced was the importance of learning to use
student data due to the Teacher Performance Assessment (TPA) and the Performance
Assessment for California Teachers (PACT). As a result of California SB 2042, and later
SB 1209, teacher candidates in the state of California are required to demonstrate
proficiency in specific teaching areas through the use of Educational Testing Service’s
Teacher Performance Assessment, or through the alternative TPA used at some
institutions called the Performance Assessment for California Teachers. Both the TPA
and the PACT require teacher candidates to use various forms of student data to
demonstrate proficiency in certain areas, including lesson planning and assessment.
Therefore, teacher education faculty now understand the importance and relevance that
data use has in the training of preservice teachers.
The third prominent theme that emerged from this sub-question was the lack of
intentionality on the part of programs to directly teach data literacy and analysis. While
faculty members acknowledge the importance of teaching their students about student
data use in classroom situations, particularly because of TPA and PACT implications,
they feel that what is being done in their teacher preparation programs is not as intentional as it should be with regard to explicit instruction on student data use. This sentiment was especially prominent at School A; faculty at School B and School C also acknowledged that it is an area within their programs that needs improvement.
1. General Faculty Beliefs Regarding the Need to Learn How to Use Data:
One of the first themes to emerge was general faculty beliefs regarding the need
for their students to learn how to use student data. The beliefs of the faculty emerged in their responses to questions about data literacy within both their programs and their coursework. The faculty who were interviewed, as a whole, appear to understand the important role that student data play in education today, and they also appear to understand the variety of data that teachers and schools are required to use, particularly where assessment and high-stakes testing are concerned. Many of the professors' beliefs regarding the need to learn how to use student data are expressed specifically through course topics and assignments.
All three professors at School A implement some form of data use or instruction
in their coursework, particularly in the context of student assessment. One professor at
School A stated the following when asked about data literacy promotion and skills
students obtain in her course:
I think they leave my course with an understanding that you really need to use
multiple indicators and you really need to use authentic indicators and that
probably multiple choice tests and true/false are not the most authentic and
reliable indicators. And while they’re one piece of data we collect that they really
need to be balanced with a whole lot of others….
This professor also discusses how her students learn to gather reliable student data and
what to do with it once they have it. She says:
I do give them multiple indicators about how to collect information and data on
student learning. And I spend one to two sessions on it, and I think most of the
other professors do. It comes at, kind of the end of the semester when we’ve
talked about strategies and all kinds of things to use in social studies. I actually
walk in and give them a pop quiz as a fake and they struggle with it. I let them
squirm for about a minute and then I can’t take it anymore either. And then I say,
you know, this is not real. Just think about how you feel right now. In terms of
data, collecting data on what kids know. You’re really collecting data on how
they manage their stress and is this an authentic data collection type of thing?
And I get them to understand pop quizzes are not an authentic or reliable way to
collect data on little kids now.
This professor's students are exposed to how certain forms of data collection can be unreliable, as what one is trying to measure does not actually come through in the data that are collected. This is especially important in the realm of high-stakes testing today, given that not all tests are valid or reliable, nor do they necessarily assess that which was intended to be assessed. This same professor also goes on to discuss some shortfalls when it comes to student data use, especially in her class. It is clear that she believes data use is important when she states:
One of the things we’ve tried to do in the past which I have to tell you I have not
been real successful at, and that is to have students keep track of data through
student progress. So, you know, we have intermittently here and there asked
students to produce evidence that the students in their classes are learning, but,
and I know, I think this kind of sounds like a cop out because it sounds like kind
of one to me, but student teachers are trying to learn so many things at one time
and I think that’s something that’s fallen by the wayside. And, I think that
outcomes of student learning are not something that we have really insisted that
they produce. Now they produce outcomes of their own learning. We have a
verification of competency form that they have to track their meeting the
standards. We have a pretty comprehensive Rubric that is the PACT Rubrics that
we have them use about tracking their own learning, but I think we have not been
good enough in having them put the proof of their good teaching into student data
that shows high-learning outcomes, and I’m hoping that that will be something
we’ll be better at in the new program.
The other two professors at School A also use specific assignments and topics in their coursework in order to promote data literacy in their students. One of these professors is the reading methods professor, and assessment and data are a large focus in
her class, with a lot of “hands on” experience. Her beliefs regarding the need to learn
how to use student data are expressed in the following narrative:
Both the reading classes, both elementary and secondary and then the bilingual
methods classes, that I think I’ve probably, realistically have done work in data
literacy because the other classes have been just more theoretical-based classes.
So in the reading methods class in both of them, I guess the way I’ve promoted
data literacy is by, there’s a whole segment in the class when they’re exposed to,
and it runs about two to three weeks, they’re exposed to assessments in the area of
literacy, or assessments in the area of language development and so the way that I
teach it to them is the first thing we do is we go over literature on it…. We go
over those [assessments], we talk about the implications of what the results of
those mean, how the students are labeled in their school systems and what it
means for the programs and the instruction. And then what they do, they don’t do
assessments such as these, we talk about understanding language development so
after having discussions around language development, levels of language
development like silent period, to pre-production, to early production; the early
developmental stages of language acquisition, we then have them go out and learn
the story of an English learner. So they also have a case study and their case
study is go interview the student. Go interview the student, interview their
teachers, their parents, their brothers, everybody. Find out what was their primary
language, when did they start learning English, what program instruction were
they in, talk to the teachers, what were their strengths, what were their
weaknesses, and then they have to start connecting that to trying to understand
why they are where they are in language. So let’s say they’re a 7th grader and
they’re still at an English language development level two, they’re still at early
production, then they have to try to make sense of that. Well looking up their
story, so this is more of assessing and gathering data on the learner, to make better
decisions about what does that mean then and how I teach them this chapter from
the science book. If I know this child doesn’t understand what he’s reading then
how do I have to do it differently? So we don’t get into formal assessments with
them, it’s more qualitative, interview, get a narrative story out of what you know
about this child as a language developer and the analysis is why have they or have
they not reached a certain level of English language development based on their
timeline and their history. And different things come up. So that’s the data, the
way we use data in that class. To understand the learner. So we don’t get into
now let’s look at instruction as much. They have a separate assignment that’s
developing a, like an ELD [English Language Development] SDAIE [Specially Designed Academic Instruction in English] lesson but it's not connected back to the case study…. and we go over things like what are all the different types of assessments out there. High-stakes testing and what does that mean, looking at the CST [California Standards Test], looking at, just things like that. But it is just to kind of give them an overall picture of what assessment looks like in the schools in literacy and accountability. What is the accountability of the schools?
Again, this professor is providing a lot of hands-on, practical use and application of
assessment and data in her coursework, thus expecting her students to be able to use a
variety of student data, both in their coursework and in the classroom.
The third professor at School A emphasizes data literacy in her teacher candidates
through looking at behaviors of the students, discussing the importance of studying and
understanding student work, and through lesson design. Her beliefs regarding the need
for student data use can be seen in the following statement:
I hope that our students are getting an awareness of how to read children in terms
of looking at their behaviors both individually, in dyads, in triads and groups in
order to really look at what the signatures of behavior mean in terms of learning
and terms of social interactions and so forth. So that’s what I share with them; to
be able to read the behavior of children. The second thing that I know that we’re
trying to do is, that they have an assignment this week to bring in students’ work
and we are going to spend our time looking at the implications of student work on
two levels. One is, does it really match the objectives that in fact when we have
tried to teach in a lesson and secondly, what does it inform us about in terms of
grouping and new lessons. Do we have to re-teach, what do we have to re-teach,
why do we have to re-teach it, etc.? ….So I try to move them away from the
statistical or the grading kind of thing to really reading. What does this tell you?
We have all different kinds of simulations where in fact I have dummied up kids
and their work and we sort them and look at the implications for designing
lessons. And then the fourth thing that we’re going to do, which I think is really
important is that toward the end of the semester, they have to then look at
constructing their own assessment tools and reflect on not how good the lesson
was but how able those assessment tools are to inform them of what they were
learning and what needs to be learned. So going backwards to go forwards…. It’s
a really important issue. It’s very different than defending yourself against really
understanding children’s work by just grading it which I see happens all the time.
They’re putting stars, and pluses and happy faces or whatever they do. They tell
me 25% of the kids are proficient. Well, what does that mean? And I think the
other part of it is that I think there is a big difference between measuring what
children do now versus what they could potentially do. And so even if the child is
proficient doesn’t mean that you’re through. It doesn’t mean who they are. Well,
they got an A. So what? You know, does it really mean anything until you can
say, is this the best the child could do? What does it tell me the kid can be doing?
So you know, the whole concept of pushing them and not resting on a grade.
Teacher candidates in this professor's course seem to be learning that grades are not the "end-all" for understanding student learning and progress, and that there are other ways of obtaining information (data) on their students. While grades are important, candidates must move beyond them in order to ascertain where students truly are in their educational progress.
Looking at School B, it is also evident that professors have beliefs about the need
for their teacher candidates to learn how to use student data. The reading methods
professor at this institution discusses where the program first introduces data and
assessment, and goes on to talk about her beliefs from her own courses. She states:
Well, first of all I think they’re giving them an overview in Intro to Teaching
about assessment and the importance of it. Different kinds of assessments. In
addition to that is that we’re exposing them to different kinds of assessments by
the way in which we structure our own classes. We use different assessments
such as we use portfolios as an example. We use rubrics as a way of
demonstrating to them how to evaluate students in another way. And then they
have the more pen and pencil or pen and paper kinds of exams that are fairly
traditionally designed…. I think they are exposed to the demographic data. When
I was teaching the methods class that was one of the first things we talked about
was demographic data and looking at attendance records and how poverty impacts
students’ success in school and so forth. So we looked at a lot of different home
languages that are spoken and then who are the resources, the resource people in
the schools that can assist teachers and things like that.
Teacher candidates here are exposed to different types of data, including the data that rubrics and portfolios yield, as well as student demographic data. All of these are relevant and widely used in schools today.
Another professor at School B discusses the importance of gathering student
information, just as a few of the professors at School A discussed. This professor also
discusses the importance of observing your students and using that information to help
guide your lesson. She states:
So we spend a lot of time in class talking about what kind of information should
you get and what should you do with that information, so that’s a big focus in my
class. And then we spend a whole day too talking about formative assessment,
like gathering information informally in class and making adjustments in your
lesson, either the one you’re working on or the current lesson to meet the needs
that you’re learning about. So it’s not just standing up there teaching a lesson. I
really drive that home. You need to always be looking are your students tracking
with you and how do you know? What information do you get that tells you that
they’re listening? If they’re looking at you that doesn’t tell you anything. So we
spend a lot of time on that….We even take sample data like someone has gathered
in a TPA, information about students. It's not, yeah, it could be CELDT [California English Language Development Test] scores, it could be cum(ulative) file, standardized test score data, it could be whatever.
And here’s what this person decided to do to meet this student’s needs. What
would you change about that and what else would you add and that kind of
thing…. Look at sample data and what would you do differently.
A third professor from School B corroborates this professor's response and the beliefs about data use inherent in it. Both of these professors teach curriculum methods courses, one for multiple subject credential students and one for single subject credential students.
This third professor discusses the need to gather information, or data, on students for the
TPA as well as for general understanding of the students one is teaching in the classroom.
She responds:
With their TPA 2 which is embedded in my class, they choose a focus student
from their field experience that is an ELL student and then they choose a student
that has another learning challenge or need. So they are gleaning information
about these students and making decisions for their TPA based on that. So based
on that information, so that is data gathering and then it is also taking that data
and using it to drive instruction. And that’s the way TPAs are set up so they’ll be
doing that again in three and again in four. So it is embedded in the program
through the TPAs…. Task two and four is just a special learning challenge is the
way the state, you know. So they can choose a student who has a 504 [Section 504 of the Rehabilitation Act and the Americans with Disabilities Act] or an IEP [Individualized Education Program] or as an unidentified GATE student. They don't have to. It could be an ADD [Attention Deficit Disorder],
you know, it could be just a student who just doesn’t participate or who is very
shy or who’s below grade level in reading but isn’t really an identified learning
disabled student, because they could be low in reading because their parents
moved every two years or whatever. So whatever it is, they can go to
cum(ulative) files, they talk to their master teacher, they observe the students,
they do student surveys. We encourage them to do a student survey, we
encourage them to just, you know, any sources they can get. But all of that
information is pertinent to driving their instruction and deciding what kind of
accommodations or instructions the student needs. Whether they’re visual, I gave
them a whole group of different little tests they can do to, you know, that can help
them try to determine if they are kinesthetic or a visual learner. You know, what
kind of learner they are, things they like to do to show the teacher that they have
learned the material. So, just that kind of data.
Both of the curriculum methods professors at School B are responsible for one of the
TPA activities in their course, so their beliefs on the need to learn how to use student data
somewhat revolve around that TPA. Yet, they also both appear to believe that collecting
information on students in the classroom is extremely important, especially in terms of
understanding one’s students and being able to adjust instruction appropriately.
Two of the professors at School C also have some general beliefs about the need
for their students to understand how to use student data effectively. One professor in
particular discussed how her students collect data for the PACT, but she also brings in
other factors including trend analysis. This professor’s beliefs are expressed in the
following comment:
You know, thinking about assessment and the purpose of assessment, but the
students collect a lot of data in preparation for the TPA [PACT] demographic
data. They collect achievement score data, you know, aggregate data, school-
level and classroom-level. Our credential, I’m thinking of the credential students
and then they have to write an instructional context piece in which they analyze
that. They analyze it a little bit mostly just reporting it so it sets a context for the
rest of their performance assessment. So that’s one way in which they use data to
descriptive data. And then they gather assessment results and those range from
written products to tests scores to maybe student… when I say student I mean our
students here. Student-constructed tests for the classroom. Some of them create
Rubrics and things like that. As part of the TPA [PACT] they’re required to
create assessments and evaluate the results of those assessments, and so
depending on how they’re assessing their students they may do kind of a little bit
more sophisticated data analysis. Some of them do a trend analysis; looking for
patterns and student responses. Say for example a group of kindergartners in an
oral-language activity, the way in which the candidates analyze that data is pretty
different from high school math class where they are sort of looking at multiple-
choice tests in terms of the candidates can construct it. And so in some ways
what one of the things that we’re thinking about right now is that we’re going to
do a lot more modeling of data-mining, so to speak where sample data sets for the
student so that the kindergarten teachers can have that experience of looking at
disaggregating and looking at different aspects of achievement that they might not
ordinarily capture in that setting and by the same token, the high school math
teachers are going to be looking at a lot more qualitatively at student responses
and looking at that kind of information so we’re trying to provide a little bit more
experience with the students that’s not only based on their field-setting but where
they’re actually digging around in data that we bring in.
This same professor also discusses a few specific courses and what the teacher candidates
do to work with student data within those particular courses, as well as within student
teaching placements. She continues:
Now the undergraduates in Foundation courses, we have them as part of their
coursework they go and obtain data on school achievement patterns on socio-
economic neighborhood demographics, ethnic and linguistic, composition of the
schools and they collect data on the English-learner reclassification rates. And
the Master students conduct a curriculum inquiry so they, on an in-service level
then, they gather and analyze data from their own classrooms in order to evaluate
the effect of the curriculum they create….I think that’s sort of the big thing, but
then of course once you collect the data it’s sort of like well why? What are you
going to do with it and you know one of the challenges in particular with our
multiple-subject student teachers because they have two different placements.
They have to have two different placements. They don’t always have enough
time to really use the results of the data they gathered to inform instruction so I
think that’s also something else that we need to find a way to help them get those
experiences that they’re not having, not able to have them as deeply in their own
field experience.
The second professor at School C talks about teaching her students how to gather
data on ELL students, and to focus on the language levels and how that affects instruction
in the classroom. This is very similar to the professors at Schools A and B who also
talked about teaching their students how to do this. School C’s professor’s beliefs come
through in the following statement:
They really only look at two students in depth. One for the TELL [Teaching English Language Learners] class and one student in B-CLAD [Bilingual Crosscultural Language and Academic Development]. So that's where the secondary students would get some
practice, but they look at multiple forms of data. They look at their CELDT
scores for these two students, but I think it ends up helping them overall. I think
they end up finding a lot more about the rest of their student CELDT scores. Like
once they start investigating, I think it kind of peaks their interest, like who else is
in. First they have to find out who are their English-learners in their classroom
then it kind of leads to what are their CELDT scores, because I ask them to try to
look up two different types of English-learners. Maybe one who’s more of a
beginner and maybe one who’s more intermediate and so it kinda leads them to
having them look at some data. And then they do some other, they gather as
much assessments as they can. They do some observations, and then what they
do, and this is about their speaking/listening in English proficiency. So then they
kind of analyze, they try to pull together all that data and come up with some
conclusions, draw some conclusions about their levels of English-proficiency,
what would be some next steps they could take to help the child keep moving and
progress in English.
She also works with her students in regard to understanding the information that the CELDT scores provide. This mirrors what some of the other professors at Schools A and
B discussed. She states:
I mean we do look up CELDT scores. We’ll look up old, we actually use one of
my former class CELDT scores like, you know, the table the data about the
CELDT scores and we look at those and we’ll use that and that’s in the TELL
class to like, if this was your class, what conclusions could you draw about who
are your English-learners, what levels are they at, what kind of groupings or what
kind of maybe instruction would you need to provide for your students? So we do
look at that so we kind of look at some, as a whole group and they, so they all
have the same type of data and then at tables they would talk about it.
Once again, ELLs (English Language Learners) and CELDT score data come into play where student data use is concerned.
This professor appears to focus on how the data the CELDT scores yield can influence
classroom instruction.
It is apparent through the interview data that teacher education faculty at all three
institutions do have some strong beliefs regarding the need for their teacher candidates to
learn how to use student data. This is especially prominent in regard to student
demographic and background data, as well as with ELL levels and how that information
helps to guide instruction. Some of the professors also discuss the need to learn how to
use student data for the purposes of the TPA and PACT activities that are required of
their students. This is discussed in the following section.
2. TPA and PACT:
The bar has been raised significantly for departments, colleges, and schools of
education in the state of California with the required implementation of the TPA and
PACT. Teacher candidates are now required to demonstrate proficiency in a variety of
teaching areas, and student data and assessment constitute one of the big pieces of this puzzle.
As a result, teacher preparation programs are feeling increased pressure to make sure
their students do well on these activities, as their teacher candidates cannot be certified to
teach in a California public school without demonstrating proficiency in all required
areas.
One of the first questions asked in each interview session with faculty members
was how they promote data literacy, both in the program at the school as well as within
their own coursework. Interestingly enough, as faculty members at each of the three
institutions responded to this question, many of them actually spoke in terms of data and
assessment being the same thing. Perhaps this is due to the TPA and PACT activities that
they must prepare their students for. After further questions were asked in order to
clarify that the two were separate entities, the distinction became a bit clearer. A
professor at School A, when asked how she promotes data literacy in her coursework,
stated the following:
I don’t. I mean I do a piece on assessment and the various ways we talk about 10-
20 ways where we can assess children, because you know, when student teachers
in their kind of neo-natal stage talk about student assessment, they always think
quiz and test, so we’re talking about more authentic ways, how to collect data in
terms of student learning, so I do it in that sense. I don’t call it “data literacy”, I
call it “assessment”, but I do give them multiple indicators about how to collect
information and data on student learning. And I spend one to two sessions on it,
and I think most of the other professors do.
This supports the idea stated previously that many professors treat data and assessment as almost the same thing, even though data are the information that comes out of assessment. Another professor at the same school stated the following when asked about
data literacy promotion within the program at the institution:
I know we have the PACT and the PACT does clearly have assessment as one of
its five areas; I think there’s six now, but five Rubric areas. And the students are
always reminded now of assessment in planning and understanding students and
instructional decision-making but it’s something that comes up over and over in
what they will be addressing in their final PACT, but other than the PACT, not
that I’m familiar with.
Again, this professor equated the data literacy promotion question with that of
assessment, and tied it in directly to the PACT, which is what this school uses to assess
its teacher candidates. This equating of data and assessment may be significant in that if
the faculty who are teaching in teacher preparation programs are not making a big
distinction between data and assessment, perhaps their students do not understand the distinction themselves. In fact, the research data from this study show that less than half of the students at School A and School C could define data literacy, and among those who could, assessment was linked strongly to the definition.
While School A is a large private institution and School C is a large public
institution, both participate in the PACT rather than the TPA. Therefore, their teacher
preparation programs are both dealing with the same type of assessments for their teacher
candidates. This, in turn, means that School C is also concerned with the teacher
candidates’ ability to use student data. A professor at School C responded with the
following when asked how they promoted data literacy within their program:
We piloted the PACT from the very beginning so we’ve been using that now for
several years. We piloted the Cal-TPA many years ago and then when as PACT
was being developed, we were one of the key sites that was developing the PACT,
so we have a lot, a pretty long experience with it. And I would think that’s
probably been a big impetus to help us think more broadly about assessment and
also in more detail. You know, thinking about assessment and the purpose of
assessment, but the students collect a lot of data in preparation for the TPA
[PACT] demographic data. They collect achievement score data, you know,
aggregate data, school-level and classroom-level.
Again, this professor is discussing data and assessment as almost the same thing, and the
response was also linked back directly to the TPA and PACT. Another professor stated
the following when asked about data literacy promotion within the program: “It’s not
primarily geared at teaching them data analysis at this point.” Schools seem to be
focused on data use as required by the TPA and PACT, and are preparing their students
in their coursework in order to be successful in the required activities; however, data
analysis seems to be lacking in the program and coursework.
At School B, however, where two of the four faculty who were interviewed saw
data and assessment as separate entities that ultimately link back together, two-thirds of
the students could define data literacy and saw it as a separate piece from assessment.
This is significant because teacher candidates need to understand what student data entail, as well as the variety of data that exist in education and the purpose of each kind of data; they can then understand such data as the information that comes out of assessing their students. All of this correlates directly to the TPA and PACT, as teacher candidates are
required to collect and interpret a variety of data on a student in order to successfully
complete some of the TPA and PACT activities. In fact, one of the professors at School
B had the following to say regarding how she promotes data literacy in her coursework:
In my class, I don’t focus on assessing at all, but we do the TPA #2 in my class
which is one whole section is what accommodations will you make for an English
Language Learner? And the other section is what accommodations will you make
for a special needs student? So we spend a couple of class periods just talking
about English Language Learners and well, the TPA includes the whole
component of gathering information about your students…. So we spend a lot of
time in class talking about what kind of information should you get and
what should you do with that information, so that’s a big focus on my class.
The tie-in with the TPA and PACT is once again evident here, as faculty work with their
teacher candidates to ensure that they understand the types of data that are important for
both understanding their students and for successfully completing the TPA and PACT
activities.
All three schools emphasize demographic data in their coursework, including
students’ primary language, socio-economic status, and CELDT scores. Again, this goes
back to specific tasks required as part of the TPA and PACT. Faculty focus on teaching
their candidates how to gather background information and demographic data on their
students, and then require the candidates to use this information in case study
assignments. The students for these case studies are usually English Language Learners
(ELLs), which once again ties back to the TPA and PACT activities.
While faculty may be spending time in coursework on demographic data due to
the TPA and PACT, the teacher candidates gave some mixed reviews. Some students at
School A are not entirely comfortable with looking at student demographic data and
actually applying it, either in their coursework or in a real-life classroom setting. One
student responded, “I do not feel comfortable at all with using data.” Another student at
School A echoed that response, “I’m not comfortable past analyzing a student’s ELD
level from cum(ulatives).” Conversely, there are a few students at School A who do feel
a bit more comfortable and one stated, “I feel confident that I can interpret statistical data
achievement data to use in course papers or projects.” Another stated they felt
"Competent, but I do not enjoy it." These responses certainly send mixed messages about how well teacher candidates feel they have been taught to use student data, yet all will be required to use such data in their PACT activities.
Students at School C also had very similar responses when asked how
comfortable they felt using student data both in coursework and in real life classroom
settings. Here is what one student had to say: “I am fairly comfortable using data in my
coursework. I think that to become more confident I would have to learn more.” Other
students at School C echoed with similar responses: “I feel like I don’t know what to do
yet”, and “I do not feel that comfortable because I have not had that much
time/opportunity to use data in my classes." It seems that the students feel somewhat comfortable with what they have learned, but would like more practice and application. Here is what one student had to say when asked about using
student data in an actual classroom: “I would feel somewhat comfortable using data in a
real-life classroom setting with practice at first and with guidance.” Another student
alluded to the same when she said, "I feel pretty comfortable with practice." Again, while faculty have focused on demographic data use in order to prepare the students for the PACT, it is fairly easy to see that students at School C feel they are not completely prepared in data skills.
The survey data from School B are slightly different from what was found at Schools A and C. The students at School B generally feel more comfortable with student
data use in coursework or class projects. One student said, “I feel very comfortable using
data in class projects. I feel that I know how to read the information accurately to assist
students to their specific needs.” Another student responded, “I feel somewhat
comfortable using data for my coursework or other projects. I try to be detailed and
specific with the data I gather from my students, so that my interpretations will be
accurate." Yet, there was one student who did not feel as comfortable, and stated, "I feel
semi-comfortable. I’ve used it a few times and was not completely secure in my data or
interpretation.” However, when it comes to using student data in the classroom, the
students at School B feel a bit less comfortable. One student says, “I feel somewhat
comfortable. I would definitely like to obtain more practice, and I am also aware that I
would need and seek support/guidance as needed." Another student felt even less comfortable and had this to say: "I do not feel that I have adequate training in where to
obtain data and how to implement it in the classroom. I think I would feel more secure if
data retrieving and organizing were modeled for me in my courses….” Thus, while
faculty at School B have mixed feelings about how well they are promoting data literacy in their program, the students’ varied comfort levels corroborate that sentiment. Yet all students at School B are required to pass the TPAs in order to be certified, which means they must use student data to complete one or more of the required activities.
3. Data Literacy and Analysis Not Intentionally Taught Program-Wide:
While the TPA and PACT requirements are certainly influencing teacher education faculty’s beliefs regarding the importance of student data use by their teacher candidates, at the time of data collection the programs did not appear fully deliberate in their efforts to improve in this area; however, all three programs were in the process of making changes in order to address data more effectively. This was especially prominent at School A.
Faculty members at all three universities stated that while there were certainly
assignments and lectures that addressed student data use, their programs, as a whole,
were not intentionally addressing the issue under their current structure. When
discussing how the program at School A was currently promoting data literacy in its
coursework, a professor responded with the following:
You know, I think it’s pretty non-deliberate. While I think everybody includes it
in their class, I, you know it’s in one of those places that maybe we’re not holding
a high-enough priority for. Like, I have an assessment piece in social studies and
I know that out in student teaching, students keep data on their students. And we
talked about data in the intro class also in terms of statistics and data. If your
whole focus is on student learning rather than on teacher teaching, you’ve got to
keep data and I don’t think that we’ve evolved to that understanding of how data
is going to inform us as much as we need to be. I really don’t think we’re there
yet.
While the “old” program at School A did not promote data literacy to a large extent, the
“new” program that the school will be implementing within the next few months does attempt
to do this on a larger scale. Another professor at School A discussed this during an
interview and had the following to say:
Well we have, I don’t know if it’s program-wide, but it’s so hard because now we do but it’s because we’re redesigning and doing this new MAT (Master of Arts in Teaching), but the old program really didn’t have a uniform kind of vision of what we wanted to say about data and assessment. The new one does very clearly so based on what we’ve done so far, not that I know of.
The professors at School A, as a whole, are excited about the implementation of the new
program, and it would be interesting to see how intentional the data literacy instruction becomes for their teacher candidates.
Professors at Schools B and C had a slightly different take on how intentionally data literacy is promoted. At School B, one professor, who has been at
the school for over twenty years, stated, “I think we’re doing a fairly good job in this
area.” However, another professor, who is new at the school, had a different perspective:
I think that’s one of the things other research will show you is that we are
information, no, we are data rich and information poor. With teachers even out in
the classrooms. In veteran teachers, they have all these programs that will show
them all this data, but once they get the data, they don’t know what it means and
what to do with it. And so that’s something I think we, every university needs to
do better with, that certainly if it is an ongoing problem, it is certainly something
that is a pervasive problem I think throughout most of education.
There seems to be dissonance at School B regarding how well the program promotes data literacy in its candidates; the quotes above show two professors seeing the issue in two different ways.
The professors at School C also feel that they are doing a relatively good job of promoting data literacy, yet they also feel they could be doing more. As one
professor stated:
I think considering the developmental nature of becoming a teacher, I think what
we give them is appropriate. I definitely think it could still be a little more, at
least for me, I think it could still be a little bit more on my… I mean instead of
just giving them the assignment and having them, I try to have them come
together and bring data in class and talk about it, but it still tends to be very
informal like, just bring what you have and let’s kind of see where you are where
it would be really I think for me, thinking about developing data literacy to really
say bring in your talking and listening samples by this date. Sit with your
partners, talk about what you’ve observed, what conclusions can you draw about
these observations. So have it be more structured so it’s kind of embedded and
it’s kind of implied they’ll come to some conclusions, but I don’t really do… I
don’t have the time for really explicit data analysis, do you know what I mean?
While the professors at all three schools clearly feel that data literacy and the correct use of student data are important, they find a definite lack of intentionality within both their programs and their coursework.
Table 1, below, summarizes the research data from each program that participated in the study regarding faculty beliefs about data use and how the programs implement student data use.
Table 1: Teacher Education Faculty’s Beliefs About the Need for Preservice Teachers to Learn How to Use Data

School A
General faculty beliefs about the need to learn how to use data:
• Use multiple indicators, not just one type of data
• How to gather valid and reliable data
• Observing students and student behaviors
• ELLs and CELDT score data
• Addressed in assignments and course topics
Influence of TPA and PACT:
• Multiple indicators
• Faculty equate data and assessment as almost being the same thing
• Address data and assessment due to PACT requirements
• Focus on student demographic data
• Students focused on data use for PACT requirements
Data literacy and analysis not intentionally taught program-wide:
• Assignments and lectures addressing data, but not on a program-wide level
• Non-deliberate
• Implementing a “new” program that hopes to address data and assessment on a more intentional level program-wide

School B
General faculty beliefs about the need to learn how to use data:
• Exposure to and gathering of a variety of data types
• Observing students and student behaviors
• ELL and CELDT score data
• Addressed in assignments and course topics
Influence of TPA and PACT:
• Which types of student data to collect for TPA activities
• Focus on student demographic data
• Case study assignments utilized with focus on ELLs due to TPA requirements
Data literacy and analysis not intentionally taught program-wide:
• Dissonance amongst professors as to how well program addresses data literacy
• More intentional focus on what to do with student data
• Changing program structure may address dissonance

School C
General faculty beliefs about the need to learn how to use data:
• Trend analysis
• Gathering a variety of data types
• ELL and CELDT score data
• Addressed in assignments and course topics
Influence of TPA and PACT:
• Faculty equate data and assessment as almost being the same thing
• PACT made faculty think more broadly about data and assessment
• Focus on student demographic data
• Students focused on data use for PACT requirements
Data literacy and analysis not intentionally taught program-wide:
• Faculty address data literacy to some extent in coursework
• Desire to implement data literacy on a more intentional level in program
What Programs Are Doing to Provide Basic Assessment Literacy to Their Candidates
The second sub-question had three prominent themes regarding basic assessment
literacy in teacher education candidates. The first theme that emerged from the research
data was that all three programs were currently making changes in order to provide more
focus in the area of assessment. The reasons cited for these changes were once again the
TPA and PACT. Two of the three programs stated that the TPA and PACT results were
showing that their students were not performing well on the assessment portion of these
required activities. As a result, all three programs are undergoing significant changes in
order to improve their students’ performance in the area of assessment.
The second theme that emerged was that the reading methods courses were
focused on providing their teacher candidates with a variety of assessment types and
resources. Coverage of assessment occurred in more depth in reading methods courses
than in other teacher preparation courses.
The last theme that emerged under this particular sub-question was that of
modeling a variety of assessment types. Each program felt it was important to expose its students to a wide variety of assessment types in order to stress that one cannot rely on a single set of data from one specific assessment. Teacher
candidates need exposure to a variety of ways to assess their students so that they can
take the data from multiple sources in order to make the most informed instructional
decisions for their students.
1. Changes in Program Due to TPA and PACT:
As discussed in the previous section on data literacy, the TPA and PACT are
having a rather large impact on teacher preparation programs in the state of California.
An essential component of both the TPA and PACT is that of assessment, as teacher
candidates must show that they are proficient in their ability to accurately assess their
students. As a result, all three programs that participated in this study have changed, or
are in the process of changing, various parts of their programs in order to have a deeper
focus on assessment. Schools B and C cited TPA and PACT scores on the assessment
portion as specific reasons for making changes to their teacher preparation programs.
School A did not specifically mention student scores, but it was already in the midst of redesigning its program and included assessment as part of that change in response to the PACT generally.
One of the first questions asked of the professors regarding assessment was what
their programs were doing specifically to promote assessment literacy within their
students. A professor at School A stated: “I don’t think we’ve kind of discussed
program-wide anything on assessment, but like I said now we have, but that’s something
we haven’t done yet.” This same professor, when discussing one of the major reasons
for changes in the program regarding assessment as being due to the PACT, added the
following:
There’s a whole rubric area in assessment. So those are the three things that we
have in place currently that are more programmatic-wide assessments that we use
to monitor student learning but in terms of specifically the way an assessment is
understood and delivered and practiced in individual courses, there’s nothing
uniform.
This professor was specifically referring to the fact that under the “old” program at
School A, assessment was not discussed in terms of the program, but under the “new”
program being implemented this year, things are different. Another professor at the same
school had a bit of a different take on the program’s assessment promotion, and stated, “I
think assessment is a pretty strong thread throughout the program…”. When asked more
specifically about the program’s promotion of assessment, this same professor responded,
“Assessment is woven throughout each content area and content-specific assessment
strategies are taught in each one of the content methods classes.” So, while School A is
undergoing a transition between two programs, these two professors saw things a bit differently with regard to how assessment literacy has been approached. The third professor interviewed at School A had quite a different outlook from the other two, and when asked about assessment literacy in the current program, the
professor stated the following:
I think like all other programs, we’re individuals that teach classes and so I’m not
sure that there’s a consistent perception across every class. The difference
between looking at the different ways we look at achievement. I mean, some
people look at achievement in terms of quantitative measure. Some people look
at achievement in terms of its qualitative features. I’d like to teach them how to
look at achievement in a qualitative way. To be able to, like a good painting, to
really understand the nature of children’s work, it should be like a painting where
you look for points of harmony and points of color. You really read it like in fact
you read a painting. What does this tell you about the composition and the artist
behind it?
There are obvious differences in perceptions of how assessment literacy is being taught throughout the “old” program at School A. With the implementation of the “new” program in the coming months, perhaps the professors’ answers will become more uniform.
School B has also focused on changing its program in order to make assessment more uniform across required courses. As a whole, the professors at School B appear united in the need to change how their program approaches assessment. One of the professors cited the following as a reason for changes in the current
program: “It tends to be the hardest TPA for our students to pass and our scores on that
one percentage-wise are lower passing rate…on the assessment one. Which tells me
there’s weaknesses in our program on assessment because they are like I don’t know how
to do this.” This same professor stated the following when asked about program changes
due to student TPA results:
I had one task force that was an assessment task force so she collected from every
professor what do you do related to assessment in your course? Another one was
the English Learner task force, another one was Special Needs. Because those
were the three areas candidate weaknesses were emerging on TPAs three and
four. So they have gotten from the faculty, so they brought their findings and we
saw… basically there’s a little bit being done in every course but there’s overlap
and it’s too basic. So somebody starts with this and the next person says oh, I’ll
do assessment and they really repeat 60% of what was already done and they
didn’t know how much was being… so we’re getting better communication so
that we can say the Intro class is responsible for this piece of assessment. So then
when they get to Psych (Psychological Foundations of Education), they understand that so you can build on that. You
don’t have to review that other than a very cursory review. So I think that’ll help
too so then the Curriculum professor doesn’t feel like how can I teach them
everything about assessment?
One of the other professors interviewed at School B agrees with those sentiments and
when asked about changes in the program regarding assessment, stated, “That’s where
we’re starting. We identified areas of weakness in our candidates, mainly from TPA
data. One of them was assessment. They don’t know what to do with assessment data
enough we think.” When asked about the task force mentioned by the previous professor
interviewed, this professor states:
Yeah, but back to the task force, we identified that there are weaknesses in our
students. What are we doing to promote their literacy and assessment and special
needs in ELL. Because there’s such a huge gap, and so now we’re going, this
task force is going to be deciding, doing backwards planning. Like here’s what
students should know, where should it be covered in what depth and in what
class? And so we’re, [name omitted] will primarily be the one making those
recommendations.
Finally, a third professor at School B also agrees with the sentiments of the other two
professors. When asked about how the program promotes assessment, the professor
states:
I can’t speak to what other folks do. I do know that when I look at some of the
data that I have seen, I think that is a weakness for our program. And that’s just
my personal opinion….For me, that’s what I look at mostly is the TPA data. So I
think that by looking at some of the data that I’ve seen through the TPA that that’s
a place that we definitely can shore up and improve. I think our students have a
good knowledge of formative and summative, it’s just, again, I think they need
more direct instruction or experience with performance assessments and that sort
of thing.
It is evident that School B has relied heavily on TPA data on its students’ performance in order to make significant changes in how it treats assessment in its coursework. This has a natural link back to data literacy as well, as data are a result of
assessments.
School C did not produce data suggesting program change on the scale seen at Schools A and B. When asked what the program was doing to promote assessment
literacy in its candidates, one professor at School C simply responded with the following:
“The PACT is a decent assessment, so it’s helping us be able to improve the program.”
When asked follow up questions about specific program improvements based on the
PACT, the professor simply discussed the various components of PACT. Another
professor at School C also mentioned program change in terms of the PACT, and was
able to be more specific. This professor stated:
We saw that out of the tasks the students do which is Planning, Instruction,
Assessment and Reflection. But in the four main tasks that they have to do, the
one they always fell back, fell down on was assessment. And it’s like they don’t
know how to look at student work. So that forced us to pay more attention to how
do we help them do that? And interestingly we just went to, they have a yearly
conference for new implementers and experienced implementers for this
assessment…,so we just got back and we had a meeting yesterday with both the
elementary and secondary and we were talking about this one presentation
we went to where they were talking about assessment and how to help your
students not only look at patterns of performance of your whole class, but another
thing also the assessment gets at is what are your sub-groups doing? What
patterns do you see going on with your English-learners, with kids that are IEP’s,
with kids that are just struggling because there’s a socio-economic difference, or
whatever. Starting to look at how are you disaggregating this data and so this one
instructor at another unnamed institution, she… I didn’t go to the presentation but
I saw her handout and I saw an assignment she gave her students that leads them
right down that path into looking at the whole group and then breaking it down.
So we’re saying we got to do more of that. So just as of yesterday, we are seeing
where we can fine-tune it some more….And another thing we are going to do is
try to be more consistent about bringing elementary and secondary together to
have these discussions and align it more so it’s more of a programmatic approach
because we’ve been very different in how we’ve approached this.
While School C may not be implementing changes as drastic as those at Schools A and B, it is nevertheless making changes to its program based on what PACT data have shown regarding its students’ assessment skills.
The qualitative data show that all three schools rely on data they receive from
student scores on the TPA and PACT activities in order to assess their programs. These
scores have prompted the programs to reflect on how effectively they teach assessment to their teacher candidates, and have resulted in changes, whether large or small.
2. Reading Assessments:
When discussing assessment with the faculty members at each of the schools,
regardless of the coursework or area of expertise they taught in, a theme emerged
pertaining to a focus on assessment in the reading methods courses. Both elementary and
secondary credential candidates at all three schools learn about a variety of reading
assessments available for use in the classroom. This focus on reading in particular could
perhaps be due to NCLB, accountability, and high-stakes testing, as reading is an area of
intense focus for all K-12 schools; thus, there would be a strong need for teacher
candidates to learn about these assessments in their preservice coursework. Additionally, multiple
subject credential candidates in the state of California are required to take the Reading
Instruction Competence Assessment (RICA) in order to be certified to teach reading.
Interestingly, the RICA was mentioned only briefly by the professors at all three schools and is not considered a major focus, even though it is a requirement set forth by the state.
Interviews with professors at School A showed that the reading methods professor was not the only one who focused on assessments pertaining specifically to reading and literacy. In fact, a professor who taught a Social Studies methods class responded
with the following when asked about assessment in her coursework: “They do a literacy,
a social studies literacy project and their assessment choices are part of every project they
do and they do like five projects in my class.” This same professor continued with the
following:
I think assessment is a pretty strong thread throughout the program because
especially in the literacy and language areas because they work with Open Court
Reading and Open Court Reading is very rigid and very structured and we want
them to know that there are other choices and many of the choices that they read
are social studies choices. So I incorporate that in my conversation a lot.
The focus of this class may be methods of teaching social studies, but literacy is woven
into one of the course projects, and assessment is looked at closely as part of each project.
Additionally, this professor is addressing specific reading curricula (Open Court) and
how that may factor into other content areas. However, the bulk of assessment as it
relates to reading is accomplished in the reading methods courses here.
The teacher candidates at School A are exposed to a wide variety of assessments
throughout their reading methods class. One of the reading methods professors provided
a wealth of information regarding coursework on assessing students’ reading abilities.
This professor stated:
So in the Reading Methods class in both of them … there’s a whole segment in
the class when they’re exposed to, and it runs about two to three weeks, they’re
exposed to assessments in the area of literacy, or assessments in the area of
language development and so the way that I teach it to them is the first thing we
do is we go over literature on it. So here’s where it comes from, for example
Marie Clay’s Reading Recovery. You know, here’s where the work stems from,
here’s what she did. Then I show them the tools; like they actually get the
assessment tools. I go through how you would conduct it and then they have to
try it out on themselves.
Not only are students exposed to each type of assessment and how to conduct it, but they also get hands-on practice with it during the methods class itself. This same
professor continues:
So when they’re doing a running record, I mean as silly as it is to do it with each
other, because I tell them the one thing that they want to make sure is that they
know how to administer the assessments in a way that their administration is not
going to distract the student from showing you their performance on it so the
running record is one I tell them I know you guys think might be a little silly but
this is… I want you guys to practice. Practice doing the notations because it takes
a long time to have to checkmark everything that’s correct and then do the “R” for
repeat and insert and things like that.
After practicing the assessments on each other, the students in the course then receive an
entire packet full of a variety of assessments and the pertinent information on each. In
turn, each assessment is gone over in much the same manner as described previously, as
the professor states below:
So once they actually try out the assessment, then they have a whole battery of
assessments that I give them. They get a whole packet of the actual assessment
tools that they’re going to need which is this really fat packet. The week that I
do assessment, I go over some of the more time consuming in-depth assessments
like the Running Record and the Miscue Analysis, but every class whatever my
topic is, we go over an assessment for how to assess that they’ve met that, but
there’s still those two weeks of just assessment…. For example, when we go over
emergent readers, we talk about the CAP test, the Concepts About Print test, and I
show them how to do it and I give them the tools so they can go out and do it. I
model it for them so then the next time we’re doing Phonics, we go over the
names test, we go over… you know things like that. But that week, I go more in
depth into with those more comprehensive tests and the bigger more
comprehensive tests….So every assessment once it’s introduced tells you what it
is, what it’s used for, how you perform it and then they get the actual assessments.
Another important part of what is taught regarding assessment in these reading methods
courses at School A is the fact that students need to understand the “whys” of the
assessment. The professor continues with this explanation:
We could definitely do a better job of this I think in the Reading Methods class
because we go over this, we teach them how to do it, the purpose of each
assessment; they always have to understand the rationale. Like why would you
give this assessment? What do you want to learn from it or what do you want to
learn from the student so which assessment would you use in terms of why this
particular formal assessment…
Not only do students receive practice with actual test administration, but they are taught
“why” and what to do with the information that is gleaned as a result of these various
reading assessments.
By looking at the data cited above, it would appear that teacher candidates at
School A are very well-prepared in terms of exposure to a variety of reading assessments,
as well as in how to administer them properly and what to do with the results. When
students were asked about specific assessment skills they learned in their coursework,
they replied with responses such as: “For reading assessments, she gave us a lot of
examples” and “We learned about different assessments and what they assess. I had
practice with the San Diego Quick assessment and the Qualitative Spelling Inventory.
I’ve also done a fluency test as a student teacher.” While this question was aimed at
assessment skills in general, a few students’ responses were strictly focused on the
reading assessments they learned.
School B was also very focused on reading assessments, which are learned solely
in its reading methods coursework. The reading methods professor stated the
following when asked about assessment skills that students learn in their coursework: “In
the reading methods course we go through at least ten different types of informal
assessments. Those would include such things as informal reading inventory and I use a
specific one called Classroom Reading Inventory which I’ve had real good results with
over the years.” This same professor goes on to discuss other forms of assessment the
students are introduced to within the course:
We use the “Linking Reading Assessment to Instruction” text which is excellent
and it does have a variety of different assessments in it such as different ways of
looking at story structure. For example, story retellings with quantitative data
analysis. The Klesius-Holman and Phonics Word Analysis Test. I go over the
Yopp-Singer Phonemic Awareness, Running Records, the Garfield survey that
tests attitudes and interests in reading. How to interview students about their
interests in reading. Fry’s Instant Word list, and you know just a host of those.
It is evident that students receive instruction in a variety of reading assessments, mostly
informal. Just as in the reading methods courses at School A, these students also receive hands-on practice giving the assessments, as well as learning the appropriate audience for each and how to interpret results. The professor continues:
So we cover them. And the students, yes, I actually involve them in becoming the
expert on a particular one and presenting it in class and what is the purpose. We
look at what is the purpose in giving this. Who would be the targeted audience?
Who would you want to take this? And then we look at how to administer it. We
go through the different instructions on how to do that, different forms that are
available. How do you then interpret… how do you come up with the results and
then also how do you interpret those and then where do you go from here? What
are the student’s areas of weakness and so we go through all of that and it’s on the
board, in different handouts related to it. They have their books with them so they
can go through those and then they take having them do profiles of two students
so they assess them using at least three assessments. So then they compile this
into a summary; a summary of the different assessments and they’re supposed to
indicate the results and then talk about strengths and weaknesses primarily
focusing on what the student’s needs are and going back and pulling things
actually from the assessments that they’ve given. So if a student for example, on
the Graded Word List missed certain words, they try to look at those and see if
there is a pattern here. Are they having difficulty with vowels or vowel
combinations of some sort. Diphthongs, you know, consonant digraphs, all of
those different things and then pulling from the actual test say here’s evidence
they’re struggling with this particular area. Then they’ll do recommendations for
further instruction.
As a result of the reading methods course and its focus on assessment, it would seem
logical that students would come away with a variety of skills for assessing their
students’ reading abilities.
Student surveys from School B were full of responses on assessment skills they
obtained in their coursework. In fact, two-thirds of the students had at least one reference
to the reading methods course in their responses, and some even included other
assessment skills in their responses as well. When asked about assessment skills they
obtained in their coursework, students at School B said, “I learned how to assess kids
through various tests such as the CRI (Classroom Reading Inventory) and Garfield attitude test”, and “I’ve learned how
to administer many different types of assessment such as Fry’s Instant word list, cloze
procedure, Klesius-Holman phonics, instructional reading inventory, story maps and
story frames.” Other students at School B stated, “How to give multiple reading
assessments: Cloze, Running Records, Fry’s Instant Word List, CRI, etc.”, as well as “In
the Teaching Reading, we have discussed quite a few assessments, such as Cloze reading,
Classroom Reading Inventory, Klesius-Holman, Fry’s Instant Word List, Reading
Attitude Surveys, and much more.” The data here suggest that students at School B have obtained some very important reading assessment skills.
School C yielded slightly different data in that none of the professors interviewed
currently teach reading methods coursework. However, one of the professors used to
teach reading methods several years ago, and another professor currently works with the
secondary credential students in the English subject area. The professor who used to
teach reading methods briefly discussed the course when asked about assessment literacy
within the program. This professor stated, “I used to teach the Elementary Language Arts
with the RICA when it started, and so as part of the preparation for the RICA, students
look at a variety of different ways of assessing reading, running records, fluency counts,
observations, portfolio of student written responses, different things like that.” This suggests that part of the focus on reading assessments at all three schools may be due to the RICA exam, yet the professors who mentioned the RICA did so only briefly.
In another interview at School C, the professor who works with secondary
English credential candidates also talked about reading assessment in terms of what the
program does to promote assessment literacy in its candidates. This professor discussed her course in terms of a particular assignment she
requires her students to complete. She stated the following:
So around that time what we usually do is I’ve assigned that big scaffolded lesson
plan and what I have them do is bring in a draft of it. Just what they got so far. It
doesn’t even have to be written up, but are you thinking of, bringing in some of
the strategies, just even a list of ideas that you’ve brainstormed and run it by other
people and say do you think, should that come before that, do you think that
builds on that, and just get some feedback. So we do lots and lots of that. I do
that in the summer with their Literacy Across the Content class because that’s the
first one. They don’t write a plan but they put together a project where they have
to choose a piece of text with two strategies in each of five areas. Two
appropriate strategies in vocabulary and concept development, in assessment, in
writing, in comprehension, and in pre-reading or pre-writing strategies. And so
choose something like a KWL (Know, Want to know, Learned) chart, might be in pre-reading and a quick write,
tapping into their prior knowledge. And so they have to do that and then they also
have to make the black line masters that go with it because they’ve never done
that before. Or maybe they’ve done a little but it’s still not a lesson plan. But still
I give them time to say okay, I’d like to use this strategy, do you think it fits better
here in comprehension or maybe that should be in assessment. So they get a lot
of feedback that way.
While the secondary students in this course do not appear to be receiving the variety of
assessment options that the students at Schools A and B receive, they are being exposed
to assessment within their coursework in the English methods class. However, none of
the students at School C mentioned any specific reading assessments when asked about
assessment skills they have learned in their coursework, although other types of
assessment skills were mentioned in abundance, as will be shown in the next section.
3. Modeling Other Forms of Assessment:
Another theme that emerged as assessment was discussed with faculty members was the need to teach students about a variety of forms of assessment. With such an intense focus on high-stakes testing and standards-based reform, K-12 schools are often concerned with only one piece of data: student test scores. However, teacher education faculty want to stress to their teacher candidates that test scores are only one piece of information and cannot possibly tell them everything about their students. In order to help their teacher candidates see beyond the high-stakes test scores, faculty have introduced other forms of assessment throughout the coursework.
The faculty at all three schools studied believe that a variety of assessment information
and student data helps give the teacher a more complete picture of her students.
All three professors who were interviewed at School A discussed the need to
teach their teacher candidates how to conduct both formal and informal assessments on
their students. However, much of what was discussed during the interview sessions
focused on informal assessment skills in particular. Additionally, all three professors at
School A spoke of modeling these various forms of assessment for their students within the coursework as well. In other words, the teacher candidates not only learned about the assessments; the assignments they completed for their classes were themselves assessed in a variety of ways by the professors to model what such assessment looks like. When asked about
what specific assessment skills the teacher candidates leave with from her methods
course, one professor stated:
They would leave my class knowing how to do project-based assessment. They
leave my class knowing… I mean I have a whole thing of 20 indicators. They
leave my class knowing how to do quick writes or how to assess quick writes.
They leave my class knowing how to do student analysis; so papers, projects.
They leave my class doing actual constructions of environments, so hands-on
types of things. Let me see, what else do we do? We do living histories, so they
leave my class doing presentations that enact a class lesson. We don’t do any
multiple choice or true/false, just because I think they do enough of that at the
school sites. They do journals.
So the students in this methods course learn a variety of assessment methods, and then
see these methods demonstrated by the professor within their own coursework.
The reading methods professor at School A discussed the assessment skills taught
in her course, as well as concerns about whether candidates can actually implement those skills in real-life situations. This professor states the following:
I try to show them different types of assessments; formal and informal. I try to
keep it to those big categories formal and informal and within those we can get
into formative assessments, summative assessments, because you have formal
summatives and informal summatives and you have formative assessments
like the Open Court Unit test and you have summative assessments that are
formal, but you can also have informal summatives like collecting student work
and like collecting writing samples and whatever you’ve been doing with the
student. So I guess that’s the only other thing. I try to give them a battery of
assessments so that when they get out there, they’re better at selecting which ones
to use and why, but I’m still battling with feeling, when I do some of my student
teaching observations or when students send me e-mails, they’re still going back
to well, I have to give this assessment. And I tell them you have to be leaders in
your work out there and you have to say this isn’t showing me anything so I’m
wasting my time. If I don’t do it the right way, the kids aren’t going to learn, so
why do I bother to do it in the first place. But it’s a hard battle….
This same professor also talks about the specific assessment skills students learn from
their methods coursework in this class:
I guess the different skills would be things like how to set up an environment for
collecting and conducting assessments. They go away with understanding the
impact of the administrator on the assessment results. The actual different
assessment tools and how to use them; how to actually conduct them.
Understanding the rationale for giving assessments and then understanding the
information that the assessments provide to make instructional decisions. So I
think those are the key tools that I try to give them with assessment is kind of the
environment, how to do it, what you’re doing and why you’re doing it. Those are
the main things I think. I hope what they’re coming away with.
It would appear that the students really are being introduced to quite a variety of
assessment types throughout their program at School A. The third professor interviewed
also discusses specific assessment skills learned, and provides yet another perspective on
their students’ assessment literacy. After being asked about assessment skills that
students leave her courses with, this professor replied:
I think they should leave with an understanding of how curriculum is a derivative
of needs both coming and going. I think they should leave with the skill of being
able to understand the differences between formative and summative assessment
and all the different ways, both formal and informal and quantitative and
qualitative ways to measure. We’re going to put a really huge push on the idea of
performance-based assessment which is very, which is in a way the state is kind
of pushing in the idea of standardized assessment. They should leave with the
skill of understanding that if you’re a good teacher you can scan a room and
understand the behaviors without ever looking at work. Behaviors do give you an
indication about how well kids are working or not working. And that they should
be able to, as part of the skill set, to understand how to do group questioning and
engage kids in dialogue, which is one of the most important indicators of whether
a kid learns.
According to these professors, students at School A receive a plethora of assessment
skills and ideas that they can take with them to the classroom, both during student teaching and as new professionals.
Students at School A corroborated the words of the professors here when they
were asked about assessment skills they have obtained and how comfortable they felt
using them. One student listed, “Authentic, portfolio, formal, informal, cumulative,
formative, norm based. I have seen an emphasis on authentic assessment in my
coursework and text books.” Another student states “Having students do work in
journals so I can read over to formatively assess students. Using a rubric to not just
assess a student’s product but also their progress.” A third student corroborates with,
“Creating rubrics; grading according to rubrics; information about the varied types of
assessments, from exams to portfolios and how to implement them in the classroom.”
These same three students all report feeling “very comfortable” with using a variety of
assessments in their classroom instruction.
School C also focuses on teaching its candidates about a variety of
assessment tools and options. One of the professors stated the following when asked
about what the program does to promote assessment literacy within its candidates:
Our elementary candidates are in the classroom during the spring when the
teachers are giving the standardized assessments, but I don’t think we do quite as
much at the elementary level in terms of deconstructing those assessments. We
do a little bit in the foundations year in undergrads but we really try to concentrate
on alternative forms of assessment in the Methods courses, so that there is
performance assessment, portfolios, written responses….
As the conversation about assessment continued, it gravitated toward using a portfolio system to assess English Language Learners. The teacher candidates at School C participate in and use the CRLP (California Reading and Literature Project). They learn how to collect a variety of student data and information on their ELLs in order to gain an accurate picture of those students’ language and literacy abilities. This is one of their course requirements, and it focuses on assessing students and using a variety of assessment data to determine each child’s actual language proficiency level.
The professor states:
…it’s a portfolio system ….It’s really a very good way of organizing
observational data, test score data, really basically any types of data and helping
the students to integrate multiple perspectives on it on a child’s progress over
time. So it involves interviews with the child, interviews with the family, samples
of written work, samples of oral language, samples of writing, so we use that in
the B-CLAD (Bilingual Crosscultural Language and Academic Development) courses and also in the elementary TELL class in order to help the
students get a more rounded picture than they would get from the CELDT alone.
So as a culminating project, they have to take…they gather information on two
students and they have to make a determination as to the child’s language
proficiency, but they can’t use only the CELDT data. So they have to look at the
CELDT data and then see if it’s corroborated by these other types of assessments,
and so one of them is this assessment called Express Assessment that was
developed by CRLP. And then they do an interview with the child and
interview the family. They observe talking and listening, take observational
notes, they analyze a sample of the child’s writing, they get any information they
can from the teacher about the child’s reading proficiency, using whatever
assessments the teacher has used, and so then they have to use all that evidence to
either support or reject the CELDT score. So if the CELDT score says they’re
early intermediate they can see samples that would suggest using a matrix that
actually comes from the CRLP professional development of English proficiency
that the data would actually suggest the child’s, you know, leaning toward
advanced, you would want your instruction to teach to that level or even beyond
so that you’re accelerating the child instead of just teaching to the level that the
CELDT said. So that’s just one example.
Students in this course actually see how assessment and data fit together, as well as how
relying on simply one form of assessment does not give a completely accurate picture of
what that child may or may not be able to do. This is extremely important given the
focus on high-stakes testing data in the schools today. It is important that teacher
candidates see this variety of assessment options and the resulting data as legitimate ways
of determining a child’s learning level, whether the child is an ELL or not.
Other professors at School C also talked about the variety of assessment skills that
students learn in their coursework. The CRLP was mentioned briefly here by one
professor, but was not a focus like it was with the previous professor. This professor was
asked about assessment skills learned in her course, and responded with the following:
“They’re just more of an authentic assessment of the listening/speaking. There’s the
listening/speaking was an authentic form of assessment so they’ll actually watch the
students and make their observations as a way to collect evidence on how students are
using language in different contexts.” A third professor at School C says that her
students learn the following regarding assessment skills:
Another thing that we try to get them to do across, a lot of ours is just observing
students. You know, understanding what informal assessment is about. That it
counts and in fact, you’ll be doing a lot more of that than formal assessment. So
what does behavior tell you? At secondary, one of the first things that I teach
them in the Foundational classes is the kids are not lazy when they’re not paying
attention to you. They’re avoiding… it’s avoidance behaviors that have been
years in the making and when they say this is boring, this is stupid it may mean I
don’t get it or you’re not doing a good enough job of telling me this or making me
care about it. And just watching body language and facial expressions tells you a
world of things. What are they doing? Are they constantly asking to go to the
bathroom, sharpen their pencils, something like that, it’s because they’re avoiding
their work. Why are they avoiding their work? So, and that’s the thing, and
they’re saying how can I tell how they’re doing when they don’t turn anything in?
That’s telling you something right there!
As with one of the professors at School A, this professor focuses on the importance of
watching student behavior and using informal assessment of these behavior patterns and
student responses in order to gauge how well students are learning. This emphasizes that teachers are constantly assessing their students, and that assessment is not tied simply to looking at test scores, as has become widely accepted.
When the students at School C were asked about assessment skills they learned in
their coursework, they talked about things such as “We focus a lot on student
monitoring”, and “I learned the difference between formal and informal assessment. I
learned a variety of ways to do informal assessments as well as the importance of
checking in with students every day.” Another student states, “I have learned that
standardized tests are not the only measure of assessment and that it can often be
inaccurate. I have also learned how to assess the language abilities of ELLs.” Each of
these students, when asked about how comfortable they felt using assessments in the
classroom, stated that they felt “very comfortable” doing this. Therefore, it would seem
that what the professors at School C are doing in terms of preparing their students to be
assessment literate is indeed working.
The data from School B are similar to those from Schools A and C, but show less variety in the types of assessment students are taught. However, there is still a
definite focus on the assessment skills students have learned in their reading coursework.
In fact, when asked about specific assessment skills the students leave her coursework
with, the reading methods professor states, “I say they would have the ability to
implement a variety of assessments. That would be number one. So a variety of
assessments to interpret results.” This professor also talks about what the students see in
their fieldwork assignment regarding assessment. She goes on to say:
In other cases, they’re exposed to what the school’s assessments are and they’ve
had some experiences in implementing those assessments. So they’ve either
observed them, gone through them or actually implemented them so that’s
something else that they’re doing. So they’re familiar with what different schools
are doing or what their school is doing.
This can be helpful in that the students not only get the experience with a variety of
assessment types in their coursework, but they are also exposed to what is actually going
on in the local schools today regarding assessment.
Another professor at School B who was interviewed actually teaches the last
course that multiple subject students take prior to student teaching. When asked about
assessment skills her students leave her course with, she framed her answer in terms of the TPA. She states:
In their TPA I encourage them to talk about don’t just look at the cum(ulative) file
and look at that standardized data or don’t just look at their class grade but a lot of
them choose to say, I would, they don’t actually do the assessment but I would do
an informal student survey about their interests, about their home life, about what
language they speak, about whatever. So I’m sort of planting the ideas of
different assessments that you can do, but they don’t actually do them for this
assignment.
When pressed about other assessment skills she teaches in her coursework, she responded
with, “Well, I don’t… it’s all just informal I would say. I mean I feel like perhaps that’s
a limit but I also don’t think the last course they take shouldn’t be where they learn to
assess. They should already know and I could perhaps apply it and get more specific
about what do you do with this information?” Therefore, students in this last course do
not interact quite as much with a variety of assessments as perhaps they do in other
coursework.
A third professor interviewed at School B also discussed assessment skills in
terms of the TPA. This professor teaches the curriculum methods course for single
subject students, which is their final course before the student teaching experience. She
states:
The first TPA that they do in their student teaching which is TPA3 is assessment.
So I’m doing some prep work and turning the corner for them in my class to their
student teaching experience. So that has to do, I’ve used a lot with… just making
them aware of not only, like to make a good test it’s not just making more
questions it’s higher-order thinking activities whether that be a performance-
based assessment or you know a paper and pencil one. So we worked on that a
good bit and that’s one of their culminating activities is a performance-based
assessment, a formative assessment and then a summative assessment by which I
had them even count the number that they have in. I’ve categorized them in type
one, two and three according to Blooms Taxonomy, so they have to really break
down and determine how many of what type they have. Because I want them to
see that doing recall questions is very easy but they don’t need to have a huge
percentage of recalling.
This same professor goes on to talk about the need to “beef up” instruction on
assessment, and because it is her first year teaching this course, she is making some
necessary adjustments. She continues:
There was something already there about assessment but I really beefed it up.
And again, I wanted to make sure they really understood that concept, the
formative and summative concretely. I wanted to make sure they understood
performance-based assessments. Just as valid and for some students more so.
And that a variety of assessment types gives them the best data. You know,
you’ve got students that just don’t perform on pencil/paper tests but will jam up if
you do an oral presentation or will jam up if you do a PowerPoint or show you
talents in art if you give them the opportunity to do something, you know, using
their skills otherwise. That if you just set the objectives and with performance,
for new teachers, for teacher candidates that are unsure and still learning that
scares the crap out of them because it’s not something that’s concrete that they
can hold onto and that whole idea of letting, trying to do that just freaks them out.
I mean it really does. So we kind of go at, we go at it and that’s one of the
requirements for the end of my class is that performance-based assessment.
All three professors at School B incorporate a variety of assessment activities within their coursework, though not necessarily to the extent that professors at Schools A and C do. This may change as School B is undergoing program changes in order to
improve in the area of assessment based on the TPA data. They know that this is a weak
area, so with the changes and more uniform course structure that was discussed
previously, perhaps a greater variety of assessment activities will come to fruition here.
When the students at School B were asked about the assessment skills they leave
their coursework with, the students responded with statements such as, “I have learned
how to vary my assessments based on my students’ ability, what type of assessments are
useful for different situations, and how to score the tests.” Another student responded,
“We have learned formal and informal assessments and the benefits and negative effects
of both. From exams, presentations, reading reports, oral reports, short answers,
journaling, teacher just asking the class as whole responses. We have covered many.” A
third student replied, “I have learned about some formal and informal assessments. I
have learned about what type of knowledge each type of testing format addresses and the
ideal situation to use them in.” When these same students were surveyed about how
comfortable they felt using and implementing assessments in the classroom, these
students stated that they “felt confident” and “feel comfortable” doing so. Once again, it
seems that students are assessment literate based on what the research data show, even
though the program sees room for improvement.
With such a large focus on student assessment in schools today, particularly in the
arena of high-stakes testing, it is important that preservice teacher candidates receive
instruction in a variety of assessment options. All three schools studied here appear to
indicate that they are doing just that, though perhaps to differing degrees. Regardless, all
three know that their students need this instruction, as the assessment scores for both the
TPA and the PACT could use improvement.
Table 2, below, summarizes what each program is doing to provide basic assessment literacy to its teacher education candidates.
Table 2: What Programs Are Doing to Provide Basic Assessment Literacy to Preservice Teachers So They Are Data-Literate

School A
Changes in program due to TPA and PACT:
• “Old” program had no assessment coherence
• “New” program will focus on assessment practices program-wide
• Woven throughout methods courses
• Content-specific assessment strategies addressed in subject-matter coursework
Reading assessments:
• Literacy woven into other methods coursework
• Exposure to a wide variety of reading assessments
• Students learn to administer these reading assessments and practice on each other
• Students learn purpose of each assessment and when to use it
Modeling other forms of assessment:
• Formal and informal assessment strategies
• Project-based assessment
• Quick Writes
• Journals
• Formative and summative types and strategies
• Quantitative versus qualitative methods
• Observations
• Portfolios
• Rubrics

School B
Changes in program due to TPA and PACT:
• TPA scores lower than desired
• Weakness in program based on TPA data is assessment
• Assessment Task Force was created
Reading assessments:
• Exposure to a wide variety of reading assessments
• Students learn to administer the reading assessments and practice on each other
• Students become “experts” in one particular assessment and present it to class, including purpose and result interpretation
Modeling other forms of assessment:
• Focus on implementing a variety of assessments and interpreting the results
• Exposure to what local schools implement in terms of assessment
• Formal versus informal assessments
• What to do with the information assessments yield
• Performance-based assessments
• Formative and summative assessments

School C
Changes in program due to TPA and PACT:
• PACT scores lower than desired
• Utilizing PACT scores to improve program
• Focus on looking at student work
Reading assessments:
• Exposure to a variety of reading assessments
• Literacy Across the Content class exposes students to appropriate reading strategies
Modeling other forms of assessment:
• Alternative assessments
• Portfolios
• Written responses
• California Reading and Literature Project
• How assessments and data fit together
• Observing students
• Formal and informal assessment practices
How Preservice Teachers Are Being Taught to Use Data to Differentiate Instruction
The final sub-question completes the purpose of this study, and two prominent
themes emerged from the research data. The first theme was that there is a strong
focus on English Language Learners (ELLs) and other special populations, including
Gifted and Talented Education (GATE) and special needs students, when it comes to
differentiation. While all three of these populations are addressed frequently in the
coursework, ELLs definitely dominate the bulk of what is taught regarding
differentiated instruction, particularly at Schools B and C. This is not surprising, as the
state of California has a very large ELL population, and does not provide what would be
considered traditional bilingual instruction for the ELL students. Therefore, teacher
preparation programs have really felt the need to concentrate on teaching their candidates
how to effectively teach students whose first language is not English. However, students
of all types need instruction tailored to their individual needs in order to achieve success
in the classroom. Teachers must use a variety of strategies to meet these needs, while
also meeting those of the many other learners in the same classroom. As a result,
lesson plans and activities must be “differentiated”— or adjusted—to meet the needs of
large groups of learners within the same classroom. The three schools that were studied
for this research all teach their students how to differentiate the instruction for a variety
of learners in the classroom, particularly for ELLs.
The second theme that emerged from this part of the research was that of the
“missing link”: most professors are not teaching their students how to use student data
to differentiate instruction. Two of the three programs indicated that they were aware
that this next step, using actual student data to differentiate instruction, was where they
were missing the mark. The third program is doing this to some extent, but knows that
it also needs to improve in this area. That all three programs feel they are not doing an
adequate job of teaching their candidates to use student data to differentiate instruction
suggests that this is an area where programs may be able to improve, and that further
research may be needed here as well.
1. Focus on English Language Learners and Other Special Populations:
As mentioned previously, the state of California has a very large English
Language Learner (ELL) population, and we are seeing the growth of this sub-group
across many other states as well. There are also students who have special learning needs
due to disabilities, and there are students who are considered “gifted” learners.
Professors at all three schools were asked how they specifically teach their teacher
candidates about differentiated instruction within their coursework. By looking at the
research data, it would appear that there is a larger emphasis on differentiating for the
ELL population at Schools B and C. The reading methods professor at School B talked
about how she teaches differentiated instruction in her coursework as follows:
We talk about making some connection with English-language learners and what
are some different ways of approaching teaching content, vocabulary
development, how do you break down comprehension for them. So how do you
scaffold learning? How do you break down your presentations as a teacher
providing you’re giving instructional input? I also talk about comprehensible
input which seems to be somewhat foreign to them. I thought they were picking
up on that with Methods but coming in they did not know what that means so they
go into that. I also go through a list of different things that they can do [to] assist
students. Then I talk about the three, at least three tiers; so differentiated
instruction and what they would do to accommodate an individual student. I also
spend time talking about different ways of grouping and grouping is a way of
differentiating instruction. We talk about flexible grouping, inner-grade, intra-
grade grouping, homogenous, heterogeneous grouping; the benefits of doing it
different ways.
Students in this class appear to learn differentiation techniques for ELLs, as well as how
to implement small group instruction for differentiation in the classroom. This
instruction on differentiation for ELLs appears to continue in another course. A second
professor at School B discusses instruction on differentiation in her course as follows:
I mean we have a couple of formal days where that is our topic, differentiating
instruction. Period. And then we talk about differentiating for English Language
Learners and special needs a little bit. I mean I try to incorporate it into
everything. Gosh! I teach them how to create a differentiated unit. Like we
watch a whole video on that and they take parts of a differentiated unit and
evaluate it. A sample, you know. Is this really, based on what we know about
what a differentiated lesson or unit looks like, does this one measure up and often
times they say no and then I make them suggest well how would you differentiate
this then? And then they actually do a unit plan.
With both the critique and development of a unit that is built around differentiation, it
would seem that these students are gaining a lot of knowledge on this topic.
A third professor at School B also teaches a lot about differentiation in her course.
She was the first of two professors, out of the ten interviewed, who taught
differentiation based on assessment and other pertinent student data, and who thus
answered the questions on differentiation and on using data to differentiate
simultaneously, without being prompted. When asked how she teaches differentiated
instruction, this professor
responded with:
We did do a great deal of instruction on differentiation. Adaptations. Going
through and clarifying for students, you know, because you want to make sure the
students get that, well I mean the teachers, having done a lot of professional
learning with teachers. You know, some teachers still balk at that idea of
accommodations. They don’t understand the difference between modifications
and accommodations. And they don’t understand that your purpose is to help
every student have access to the curriculum. And so that’s one of the things I
wanted to make sure my students really understood that accommodations were
ways of getting students to have access to their curriculum. Basically the idea
that there’s not one size that fits all and that goes back, it’s a circle because it goes
back to that information they found out about their students from the beginning
which is what they’re supposed to do. When they walk into a classroom they’re
supposed to find out about their students whether through having the time to go
through cum-files or doing a brief class survey, but they need information about
their students. And then once they have that information, they need to take that
and use that to differentiate. And for, because they can’t forget. We tell them
that when they make up lessons, they think of the middle of the class, the lump.
The middle of the class. That’s where they do their lessons, but then they have to
realize that’s just a starting point that then from there, they have out wires in both
directions. They have their GATE students or their very high-achieving student
that are on one end and they have their struggling students on the other. And then
they need to remember that those students change places all the time because one
student can be great at grammar so they’re up here, but when you get to literature
and hyperbole and all that kind of stuff they may go to the bottom of the pack;
they may switch. So that it’s always, everybody’s always changing places. No
one student is pegged one place all the time. So that they’re always looking for
ways to give their student access, and that’s basically what differentiation is all
about. Giving students access to the curriculum. Measuring what they can do in
different ways.
Again, this professor sees differentiation as a natural outgrowth of a variety of
student data, and teaches her students accordingly. A fourth professor at
School B also briefly discussed how differentiation occurs within the coursework, “I’d
say probably [name omitted]’s class the Methods of Teaching Linguistically-Diverse
Students, that’s what that class is really all about.” It is clear that School B has a rather
focused effort toward differentiating for ELLs, yet the program also appears to address
differentiation in terms of what other special populations need as well.
When the students at School B were surveyed about differentiated instruction, all
seventeen of them were able to define it clearly.
The students responded to the question about the skills on differentiation they learned in
their coursework with answers such as: “I know that each group needs to be taught
differently. ELL need concrete objects or lots of pictures. Gifted students need to be
challenged to meet their own individual needs while low learners [need] to have
coursework/homework adjusted to meet their individual needs as learners.” Another
student responded with: “I have learned mostly about instruction for ELLs and learning
disabilities, such as scaffolding [with] totally physical response, gestures, repetition, and
realia. For students with learning disabilities I have learned techniques such as increasing
wait time during discussion and modifying independent practice/homework.” A third
student states, “I have learned how to modify or enhance assignments so as [to] engage
ELL students, slower learning students or accelerated students. I have learned most
about teaching ELL students and different methods to do so.” None of the students
responded negatively or stated anything about not learning skills related to
differentiation. All seventeen students gave responses of two to three sentences
containing very specific information. Based on these student responses, it appears that
the professors are doing what they say they are doing in their courses regarding
differentiation.
Professors at School C were also asked how they teach their teacher candidates to
differentiate instruction for a variety of learners. One professor at the school talked
specifically about how her class really focuses on differentiated instruction for ELLs.
She focuses on the fact that just because they are “labeled” as an ELL does not
necessarily mean that they are all functioning at the same level. When asked about the
skills her students learned, she replied with the following:
From my classes we talk about how to differentiate instruction for English-
learners versus if you have native English-speakers in your classroom. If you
have English-learners in your classroom and then even taking it to the level of
okay, now you’ve got beginning English-learners versus intermediate English-
learners versus advanced English-learners because all of them require something
different. You can’t just lump your English-learners into one category. So we
really talk about the kinds of support and the kind of… it’s very hard for them to
get their minds around because we have the ELD standards in California. It’s very
complex and again just scratching the surface of what that means, but I think it
helps them be aware that I can’t treat all my kids the same, but I also am not just
going to put my beginners off in a corner and say I’ll get to you later or just sit by
somebody who speaks Spanish and you’ll get it too. What are you doing to do,
how are you going to adjust your instruction to make sure that those kids are also
at grade-level standards? What do you do?
This professor appears to address what some see as a significant issue: getting teachers
to understand the multiple learning differences and abilities present in their classrooms,
and to recognize that there are varying degrees of each type of learner, whether ELL,
special needs, or GATE.
Another professor at School C also places a heavy emphasis on ELLs and the
various levels they may be at in the classroom. She stated that:
Where I can start getting into it more is in the TELL class and the way I do it is
through different levels of English language proficiency. So to scaffold the lesson
plan what they have to do is they have to give me kind of a breakdown of the
language proficiency levels in their classroom and, according to the ELD
standards that’s what we focus on, and then how many are advanced, how many
are at beginning, how many are at early intermediate, things like that. And also,
how many native English speakers do you have? And then how many kids would
you say are struggling and then how many are on IEP? So they fill out a context
for learning form that tells us this is the makeup of my classroom and this is the
kind of setting I’m teaching and this… so it’s very personalized so we understand
okay, the way you’re planning your lesson is based on this group of students in
this particular setting but if we see like they say well I have a class of 31 and 12
of them are re-designated English learners but they never mention anything about
English learners in their lesson plan or their instruction, then it’s like wait a
minute! You’re totally… you’ve got to differentiate for them; at least in
language if not in many other ways how you plan that lesson and how much is
kinesthetic and what have you done to be… a lot of them just do here’s a handout
and it’s like well, are you showing anything up on the overhead or the doc cam?
Are you demonstrating something, are you modeling, what are you doing? Or are
you just giving them a handout and just kind of droning on about that. So there
again, that’s getting them to start thinking about different ways to teach it even if
it’s whole class but it might help these specific populations of kids get it a little bit
better. Bring in a collaborative grouping, a heterogeneous grouping, things like
that. So it’s not the be-all-in-the-end-all of differentiation, but it’s leading to
more differentiation over time and helping them understand no matter what class
you’re assigned to, there’s going to be a range of proficiency levels or
performance levels in your students.
While differentiation appears to be taught in some of the coursework, it seems that
perhaps this is an area where the program could place a bit more emphasis. A third
professor stated:
We teach them about flexible grouping. Hopefully they’re flexible and not
written in stone. And then they also learn about IEP’s in their Inclusive Practices
class but that’s not until the spring quarter. But they, I’m not sure how many
strategies they really learn for differentiation, I think that’s another area where we
could look at for the group. We can see to some extent in their PACT how they
would think about those, think about that.
School C does appear to be teaching differentiation, especially where ELLs are
concerned, and yet the last two professors quoted here seem to think that the program
could be doing a bit more with regard to teaching differentiated instruction.
All fifteen students who were surveyed at School C could define differentiated
instruction without any problem. When asked about the skills for differentiation they
learned in their coursework, several students responded with statements such as: “We
have learned to provide many different ways for students to learn. There has been an
especially great focus on this during our discussions of ELLs, students with behavior
issues, disabilities, and learning problems.” Another student responded: “I have learned
that no two students are alike and that they learn in different ways. I cannot make 35
different lessons for 35 different students, but I can consider how to encompass as many
different teaching strategies as I can in one unit so that a variety of techniques will
effectively teach students.” Conversely, there were some students who made remarks
such as, “I have learned some things, but not a lot about differentiated instruction”, and
“We have not learned much about differentiated instruction, just that it exists”, and “Not
much.” While students may be at different points in the program, which may account for
the disparity in responses, this disparity corroborates the feeling among a few of the
professors that the program could be doing just a bit more in this area.
School A does not place quite the emphasis on differentiation for ELLs that Schools B
and C appear to, but ELLs are certainly addressed. Something more prominent at
School A was the fact that each professor focuses on differentiation according to his or
her area of expertise. One professor at School A, whose
expertise is with special needs students, stated the following:
Well, before I teach them differentiated instruction, I teach them how to identify
areas that need differentiation. So when I teach them strategy, I teach them how
to enhance with depth. Depth and complexities is a really important part of our
program. So I show them how to include this for kids who are more able. And
then I’m a Special Ed teacher, so I teach them how to front load language. I teach
them how to break it down. When they do the learning centers through [name
omitted] class which we teach these pretty in mesh, they have to do cards for
activities that is for gifted learner on grade level and a student who is maybe
struggling or who has different kinds of skills. So we do a lot with differentiation.
We give examples, and in terms of we do a whole project on a struggling student
so they need to observe a social studies lesson and they need to identify how that
student is struggling and they go back and re-write the lesson based on how it
would better meet the needs of the student. So we have a whole project in the
class that deals with differentiated learning.
The reading methods professor at School A also focuses on differentiation, but
she does so in different ways in the two courses she teaches. The first course she talked
about was the reading methods course. In fact, this professor answered both questions
regarding differentiation at once: the first question asked of professors was how they
teach differentiated instruction, and the second was how they prepare their candidates to
use student data to differentiate instruction. She was the second of the two professors,
out of the ten interviewed, to answer both simultaneously and without being prompted,
because for her the use of data is a natural outgrowth of how she teaches differentiated
instruction. She stated the following:
In terms of differentiated instruction, it’s connected back to assessment, because
if you can understand what your students are able to do, then you’re going to do a
better job of being able to adapt and modify your curriculum in a way that’s going
to meet their needs. And so in my Reading Methods class, it’s specifically tied
into the methodologies because what I find is that differentiation is difficult for
students because they don’t know how to meet individual needs or group needs
when they have a whole class sitting in front of them. How do I… what are the
other kids doing when I’m working with these kids? How do I pull certain groups
of kids? So I connect it really close to methodologies, so we focus heavily on
three main methods in the Reading Methods class that would allow teachers to
differentiate based on their assessments and the data they’ve collected on the
students, the data they collected through their assessments, and that’s guided
reading with… guided reading within a reader’s workshop or guided reading as
part of centers and then writers workshop. Because what I show them is that you
can have the class engaging in purposeful instruction while freeing yourself up to
go around and work individually with students. Because in the workshop model,
the whole class gets the mini lesson, they all work independently on the focus or
whatever they’re doing as a writer or as a reader and then you go around
individually and confer with students and so you are directly giving them one-on-
one instruction on what they need. And then that differentiation happens based on
your notes that you’re taking as you confer with them about their writing and as
you see their writing so that could be considered informal in terms of their
anecdotals and their kind of on the spot with what students are doing. And then
you give them strategies right then and there based on what their needs are. And
the same happens in reader’s workshop, but then the way that guided reading can
happen within there and guided writing is that if you start to see a common need
between students, you pull them for a small guided writing lesson or pull them for
a small guided reading lesson. So in terms of differentiation, I think it’s two-step.
The first one is understand that differentiation is about how instruction needs to
be based on students needs and the only way you can identify those needs is
through assessment and gathering the data that you need to determine what to do
about their needs. But then the second step is giving them the, I guess, the setting
in which to do that work and that’s in the Methods piece. So the assessment is the
first step. That’s what helps us understand what the need is and in order to
differentiate the instruction once I know what that should look like, I try to really
focus in on those three major methods in order for them to know what that is.
Students in the reading methods class appear to have the “differentiated” piece of their
coursework demonstrated to them in a way that is purposeful and meaningful and that
ties naturally into student data. This same professor also teaches a Bilingual Methods
class, and discussed how she teaches differentiated instruction in that class as follows:
Now that’s a little bit different than in the Bilingual Methods class because in the
Bilingual Methods class we talk a lot about language differentiation, so we talk
about understanding what ELD level your students are at, and if you know that
they’re at these different ELD levels what does that mean in terms of how you
instruct whole group, small group, whatever it is. So if I know I have ELD level
ones and twos in my class, that means for the most part they can comprehend
what I’m saying if I speak clearly, simple sentences, use very deliberate speech,
then those are the instruction decisions I’m going to make based on what I
understand about my students and the data we’ve collected from CELDT scores
and the assessments I’ve done in my class as well to gather more information.
And so that happens all day long so it’s not… I don’t tie it to any particular
method, it’s throughout your whole day. If you know this about your kids, what
are you going to do throughout your day to make sure they understood Science,
they understood Social Studies, they understood Language Arts? So it’s, if
they’re ELD ones and that means when I’m presenting my science lesson, these
are the ways I need to supplement my program so that they understand what I’m
talking about. So it’s kind of more throughout the whole day where as in the
literacy program I think I’ve been more purposeful in talking about the particular
methods to use to find time to differentiate your instruction.
Something that stands out as a focus at all three schools is that students are taught to look
at the variety of language levels among the ELLs in their classrooms. Faculty at all
three institutions appear to stress the importance of taking note of these levels—from
beginning to intermediate to advanced—and asking their students how they will
accommodate each of these levels, rather than treating “ELL” as a single category. As a
result of this particular professor’s courses, students appear to have been taught the
connection between differentiated instruction, assessment, and the resulting data.
The third professor at School A is an expert in the area of Gifted and Talented
students. When asked about differentiation in her coursework, she stated the following:
Differentiation, I mean so it’s other words [sic] it’s a formidable part of what I
hope I‘m teaching. Then in fact the kids are on a continuum from those who are
ready to go, those who are not yet ready and those who are just ready. I mean, I
mention this all the time, so when you’re looking at a lesson are you looking at
that spectrum of learners on that continuum? Are you looking at in fact what does
this lesson mean to those who are ready to go who know some of it and those that
are just ready and those who are not yet ready and yet they’re all part of the same
lesson? What does that mean in terms of adjustment in facing, adjustment in
terms of content of the lesson, adjustment of resources, adjustment of assistance,
how much assistance you will or will not give, what does it mean in terms of
grouping, what does it mean in terms of the time for the lesson? So all of those
are features. And then I try to teach them the features that are particular to
differentiation for gifted kids. For all kids. What does this mean in terms of
critical thinking, creative thinking, problem solving? What does it mean in terms
in the study of big ideas? What does this mean in terms of the study of the nature
of the discipline?
This professor in particular appears to be focused on having her students really think
through the differentiation process—all levels of it. This may encourage them to plan
their lessons more completely, and perhaps more effectively, if they are taught how to
truly think through the process.
It appears that students enrolled in the teacher preparation program at School A
receive a variety of perspectives on differentiated instruction throughout their
coursework. When these students were asked about differentiated instruction, only two
of the fourteen students could not define what it was. When asked about specific
differentiation skills they learned in their coursework, many students responded with
answers such as: “To vary different aspects of the objective to fit what you feel students
need to work on; splitting the class so that some students are working independently and
some students are working with an aid or the teacher for reinforcement or other tasks.”
Another student stated they learned, “Models of teaching, and creating objectives that
target different areas (skills, content, etc.).” On the other hand, three of the students
responded to the question with “My knowledge of the subject is vague”, “Unsure”, and “I
don’t know what it is”. Another interesting item to note is that none of the students at
School A mentioned the ELL population in their responses about the differentiated
instruction skills they have learned.
2. The Missing Link: Most are not Using Data to Teach Differentiation
As established in the previous section, professors at all three of the schools
studied do teach their students how to differentiate instruction for various student
populations, and this was found to be especially prominent for ELLs in Schools B and C.
Additionally, when asked how they taught their students to differentiate instruction, two
of the ten professors interviewed (one from School A and one from School B) included
using student data and assessment information as part of their response. Therefore, it
seems probable that the students in those classes know how to use data to inform
instruction. However, this does not appear to be happening in all courses, although there
were certainly a variety of responses to this question.
One professor at School A, who also happens to be the director of the teacher
education program at the school, stated the following when asked about how she teaches
her students to use student data to drive instruction: “Oh, I think that’s the missing link. I
think that’s what we need to do better.” When pressed for further information on what
she meant by this, she replied:
You know, we talk a lot about using data to inform teaching and we talk a lot
about using evidence. And in fact PACT is very big on using evidence to inform
teaching. Personally, I think we talk a lot. I don’t know how much we actually
do. And I think it comes up more in the second semester just through, you know,
professional intuition. But I guess the point I’m saying too, I think it happens but
it doesn’t deliberately happen. We’re not deliberately teaching it and I think we
need to do better with that.
Given that the third professor at School A did not describe how she teaches her students
to use data to differentiate instruction, it would seem that this is an area where the
program could be more deliberate in its instruction, even though one of the three
professors here included this information in her response about how she teaches
differentiated instruction in her coursework. If all of the professors are not approaching
their coursework this way, students may be receiving mixed messages, or may not have
that information reinforced.
Students at School A seem to corroborate the response (or lack thereof) from the
two professors cited above. When surveyed about how comfortable they feel using
student data and assessment information to differentiate instruction, the majority of the
students responded with answers such as: “I think I could use some more information on
how to differentiate instruction on a bigger level”; “Not comfortable at all”; “Don’t know
yet”; “I don’t know what this is”; and “I would need more practice and information.”
Therefore, student responses show that this area likely is a “missing link” within the
program. In fact, when students were asked to make suggestions on how their program
could improve in this area, the following responses were given: “Possibly improve on
differentiated instruction and data. Assessment is covered thoroughly”; “I think we
should have more practice interpreting sample data, assessment data so that we can come
up with theoretically sound differentiation techniques and tools”; and “I’m not too sure if
the program really specifically taught us these things. Therefore, there’s lots to be
improved!” Again, student responses demonstrate that this is likely a place where the
program could use improvement.
This same pattern seems to hold at School B. One of the professors here answered the
question about how she teaches differentiated instruction together with the question of
how she teaches her students to use data to differentiate. Beyond that, however, it
appears that School
B may be in the same place as School A. One other professor here, when asked how she
teaches her students to use data to drive instruction, replied, “Um, just with the TPA.
Although we do spend like four or five weeks on TPA stuff.” Once again, the TPA
surfaces here, but it does not appear that this skill of using data and assessment results to
differentiate instruction permeates the program. However, one of the professors here
does feel that this is occurring in her class. She responded to this same question with:
Well, I ask them, how many students do you have that are English-language
learners taking the CELDT and what are their different levels? If you have a
student that’s at Level 1 versus, you know which is beginning as opposed to being
at an intermediate level, how would you work with that student differently than
you would work with the other student, or how would you work with your
English-language learners that do not have academic language versus students
that are fluent? So those kinds of things. So I try to weave that through but I
know that there is at least one place where I do spend a lot of time talking about
that. I give them examples.
Although this professor believes she is implementing this in her course, it would appear
that teaching her candidates how to use student data and assessment information to
differentiate is limited.
Students at School B were also surveyed regarding how comfortable they felt
using data to differentiate instruction. Responses from these students indicate that
approximately two-thirds of them feel “fairly comfortable” doing this, but most feel like
they could use further instruction here. Some of the student responses include, “I feel
somewhat comfortable, but I think this would be an area in which I would seek support
from more experienced teachers, perhaps more than in other areas of instruction” and “I
would be semi-comfortable using data and assessment information but not completely at
ease because I am not familiar enough with them to feel like I can accomplish what needs
to be accomplished.” Yet a few students are very confident in this area, and responded
with “I feel comfortable doing this” and “I think I am able to look at data and assessment
information and know how my lessons need to be differentiated for different students.”
Additionally, when students were asked about areas in which their program could
improve in teaching them how to use data and assessment information and how to
differentiate, they responded with statements like “More practice will [sic] all three”; “I think
the program could improve by focusing on how to interpret results of the assessments and
then provide differentiated instruction to match those results”; as well as a response that
stated “I think our program does a good job in all these areas. I think they almost over-
prepare us.” Students at School B provided a variety of responses, and most of them did
mention specific ways in which they feel their program could be improved.
Looking at the research data from School C, it would appear that this is a
“missing link” here as well. In fact, only one professor was able to offer an answer to
this question. When asked how students are taught to use data to differentiate
instruction, this professor replied:
I really don’t… I’m trying to think, I think within the Learning Record
assignment, we talk about next steps, like if you’re able to use some evidence
you’ve collected, multiple forms of assessments to come to a conclusion about
where a child is in their language level, then what would be some next steps you
could take. So I think that’s one way, but again, definitely not enough time in
class for that. It’s almost like, gosh, it would be so fantastic if they could have a
course that really focused in on data and assessments. I mean obviously, it needs
to be integrated within Methods, because that makes sense too, but finding the
time to do that, there’s just so much to teach.
As stated at the beginning of the paragraph, this was the only professor who could
respond to the question, and even then it is apparent that she feels the use of data to
differentiate instruction is not occurring at School C as much as it could be.
The research data collected from student surveys seems to corroborate this
position. In fact, ten of the fifteen students surveyed responded that they are “not
comfortable” at all using data to differentiate. Students’ responses include “I am not very
comfortable doing this”; “Not comfortable at all, I’m not sure at all how to do it”; and
“Not at all, since I have learned nothing about it.” Yet, there are a few students who do
feel comfortable and their responses were “Very comfortable and will be more
comfortable with practice” and “Comfortable. Data and assessment information provide
the details needed to differentiate instruction for students in order that they may receive
the best resources for learning and understanding.” When students were asked how
their program could improve in data, assessment, and differentiated instruction, they
answered:
• “Teach us about data and how to interpret assessment results. Differentiation
I think is something that comes with practice”;
• “Didn’t learn anything about data or differentiation. Differentiation should be
introduced earlier in the education program, as I know it is important but lack
sufficient knowledge in them”;
• “The program needs improvement in providing more methods for
differentiated instruction…”; and
• “I think we need to learn more about collecting data, assessment, and
differentiated instruction. I feel like we talked about it but I did not have any
concrete way of applying the learning. Thus, I tend to forget the big picture
on the ideas.”
As with the other schools, most of the students at School C believe their program could
make improvements in these areas. The information obtained from these students
corroborates the information, or lack thereof, provided by the professors at School C.
It appears that teaching students to use data to differentiate instruction in the
classroom is indeed a “missing link” at all three schools. This is corroborated by the
responses of the professors and the students at each of the schools, although, as discussed
previously, two of the professors (one at School A and one at School B) appear to be
incorporating this information in their coursework.
Table 3 summarizes what each program is doing to instruct its candidates in how
to use data to differentiate instruction.
Table 3: How Preservice Teachers are Being Taught to Use Data to Differentiate
Instruction

School A
Focus on English Language Learners and Other Special Populations:
• Focus in methods courses according to professors’ areas of expertise;
• Identification of areas needing differentiation;
• Taught in terms of “depth and complexities”;
• Connected back to assessment information;
• Connected to methodologies;
• Needs to be based on students’ needs;
• Bilingual methods course;
• Adjustment of content based on continuum of learning
The Missing Link: Most are not Using Data to Teach Differentiation:
• Not deliberately teaching the connection between data and differentiation in most courses;
• Reading methods course focuses on the purpose of assessment data to drive instruction, including differentiation

School B
Focus on English Language Learners and Other Special Populations:
• How to scaffold learning;
• Three tier lesson planning;
• Grouping strategies and types of groups;
• Create differentiated units;
• Woven across coursework;
• Differences between accommodations and modifications;
• Providing access to curriculum;
• Methods of Teaching Linguistically Diverse Students course focuses heavily on differentiation
The Missing Link: Most are not Using Data to Teach Differentiation:
• Focus on data for differentiation through TPA tasks;
• CELDT data utilized for ELL modifications in curriculum course;
• Utilize student information for differentiation

School C
Focus on English Language Learners and Other Special Populations:
• Taught about different levels of ELLs and how to accommodate for each;
• Scaffolded lesson plans;
• Context for Learning forms introduced and implemented for planning;
• Teaching English Language Learners course;
• Grouping strategies and techniques
The Missing Link: Most are not Using Data to Teach Differentiation:
• California Reading and Literature Project and “next steps” based on information collected
Conclusion
This chapter presents a detailed analysis of three teacher preparation programs
regarding data literacy, assessment, and differentiated instruction. The overarching
research question focused on how different teacher preparation programs are preparing
their preservice teachers to use data to inform instruction, while specific sub-questions
provided detailed information regarding education faculty’s beliefs about the importance
of data use, how the faculty provide basic assessment literacy to their candidates, and
how they teach their candidates to use data to drive instruction. After collecting and
analyzing the data from all three institutions, major themes emerged and were linked to
the appropriate sub-questions. The most prevalent data collected from both faculty and
students were then used to tell the story of these themes and their connections to the
sub-questions.
Legislation involving public education that has occurred both on a national and a
state level has had a large impact on teacher preparation programs, particularly in the
state of California. Because of the federal No Child Left Behind Act of 2001, schools are
using data and assessment tools on a scale not previously seen. This burden has trickled
down all the way to the classroom teacher, as well as to the programs that prepare these
teachers. As a result of SB 2042 and SB 1209, teacher candidates in the state of
California must now pass a series of performance assessments, called the TPA or PACT,
in order to prove their competence in a variety of areas related to student assessment and
instruction. This, in turn, has caused teacher preparation programs to look closely at how
they teach their students about data and assessment. As demonstrated by this study, three
programs in the state of California have made, or are in the process of making, changes to
their programs due to TPA and PACT requirements, especially in the area of student and
assessment data. These programs are also making changes due to their teacher
candidates’ performance on the assessment portion of the TPA and PACT.
While each of the three institutions studied here indicates that it is important for
its teachers to know how to use data and assessment appropriately, especially when the
TPA and PACT are considered, all three programs feel that they are not as intentional in
this instruction as they could be. However, all three programs appear to be addressing
assessment to a large degree. Two of the three programs demonstrate a very heavy focus
on reading assessment instruction within the reading methods coursework, and all three
programs appear to be modeling a variety of assessment forms and strategies for their
students. These include formal and informal assessments, as well as formative and
summative assessments.
The piece that ties all of this together is using student data from these
assessments, as well as other pertinent student data, to make informed instructional
decisions in the classroom, particularly in the area of differentiated instruction. All three
schools discussed how they teach their candidates about differentiated instruction, and
two of the three schools were found to have a very strong focus on the ELL population.
While the third school also addressed the ELL population, its emphasis was not as strong
as what the other schools claimed to be doing. In addition to differentiating for ELLs,
teacher candidates are taught
how to provide differentiated instruction for special needs students and GATE students as
well. However, one crucial element appears to be missing from all three schools, at least
as a common thread throughout all coursework. This missing element is teaching their
candidates how to use actual student and assessment data to differentiate instruction.
Many of the professors are not doing this, and one even called this the “missing link”.
This is where it would appear that growth is needed for the three programs in this study,
and perhaps for other teacher preparation programs. After all, the student and assessment
data provide the information that is needed in order to effectively plan and prepare
lessons that will meet the needs of the students in the classroom, regardless of language,
other struggles, or exceptional ability.
CHAPTER 5
Summary and Implications of Findings
Introduction
With the advent of the No Child Left Behind Act of 2001, schools are under more
pressure than ever to demonstrate student achievement and proficiency through a greater
focus on standards-based education, including high-stakes testing. Schools and districts
are required to report student achievement gains or losses on these tests, as well as
attendance rates and sub-group data, in order to determine if they have made the required
adequate yearly progress, or AYP. In order to aid in reporting student achievement gains
and losses, as well as to gauge student learning over the course of the school year,
districts have increased their focus on data, and many are practicing what has come to be
known as data driven decision making. This essentially means that teachers are required
to use a variety of student data in order to make daily instructional decisions in the
classroom. These data range from test scores on district-mandated benchmark exams and
language proficiency assessment results, to homework and class tests and even the high-
stakes test data themselves. In order for teachers to be able to use these data effectively,
they must have the essential background knowledge on sound assessment practices, as
well as some quantitative knowledge in terms of how to read the data in order to use them
appropriately and strategically for instruction. With teachers under such pressure to
utilize student data on a variety of levels, most essentially to inform classroom
instruction, it is only natural that teacher preparation programs in colleges and
universities begin to address data use with their preservice teachers, as they will be
expected to use data once they step into a classroom.
Because of this increased focus on data driven decision making, this qualitative
study aimed to discover what teacher preparation programs are currently doing to address
DDDM in their coursework. Since DDDM is relatively new, the research base is rather
limited. Furthermore, the research base regarding teacher preparation programs and
DDDM is even scarcer; thus, the purpose of this study was to find out what teacher
preparation programs are doing to prepare preservice teachers for data and assessment
practices in schools, as well as to add to the limited research base in this specific area.
The research question this study addressed specifically was: How do different teacher
preparation programs prepare preservice teachers to use data to inform instruction? In
order to help answer the overarching research question more fully, the following sub-
questions were also addressed in the data collection process:
1. What are the education faculties’ beliefs about the need for preservice
teachers to learn how to use data?
2. What are programs doing to provide basic assessment literacy to preservice
teachers so that they are data-literate?
3. How are preservice teachers being taught to use data to differentiate
instruction?
Data for this study were collected in the form of interviews with teacher education faculty
at three different institutions with teacher preparation programs for both undergraduate
and graduate students. Additionally, a student focus group was used at one of the
institutions, and online surveys were used to collect data from students at all three
institutions that participated in the study.
One of the major conclusions drawn from this study is that teacher education
faculty definitely recognize the need for their preservice teachers to know and understand
how to use student data. These beliefs are reflected in the course topics addressed and
in the assignments aligned with those topics. Education faculty were
particularly focused on the TPA and PACT tasks that their teacher candidates are
required to complete in order to be certified in the state of California, and changes were
taking place in all three programs in order to help align coursework with the TPA and
PACT requirements. However, professors at all three institutions as a whole believe they
are not doing enough within the programs to intentionally focus on data analysis and use.
This becomes evident when looking at the student surveys as many students could not
define data literacy, and were generally uncomfortable with using student data both in
their coursework and in classroom settings.
When it comes to assessment practices, all three schools are focused on the TPA
and PACT requirements which essentially dictate how assessment is taught within
coursework. Additionally, the reading methods courses have a large focus on a variety of
assessments and give the students hands-on practice with several assessments that are
learned in the coursework. The faculty also deliberately focus on teaching a variety of
assessment types throughout their courses, and model this with various forms of
assessing students within the courses they teach.
Finally, the faculty as a whole are focused on teaching their students how to
differentiate instruction in the classroom for a variety of student populations, but seem
particularly concerned with the ELL population. However, according to many of the
professors, what appears to be missing from all three programs is instruction in how to
use student and assessment data to differentiate instruction, although a few of the
professors appear to be doing this within their coursework. This missing connection
between data and instruction once again becomes evident in student survey responses, as
many students state that they have not been taught how to do this, or that they want more
practice because they feel uncomfortable. Certainly this is an area that should continue
to be addressed.
In the next section, I will connect the findings from my research to the literature
that was reviewed in chapter two of this study. Some of the findings here concur with
what prior research has shown; however, some of the findings suggest that there have
indeed been major improvements in teacher education, particularly in the area of
assessment.
Connections to Prior Research
The review of the literature for this study looked at the following:
1. Background information that includes accountability and standards-based
reform efforts, including NCLB, the “highly qualified” component of NCLB,
and the P-16 Initiative.
2. DDDM, including classroom assessment, teachers’ assessment literacy, and
differentiated instruction.
3. Teacher education, including traditional and assessment coursework, and
effective teacher preparation programs.
Connections will be made to the pieces of literature that bear on the findings and
implications of this study. These connections will be organized and
presented according to each research sub-question.
Research Sub-Question #1: What are the education faculties’ beliefs about the need for
preservice teachers to learn how to use data?
Because of NCLB and the era of standards-based reform that the American
education system is currently in, it is imperative that teachers enter the classroom with
sound data and assessment skills and practices. In order for them to have the ability to
use these data and assessment skills, they must be given a strong foundation in their
teacher education coursework. As this study shows, teacher education faculty members
clearly believe that data use is important and that it is something their students should
learn how to use and apply in their teaching. While there was no specific literature that
was reviewed in regards to faculty promoting data use, these findings can certainly be
connected to the literature on developing a culture of data use. Earl and Katz (2002)
studied how to develop this culture of data use, and while faculty may not be “district
personnel”, they certainly take on the role of “leader” for their students, and are the ones
who are in constant contact with the preservice teachers at this point in their education.
Therefore, parallels can certainly be drawn. As Earl and Katz (2002) suggest, data
literate leaders think about purpose(s); recognize sound and unsound data; are
knowledgeable about statistical and measurement concepts; make interpretation
paramount; and pay attention to reporting and audiences. Additionally, Mandinach et
al. (2006) discuss a framework for developing skills that are needed for successful
DDDM. These data skills include collecting and organizing, analyzing and summarizing,
and synthesizing and prioritizing.
Many of the teacher education faculty interviewed for this study are teaching their
students to collect and organize data, recognize sound and unsound data, make
interpretations of (or analyze) those data, and synthesize the data for instructional
decision making. This can be seen in the assignments they have their students complete,
including the gathering of student demographic data, determining which pieces of data
(or what types of data) are important and why, and learning how to use those data
effectively.
Faculty also teach their students to interpret the data they have gathered correctly,
especially when it comes to assessment data—both formal and informal. The focus here
is to know how to interpret the data that has been collected and then what to do with it to
improve instructional practices. Teacher education faculty want to be sure that they are
not teaching their candidates to rely only on standardized testing data as the end-all of
student achievement information, and they want their candidates to avoid the “teaching to
the test” phenomenon. Ideally, faculty want to teach their candidates to utilize student
and assessment data in thoughtful and relevant ways. The TPA and PACT requirements
and activities are strongly linked to faculty members working with their students on these
types of data collection and analyses, as their students are required to pass the TPA and
PACT tasks in order to obtain certification. In the next section, I will address sub-
question #2 and connect the literature related to assessment with the findings from the
study on assessment practices.
Research Sub-Question #2: What are programs doing to provide basic assessment
literacy to preservice teachers so they are data-literate?
Assessment literacy is crucial in order for teachers to effectively understand and
address the learning needs of their students. Past research has shown that teacher
education programs were not doing as much as they should have been doing to teach
assessment to their students. However, the results of my study show that perhaps this has
changed, as much more is now being done than what was originally discussed in much of
the literature. According to a 1991 study by Wise, Lukin, and Roos, even at that time,
assessment coursework had been missing from teacher education programs for the last 30
years. Athanases and Achinstein (2003) also discuss this lack of assessment preparation
at the university level, showing that growth was still needed in this area to ensure sound
assessment practices of teachers in the classroom. However, in 2002, Stiggins discussed
a renewed interest in assessment coursework for teacher preparation due to NCLB. This
corroborates the findings in my study, as it appears that teacher educators are beginning to
focus more on sound assessment practices in order to give their students the tools they
need to survive in the standards-based reform era. In his 1999 study, Stiggins suggested
that teacher educators include assessment units as part of the methods coursework, and
that it would be ideal for assessment to be a separate course in and of itself. In that same
study, Stiggins (1999) goes on to state that preservice teachers need to have training that
prepares them to select the most appropriate assessment method for the task at hand. In
both his 1999 and 2002 studies, Stiggins suggests that assessment competency and
coursework be tied to teacher certification and licensing requirements. Finally, Graham
(2005) suggested that teacher education professors should be willing to look at their own
assessment practices, and be open to change in how they assess their preservice teachers
so that they could model a variety of assessment practices themselves.
Based on the data collected for this study and its resulting conclusions, it appears
that teacher education programs at some institutions in the state of California have a
greater focus on assessment practices within coursework. This is in clear opposition to what previous
research has shown in regard to assessment practices in teacher education, and therefore
can be seen as an improvement within teacher education at the institutions that were
studied. Each of the professors interviewed at all three institutions that were studied
discussed how they were integrating and modeling a variety of assessment types and
practices within their specific courses, most of which were methods courses. This
finding validates Stiggins' (1999) suggestion that the teaching of assessment be tied into
methods courses. These professors focus on formal and informal assessment practices, as
well as formative and summative assessment practices within the courses they teach.
This was found to be especially prominent in the reading methods courses, as preservice
teachers learn anywhere from 10 to 20 types of assessment tools and instruments for
diagnosing, assessing, and teaching their students in the core subject of reading. The
students in these reading methods courses even practice administering the assessments to
one another so that they are both confident and competent when they administer them in
an actual classroom setting. Some of the assessment and diagnostic tools they are
learning include Classroom Reading Inventories, Concepts About Print, interest surveys,
phonemic awareness and phonics assessments, and other forms of reading assessments
for use in the classroom. Additionally, they are taught which reading assessments to use
at which times and for what purposes, thus corroborating Stiggins' (1999) suggestion
that they be taught which assessments to use for what task.
Participants at each of the institutions studied also mentioned a distinct focus on
assessment practices in the coursework due to the TPA and PACT requirements.
Because preservice teachers must pass the tasks assigned as part of the TPA and PACT in
order to be certified, the professors have become more conscientious about teaching
assessment, and the programs themselves at all three institutions are currently undergoing
changes in order to ensure that their students are prepared and able to meet these
requirements. At two of the institutions studied, several professors mentioned
that assessment was the weakest area for their students on the TPA and PACT, which is
one of the reasons why they have an increased focus in this area. They are concerned
with their students meeting the certification requirements, which is why some of the
changes are being made within these programs. By linking certification to TPA and
PACT task competency and completion, the state of California appears to have done what
Stiggins (1999, 2002) suggested by tying assessment competency to certification
requirements.
Finally, some of the professors discussed how they try to model various forms of
assessment for their students by changing some of the assessment practices they use
within their courses, which is what Graham (2005) suggested in her study. These
professors discussed the grading rubrics and portfolios that they have implemented to
assess their preservice teachers. This provides the preservice teachers with a model of
how to both complete and implement rubrics and portfolios in their own classrooms, and
gives them firsthand experience of the benefits of using alternative forms of assessment.
Additionally, professors are incorporating journals and "quick writes" into their courses,
thus utilizing and modeling another form of student assessment. Some of the professors
in this study are also teaching their preservice teachers how to observe their students and
use anecdotal notes about these observations in order to assess what the students are
learning. Observing the behaviors of students appears to be taught at all
three institutions that were studied. In the next section, I will discuss the pertinent
literature and research findings as they relate to using data to differentiate instruction.
Research Sub-Question #3: How are preservice teachers being taught to use data to
differentiate instruction?
Using student data to differentiate instruction in the classroom is one of the
benefits of DDDM. By utilizing pertinent student data to make instructional decisions,
teachers can more effectively teach the variety of students they have in their classrooms.
While professors at all three institutions were able to discuss how they teach
differentiated instruction in their coursework, many of them feel that they are missing the
mark as far as teaching their students how to use data to differentiate instruction in the
classroom. The connections to the literature here are mixed, in that the professors are
teaching differentiation but appear to be lacking in the data portion of differentiation. In
her study, Brimijoin (2005) found that teachers who differentiate based on assessment
results and data see an increase in student engagement with content and better transfer of
student learning. However, she also discovered that teachers generally lack the
pedagogical knowledge to do this. Brimijoin (2005) also discusses the importance of
ongoing assessment for differentiation, as this can help the teacher determine appropriate
grouping strategies as well as instructional strategies. Both
Brimijoin (2005) and George (2005) focus on the importance of flexible grouping within
the differentiated classroom. Finally, in their study, McTighe and Brown (2005) discuss
how differentiated instruction and standards-based education work together for diverse
student populations, and evidence of this learning can be found by using a variety of
assessments and accompanying data.
My study connects to the literature here in a few different ways. First, as stated
previously, the professors are teaching their preservice teachers about differentiated
instruction, especially where ELLs and other special populations are concerned.
McTighe and Brown (2005) discuss the need for differentiated instruction for diverse
student populations in the standards-based reform era. Preservice teachers are learning,
for example, to write lesson plans with at least three "tiers" in order to meet the needs of
three major groups in the classroom—the GATE students, the "average" students, and the
students who struggle, whether they are ELLs or have other special learning needs.
Many of the professors spend several class sessions talking about differentiation, and
they also include differentiation requirements in course assignments and projects,
including units of study and the lesson plans themselves. Some of the professors are also
requiring their preservice teachers to "shadow" a student in order to pick up on individual
learning needs. Part of this "shadowing" experience also includes gathering data on the
student in the form of assessing student interests, collecting demographic data, and
interviewing the parents. While the preservice teachers are engaging in the collection of
data on their students through a variety of coursework assignments, I am not sure that this
necessarily meets the recommendations of Brimijoin (2005) in terms of ongoing
assessment to differentiate instruction. Perhaps this is because they have limited
interaction with students at this point, as most are not in the classroom full time yet.
Some of the professors, in particular the reading methods professors, are also
teaching their preservice teachers about the need for flexible grouping in the
differentiated classroom. This is the suggestion of both Brimijoin (2005) and George
(2005). However, not all professors are teaching this in their coursework, so this
connection to the literature is dependent upon each individual professor who participated
in the study. Other findings appear to miss the mark in connecting with the literature:
most of the professors feel (and the teacher candidate data corroborate this) that they are
not teaching their students how to make the connection between data and using that
information to differentiate instruction, which is clearly the recommendation of
Brimijoin (2005) and McTighe and Brown (2005). However, there are two professors
who are doing this with their students as a natural extension of teaching about
differentiation. One is the reading methods professor at School A, who has her students
utilize the data yielded by the reading assessments to make adjustments to instructional
practices for the students based on their levels and needs. She explicitly refers to this as
"differentiated instruction." The second is a curriculum methods professor at School B,
who also teaches her students that the purpose of assessing and gathering data is so that
they can differentiate instruction and tailor their lessons to individual student needs.
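As a concrete illustration of this data-to-differentiation step, here is a minimal sketch (in Python) of grouping students into flexible instructional tiers from assessment results. The names, scores, and cutoffs are invented for the example; the study did not specify the professors' assessments at this level of detail.

    # Hypothetical sketch: turning reading assessment scores into flexible
    # instructional groups. Names, scores, and cutoffs are invented examples.
    students = [
        ("Dana", 42), ("Eli", 71), ("Fay", 88),
        ("Gus", 55), ("Hana", 93), ("Ian", 67),
    ]

    def tier(score):
        # Map a score to an instructional tier (cutoffs illustrative only).
        if score < 60:
            return "intensive support"
        if score < 80:
            return "strategic support"
        return "enrichment"

    groups = {}
    for name, score in students:
        groups.setdefault(tier(score), []).append(name)

    # Re-forming groups after each assessment cycle keeps them flexible
    # rather than turning them into fixed tracks.
    for label, members in groups.items():
        print(f"{label}: {', '.join(members)}")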
Clearly, there needs to be more of a focus within each of these programs on how
to use data to differentiate instruction, as only two of the 10 professors interviewed were
doing this as a natural part of their teaching. Additionally, the student survey data
confirm that this is certainly an area for improvement. Therefore, the connections to the
literature in this section are somewhat limited due to the inconsistency across programs
and within schools in this regard.
One of the issues commonly discussed in regard to teacher preparation programs
is the lack of collaboration among faculty members, which leads to a lack of coherence
within the programs (Russell, McPherson, & Martin, 2001). While it is certainly fine
to have unique teaching styles and opinions as to what constitutes good teacher
preparation, this lack of collaboration can often lead to confusion and inconsistencies in
preparation. As Russell et al. (2001) state, "Preservice teachers are aware of, and
affected by, inconsistencies within their program and dramatically different approaches to
teaching and learning on the part of the faculty. As a result, they often report
experiencing programming that appears fragmented…" (p. 46). Renowned teacher
education expert Ken Zeichner (2006) also discusses issues of inconsistency in teacher
preparation programs, and calls for increased focus and collaboration between teacher
preparation programs and the universities that house them. Perhaps this issue of minimal
collaboration has led to a lack of coherence within programs, and is playing a role in
some of the dissonance and inherent differences seen in the results of this study.
Summary of Findings in Relation to Existing Literature
The review of the literature suggests that teacher preparation programs have long
needed to focus on assessment practices within their coursework. This can be found
in Stiggins' work (1999, 2002), as well as in Athanases and Achinstein's (2003)
and Wise, Lukin, and Roos's (1991) studies on assessment practices within teacher
preparation programs. Another angle in this study is that of data use within teacher
preparation coursework. Faculty beliefs regarding the need for their students to use data
were studied, and these can be connected to the literature on developing a culture of data
use through Earl and Katz's (2002) and Mandinach et al.'s (2006) studies. Finally, using
data to differentiate instruction wrapped up the study, and connects to Brimijoin's (2005),
George's (2005), and McTighe and Brown's (2005) studies on utilizing assessment data
to differentiate instruction for a variety of learning needs. When the findings and
recommendations of the pertinent literature from chapter two are pulled together, they can
be summarized as follows:
• Teacher preparation programs need a more intentional focus on teaching
specific assessment types and strategies within their coursework
• Assessment should be covered within methods courses, and perhaps a
separate course in assessment altogether should be implemented
• Assessment literacy should be tied to teacher certification and licensing
• Leaders cultivate a culture of data use by thinking about purposes;
recognizing sound and unsound data; being knowledgeable about
statistical and measurement concepts; making interpretation paramount;
and paying attention to reporting and audiences
• A framework for developing skills that are needed for successful DDDM
includes collecting and organizing, analyzing and summarizing, and
synthesizing and prioritizing the data
• Teachers who utilize assessments and the resulting data to differentiate
instruction see an increase in student engagement with content
• Teachers engage in ongoing assessment practices to help with grouping
and instructional strategies
• Flexible grouping is essential in the differentiated classroom
• Differentiated instruction and standards-based education can work
together for diverse student populations
• Evidence of learning can be found by using a variety of assessments and
the accompanying data
The findings of this study connect to the literature described above, and also allow
for new information to be presented. These findings will add to the research base, and
are as follows:
• Teacher preparation programs in California have a renewed focus on
assessment in coursework
• The TPA and PACT tasks have huge implications for this renewed focus
on assessment and are resulting in program changes based on the data they
have produced
• Teacher candidates in the state of California are required to pass a series
of tasks that are part of the TPA and PACT in order to be certified
• Teacher education faculty believe their students need to know how to use
student data, and structure portions of their courses accordingly
• Preservice teachers are being taught how to collect, organize, and interpret
student data
• Preservice teachers are required to use student data and assess students in
order to successfully complete TPA and PACT tasks
• Preservice teachers are receiving substantial instruction in assessment in
methods courses, particularly in reading methods coursework
• Teacher educators are modeling a variety of assessments for their students,
and are utilizing alternative assessments with the preservice teachers in
coursework
• Teacher educators are focused on teaching differentiated instruction
throughout their courses, and have a sustained focus on ELLs and other
special population groups
• Teacher educators want to be more deliberate in their courses in teaching
their preservice teachers how to use data to differentiate instruction
While this study added new research to the existing base of knowledge on DDDM
and teacher preparation programs, it certainly is not all-inclusive, and there are still many
gaps left to be filled. In the next section, I will discuss some implications for future
research.
Implications for Future Research
With the advent of NCLB in 2001, and the focus on standards-based reform in our
public schools, districts have turned to using student data to improve instructional
practices and achievement in the classroom. As a result, teachers are facing ever-
increasing pressure to be data and assessment literate, and to use these results to drive
instruction. This filters down to teacher education programs and how we prepare our
preservice teachers to be data and assessment literate, as well as how we prepare them to
use data to differentiate instruction. Stiggins (1999, 2002) discusses how teacher
preparation programs need to do more in the area of assessment with their teacher
candidates, and argues that assessment should be tied to certification and licensing
requirements. Furthermore, Brimijoin (2005), George (2005), and McTighe and Brown
(2005) all recommend that teachers utilize assessments and data to differentiate
instruction in the classroom. This study demonstrated that some teacher preparation
programs in California have a more intentional focus on assessment, and that teacher
certification and licensing in the state are now tied to TPA and PACT results (of which
assessment is a large focus). The study also showed that professors are not as intentional
as they could be in teaching their preservice teachers to utilize data to differentiate
instruction. As a result, the following areas are in need of further research:
• Evaluating teacher preparation programs in other states in order to
determine if there is an increased focus on assessment and student data on
a national level
• Analyzing certification and licensing requirements of other states to
determine if there are competency measurements similar to the TPA and
PACT that have been implemented in the state of California, of which
assessment is a major component
• Conducting a longitudinal study of new teachers (years one through three)
to determine the transferability of data and assessment practices that were
learned in teacher education coursework
The next section will address implications for policy and practice. Recommendations are
geared towards state agencies, universities, and school districts.
Implications for Policy and Practice
Based on the findings of this research and their connections to the literature
review, the following recommendations are made to the Departments of Education in
each state, the California Commission on Teacher Credentialing, colleges and
universities, and school districts:
1) Departments of Education: It is recommended that Departments of
Education across the remaining 49 states study the California Commission on
Teacher Credentialing's Teacher Performance Assessment tasks so that they
may implement similar measures to ensure that their teachers are prepared to
use data and assessments in the classroom.
2) California Commission on Teacher Credentialing:
A) It is recommended that colleges and universities be required to implement
a separate course on assessment and data-use in the classroom, with a
specific focus on how to utilize student data to differentiate instruction for a
variety of learners in the classroom.
B) It is recommended that colleges and universities be required to implement
a quantitative research methods course as part of the teacher education
coursework so that preservice teachers have some quantitative knowledge
before they enter the classroom, and do not have to wait until they pursue
graduate degrees to receive formal training in quantitative measures.
3) Colleges and Universities:
A) It is recommended that teacher preparation programs in colleges and
universities implement separate coursework on assessment and data literacy
for their teacher candidates. This coursework should include instruction on
some of the widely-used technology programs that school districts utilize in
DDDM.
B) It is recommended that teacher preparation programs add a course in
quantitative research methods to the undergraduate curriculum so that new
teachers enter the classroom with a knowledge base in quantitative measures
(a brief sketch of the kind of computation involved follows these
recommendations).
C) It is recommended that teacher preparation programs broaden the scope of
their teaching about differentiated instruction to include struggling learners,
students with 504 plans, and other students who typically fall through the cracks.
D) It is recommended that teacher preparation programs implement a more
deliberate focus on using data to differentiate instruction. This includes the
use of DDDM vocabulary so that teacher candidates are familiar with the
terms once they are in real-world classrooms.
E) It is recommended that teacher preparation programs collaborate with
school districts to ensure that preservice teacher candidates are paired with
master teachers who are themselves competent in data, assessment, and
differentiated instruction, so that the candidates can observe and intern with
those who are well versed in these practices during both field experiences and
student teaching.
4) School Districts: It is recommended that school districts collaborate closely
with local colleges and universities in order to develop a solid base of master
teachers who have demonstrated exemplary competence in data, assessment,
and differentiated instruction. These master teachers should be utilized to
mentor and work with preservice teacher candidates in both field experience
and student teaching placements to ensure that these new teachers are learning
from the best teachers in the district.
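The following brief sketch, referenced in recommendation 3B above, illustrates the kind of basic quantitative measures such a course might cover. It is written in Python with invented scores; the particular statistics shown (mean, median, standard deviation) are offered only as representative examples of introductory quantitative literacy, not as a prescribed syllabus.

    # Hypothetical example of the basic descriptive measures such a course
    # might cover, computed for an invented set of classroom test scores.
    import statistics

    scores = [68, 74, 81, 90, 55, 77, 83, 62]  # invented sample data

    mean = statistics.mean(scores)      # central tendency
    median = statistics.median(scores)  # middle value, robust to outliers
    spread = statistics.stdev(scores)   # sample standard deviation

    print(f"mean={mean:.1f}, median={median:.1f}, stdev={spread:.1f}")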
Conclusion
Today’s classroom teachers are under more pressure than ever to ensure that their
students are achieving at the highest levels. This pressure filters down from the federal
government’s NCLB mandate, through state departments of education, right into local
school districts. Schools are required to report student achievement data in order for the
state and federal governments to determine if they are making AYP. Because there are
numerous implications for both the school and district based on whether or not AYP has
been met, both administrators and teachers are focused on student assessment and
achievement in ways that have not been previously seen. As a result of this increased
pressure for students and teachers to “perform”, many districts have implemented data
driven decision making practices. However, school districts are spending countless hours
and tens of thousands of dollars in order to train their teachers in how to utilize student
data at the classroom level. One way to combat this problem is to increase the focus on
data, assessment, and differentiated instruction within teacher preparation coursework.
This way, new teachers come out of school with a knowledge base from which to work,
and can strategically use data and assessment information to make informed instructional
decisions that will be most beneficial for the students they have been entrusted to
educate.
The current research base on data driven decision making is not very deep, but is
growing due to the popularity of this “best practice”. Specifically, the research base on
how teacher preparation programs are preparing their preservice teachers to be data-
literate is almost non-existent. This is one of the reasons why I conducted this study. My
study adds to this limited research base, and provides avenues and suggestions for further
research into how teacher educators can better prepare their students to be data literate in
an increasingly data driven world.
References
Athanases, S.Z., & Achinstein, B. (2003). Focusing new teachers on individual and low
performing students: The centrality of formative assessment in the mentor’s
repertoire of practice. Teachers College Record, 105(8), 1486-1520.
Berry, B., Hoke, M., & Hirsch, E. (2004). The search for highly qualified teachers. Phi
Delta Kappan, 85, 684-690.
Brimijoin, K. (2005). Differentiation and high-stakes testing: An oxymoron? Theory
into Practice, 44(3), 254-261.
Campbell, C., Murphy, J.A., & Holt, J.K. (2002). Psychometric analysis of an
assessment literacy instrument: Applicability to preservice teachers. Paper
presented at the Mid-Western Educational Research Association, Columbus, OH.
Chamberlain, M. & Plucker, J. (2008). P-16 education: Where are we going? Where
have we been? Phi Delta Kappan, 89(7), 472-479.
Creswell, J.W. (1998). Qualitative inquiry and research design: Choosing among five
traditions. Thousand Oaks, CA: Sage Publications, Inc.
Cromey, A. (2000). Using student assessment data: What can we learn from schools?
Oak Brook: North Central Regional Educational Laboratory.
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of
state policy evidence. Education Policy Analysis Archives, 8(1).
Darling-Hammond, L. (2004). Standards, accountability, and school reform. Teachers
College Record, 106(6), 1047-1085.
Darling-Hammond, L. (2005). Teaching as a profession: Lessons in teacher preparation
and professional development. Phi Delta Kappan, 87(3), 237-241.
Darling-Hammond, L. & Baratz-Snowden, J. (2005). A Good Teacher in Every
Classroom: Preparing the Highly Qualified Teachers our Children Deserve. San
Francisco, CA: John Wiley & Sons.
Darling-Hammond, L. (2006). Powerful teacher education: Lessons from exemplary
programs. San Francisco, CA: Jossey-Bass Publishers.
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high
performing districts use data to improve instruction for elementary school
students. Los Angeles, CA: Center on Educational Governance, USC Rossier
School of Education.
Dean, C., Lauer, P., & Urquhart, V. (2005). Outstanding teacher education programs:
what do they have that others don’t? Phi Delta Kappan, 87(4), 284-289.
Diamond, J.B., & Cooper, K. (2007). The uses of testing data in urban elementary
schools: Some lessons from Chicago. In P.A. Moss (Ed.), Evidence and decision
making (National Society for the Study of Education Yearbook, Vol. 106, Issue 1,
pp. 241-263). Chicago: National Society for the Study of Education. Distributed
by Blackwell Publishing.
Earl, L. & Katz, S. (2002). Leading schools in a data rich world. In K. Leithwood, P.
Hallinger, G. Furman, P. Gronn, J. MacBeath, B. Mulford, & K. Riley (Eds.),
The second international handbook of educational leadership and administration.
Dordrecht, Netherlands: Kluwer.
George, P.S. (2005). A rationale for differentiating instruction in the regular classroom.
Theory into Practice, 44(3), 185-193.
Glass, G.V. (2008). Alternative teacher certification oversold. Retrieved May 14, 2008,
from http://epicpolicy.org/newsletter/2008/05/alternative-teacher-certification-
oversold
Goertz, M.E. (2001). Redefining government roles in an era of standards-based reform.
Phi Delta Kappan, 83, 62-66.
Graham, P. (2005). Classroom-based assessment: Changing knowledge and practice
through preservice teacher education. Teaching and Teacher Education, 21, 607-
621.
Harris, W.J., Cobb, R.A., Pooler, A.E., & Perry, C.M. (2008). Implications of P-16
education for teacher education. Phi Delta Kappan, 89(7), 493-496.
Harvey, J. (2003). The matrix reloaded: The internal contradictions of no child left
behind are creating a nightmare. Educational Leadership, 61(3), 18-21.
Heritage, M. & Yeagley, R. (2005). Data use and school improvement: Challenges and
prospects. In J.L. Herman & E. Haertel (Eds.), Uses and misuses of data for
educational accountability and improvement (National Society for the Study of
Education Yearbook, Vol. 104, Issue 2, pp. 320-339). Chicago: National Society
for the Study of Education. Distributed by Blackwell Publishing.
Hess, F.M. (2002). Advocacy in the guise of research: The Laczko-Kerr-Berliner study
of teacher certification. Bi-Monthly Bulletin: Progressive Policy Institute 21st-
Century Schools Project.
Hoff, D.J. (2006). Delving into data. Education Week, 25(35), 12-18.
Ingram, D., Louis, K.S., & Schroeder, R.G. (2004). Accountability policies and teacher
decision-making: Barriers to the use of data to improve practice. Teachers
College Record, 106(6), 1258-1287.
Kaplan, L.S., & Owings, W.A. (2003). The politics of teacher quality. Phi Delta
Kappan, 84, 687-692.
Kerr, K.A., Marsh, J.A., Ikemoto, G.S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons
from three urban districts. American Journal of Education, 112(4), 496-520.
Linn, R.L. (2003). Accountability: Responsibility and reasonable expectations.
Educational Researcher, 32(7), 3-13.
Linn, R.L., Baker, E.L., & Betebenner, D.W. (2002). Accountability systems:
Implications of requirements of the no child left behind act of 2001. Educational
Researcher, 31, 3-16.
Mandinach, E.B., Honey, M., & Light, D. (2006). A theoretical framework for data-
driven decision making. Paper presented at the Annual Meeting of AERA, San
Francisco, CA.
Marsh, J.A., Payne, J.F., & Hamilton, L.S. (2006). Making sense of data-driven decision
making in education. Retrieved January 30, 2008, from
http://www.rand.org/pubs/occasional_papers/OP170/
McTighe, J. & Brown, J.L. (2005). Differentiated instruction and educational standards:
Is détente possible? Theory into Practice, 44(3), 234-244.
Merriam, S.B. (1998). Qualitative research and case study applications in education
(Rev. ed.). San Francisco, CA: Jossey-Bass Publishers.
Ormrod, J.E. (2006). Educational psychology: Developing learners (5th ed.). Upper
Saddle River, NJ: Pearson Merrill Prentice Hall.
Parkay, F.W., & Stanford, B.H. (2007). Becoming a teacher (7th ed.). Boston:
Pearson and Allyn & Bacon.
Patton, M.Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand
Oaks, CA: Sage Publications, Inc.
Popham, W.J. (1999). Why standardized tests don’t measure educational quality.
Educational Leadership, 56(6), 8-15.
Popham, W.J. (2003). The seductive allure of data. Educational Leadership, 60(5), 48-
51.
Porter, A., & Chester, M. (2001). Building a high-quality assessment and accountability
program: The Philadelphia example. Paper presented at a Brookings Institution
Conference. Washington, D.C.
Russell, T., McPherson, S., & Martin, A.K. (2001). Coherence and collaboration in
teacher education reform. Canadian Journal of Education, 26(1), 37-55.
Sato, M., Wei, R.C., & Darling-Hammond, L. (2008). Improving teachers’ assessment
practices through professional development: The case for national board
certification. American Educational Research Journal, 45(3), 669-700.
Schmoker, M. & Marzano, R.J. (1999). Realizing the promise of standards-based
education. Educational Leadership, 56(6), 17-21.
Spillane, J., & Miele, D. (2007). Evidence in practice: A framing of the terrain. In P.A.
Moss (Ed.), Evidence and decision making (National Society for the Study of
Education Yearbook, Vol. 106, Issue 1, pp. 46-73). Chicago: National Society for
the Study of Education. Distributed by Blackwell Publishing.
Stiggins, R.J. (1999). Evaluating classroom assessment training in teacher education
programs. Educational Measurement: Issues and Practice, 18(1), 23-27.
Stiggins, R.J. (2002). Assessment crisis: The absence of assessment for learning. Phi
Delta Kappan, 83(10), 758-765.
Suskind, D.C. (2007). Going public: NCLB and literacy practices in teacher education.
Language Arts, 85(5), 450-455.
Tomlinson, C. (1999). The differentiated classroom: Responding to the needs of all
learners. Alexandria, VA: Association for Supervision and Curriculum
Development.
Volante, L., & Fazio, X. (2007). Exploring teacher candidates’ assessment literacy:
Implications for teacher education reform and professional development.
Canadian Journal of Education, 30, 749-770.
Wise, S.L., Lukin, L.E., & Roos, L.L. (1991). Teacher beliefs about training in testing
and measurement. Journal of Teacher Education, 42(1), 37-42.
Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team
norms. American Journal of Education, 112, 521-547.
Zeichner, K. (2006). Reflections of a university-based teacher educator on the future of
college- and university-based teacher education. Journal of Teacher Education,
57(3), 326-340.
Appendix A
Teacher Education Faculty Interview Protocol
Participant’s Name _____________________________ Date __________________
Position __________________________________________________
[Introduction: I will explain the study, who I am, and the purpose of the study. I will
explain that the interview will be taped, and that their responses are strictly confidential.
I will also let them know that if they would like to say something off-tape, I can stop the
recorder during that time period. I will tell them that the interview will last for
approximately one hour, and then I will ask if they have any specific questions.]
I. Background—Laying the Foundation
A. Please tell me about your history as a teacher education faculty member.
(probes: how long have you been here; what are your research interests)
B. What role do you play in the teacher education program? (probes: clinical
faculty; researcher; course writer; committees)
C. What courses do you teach in the program?
II. Data Literacy
A. General Program
1. What is the program doing to promote data literacy? (research question
1)
2. What should the program do differently (or change)? (research
question 1)
B. Coursework
1. How do you promote data literacy? (research question 1)
2. What specifically do you do in your courses to promote data literacy?
(research question 1)
3. What specific data skills do the students leave your course with?
(research question 1)
4. How do you know they have mastered those skills? (research question
1)
III. Assessment Literacy
A. General Program
1. What is the program doing to promote assessment literacy in its
candidates? (research question 2)
B. Coursework
1. How do you promote assessment literacy? (research question 2)
2. What specifically do you do in your courses to promote assessment
literacy? (research question 2)
3. How do you know if your students are assessment-literate? (research
question 2)
4. What types of assessment skills should your students leave your
courses with? (research question 2)
IV. Differentiated Instruction
1. What do you teach your students regarding differentiated instruction?
(research question 3)
2. How do you prepare your students to use data to differentiate
instruction? (research question 3)
Appendix B
Preservice Teacher Focus Group Protocol
Participant’s Name __________________________________Date _________________
[Introduction: I will explain the study, who I am, and the purpose of the study. I will
explain that the session will be taped, and that their responses are strictly confidential. I
will also let them know that if they would like to say something off-tape, I can stop the
recorder during that time period. I will tell them that the interview will last for
approximately one hour, and then I will ask if they have any specific questions.]
I. Background—Laying the Foundation
A. Please tell me about your experience in this teacher education program.
(probes: what year are you in; what courses have you taken)
B. What is your goal upon completion of this program? (probes: teaching full
time; pursue graduate degree; travel; pursue a job in another area/profession)
II. Data Literacy
A. General
1. How would you define data literacy? (research question 2)
B. Coursework
1. What specific data skills have you obtained in your coursework? (research
question 2)
2. How comfortable do you feel using data? (probes: in coursework; in a
real-life classroom setting) (research question 2)
III. Assessment Literacy
A. General
1. How would you define assessment literacy? (research question 2)
B. Coursework
1. What specific assessment skills have you learned in your coursework?
(research question 2)
2. How comfortable do you feel creating, using, and interpreting
assessments? (probes: in coursework; in a real-life classroom setting)
(research question 2)
IV. Differentiated Instruction
1. How would you define differentiated instruction? (research question 3)
2. What skills have you learned in your coursework related to differentiated
instruction? (research question 3)
3. How comfortable do you feel using data and assessment information to
differentiate instruction? (probe: real-life classroom setting) (research
question 3)
V. Conclusion
1. What skills, if any, do you feel you are lacking in data and assessment
literacy? Differentiated instruction? (research questions 2 and 3)
2. How confident are you in your ability to use the skills you have been
taught? (probes: very, somewhat, not at all) (research questions 2 and 3)
Appendix C
Preservice Teacher Online Survey Protocol
I. Background Information:
Please answer each question below to the best of your ability.
A. Which professors have you had thus far in the program?
B. How long have you been in the teacher preparation program?
C. What degree are you pursuing?
D. What teaching courses have you already completed? (you can use the actual
title or the course number)
E. What is your goal upon completion of this program? Will you pursue a full-
time teaching position? Travel? Pursue a job in another profession?
F. What type of credential are you pursuing? Multiple Subject or Single Subject.
II. Data Literacy
Please answer each question below regarding data literacy as specifically as
possible. If you do not know an answer, please state that in the response box.
For the purpose of this study, data is defined as “factual information (grades,
test scores, attendance rates, graduation rates, SES, etc.) used in determining
student progress or achievement in the classroom.”
A. How would you define “data literacy”? If you can’t define it, please state
that.
B. What specific data skills have you obtained in your coursework? Provide as
many details as possible.
C. How comfortable do you feel using data in your coursework/class projects?
Be as detailed as you can.
D. How comfortable do you feel using data in a real-life classroom setting? (i.e.
as a student teacher and/or in your own classroom). Be as specific as you can.
III. Assessment Literacy
Please answer each question below regarding assessment. If you do not
know an answer, please state that in the response box. For the purpose of
this study, assessment is defined as “Measuring student knowledge, progress,
or improvement as related to classroom instruction, content, and/or standards
through tests, projects, homework, teacher observation, portfolios, and other
similar means.”
A. How would you define assessment literacy? Be as specific/detailed as you
can. If you can’t define it, please state that.
B. What specific assessment skills have you learned in your coursework? Be as
detailed as possible.
C. How comfortable do you feel creating assessments for a classroom setting?
Be as specific/detailed as possible.
D. How comfortable do you feel using assessments in a classroom setting? Give
details, if possible.
E. How comfortable do you feel interpreting assessment results in a classroom
setting? Be as specific/detailed as possible.
IV. Differentiated Instruction, or “Differentiation”
Please answer each question below concerning differentiated instruction, or
“differentiation”. If you do not know an answer to a question, please state
that in your response.
A. How would you define “differentiated instruction” or “differentiation”? If
you can’t, please state that here.
B. What skills have you learned in your coursework related to “differentiated
instruction”, or “differentiation”? Be as specific/detailed as possible.
C. How comfortable do you feel using data and assessment information to
“differentiate” instruction? Be as specific/detailed as possible.
V. Program Assessment
Please respond to the following questions regarding your teacher education
program and its ability to teach you how to use data and assessment
information to differentiate instruction.
A. What areas do you think your program needs to improve on in regards to
data, assessment, and differentiated instruction? Be specific for each area.
B. What skills, if any, do you feel you are lacking in data and assessment literacy
and differentiated instruction? Be as specific as you can.
C. How confident are you in your ability to use the skills you’ve been taught?
Be specific.
Appendix D
Codes for the Data
I. Faculty
A. Data Literacy
1. Data literacy promotion—program
2. Changes in program to promote data literacy
3. Data literacy—coursework
4. Data skills leaving course
5. Data skills mastery
B. Assessment Literacy
1. Assessment literacy—program
2. Assessment literacy—coursework
3. Assessment skills students leave course with
4. Assessment literate students—how do you know?
5. TPA
6. PACT
C. Data to Differentiate Instruction
1. How teach differentiated instruction
2. Prepare students to use data to differentiate instruction
II. Students
A. Data Literacy
1. Define data literacy
2. Data skills obtained in coursework
3. Comfort level—data use in coursework
4. Comfort level—data use in classroom
B. Assessment Literacy
1. Define assessment literacy
2. Assessment skills learned in coursework
3. Comfort level—creating assessments
4. Comfort level—using assessments
5. Comfort level—interpreting assessments
C. Differentiated Instruction
1. Define differentiated instruction
2. Skills in coursework on differentiation
3. Comfort level—data and assessment to differentiate
D. Program Improvement
1. Skills lacking in data, assessment, and differentiated instruction
2. Program improvement in data, assessment, and differentiated instruction
Abstract
Accountability and standards-based reform are buzz words in educational settings today. Much of this is due to the passage of the No Child Left Behind Act of 2001. As a result of this increased accountability, school districts, administrators, and teachers are utilizing data to drive instructional decisions and practices. This focus on data driven decision making has had a far-reaching impact, from the top level of the federal government who essentially mandated data-use, down to the classroom where students learn every day.