ELEMENTARY ADMINISTRATORS AND TEACHERS’
PERCEPTIONS OF THE TEACHER EVALUATION PROCESS
IN CALIFORNIA’S PUBLIC SCHOOLS
by
Jon David Sand
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2005
Copyright 2005 Jon David Sand
University of Southern California
Rossier School of Education
Los Angeles, California 90089-0031
This dissertation, written by
Jon David Sand
under the direction of his Dissertation Committee,
and approved by all members of the Committee, has
been presented to and accepted by the Faculty of the
Rossier School of Education in partial fulfillment of the
requirements for the degree of
Doctor of Education

Karen Symms Gallagher, Ph.D., Dean

Committee
Dennis Hocevar, Ph.D., Chairperson
Lawrence O. Picus, Ph.D.
Jerry Gross, Ph.D.
DEDICATION
This dissertation is dedicated to my wife, Noreen Elizabeth Johnson Sand,
and my children Tiffany Elizabeth, Jan-Erik, and Heather Noelle Sand. My
wife’s support throughout this process has been instrumental in pursuing my love
of learning and applying it as an educational professional. She deservedly shares
in this achievement. My daughter Tiffany has taught me the importance of hard
work and perseverance balanced with compassion and a sense of humor. My son
Jan-Erik (USMC) has reminded me of the value of setting high standards and
blazing new frontiers. My daughter Heather has kept me looking at data and
circumstances with a critical eye while looking for innovative ways to problem
solve. In quiet moments and reflection, serendipitous new knowledge is formed.
My family has been supportive and patient throughout my graduate endeavors.
Finally, I dedicate this dissertation to my parents, Leif Ivar and Anna
Sparring Matheson Sand. My father’s work habits and experiences broadened my
appreciation for education and international affairs. My mother’s understanding,
unconditional love, and commitment to higher education kept me on the path in
this journey of life-long learning.
ACKNOWLEDGEMENTS
I would like to express deep appreciation to the many individuals who
have provided support and assistance during my doctoral studies at the University
of Southern California. USC represents much more than Trojan pride. It
represents a commitment to excellence and the importance of helping others along
the way as we seek to make our positive mark on the landscape of society.
First, much appreciation and respect are extended to Dr. Dennis Hocevar,
my dissertation committee chair. Dr. Hocevar has demonstrated his vast
knowledge of and commitment to statistics, the importance of meaningful inquiry,
and the mentoring of graduate students. His heart and humor challenged us along
the way. His disdain for bureaucracy was equally appreciated. Much appreciation is also
extended to Dr. Lawrence O. Picus, also on the faculty of the Rossier School of
Education at the University of Southern California. Committee member Dr. Picus
demonstrated the value and importance of critical analysis of educational issues,
the development of radical solutions, if necessary, and their application to local
and national settings. His humor was equally appreciated. In addition, I wish to
extend respect and appreciation to Dr. Jerry Gross who served as the third
member of my dissertation committee. He encouraged me to pursue my doctorate
at USC, and has been a supportive presence for many years - as my
superintendent, colleague, and friend.
Many thanks are extended to members of the USC faculty and staff for
their professionalism, leadership, and assistance over the last three years. Dr. Bob
Ferris, advisor - the role of the Superintendent, Dr. Robert Rueda - the
importance of critical academic inquiry and scholarship, Dr. David Marsh - my
graduate advisor, Dr. Carol C. Wilson - career and “life” counselor, and many
other faculty members whose support and guidance have enriched my eclectic
perspectives, also deserve much praise. Mrs. Mary Orduno, Mrs. Lisa Galvan,
and most notably, Ms. Debbie Chang deserve much credit and praise for their
important contributions, support, and assistance through the quagmire of graduate
studies. A special thanks and heartfelt acknowledgement goes out to Dr. Terry
Deal for his candor, brilliance, creativity, sensitivity to all that is important, and
willingness to challenge the status quo. Terry deserves his accolades and
demonstrates daily why he is considered among the Who’s Who in Business and
Educational arenas. He also grows a terrific white wine!
Three individuals have had a considerable impact on my doctoral studies.
Dr. (and Major) Randy Wormmeester (USMC, Ret.) has been a friend and
classmate for three years. I have appreciated his candor, intellect, and humor
along the way. He was instrumental in supporting me across the finish line.
Randy not only makes you proud to be an American, but also demonstrates why
academic practitioners have a responsibility to challenge complacency and do
what is right. Dr. Wayne Diulio, classmate, has also demonstrated all that is right
in public education. A site administrator, Wayne brought tremendous insights
and aromatic cigars to our many class and dissertation discussions. His
willingness to go above and beyond has been much appreciated. Dr. Shelley
Lang deserves much praise and credit for my success in this doctoral endeavor.
She epitomizes the standard of excellence in all that she does. She was a
supportive influence - encouraging me to get the most out of this USC program.
She was a profound influence and model for others during the reformation of the
Rossier School of Education’s doctoral programs. Her knowledge of, and
experience in, instructional improvement is much respected.
Completion of this dissertation also leads to the mention of Dr. Jody
Dunlap, Assistant Superintendent of Personnel with the Conejo Valley Unified
School District. Her nudging, encouragement, and professionalism led me to
enter this endeavor. Dr. Bob Fraisse, Superintendent of the CVUSD, has been
very supportive throughout this experience. I thank him and respect him
immensely. I equally admire his leadership and his model of educational leadership.
Finally, I wish to thank Mrs. Linda Faverty, the Director of Elementary Education
for the CVUSD. Linda has been a friend, mentor, and supportive influence in my
professional career and educational endeavor.
TABLE OF CONTENTS
DEDICATION ..............................................................................................ii
ACKNOWLEDGEMENTS........................................................................................iii
LIST OF TABLES......................................................................................................ix
LIST OF FIGURES..................................................................................................xiii
ABSTRACT.............................................................................................................. xiv
CHAPTER I 1
THE PROBLEM............................................................................................... 1
Introduction........................................................................................................1
Statement and Background of the Problem.................................................... 2
Purpose of the Study......................................................................................... 7
Significance of the Study................................................................................. 8
Research Questions........................................................................................ 10
Methodology....................................................................................................12
Assumptions....................................................................................................14
Limitations.......................................................................................................14
Delimitations....................................................................................................16
Organization of the Study...............................................................................16
Statement of Intent..........................................................................................17
Definition of Terms 18
CHAPTER II .........................................................................................................23
REVIEW OF THE LITERATURE...............................................................23
Introduction.................................................................................................... 23
History of Teacher Evaluation.......................................................................28
Progression Over Time - Past to Present.........................................32
Current Research on Evaluation....................................................... 34
Models of Evaluation......................................................................... 37
School Accountability and Governance in Teacher Evaluation..................43
Collective Bargaining........................................................................47
Administrative Practices and Teacher Evaluation.................... 49
The Role of the Principal as Evaluator.............................................51
Evaluation and Teacher Performance.................. 53
CHAPTER III 58
RESEARCH DESIGN AND METHODOLOGY....................................... 58
Introduction.....................................................................................................58
Problem Statements........................................................................................59
Propositions....................................................................................................59
Conceptual Model............................................................................................61
Research Questions Restated.........................................................................63
Design of Study..............................................................................................64
Assumptions....................................................................................................68
Limitations............................................................................................ 68
Delimitations...................................................................................................70
Instrumentation...............................................................................................70
Validity and Reliability..................................................................................72
Data Collection...............................................................................................73
Analysis of Data.............................................................................................73
CHAPTER IV ............................................................................................................. 75
FINDINGS...................................................................................................... 75
Introduction..................................................................................................... 75
Distribution Results........................................................................................76
Internal Reliability..........................................................................................77
Reliability Analysis........................................................................................79
Data Disaggregation....................................................................................... 81
Descriptive Statistics......................................................................................83
Responses to Research Questions - Analysis of the Elementary
Data................................. 86
Analysis of the K-12 Data: Comparison of the Means of Different
School Levels................................................................................................112
Qualitative Analysis of Item 36: Comments.............................................. 119
Evaluation Process............................................................................119
Collective Bargaining...................................................................... 122
Administrative Training Issues....................................................... 125
Impact on Teacher Practices............................................................ 128
Beginning and Experienced Teachers.............................................130
Preference for Alternative Evaluation Methods............................ 130
Use of Student Achievement........................................................... 131
Use of Student Input........................................................................ 131
Value Added Measures............................................................ 132
Summary of Findings - Elementary Quantitative Data.............................132
Summary of Findings - K-12 Quantitative Data.............................................134
CHAPTER V...................................................................................... 136
CONCLUSIONS, RECOMMENDATIONS, and IMPLICATIONS.........................136
Introduction..................................................................................... 136
Purpose of the Study.....................................................................................137
Literature.......................................................................................................138
Research Questions...................................................................................... 141
Sample and Methodology............................................................................ .142
Selected Findings..........................................................................................144
Summary of Findings - Elementary Quantitative Data, Part 1.................144
Responses to Research Questions............................. 144
Summary of Findings - Elementary Quantitative Data, Part II................145
Summary of Findings - K-12 Quantitative Data....................................... 147
Summary of Findings - K-12 Qualitative Data..........................................150
Conclusions............... 151
Recommendations.........................................................................................153
Implications for Further Research...............................................................154
REFERENCES...........................................................................................................156
APPENDICES........................................................................................................... 163
A. Survey Questionnaire........................................................................ 163
B. Survey Question # 36 Respondents’ Comments............................ 171
LIST OF TABLES
Table Page
1. Descriptive Statistics for the First Half of the Data....................................... 78
2. Descriptive Statistics for the Second Half of the Data................................... 79
3. Reliability Statistics - Dependent Variable #1: Perception of
Adherence to Standards..................................................................................80
4. Item-Total Statistics - Dependent Variable #1: Perception of
Adherence to Standards..................................................................................80
5. Reliability Statistics - Dependent Variable #2: Degree of
Satisfaction.....................................................................................................81
6. Item-Total Statistics - Dependent Variable #2: Degree of
Satisfaction.......................... 81
7. Descriptive Statistics: Teacher Evaluation Responses - Frequency of
Elementary Survey Responses...................................................................... 83
8. The Mean and Standard Deviation Scores for the Responses to
Survey Statements 19 - 35............................................................................. 84
9. Descriptives - Administrators and Teachers - at or above an
API of 800....................................................................................................... 86
10. ANOVA - Administrators and Teachers - at or above an
API of 800....................................................................................................... 87
11. Multiple Comparisons - Administrators and Teachers - at or above an
API of 800....................................................................................................... 87
12. Descriptives - Administrators and Teachers - below an API of 800..........89
13. ANOVA - Administrators and Teachers - below an API of 800................89
14. Multiple Comparisons - Administrators and Teachers - below an
API of 800........................................................................................................90
15. Descriptives - Administrators and Teachers - Student Enrollment
above 10,000...................................................................................................92
16. ANOVA - Administrators and Teachers - Student Enrollment
above 10,000...................................................................................................92
17. Multiple Comparisons - Administrators and Teachers - Student
Enrollment above 10,000............................................................................... 93
18. Descriptives - Administrators and Teachers - Student Enrollment
below 10,000................................................................................................... 94
19. ANOVA - Administrators and Teachers - Student Enrollment below
10,000...............................................................................................................95
20. Multiple Comparisons - Administrators and Teachers - Student
Enrollment below 10,000............................................................................... 95
21. Descriptives - Administrators and Teachers - with more than
five years of experience..................................................................................97
22. ANOVA - Administrators and Teachers - with more than
five years of experience..................................................................................97
23. Multiple Comparisons - Administrators and Teachers - with
more than five years of experience................................................................98
24. Descriptives - Administrators and Teachers - with five or less
years of experience......................................................................................... 99
25. ANOVA - Administrators and Teachers - with five or less years of
experience...................................................................................................... 100
26. Multiple Comparisons - Administrators and Teachers - with five or
less years of experience................................................................................ 100
27. Factorial ANOVA: Between-Subjects Factors - Elementary:
Independent Variables.................................................................................. 102
28. Factorial ANOVA: Between-Subjects Effects/Dependent Variable:
Standards........................................................................................................102
29. Factorial ANOVA: Position - Dependent Variable:
Standards.................................................................. 103
30. Factorial ANOVA: Years of Experience: Teacher - Dependent
Variable: Standards......................................................................................103
31. Factorial ANOVA: Years of Experience: Administrator - Dependent
Variable: Standards.......................................................................................103
32. Factorial ANOVA: District Size - Dependent Variable: Standards.........104
33. Factorial ANOVA: School’s API above 800 - Dependent Variable:
Standards........................................................................................................104
34. Factorial ANOVA: Between-Subjects Factors - Independent
Variables........................................................................................................ 106
35. Factorial ANOVA: Between-Subjects Effects/Dependent Variable:
Satisfaction ............................................................................ 106
36. Factorial ANOVA: Position - Dependent Variable: Satisfaction.............107
37. Factorial ANOVA: Years of Experience: Teacher - Dependent
Variable: Satisfaction................................................................................... 107
38. Factorial ANOVA: Years of Experience: Administrator - Dependent
Variable: Satisfaction...................................................................................107
39. Factorial ANOVA: District Size - Dependent Variable:
Satisfaction....................................................................................................108
40. Factorial ANOVA: School’s API above 800 - Dependent Variable:
Satisfaction.................................................................................................... 108
41. Analysis of the K-12 Data: Comparison of the Means of Different
School Levels................................................................................................114
42. K-12 Mean Responses to Independent Variables: Standards
Descriptives....................................................................................................115
43. K-12 Mean Responses to Independent Variables: Standards
ANOVA.................................................... 115
44. K-12 Mean Responses to Independent Variables: Standards
Multiple Comparisons.................................................................................116
45. K-12 Mean Responses to Independent Variables: Satisfaction
Descriptives.................................................................................................. .117
46. K-12 Mean Responses to Independent Variables: Satisfaction
ANOVA.........................................................................................................117
47. K-12 Mean Responses to Independent Variables: Satisfaction
Multiple Comparisons..................................................................................117
LIST OF FIGURES
1. The Teacher Evaluation Process: Relationships and Links between
Teacher Evaluation and Overall School Improvement...............................61
2. Elementary School Level Histogram - Responses to Dependent
Variables: Standards.................................................................................... 110
3. Elementary School Level Histogram - Responses to Dependent
Variables: Satisfaction................................................................................ 111
ABSTRACT
This quantitative study examined administrators and teachers’ perceptions
of the teacher evaluation process in California’s elementary public schools. It
investigated problematic areas - standards and practices along with measures of
satisfaction, the influence of Collective Bargaining, and the extent to which
current evaluation practices impact classroom instruction and student
achievement. Specific survey instrument questions framing standards for the
evaluation of educational personnel and indicators of satisfaction were developed.
Invitations to complete an online survey questionnaire were distributed to all K-
12 school districts throughout California. One thousand and twenty-three
responses were received from K-12 site principals, assistant principals, deans,
teachers, and other administrators, and were disaggregated into elementary, middle,
and high school levels. This study evaluated the elementary findings and
compared them to the K-12 totals.
Descriptive statistics, reliability analyses, ANOVAs, and multiple
comparisons provided meaningful data for responses to dependent variables and
specific survey items. The independent variables of district size, Academic
Performance Index (API) scores, and years of experience framed twelve research
questions. Findings from one open-ended response item indicated a considerable
range of opinions regarding the teacher evaluation process, the impact of Collective
Bargaining, administrative training issues, the impact of evaluations on teacher
practices, the need for differentiated evaluation instruments for beginning and
experienced teachers, a preference for alternative evaluation methods, the use of
student achievement and input, and perceptions of value-added measures.
A review of current literature indicated that teacher evaluation has the
potential of greatly influencing classroom practices and student achievement.
This quantitative study documented California’s public
elementary administrators and teachers’ perceptions of the teacher evaluation
process. This study noted perceptions of the timeliness of feedback, the adequacy
of evaluator training and resources, and the degree to which evaluations were linked
to opportunities for professional development and other local support services,
among others. Specific recommendations were developed, while implications for
further research were established. This study concluded that this considerable
quantitative data could provide the basis for improving the efficacy of the teacher
evaluation process in California’s public elementary schools with implications for
K-12 public schools, as well.
CHAPTER 1
THE PROBLEM
Introduction
Evaluation of public K-12 teachers in the State of California strives to
enhance classroom practices and facilitate professional development while
concomitantly satisfying state and local legislative and policy requirements. In this
era of high-stakes testing and calls for increased accountability of public
education for both student achievement and the use of available resources, the
process of evaluation of certificated personnel must be examined in terms of
overall efficacy, and its role in maximizing the achievement of desired goals.
Teacher evaluation has the potential of greatly influencing classroom
practices and student achievement. Teacher evaluation, when done well, has a
significant influence on a school’s culture (Danielson, 2002). Simply, culture
refers to the way things are done around here (Bolman & Deal, 2003; Deal &
Kennedy, 1982, 2000). Much can be said for the importance of organizations
working towards shared values and the success of the common good. Effective
evaluations provide stakeholders assurances of high standards for teacher
performance while promoting professional learning. In a particular school, a
culture of mutual respect and shared high expectations for students and staff can
yield positive outcomes and enhanced success (Danielson, 2002).
Despite considerable research, past practices, and state and local
mandates addressing the process and functions of evaluations, oftentimes
individual teacher evaluations serve merely to fulfill a requirement and are then simply
filed away, never to be seen or considered again. Much time and energy on the
part of evaluators, teachers, and occasionally support staff is seemingly wasted.
Unless careful consideration is given to the perceptions of principals and other
site evaluators and teachers about evaluation and the evaluation process, time
and resources might be better allocated to other important school functions and
personnel. To date, studies of such perceptions among evaluators and teachers in
California’s K-12 public schools have been limited to qualitative case studies at individual
sites, with teacher evaluation given little merit. Literature suggests that some quantitative
examination of evaluation practices has occurred outside of California. Such
efforts still remain limited in scope and do not reflect current trends in school
governance, accountability, and standards-based instructional practices.
Statement and Background of the Problem
Public school districts have often stated in governing board policies that
regular and comprehensive teacher evaluations foster instructional
improvements, raise students’ levels of achievement, hold staff accountable,
recognize exemplary practices, assist in determining deficits in performance,
initiate remedial support, or help determine future employment.
Nonetheless, teacher evaluation as a process and means to enhance classroom
practices has been problematic (Heneman & Milanowski, 2003). Such
difficulties include a lack of common beliefs about good teaching, a lack of
common understanding regarding evaluation design and procedures, concerns
over bias and a lack of trust, and differing degrees of apathy (Danielson &
McGreal, 2000; Johnson, 1999; Peterson, 2000).
Current evaluation practices have little, if any, influence on actual
classroom practices. Rather than supporting professional growth and
accountability mandates, evaluation functions have become increasingly viewed
as perfunctory exercises that must be endured and as meaningless, time-
consuming experiences (Millman & Darling-Hammond, 1989; Sando, 1995;
Stiggins, 1988; Stronge & Tucker, 2003). In 1988, the Joint Committee on
Standards for Educational Evaluation identified several criticisms of personnel
evaluation practices. Among them, research and learned opinions corroborate
current views that teacher evaluation has been of little value (Stronge & Tucker,
2003; Stufflebeam, 1988).
Researchers suggest that effective teacher evaluation helps teachers
improve instructional practices, and builds trust, openness, and professionalism.
Teacher evaluation is also enhanced when the perceptions of teachers and
principals are similar in terms of purposes, methods, and efficacy (Bastarache,
2000; Millman & Darling-Hammond, 1989). Still other educators consider
teacher evaluation as hindered by contractual provisions and saddled by a long
history of distrust between teachers and their evaluators (Bastarache, 2000).
Without a quantitative analysis of current principals and teachers’ perceptions of
teacher evaluation in California’s K-12 public schools, the merits and impetus
for meaningful change remain lacking. It is hoped that such a study will foster
open dialogue, build trust, and encourage reflective practices. Evaluation that
leads to professional growth requires teachers to look honestly at their
weaknesses and strengths (Howard & McColskey, 2001). Understanding the
perceptions of evaluators and teachers of the evaluation process and its impact
on classroom practices is therefore significant.
The teacher evaluation process has undergone an evolution of sorts.
From solely a determiner of employment to one linked to professional standards,
it has encompassed many models and instruments at both state and local levels.
Regardless of the paradigm, inherent problems in teacher evaluation must be
addressed (Boyd, 1989; Frase & Streshly, 1994). Some of the noted weaknesses
include:
• Evaluation ratings may be biased or represent subjective viewpoints
• Evaluations do not always provide meaningful feedback for both
principals and teachers
• Professional growth plans, when used, are not necessarily linked to
evaluations
• Training supports (for both evaluators and teachers) have either not
occurred, or have been viewed as inadequate.
As teachers are encouraged to look and reflect honestly upon their
strengths and weaknesses relative to their own classroom practices, they must do
so without a sense of jeopardizing their employment status, or the threat of an
administrative transfer to another school site. It has been said that because
principals are the front-line implementers of evaluation policy, their beliefs
about barriers (real or perceived) are likely to influence their actions (Painter,
2000). Administrators face conflicting roles, instances of overwhelming
demands on time, behind the scenes power struggles, crippling limits to their
own behavior, and feelings of frustration (Peterson, 2000). Such challenges are
only exacerbated by supervision and evaluation practices that neither the
principal nor the teachers value or respect. This includes various perspectives
on the impact of time relative to both administrators and classroom teachers.
Establishing and supporting a school culture of open dialogue in which all
stakeholders are partners in life-long learning can precipitate meaningful
professional growth and collaboration. The perceptions of principals and
teachers must be considered for tangible guidance, genuine participation, and
data validity.
Collective Bargaining influences teacher evaluation in many ways. In
California, the school district is the teacher’s official employer. As such,
teachers (and other school employees) have a right to union representation.
Union contracts are negotiated through collective bargaining at least once every
three years and cover important items such as salaries, benefits, working
conditions, class size, job assignment, evaluation, and grievance procedures
(EdSource, 2004). Collective bargaining agreements differ among school
districts. Such agreements do not prioritize instructional practices and student
achievement. Rather, such agreements focus on the protection of the
employees’ rights. At times, these protections limit or restrict the district’s
ability to effectively follow up on a teacher’s evaluation, particularly when such
evaluations indicate negative performance. Such contracts also hinder employee
termination and limit efforts to enforce remedial interventions and provide
professional development. Some teachers, for reasons discussed later in this
study, are resistant to change, feel threatened by evaluation of any kind, and
express concern about potential demands on their particular workload. While
expressed and implied levels of concern can be addressed through effective
dialogue and staff development, many teachers needing improvement continue
to resist change and hide behind their protective tenure and contracts.
Recent legislation has provided for additional support for beginning and
identified sub-par teachers. Evaluation instruments do not necessarily match
best practices and the respective needs of beginning and experienced teachers.
One size does not fit all (Sando, 1995). Evaluation procedures and protocols
must be differentiated to determine and recognize appropriate skills and
developmental instructional practices best suited for respective teachers.
Differentiated evaluation could also be aligned with appropriate and meaningful
professional development support and services.
Purpose of the Study
The purpose of this study was to survey elementary public school
administrators and teachers in California as to their perceptions of the teacher
evaluation process. This study also examined their perceptions regarding the
impact of teacher evaluation on actual classroom practices. Teachers’ voices
relative to emerging teacher evaluation trends have been missing. An increased
understanding of teachers’ perceptions relative to evaluation could provide vital
information on enhanced teaching and learning methodologies, in addition to
apparent weaknesses of newly implemented evaluation processes and systems
(Ovando, 2001). Additionally, this study analyzed differences in the perceptions
of teachers and evaluators based on independent criteria such as years of
experience in each respective role, the school’s academic performance index
(API), school size, and their years of experience, relative to dependent variables.
Such criteria included items addressing the appropriateness of using of students’
test scores in evaluating teachers, the support teachers receive from site and
district administrators, the timeliness of feedback, and opportunities to link
evaluation outcomes to professional development and other support services,
among others.
Significance of the Study
This proposed study has merit and will prove significant in a number of
ways. Never before have schools been pressured to make the most of existing
resources while encouraging, and even demanding, that administrators and teachers
increase student achievement. Adequate Yearly Progress (AYP) targets,
academic performance index (API) scores, revisions to state mandated student
assessments, now referred to as the Standardized Testing and Reporting (STAR)
program, school choice, and ever-increasing scrutiny from the public
collectively place a considerable burden upon California’s educators. Decreases
in public school funding and the restrictive use of categorical resources further
require district and school site personnel to reexamine the status quo. Teacher
evaluations take precious time and resources. Such practices are not exempt
from careful, fair, and worthwhile scrutiny to determine how the process and
practices can be improved, and/or even eliminated. If the general consensus and
accepted opinions that current teacher evaluation practices are ineffective and a
waste of time prove valid through this quantitative survey, legislators and
educators must work together to remedy such practices or re-direct vital
resources where they can be better utilized.
Obtaining the perceptions of school administrators and teachers about
the teacher evaluation process will also assist in the alignment of other human
resource management systems with the district’s evaluation system. Teacher
evaluation should be linked by design and content with recruitment, induction,
and professional development (Heneman & Milanowski, 2003). One can
assume that no single state law, board policy, or school site practice will
instantly resolve current issues and concerns regarding teacher evaluation and its
potential impact on classroom practices. The data collected in this survey will
shed significant light on the status of current perceptions, and provide relevant
input towards serious efforts ultimately to improve student achievement. Data
from this survey will provide voices to teachers and administrators, and within
this context, encourage broader partnerships in working together to meet the
needs of all stakeholders.
Considering the multifaceted challenges facing public school educators
today, stakeholders need to make informed decisions based on sound research
and concrete data. Efforts to improve classroom practices must recognize the
link between improved teaching and school improvement. Performance
improvement can include growth by individuals and groups of teachers; in the
programs and services extended to students, parents and the community; and in
the school’s ability to accomplish its mission (Stronge & Tucker, 2003). Results
of this survey will serve to lend credence to, or redirect, prescriptive efforts
towards teacher evaluation practices in the State of California.
Research Questions
Prior to the development of specific research questions, an extensive
review of current literature was used to identify particular evaluation themes and
issues facing public education in this era of accountability and No Child Left
Behind (NCLB), the 2001 reauthorization of the Elementary and Secondary
Education Act (ESEA) (NCLB, 2004). The following questions emerged (an
illustrative sketch of the corresponding subgroups follows the list):
1. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving at or above an API
of 800?
2. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving at or above an API of
800?
3. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving below an API of
800?
4. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving below an API of 800?
5. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving more than 10,000
students?
6. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving more than 10,000
students?
7. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving 10,000 or less
students?
8. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving 10,000 or less
students?
9. What are the attitudes or perceptions of administrators with more than five
years of experience regarding teacher evaluations in public elementary schools?
10. What are the attitudes or perceptions of teachers with more than five years
of experience regarding teacher evaluations in public elementary schools?
11. What are the attitudes or perceptions of administrators with five years or less
experience regarding teacher evaluations in public elementary schools?
12. What are the attitudes or perceptions of teachers with five years or less
experience regarding teacher evaluations in public elementary schools?
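Read together, these twelve questions cross the respondent’s position (administrator or teacher) with three binary splits: API at or above 800 versus below 800, district enrollment above 10,000 versus 10,000 or less, and more than five years of experience versus five years or less. The sketch below is illustrative only and is not the study’s analysis code; the column names (school_level, position, api_800_plus, district_over_10k, years_over_5) are hypothetical stand-ins for however the survey responses were actually coded.

```python
import pandas as pd

def subgroups(responses: pd.DataFrame):
    """Filter elementary responses into the twelve subgroups implied by
    research questions 1-12: position crossed with API band (RQ 1-4),
    district enrollment (RQ 5-8), and years of experience (RQ 9-12).
    All column names here are hypothetical."""
    elem = responses[responses["school_level"] == "elementary"]
    splits = {
        "api_800_plus": elem["api_800_plus"],            # RQ 1-2 (True) vs. 3-4 (False)
        "district_over_10k": elem["district_over_10k"],  # RQ 5-6 (True) vs. 7-8 (False)
        "years_over_5": elem["years_over_5"],            # RQ 9-10 (True) vs. 11-12 (False)
    }
    groups = {}
    for split_name, flag in splits.items():
        for position in ("administrator", "teacher"):
            for value in (True, False):
                mask = (elem["position"] == position) & (flag == value)
                groups[f"{position} | {split_name} = {value}"] = elem[mask]
    return groups
```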
Methodology
The focus of this study was to determine the perceptions of California’s
public elementary administrators and teachers towards teacher evaluation.
Survey responses were obtained from K-12 teachers and administrators. The
data were subsequently disaggregated into three levels — elementary, middle and
high school. School district superintendents were contacted via U.S. mail or
e-mail. A letter of introduction described this quantitative study and asked willing
superintendents to encourage their site administrators and respective K-12
teachers to log on to a website and complete the survey.
Directions for completing the survey were available on the website along with a
three-page information sheet for non-medical research approved by the
University of Southern California’s Institutional Review Board (IRB). Data were
collected via the online survey service and later extensively analyzed by the
researchers.
Names of the school districts and their superintendents were obtained
from the State of California’s Department of Education website:
http://www.cde.ca.gov/re/sd/index.asp, which was public information. The
survey questionnaire was available from any computer with web access.
Participation in the survey was strictly voluntary and required
approximately five (5) minutes to complete. Completion of the survey was not
tracked, and participants remained anonymous. In fact, respondents were not
asked their name, school district, or mail/e-mail address.
The population for the study included certificated district-level and site
administrators, and teachers throughout the State of California representing
public schools in grades Kindergarten through 12. A minimum participation of
200 subjects was required, representing elementary schools, middle and/or
intermediate schools, and high schools. A convenience sampling (Gall, Gall, &
Borg, 2002) was considered to ensure a minimum of representative participation
and responses. In addition, a stratified sampling was also considered with letters
of introduction randomly distributed to superintendents in each of California’s
counties. In order to adequately represent statewide perspectives and permit
inference that the results might generalize, it was decided to distribute letters of
introduction to all of California’s K-12 superintendents.
A Likert-type scale was used in development of the survey to measure
teachers and administrators’ perceptions of the teacher evaluation process. The
responses were then compared to various independent variables including:
school-size, gender, education level, SES, and their years of experience as a
teacher and/or administrator, among others. Seventeen questions addressed
statements of standards, satisfaction/effectiveness, and issues pertaining to the
teacher evaluation process. An additional item permitted brief comments to be
added to the survey questionnaire. Both aggregated and disaggregated
responses to the survey were analyzed to allow the researchers to examine the
data for their respective grade levels. This study focused on the elementary
school data.
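As a concrete illustration of the comparisons just described, the following sketch (a minimal example, not the study’s actual analysis code) uses pandas and SciPy to compute descriptive statistics by position and a one-way ANOVA comparing administrators and teachers on a composite of the Likert items; the column names (school_level, position, item_19 through item_35) are assumptions, not the survey’s actual field names.

```python
import pandas as pd
from scipy import stats

# Hypothetical field names; survey statements 19-35 were Likert-type items.
LIKERT_ITEMS = [f"item_{i}" for i in range(19, 36)]

def compare_positions(responses: pd.DataFrame) -> None:
    """Descriptive statistics and a one-way ANOVA on a composite rating,
    comparing administrators and teachers (elementary responses only)."""
    elem = responses[responses["school_level"] == "elementary"].copy()
    elem["composite"] = elem[LIKERT_ITEMS].mean(axis=1)

    # Descriptives by position: mean, standard deviation, and count.
    print(elem.groupby("position")["composite"].agg(["mean", "std", "count"]))

    # One-way ANOVA across the position groups on the composite rating.
    samples = [grp["composite"].dropna() for _, grp in elem.groupby("position")]
    f_stat, p_value = stats.f_oneway(*samples)
    print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

Multiple comparisons and the factorial ANOVAs reported in Chapter IV would extend this same pattern to the remaining independent variables.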
Assumptions
This study made the following assumptions:
1. The respondents to the survey represent a generalizable sampling from
the targeted population as a whole.
2. The methods and procedures selected for this study were appropriate for
the subject being studied.
3. The responses to the survey questionnaire provided accurate and
reliable data.
4. The respondents were truthful and sincere in their answers.
5. The researcher’s analysis of the data was accurate and represented the
perceptions and responses of the participants.
6. Study participants have sufficient knowledge of the teacher evaluation
process within their respective school site or school district.
Limitations
The following are limitations (Creswell, 2002) of the study:
1. The respondents to this study needed to have access to a computer with
Internet capability.
2. Some school districts offer year-round education. As such, some
potential respondents may have been off-track, and therefore unaware of
the opportunity to participate in this survey.
3. The study was limited to include only K-12 public school teachers and
administrators.
4. Although sent to all districts, the letter of introduction and the request to
encourage administrators and teachers to participate in this study did not
reach all of California’s public school district superintendents.
5. The data collected in this study was limited to the number of respondents
to the survey.
6. Only certificated K-12 public school administrators and teachers in
California were asked to participate in this study. Therefore, the results
may not be generalizable to similar participants in other schools and
districts outside the State of California. Further, any conclusions drawn
from the data are not necessarily generalizable to private schools here in
California, or elsewhere.
7. The responses to survey questions were anonymous and therefore did
not provide the opportunity to ask follow-up questions or to seek clarification
of any voluntary comment(s) (see survey item #36).
8. Some beginning teachers and novice administrators participating as
respondents may not be able to answer all applicable survey questions
due to either a lack of experience or background knowledge relative to
particular items.
Delimitations
The following are delimitations (Creswell, 2002) of the study:
1. This study was quantitative in design and implementation. Interviews or
other methodologies germane to qualitative studies were not included,
with the exception of Item # 36 - comments.
2. Only certificated K-12 administrators and teachers serving in
California’s public schools were asked to participate.
3. The study required the voluntary respondents to utilize Internet access to
complete the survey questionnaire online.
Organization of the Study
Chapter One of the study included an introduction of the problem, a
statement and the background of the problem, the purpose of the study, the
significance of the study, the articulated research questions to be answered, the
methodology, assumptions, limitations and delimitations, the organization of the
study, a statement of intent, and definitions of terms.
Chapter Two presented a comprehensive review of the literature relevant
to the topic of teacher evaluation. The review of the literature provided a
framework for the study and included both the background and context for the
research problem.
Chapter Three provided a detailed description of the research procedures
that were used in this study. Included in this section is the literature that was
used to guide the design of the study, and support the conceptual framework.
Assumptions, limitations, and delimitations were revisited in this chapter. A
discussion of the instrumentation, data collection, analysis, and the validity and
reliability of procedures and methods was also presented.
Chapter Four presented the findings of the study both in narrative
descriptions of the data and in comprehensive graphic representations, to
facilitate meaningful understanding. Responses from administrators and
teachers were highlighted with the inclusion of comments, which added
perspective and unique insights.
Chapter Five summarized the study and selected findings. Conclusions
from the study were presented. Implications for teacher evaluation practices and
processes along with summary recommendations were provided.
This study concluded with references, appendices presenting the survey
instrument, complete respondents’ comments, and an author’s note. Tables and
figures were noted in the Table of Contents.
Statement of Intent
The intent of the quantitative study was to provide a broader, yet more
accurate understanding of the perceptions of K-12 teachers and administrators in
California’s public schools towards the teacher evaluation process. This study
further focused on the perceptions of elementary level teachers and
administrators. This study considered the role of the collective bargaining
process and its impact on the evaluation process, as well. Finally, the data
collected in this study sought to provide insight(s) regarding the respondents’
views towards the impact of teacher evaluations on classroom instructional
practices.
Definition of Terms
The following terms are defined for the purpose of this study. The
definitions were collected from current literature and from online sources
(EdSource, 2004; Oakley, 1998; Peterson, 2000; Stufflebeam, 1988).
• Accuracy Standards - standards established by the 1988 Joint Committee
on Standards for Educational Evaluation, which require that the obtained
information is technically accurate, and that conclusions are linked
logically to the data (Stufflebeam, 1988).
• BTSA - California’s Beginning Teacher Support and Assessment
program. Teachers participating in the BTSA program are fully
credentialed first or second year teachers and/or teachers with an out of
state credential. BTSA is a two-year induction program based on the
California Standards for the Teaching Profession (CSTP) (CDE, 1999)
and the SB 2042 Induction Standards. The California Formative
Assessment and Support System for Teachers (CFASST) is the
framework for integrating formative assessment activities with teacher
support and mentoring. Teachers participating in BTSA are provided
with a support provider who works with them to complete the two-year
program. These support providers are experienced teachers who are
trained in the CSTP and CFASST. BTSA teachers are expected to
complete all CFASST paperwork, and 48 hours of staff development
over the two-year program.
• Certificated Employees - those employees directly involved in the
educational process, including both instructional and non-instructional
employees such as teachers, administrators, supervisors, and principals.
Per the California Education Code, certificated employees must be
properly credentialed for their specific position. Detailed descriptions
for credentialing requirements may be obtained at
http://www.ctc.ca.gov/, while specifics related to credentialing and the
California Education Code can be found at www.leginfo.ca.gov.
• Elementary Schools - schools representing varied configurations of
grades Kindergarten through six, which may include grades K - 8.
• Differentiation - designing instructional strategies to meet the unique
and diverse needs of all learners; designing evaluation to meet the unique
and diverse needs of all teachers.
• Feasibility Standards - standards established by the 1988 Joint
Committee on Standards for Educational Evaluation which call for
evaluation systems that are as easy to implement as possible, efficient in
their use of time and resources, adequately funded, and viable from a
number of other standpoints (Stufflebeam, 1988).
• Formative Evaluations - assessments used to provide feedback to shape
performances, build new practice, or improve existing practices.
Formative evaluations may occur on multiple occasions.
• High Schools - schools representing varied configurations of grades nine
through twelve. These schools include both comprehensive and
alternative sites.
• Instruction - a teacher’s style and methods of presenting content
knowledge, including the approaches, materials, equipment, and
technology utilized (Oakley, 1998). Effective instruction is viewed as
that which motivates, engages, and challenges each student to meet or
exceed curricular goals and standards.
• Middle Schools - schools representing varied configurations of grades
six through eight.
• PAR - Peer Assistance and Review, a program established in 1999 by
Assembly Bill 1X in the State of California to provide support for
identified experienced teachers in need of additional support with subject
knowledge, and/or classroom practices and performance.
• Propriety Standards - standards established by the 1988 Joint
Committee on Standards for Educational Evaluation which require that
evaluations be conducted legally, ethically, and with due regard for the
welfare of the evaluatee (Stufflebeam, 1988).
• Stull Bill/Act - California State legislation originally enacted in 1971 to
establish a uniform system of evaluation and assessment of the
performance of all certificated personnel within each school district of
the state.
• Summative Evaluations - assessments used to make decisions or
judgments, including whether or not to retain a teacher. Summative
evaluations generally occur at the end of a specific period.
• Teacher Evaluation - the ongoing process of systematic appraisal of
certificated personnel work performance (Booth, 2000). According to
the Education Code, and local board policies, the primary purpose of
teacher evaluation is to improve the educational process and to develop
the highest of professional practices on the part of each employee. The
Education Code Sections 44660-44665 contain the renumbered provisions of the
1971 Stull Act. Specific code sections can be viewed at
www.leginfo.ca.gov.
• Utility Standards - particular standards established by the 1988 Joint
Committee on Standards for Educational Evaluation intended to guide
evaluations so that they will be informative, timely, and influential
(Stufflebeam, 1988).
CHAPTER II
REVIEW OF THE LITERATURE
Introduction
A careful review of the literature regarding teacher evaluation generated
an extensive array of qualitative studies, case studies, and empirical data. Few
quantitative studies have taken place, and of them, only four dealt with elements
of evaluation of certificated personnel in education in California. Each of these
was a doctoral dissertation. The oldest of these was completed in 1995, with the
remaining three published in 2000. One of these focused on alternative methods
or models of teacher evaluation with the population of primarily school district
superintendents, and a small percentage of principals. One study surveyed K-6
special education teachers about their alternative evaluation experiences. The
third study surveyed three elementary schools in the California Central Valley,
asking their teachers nine question items about perceptions towards the fairness
of evaluations. The fourth and final quantitative study actually incorporated a
mixed-methods design. This study included 41 high school site administrators
and their collective 175 teachers. Two-thirds of the study’s population
represented private secular schools.
While findings, sample sizes, and methodologies varied greatly, the
literature identified and discussed in the chapter helped contribute to a
consensus of understanding, in addition to exposing gaps that warrant
additional study. Some of the similarities and variances in findings could be
attributed to unique characteristics and demographics of the populations
(Marzano, 2003). Interestingly, studies completed in the 1970s shared common
threads of findings with research conducted two decades later, into the 1990s.
The implementation of meaningful research, and its implications for
improved learning, has been surprisingly slow. Calls for moving away
from static summative evaluations that focus on determining future employment
have given way to a multitude of formative, professional growth models.
Similar findings and recommendations for differentiated evaluation have been
written for over thirty years, yet actual advances in evaluation practices have
moved along the continuum quite slowly. Evolving from checklists, evaluation
practices have incorporated contemporary professional standards.
According to a considerable percentage of the literature, summative, end-of-the-year
evaluations appear to hold little value for tenured and experienced teachers.
Why then are time and resources allocated to these seemingly ritualistic events?
A review of the literature provided an extensive examination of emerging trends,
exposed gaps in research, and identified the impetus for lasting change.
This review also served to guide researchers in understanding the history
of personnel evaluation in education, the role of federal, state, and local
requirements, broader instructional implications, and the potential impact that it
has had on both the affective and academic elements of a school setting. The
historical review also highlighted the influence of collective bargaining and the
evaluation process upon teacher practices and overall morale.
A review of the literature pertaining to the topic of teacher evaluation
uncovered an extensive array of germane studies and materials. Many of the
selected articles, studies, and highly regarded texts covering teacher evaluation
incorporated states and local school districts’ efforts and reform movements
from around the United States. Very few of these works were related to
the topic and specific to K-12 public schools within the state of California. Of
those researched and written here in California, most were specific to one
aspect or model of the teacher evaluation process, and followed a case study or
qualitative study paradigm.
This chapter was divided into three overarching sections. The first
section began with a review of the history of evaluation of teaching personnel in
education through current research and practices. From its origins, teacher
evaluation has evolved to incorporate a myriad of revisions and changes because
of research-based and localized qualitative endeavors. Models of evaluation
provided insights into the plethora of evaluation practices, protocols, and
perspectives on the topic. Considerable attention was given to the many
evaluation models that have been used, are currently in use, and those that have
been proposed in California’s K-12 public schools.
Section II covered school accountability, including federal, state, and
local endeavors. In recent years, the public has come to believe emphatically
that improvements in educational practices and achievement must include
advancements in the quality of teachers. Such has been the emphasis for
overhauling schoolwide programs, instructional practices, and curricula in each
of the content areas, not just in Reading/Language Arts, Math, and Science
(Johnson, 1997; Millman & Darling-Hammond, 1989). Several states have
enacted legislation to bring about sweeping improvements to teacher training
and certification. The hope was that subsequent changes in teacher evaluation
would manifest professional growth and greater success in instructional
productivity by teachers, enhance student learning, and ultimately improve the
quality of the educational system (Johnson, 1997).
This second section also reviewed governance issues. A thorough
discussion featured particular implementation challenges and concerns, some of
which served to limit the efficacy and value of teacher evaluation efforts. A
review and discussion of Collective Bargaining was included.
The third section of the review of the literature dealt with the
administrative practices of teacher evaluation, including the role of the principal
and other site administrators as evaluator(s). In this section, the relationship
between evaluation processes and teacher performance demanded careful
consideration as did the implications for instructional improvement. Sometimes
overlooked, the impact of evaluation on the school climate and a teacher’s
morale provided much discussion. In revisiting the essential research questions,
readers are reminded that the primary focus of this study was to improve or
facilitate the effective and more efficient use of the teacher evaluation process, if
possible. Despite considerable developments in learning theory and demands
for increased accountability on the part of schools and school districts, education,
in general, has been a people-to-people business. The perceptions of principals
and teachers must be considered if true progress and meaningful improvements
are to be made. To ignore these perceptions is to set sail in uncharted waters
and to squander already limited fiscal and human resources.
To conclude this chapter, a brief summary of findings from the literature
led to a discussion of apparent deficiencies in the extant literature on teacher
evaluation, perceptions of principals and teachers on teacher evaluation, and any
lingering issues relative to quantitative studies regarding K-12 public education
within the State of California. Such implications and insights established the
basis for defining the problem(s) and guided researchers to identify the
appropriate methodology to complete this study.
Teacher evaluation will be no more effective than the extent to which
teachers will support it (Millman & Darling-Hammond, 1989). In like manner,
teacher evaluation is of little value if administrators are not adequately trained,
do not follow a set of standards for implementation, and do not demonstrate a sincere
commitment to support teachers in their professional duties and disciplines.
This quantitative study determined K-12 teachers’ and site administrators’
perceptions of the teacher evaluation process and its impact on classroom
practices.
History of Teacher Evaluation
The classroom teacher has been a central figure in the operation of public
schools. As such, teachers have been evaluated in one manner or another
throughout the history of public schools. Initially, such evaluations were rather
provincial and the standards for employment had more to do with gender,
behavior in public, and support for stakeholders in the community (Ellett &
Teddlie, 2003). As time and expertise have progressed, citizen groups no longer
needed to make visits and hear recitations by students as evidence of classroom
management skills (Stronge & Tucker, 2003). From the 1920’s through the
1940’s, evaluation of teachers was influenced by advances and emerging
theories in psychology, in particular, of personalities and the personal
characteristics of teachers (Ellett & Teddlie, 2003). Theories of learning
remained behavioral in context and practice.
In time, headmasters, or other school administrators took on the task of
developing and conducting efficiency ratings of classroom teachers. This was
much the case from circa 1925 through the beginning of the 1970’s when the
vast majority of school systems had measures of teacher evaluation in place
(Ellett & Teddlie, 2003; Stronge & Tucker, 2003).
Prior to the 1970’s, teacher evaluation was procedural in manner and
style. Summative evaluations generally followed a simplistic format. Teacher
evaluations consisted of checklists with the outcome determining continuation
of employment. This evaluation process was very stressful and hindered risk-
taking on the part of teachers seeking new and innovative, yet unconventional
ways of improving instruction. In fact, a survey in 1972 conducted by the
National Education Association (NEA) indicated that 93% of respondents
supported the use of evaluation for the purpose of improving teacher
performance (Stronge & Tucker, 2003).
The 1970’s and 1980’s focused on teacher practices in line with Mastery
Teaching models of direct instruction showcasing the empirical work of
Madeline Hunter, and reinforced by the learning theories of Robert Gagne,
among others (Brandt, 1996). Evaluations focused on a teacher’s demonstration
of his or her ability to follow the models of instruction and classroom
management - still summative in nature and format.
As the term accountability became more common in educational
vernacular, mandates for teacher evaluation moved away from local or district
policies to statewide features for licensure (Ellett & Teddlie, 2003). It was during
this time that the Stull Act of 1971 was passed in the California Legislature and
signed into law. Legislation took the principle of quality assurance from
business settings and applied it to the educational arena.
The 1980s brought positive developments in evaluation benchmarks. In
1988, in particular, the National Board of Professional Teaching Standards
(NBPTS) and the Joint Committee on Standards for Education Evaluation
published and promoted clear standards by which teacher professional practices
should be evaluated (Ellett & Teddlie, 2003; Stufflebeam, 1988). The push for
teacher quality became an integral element of the modern school reform
movement (Danielson, 2001). The publication in 1983 of A Nation at Risk by
the National Commission on Excellence in Education, and later the 1996
publication of What Matters Most: Teaching for America's Future by the
National Commission on Teaching and America’s Future furthered the cause for
applying research and site-based exemplars to the evaluation process
(Danielson, 2001). The Carnegie Task Force on Teaching as a Profession in
1986 reported that outcome-based approaches to teaching should hold teachers
and principals more accountable for student learning, focus school site efforts on
productivity as much as effectiveness, link incentives to performance, and
provide more material and human resources to increase productivity (Ovando,
2001).
The subject of merit pay remained a topic for other studies. It has been a
controversial item in which fear of preferential treatments (both in terms of
favorable ratings and in the placement of higher functioning students in popular
teachers’ classes) and skewed scores of student performance malign any
altruistic reasons for implementing such a system. Following the first year
(1990) of merit pay in Georgia, the State Department of Education found that
99.4% of 61,000 teachers earned such pay by receiving satisfactory or higher
ratings (Frase & Streshly, 1994). The accuracy of Georgia’s teacher
performance ratings remains unclear. What was clear was that efforts to
improve teacher practices must incorporate clear and decisive paradigms and
effective training for evaluators.
By the 1980’s and 1990’s, teacher evaluation incorporated new purposes
and formats to go beyond acceptable teaching practices and focus more on
student learning (Iwanicki, 2001; Stronge & Tucker, 2003). New methods
included formative assessments wherein periodic evaluations served to provide
teachers timely feedback, guidance, and support for improving professional
practice. Formative teacher evaluation does not judge total teacher
performance; rather, it provides an opportunity for teachers to work on specific
skills (Davis, Ellett, & Annunziata, 2002; Howard & McColskey, 2001). Such
formative assessments often factored into the traditional end-of-the-school-year
summative evaluation. Nonetheless, using both methods concurrently may
prove difficult (Peterson, 2000). Teachers wanting feedback may feel
threatened and opt not to participate in honest dialogue with his or her evaluator
if summative evaluations may hurt or hinder long-term employment prospects.
Despite advances in classroom-based evaluations and a shift from solely
teacher behaviors to student learning outcomes, summative evaluations have
continued to occur, with little, if any value (Danielson & McGreal, 2000; Davis
et al., 2002; Peterson, 2000; Stufflebeam, 1988). The assumption that one
evaluation instrument or practice can apply to all teachers and settings no longer
holds. So why is it that summative evaluation, in particular, is still an
instrumental practice at school sites?
Progression Over Time - Past to Present
Simply asking site administrators to change the timing and methods of
evaluating a teacher’s performance raised many formidable implications.
Approaches to teacher evaluation which incorporate a measure of student
learning required valid techniques to assess such learning (Danielson &
McGreal, 2000). Techniques referred to both instruments and evaluator
practices.
Overall, while district-level evaluation systems clearly continued to be
bureaucratic intrusions into the life and events of school sites, formative
methods encouraged more collaborative efforts to improve student learning,
created greater program coherence, and strengthened individual and collective
efficacy beliefs (Davis et al., 2002). Therefore, why are district-required
evaluation practices still given the time and resources that they require, if so
little benefit is derived from them? One researcher, frustrated by the narrow role
of summative evaluations, still reminded us that teacher evaluations are used in
an effort to accomplish two goals: to ensure accountability/competence, and to
promote professional growth, or change (Manning, 1988). Summative
evaluations generally have not promoted higher stakes beyond tenure because
the emphasis is on minimum standards or basic competencies, and anyone who
does not meet them is not retained for future employment (Sando, 1995).
Today, teacher evaluation continues to be defined along the lines of the
systematic assessment of a teacher’s performance and/or qualifications in
relation to the teacher’s professional role and the school’s mission (Shinkfield,
1996). Shinkfield also noted that such an evaluation has always been important
in order to meet the demands for teacher accountability. Such a belief is readily
understood, even today. Still, effective evaluation provides for accountability
and the best use of time and resources. Ineffective evaluation needs to be
corrected, or otherwise eliminated. Gathering the perceptions of teachers and
evaluators in a systematic quantitative study will provide support for
improvement efforts, both here in California, and possibly throughout the
nation.
Current Research on Evaluation
Technical aspects of evaluation include: a) methods, instruments, and
sources of evidence; b) the training and expertise of evaluators; and c) structural
features of the evaluation process, such as who, what, when, and which
judgments are communicated (Millman & Darling-Hammond, 1989). Research
in Texas has considered the perspectives of teachers regarding the evaluation
process. While teachers are sympathetic towards the ideal that learner-centered
teacher evaluation fosters walk-through observations, opportunities for
professional growth, feedback, learner-centered dialogue, and opportunities for
self-evaluation, they still believe the process to be subjective and question its
validity (Ovando, 2001).
While technical expertise is required in some degree for all stakeholders
associated with the teacher evaluation process, a broader political understanding
is equally required. Educational stakeholders often have conflicting
expectations regarding what is good practice and effective reform, and yet the
input and support of these groups is an important aspect of gaining political
support for any new evaluation program (Stronge & Tucker, 1999).
Additional research has demonstrated that often times, evaluation ratings
are inflated beyond reality, little feedback occurs, and professional growth plans
are not aligned with personnel evaluation findings (Frase & Streshly, 1994).
Unclear expectations and procedures further exacerbate this dilemma. One can
only hope that the introduction of the California Standards for the Teaching
Profession (CSTP) will serve to clarify such ambiguities. Standards and
expectations are quite clear. Still, some teachers need both initial and ongoing
support to understand, process, and implement these new rigorous standards.
Evaluation training programs should effectively address the challenges to, and
controversies over, new evaluation systems. All approaches have inherent
strengths and weaknesses (Stufflebeam, 2001).
Two researchers, Herbert Heneman III and Anthony Milanowski, (2003)
both with the University of Wisconsin - Madison, suggest a systematic approach
to designing and implementing a standards-based teacher evaluation system:
• Start with a teacher competency model
• Decide on the specific purposes for the system
• Stress implementation over instrumentation
• Anticipate different and increased role expectations
• Prepare administrators and teachers thoroughly
• Align other human resource management systems with the
evaluation system.
With these steps in mind, many new efforts to improve and refine
teacher evaluation practices have been undertaken. Still, as has been stated
previously, such efforts are easily thwarted without buy-in by the key
constituents. One should not underestimate the difficulties associated with
developing and sustaining, over time, evaluation systems that are valid and
reasonably reflective of both the teaching and learning literatures, contextually
sensitive, politically viable, legitimate, and consistent with what is known about
schools as organizations (Johnson, 1997).
Current research on teachers and evaluators’ perceptions of teacher
evaluation includes one qualitative study in which predominantly private high
school personnel were interviewed (Lowe, 2000). Responses indicated that
some current purposes, content items, procedures, and outcomes are important;
that the opportunity for constructive feedback is valued; that widely held exemplars
of teaching excellence are acknowledged as important (Marzano, 2003; Stronge,
2002); and that the interviewed teachers value rewards for teaching excellence.
While not necessarily education related, Valerie Caracelli and Hallie
Preskill (2000) suggested that learning is a participation process. They
considered Patton's view of this process as “individual changes in thinking and
behavior, and program or organizational changes in procedures and culture, that
occur among those involved in evaluation as a result of the learning that occurs
during the evaluation process” (p. 26). In principle, changes to performance
behaviors remain the underlying goal of teacher evaluation (Caracelli, Preskill,
Henry, & Greene, 2000; Patton, 1997).
Postmodernist views hold that current teacher evaluation practices are
ineffective and not reliable, while much research supports standards-
based evaluation practices. Stufflebeam (1998) suggested that both perspectives
view teacher evaluation as
neither value-free, nor bias-free, that many evaluation questions are not
definitely answerable, that evaluations are bound by time and social
context, that evaluators should help recipients interpret findings within
pertinent frames of reference, and that one evaluation is only a part of a
wider array of evaluative inputs (p. 289).
Models of Evaluation
Efforts to create more efficient teacher evaluation models must not
overlook the many environmental and ancillary variables that affect a student’s
classroom performance. While much consideration is given the demographics
of particular student populations, additional merit must be afforded to other
material inputs. Just as it is important to evaluate teachers, Stephen Hoenack
and David Monk in Millman and Darling-Hammond (1989) argued that it is
important to evaluate curriculum, textbooks, facilities, and even available
technology. The rationale is clear that a teacher with inadequate resources is
more restricted in his or her ability to produce improved student
achievement (Danielson, 2002; Marzano, 2003).
Several models or methods to evaluate teacher classroom performance
are utilized today. While the variety of summative instruments does not equal
that of formative methods, research points to conflicting views and
dogma as to the merits of, and caveats to, using them (Howard & McColskey,
2001). In 1995, Miles Bryant of the University of Nebraska and an associate
called for change. In a review of contemporary research on teaching practices,
he suggested that administrators and teachers needed to rethink the limits of
traditional teacher evaluation practices. Rather than mimic ice skating judges,
Miles Bryant and Deann Currin recommended that the goal should be to help
teachers develop individual talents, discover new ones, and enhance their own
distinctive ways of interacting with students (Bryant & Currin, 1995). While the
researchers did not postulate a new model, they did point out the merit of
providing differentiated evaluation methods for new and experienced teachers.
Evaluation must fit better with what we are asking teachers to do with
kids (Brandt, 1996). As such, new models have been utilized to attempt to
satisfy stakeholders. Regardless of the model, Brandt went on to encourage
individual goal setting meetings with teachers rather than summative
evaluations. Brandt's comments were made before state CSTP and national
standards, such as the NBPTS standards were established, or widely supported.
The purposes for evaluation are legitimate, yet the same instrument may
not be best suited to fulfilling all purposes. Evaluation instruments are not
all standardized (Wilson & Wood, 1996). Classroom instruction varies with
content and a multitude of other criteria such as grade, and student readiness,
among others. Instruments must be flexible enough to match circumstances, yet
valid enough to fulfill contractual obligations.
Charlotte Danielson (2001) supported differentiated evaluation systems.
She argued that teachers would get more out of self-reflections and self-directed
growth opportunities than formal evaluation activities. She also suggested that,
at the least, teachers should have such constructive choices in the off years of
their evaluation cycle. Considerable support has been presented for self-reflection,
predominantly from empirical research (Beerens, 2000; Iwanicki,
2001; Searfoss & Enz, 1996).
The early to mid-1990s saw tremendous support for the use of student
portfolios, which contained a variety of student work samples. In the same vein,
the teacher-researcher content projects encouraged teacher work samples as a
means to incorporate similar principles of dynamic learning. Portfolios
demonstrate ongoing, self-reflective learning (Kelly, 1999; McNelly, 2002;
Peterson, Stevens, & Mack, 2001; Stronge & Tucker, 2003; Van Wagenen &
Hibbard, 1998). Other researchers went on to advocate the use of dossiers over
portfolios. Dossiers include self-selected pieces or work products with which
the teacher self-reflects and summarizes his or her instructional and professional
growth activities. Such meta-cognitive activities enhance understanding and
integration of new knowledge into daily practice (Peterson, 2000). Some
researchers suggested that portfolios are time-consuming, and face subjective
examination by peers and administrators (Peterson, Kelly, & Caskey, 2002).
Teachers who have had experience with the use of portfolios found the
process not subjective, not time consuming, and a fair method of evaluating
teaching practices (Curry, 2000). Curry's dissertation work also found just the
opposite among teachers who had not had experience with teacher portfolios;
they found the process to be subjective, time consuming, and an unfair form of
evaluation.
In 2000, the Miami-Dade County Public Schools in Florida worked with
educational experts to initiate a pilot study of comprehensive reforms in teacher
evaluation and efforts to provide ongoing reflective professional development
for their teachers using the Professional Assessment and Comprehensive
Evaluation System (PACES) (Davis et al., 2002). Four years later, PACES has
become a readily accessible web-based platform providing extensive training for
participants. Explicit standards and practices have been incorporated in
extensive teacher-learner vignettes. Principals and other evaluators are provided
training support, not just in methods of learning, but in effective leadership
practices conducive for instructional improvement, as well (Fullan, 2001).
Teacher development lessons differentiate higher-level thinking and
instructional strategies. Beginning teachers receive direction and support.
Experienced teachers with a history of success receive less direction, with ample
support, as needed. PACES seeks to incorporate contemporary research
findings and links to professional development (Ellett, Annunziata, &
Schiavone, 2002; Stiggins, 1988).
As Duke and Stiggins (1988, 1990) note, evaluation systems that mix
accountability and professional growth may pose too much risk for the
competent teacher, who decides to play it safe (Glatthorn, 1997). Concern for
the consequences of taking risks must not prevent teachers and administrators
from establishing contexts that encourage calculated instructional practices
(Sando, 1995). Beginning teachers without the protection of tenure take less
innovative steps than do their tenured counterparts. New teachers may also be
preoccupied with the demands of a new position, and all of the first or second
year elements that require their time and energy.
Certainly more controversial, both student and parent evaluation of
teachers have received little support for reasons of bias, political agendas, and
the lack of reliability (Peterson, Wahlquist, Bone, Thompson, & Chatterton,
2001; Wright, Horn, & Sanders, 1997). While much has been made of multiple
sources of data for evaluating teachers, time constraints serve to hinder further
exploration of these models for evaluation.
Much more common in California schools today is a written summative
evaluation instrument. The typical one page instrument mimics the checklist
seen in districts 15-20 years ago. Updated elements include a new emphasis
on professional development and evidence of teacher practices linked to the
California Standards for the Teaching Profession. Depending upon the
district, each of the six standards may include a narrative passage in support of
scores of Satisfactory, Needs Improvement, or Unsatisfactory - Does Not Meet
Standards. Current checklist instruments reviewed for this study include a
comment section at the end of the document. The six CSTP’s cover:
• Standard I: Engaging and Supporting Students in Learning
• Standard II: Creating and Maintaining Effective Environments for Student
Learning
• Standard III: Understanding and Organizing Subject Matter for Student
Learning
• Standard IV: Planning Instruction and Designing Learning Experiences for
Students
• Standard V: Assessing Student Learning
• Standard VI: Developing as a Professional Educator
While these criteria to measure a teacher’s performance and practices
covered a comprehensive palette of classroom and professional practices, little
information was available regarding evidence or requests to support scores for
each section. Articles suggested that little follow-through was done with these
instruments, much as was the case with prior protocols, and related ritualized
evaluation practices (CDE, 1999; Danielson & McGreal, 2000; Peterson, 2000;
Stronge & Tucker, 2003). Even among our 100 largest school districts, while
some shifts in evaluation practices have occurred, they are not widespread
(Loup, Garland, Ellett, & Rugutt, 1996). Loup (1996) and her associates also
pointed out how teacher evaluation practices and policies at the local (school
district) level have not incorporated important teaching and learning elements
identified through state and national efforts. These views were echoed by
Milbrey Walling McLaughlin in Millman and Darling-Hammond (1989),
“Teachers are evaluated by one means or another in virtually every school
district. And in most of those districts, teachers and administrators agree that
the activity is ritualistic and largely a waste of time” (p. 403).
School Accountability and Governance in Teacher Evaluation
The federal government under the auspices of the NCLB Act of 2001
requires each state to establish and define Adequate Yearly Progress (AYP) for
school districts and schools in terms of student performance. Individual states
then set the level of student achievement that a school must attain in order to
make AYP. Under these AYP and NCLB expectations, statewide and nationally
normed student assessments have been commonly referred to as high-stakes
testing. More often than not, the teachers and site administrators have felt the
increasing pressure to meet AYP — not the students. This is partly because high
stakes testing leads to high-stakes evaluations, including rewards and sanctions,
and subsequent employment decisions. William Mehrens, in Millman and
Darling-Hammond (1989), noted that while all decisions should be based on the
best data that can reasonably be provided, even greater demands are being
placed on the quality of the data. As such, Mehrens encouraged data analysis to
consider data from multiple sources, particularly when important decisions are
to be made.
School administrators are charged with the responsibility to evaluate and
assess the performance of certificated instructional personnel who perform the
requirements of state and federal mandated educational programs. Such
evaluation relates to the instructional techniques and strategies used by the
employee along with his or her adherence to standards-based curriculum and
identified objectives. Yet, can state mandated teacher evaluation fulfill the
promise of school improvement?
Researchers analyzing teachers' professional
development within these volatile political times identified eight principles that
should be considered when planning for instructional improvements (Lofton,
Hill, & Claudet, 1997). These principles seem to follow a Mastery Teaching
model, along with scaffolded learning. As a possible alternative to the
California Standards for the Teaching Profession (CSTP), these principles
suggest that teacher evaluations encompass instructional efficacy and consider
that:
1. Prior knowledge and development seem to be directly related to growth
and improvement.
2. Individual differences should be considered in planning meaningful
learning activities.
3. Collaborative, reflective learning experiences enhance learning and
create a culture for change and improvement.
4. Growth and development seem to be directly related to the level of
individual commitment and amount of engaged time.
5. Follow-up and support, balanced with sufficient pressure to improve, is
essential if content is to be internalized and applied.
6. Focusing on learning motivates growth and improvement.
7. Understanding one’s role in the interactive process of teaching and
learning builds individual and organizational efficacy.
8. Each individual must build his or her understanding (Lofton et al., 1997).
In a Midwestern school district, researchers assessed teacher reactions to
a standards-based teacher evaluation study. Milanowski and Heneman,
researchers from the University of Wisconsin-Madison, determined critical
attributes for favorable responses to the teacher evaluation instruments used. In
short, the teachers’ overall support for the new system was directly related to
their acceptance of the teaching standards, as well as their view that the evaluation
process is fair, that the evaluator is capable and objective, and that the process
has a positive impact on their teaching. Teachers were receptive to the changes
in evaluation methods, but concerned with the increase in workload
(Milanowski & Heneman, 2001).
Procedures for due process and equal protection have continued to play an
important role in the changing teacher evaluation process (Desander, 2000).
Along with collective bargaining, perceptions of barriers to effective teacher
evaluation exist. Past practices have also hindered efforts to improve personnel
evaluation in education (Iwanicki, 2001). James Stronge (2002) observed in one
observational case study that conflict sometimes exists between the need and
value of technical expertise in determining good teaching and selecting
appropriate methods for collecting accurate data - and the plethora of political
forces that hold conflicting interests and expectations.
Teachers’ evaluators must be adequately trained to prevent violations of
teachers’ rights and avoid unnecessarily exposing the district to subsequent liability.
Administrators should recognize the legal parameters of their position, and make
appropriate decisions within the scope of their duties (Sullivan & Zirkel, 1998).
Operating within these boundaries will lend credence to the content of
evaluations while complying with state legal requirements and district policies.
In this environment, trust between parties will also serve to enhance the efficacy
of collective efforts to improve teacher classroom practices and student
achievement.
Administrators, school committee members, and teachers’ associations
need to understand that performance evaluation contributes significantly to the
public’s perception of schools. High-quality evaluations demonstrate the merits
of quality control (Ribas, 2000). The results of one study indicated that
principals believe teacher unions to be the most significant barrier in dealing
with inadequate teachers, more so than the language of the contract (Painter,
2000). In this study, only 37 elementary and middle school principals responded to a
survey in Oregon.
Collective Bargaining
Collective Bargaining is a procedure for making agreements between
employer and employees’ representatives concerning wages, hours, and other
conditions of employment (Loofbourrow & Duardo, 1996). In essence,
collective bargaining is the process in which two parties come together to
identify matters and language and to sign a specific, binding contract. In school
districts, Collective Bargaining generally refers to the contract between a
teacher’s union and the district’s Board of Trustees. Collective bargaining
agreements establish the criteria (based upon the California Education Code) for
teacher evaluation practices within school districts. Variances among district
policies and specific collective bargaining agreements reflect differences in
negotiated terms. The statewide standardization of teacher evaluation does not
exist. This exacerbates the challenges to more effective evaluation practices
(Stronge & Tucker, 1999; Wilson & Wood, 1996).
Bargaining does not always yield terms of employment that seek to
achieve the district’s mission. Union contracts seek to protect the employee and do not
always place the emphasis on student success. Teacher contracts determine the
process and forms in which their membership will be evaluated. Such contracts
generally establish the frequency and guidelines for site administrators to
complete the evaluation. In reviewing various (and recent) teacher contracts
from districts in Southern California, common language exists. Site
administrators are required to evaluate and assess certificated employee
performance as it reasonably relates to four important criteria. These include: a)
the progress of students toward the expected achievement at each grade level in
each area of study; b) the instructional techniques and strategies used by the
employee; c) the employee’s adherence to curricular objectives; and d) the
establishment of a suitable learning environment, within the scope of the
employee’s responsibilities.
Oftentimes, teachers limit their efforts to what is written in their
contract, and nothing more. Because of this, researchers have pointed out how
unfortunate it is that teachers need the protection of contracts and labor unions
(Heller, 2004). Heller would prefer that power in a district lie with the
people who are well versed in the profession, not with arbitrary organizations and
school boards. The implication is that resistance to creating new evaluation
instruments and professional development activities does not represent the needs
and best interests of students and staff alike. Rather, teacher associations should
be partners with administrators in guaranteeing that the professional forces of
any school are excellent. Edwin Bridges in Millman and Darling-Hammond
(1989) wrote that teacher contracts and tenure protect teachers earning poor
evaluations until complaints drive action. Resistance to changes in teacher
evaluation is often met with skepticism. Adversaries need to rethink their roles
and capacities to work as partners in the education of students, with high
expectations for all leading to high achievement by all. Arthur Wise in Millman
and Darling-Hammond also suggested that, “If teacher evaluation is to move
from a perfunctory bureaucratic activity carried out in response to external
pressures to improve teachers and teaching, then the methods and the control of
teacher evaluation must change” (p. 387).
Administrative Practices and Teacher Evaluation
In a qualitative study of district superintendents and their views towards
influencing principals and their duties as evaluators, Barbara Callard affirmed
prior work that found that superintendents felt that they needed to meet regularly
to review teacher evaluations and assist principals, where appropriate (Callard,
2003). Her study also affirmed the district leadership’s view that reflective
practices and effective training of evaluators were of great importance.
William Ribas (2000) noted in an article for Phi Delta Kappan, that in
Massachusetts, in districts with well-trained evaluators and an effectively
monitored evaluation system, most teachers trust the validity of the evaluation
process as well as their evaluators’ ability to assess their performance
objectively. In the same manner, Ribas found that administrators trust the
teacher association to work with them on addressing issues of low performance.
In a quantitative study conducted in North Carolina in 2001, Susan
Colby used a mixed-methods model to determine the perceptions of teachers
regarding differentiated systems. Teachers in districts using local instruments
perceived teacher evaluation as having a stronger impact on school
improvement, professional development and student learning than did teachers
in districts using state-mandated systems (Colby, 2001). Colby went on to
suggest that the strengths of these connections were often a function of the
effectiveness of the district and school leadership.
In a qualitative case study conducted at Stanford University in 1999,
researchers found that politics often plays a detrimental role in personnel
evaluation (Bridges & Groves, 1999). The researchers go on to point out that
state and local teacher organizations have relatively high power and access to
evaluation processes, and that the outcomes generally reflect the interests of
teachers more than the interests of students.
The Role of the Principal as Evaluator
School administrators’ duties include day-to-day managerial
responsibilities in addition to instructional leadership, including personnel
evaluation. Linda Darling-Hammond in Wilson and Wood (1996) pointed out
how time constraints limit the support that principals can provide weaker
teachers. In this day of increased accountability, a lack of adequate support is
unacceptable. Support mechanisms, such as BTSA and PAR mentor teachers
have begun to address the demands. Principals have undergone considerable
professional development to maximize their efficacy to facilitate instructional
improvements and student achievement. Still, the efficacy of a teacher’s
evaluation is subject to the training and expertise of the evaluator, the flexibility
of the instrument, adequate time and resources available to the evaluator, and the
degree of bias-free observations and input (Painter, 2000; Peterson, 2000;
Stiggins, 1988).
Some researchers suggest that in many teacher evaluation experiences,
administrators are not necessarily looking for strengths and weaknesses in an
attempt to improve teacher performance, but rather for desired behaviors.
Understanding that lessons vary with content and circumstances, judging a
teacher’s competence based upon infrequent observations poses considerable
challenges. Linda Darling-Hammond in Wilson and Wood (1996) stated:
“Whatever administrators learn about teacher performances has little influence
on general teaching practice because the evaluation process is limited to a few
interactions between individual teachers and their administrators and focuses on
discrete incidents of teaching behavior” (p. 81).
A primary responsibility of the principal is in the evaluation of
certificated faculty and staff. Some researchers have argued that it is also the
teacher’s responsibility to evaluate his or her own performance (Drake & Roe,
2003). Both parties should have a congruent regard for the efficacy of teaching
and its impact upon student achievement. Instruments must also reflect this
viewpoint as the priority of teacher evaluation is on individual growth, not
conformity within narrow parameters. Principals and other site evaluators
should emphasize the influence of professional development with the teacher
evaluation process. While the CSTP Standard VI calls for documented
emphasis in this area, do evaluators and teachers actually implement their
conversations and suggestions in this important area, or is it one more example
of bureaucratic lip service?
Research conducted in 2000 by Suzanne Painter indicates that principals
do emphasize their role in supporting weaker or low-performing teachers and
believe that they are well equipped to do so. Other researchers refute this self-
rating, and suggest that the crux of the problem lies in the exaggerated, inflated
view of evaluators’ own understanding of the teacher evaluation process
(Painter, 2000). Such views, which may be corroborated by teachers, were the
result of only 38 survey responses from randomly selected elementary and
middle school principals in one Western state. Without a larger sample, the
findings are not generalizable.
School leaders have been viewed as individuals with expert knowledge
who serve in greater roles of instructional leadership. Shedding the label of
expert problem solvers, these administrators should instead lead through their skills in
facilitating group problem-solving processes (Fullan, 2001). Effective teacher
evaluation, both in terms of practices and procedures, is one such area in which
greater collaboration must occur.
Evaluation and Teacher Performance
If teachers have a professional responsibility for enhancing learning, then
it seems legitimate to explore the impact of the teacher’s role in learning. The
alternative is to evaluate only the process of teaching without regard to the
effects on students (Stronge & Tucker, 2000).
Failure to provide feedback, accompanied by substantive and practical
suggestions, has prevented improvement of evaluation practices. Instead, such
failure has reduced teachers’ internal motivation (Beerens, 2000; Frase &
Streshly, 1994; Sullivan & Zirkel, 1999). According to Ronald Boyd (1989),
experienced teachers often state that evaluations are not productive. He goes on
to point out that, some of the dissatisfaction can be avoided if: a) teachers were
to have input into the evaluation criteria; b) evaluators spent more time on
evaluations; c) evaluators were better trained; and d) results of the evaluations
were actually used to further teacher development. While Boyd did not provide
sources for his findings, or any indication of the sample population size and
location, his points corroborate the works of other authors on the topic.
Poorly designed or delivered training can negatively affect an
employee’s performance (Clark, 2002). According to Clark, one reason for a
decline in performance may be due to a scrambling effect in which prior
experiences and understanding are challenged in the person’s mindset. A
second cause for a decline in performance may be due to a greater perception of
competition among colleagues. Successful teacher evaluation designs must
consider such ideas. Sometimes even all of the training and support deemed
appropriate will not bring about the desired change. Richard Clark points out that
many of the gaps between current performance and the levels required to
achieve desired goals are caused by a lack of motivation, not a lack of
knowledge and skills (Clark, 2002). Using monetary rewards may not be the
quick fix either. Merit pay as a means of rewarding teacher performance brings
with it a new set of problems.
Should student learning be connected to teacher evaluation? James
Stronge and Pamela Tucker (2000) suggest that as teachers have the professional
responsibility for enhancing learning, in like manner, teacher evaluation should
link both the means and the ends to teacher performance evaluation. Indeed,
research in other states has demonstrated the importance of value-added criteria
and measures to evaluate teacher practices. One such effort to quantify the
variables that link teacher performance and student achievement took place in
Tennessee. The Tennessee Value-Added Assessment System (TVAAS) found
that certain classroom contextual variables, such
as class size and heterogeneity, have minimal impact on overall student
achievement (Wright et al., 1997).
With all differences between classroom variables considered, differences
in teacher effectiveness were found to have the greatest impact on student
outcomes. Considerable recent research exists to support the merits of excellent
instructional practices over time and its positive influence on student
achievement (Marzano, 2003; McNulty, 2004; Stronge & Tucker, 2000; Wright
et al., 1997). Value-added methods can serve as equalizers in comparing
nuances of teacher performance. However, political forces that feel threatened appear poised to
restrict the use of such methods. A review of the literature does not indicate that
research in the area of teachers’ and administrators’ perceptions towards the use
of value-added measures in teacher evaluation practices has occurred.
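To make the value-added idea concrete, the following minimal sketch illustrates, under hypothetical assumptions, how student achievement gains might be regressed on class size and teacher indicators so that each teacher coefficient can be read as an estimated value-added effect. The data file, column names, and model form are illustrative only; TVAAS itself relies on a far more elaborate mixed-model methodology.
# Illustrative sketch of a value-added style analysis (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("student_scores.csv")             # hypothetical data file
scores["gain"] = scores["post_test"] - scores["pre_test"]

# Regress achievement gains on class size and teacher indicators;
# C(teacher_id) creates one dummy variable per teacher.
model = smf.ols("gain ~ class_size + C(teacher_id)", data=scores).fit()

# Each C(teacher_id)[...] coefficient is an estimated teacher effect
# relative to the reference teacher, holding class size constant.
print(model.summary())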
An evaluation system that effectively attends to all of the educational,
legal, public relations, and social/emotional components of the evaluation
process helps administrators be instructional leaders, decreases litigation related
to addressing low performance, and improves staff morale (Ribas, 2000). Ron
Brandt (1996) makes the point that he senses that both teachers and
administrators are frustrated that conventional evaluation practices do not really
serve the purposes of either evaluators, or teachers. A quantitative survey of
California’s public K-12 teachers and site administrators will serve to validate or
challenge such thinking.
This comprehensive review of current literature addressing current
perspectives on the teacher evaluation process in K-12 public schools has
revealed that little, if any, research has been conducted within the
State of California in this area. In equal measure, most information that is
circulated regarding the perceptions of teachers and administrators is based on
localized input, and inferred from qualitative case studies that have been
conducted. A systematic, quantitative survey must be conducted in order to
determine such views and generalize findings. Only at that time can prescriptive
remedial efforts to improve the teacher evaluation process occur. The extant
literature indicates many deficits in the efficacy of teacher evaluation practices
throughout the history of public education in the United States. The extent to which
current efforts, through the introduction of the California Standards for the Teaching
Profession and resources for professional practice, have had lasting influence
on classroom instruction and student achievement is unknown. Before
additional time and resources are spent, the research questions and purpose of
this study must be conducted. To what extent is the teacher evaluation process
in California a mere function of the Education Code and district policies, and
what impact does it have on teacher practices?
CHAPTER III
RESEARCH DESIGN AND METHODOLOGY
Introduction
The primary purpose of this study was to survey K-12 public school
administrators and teachers in California to determine respective perceptions of
the teacher evaluation process. This study focused on the perceptions
representative of the elementary grades, both independent of, and when
compared to, the broader K-12 statistics. Responses to the questionnaire also
provided important data regarding the extent to which teacher evaluations influence
actual classroom practices. An increased understanding of teachers' perceptions
relative to evaluation provided vital information on enhanced teaching and
learning methodologies, in addition to apparent weaknesses of newly
implemented evaluation processes and systems (Ovando, 2001). In response to
increasing demands for schools and teachers to raise test scores and differentiate
instruction, the perceptions of administrators and teachers must be considered
before revisions or possible elimination of summative evaluation processes,
except when determining continuation of employment (Danielson, 2001).
A secondary purpose of this study was to analyze apparent differences in
the perceptions of teachers and evaluators based on independent variables.
These variables were matched to responses regarding the appropriateness of
using students' test scores in evaluating teachers, the support teachers receive
from site and district administrators, the timeliness of feedback, and
opportunities to link evaluation outcomes to professional development and other
support services. A naturalistic inquiry (Patton, 2002) survey was used to
collect the data, with standard and statistical procedures used throughout. There
was no control group in this quantitative study.
Problem Statements
The following statements were developed to represent widely held
postulated beliefs in K-12 public schools in California:
1. The current teacher evaluation process has a minimal positive impact on
teacher instructional practices in California.
2. Collective Bargaining has had a negative impact on the evaluation
process and teacher assessment.
3. The evaluation process is not suited to meet the diverse needs of teachers
with various skills and experience.
Propositions
1. Use of personnel evaluation standards in evaluating educators can and
will provide credible benchmarks in establishing evaluation protocols
and procedures.
2. Uniform teaching standards such as the California Standards for the
Teaching Profession (CSTP) will apply to successful implementation of
teaching practices, including professional development, in K-12 public
school classrooms in all content areas if teachers perceive that their site
administrator(s) is/are qualified to evaluate their teaching.
3. The training and integration of effective evaluation procedures in a site
administrator’s leadership practices will influence a teacher’s
perceptions of the validity and utility of the evaluation practices and
systems (Fullan, 2001; Kimball, 2002; Marzano, 2003; Millman &
Darling-Hammond, 1989; Ribas, 2000; Stufflebeam, 1988).
4. Teachers' self-efficacy beliefs affect the degree to which evaluation
feedback and reflective practices bring about change in a teacher's
classroom practices (Danielson, 2001; Kimball, 2002; Millman
& Darling-Hammond, 1989).
Conceptual Model
The following diagram illustrated the relationships and links between
teacher evaluation processes and overall school achievement.
Figure 1
The Teacher Evaluation Process: Relationships and Links between Teacher Evaluation and
Overall School Improvement
[Flow diagram: Teacher Evaluations serve to screen out unqualified persons and provide
constructive feedback; evaluations link to Professional Development and to Teacher Classroom
Practices**, which in turn influence Student Achievement and Overall School Improvement
(API & AYP).]
** Classroom practices include, but are not limited to, instructional strategies (including
assessments), classroom management, and classroom curriculum design.
This framework assumed causal relationships between the teacher
evaluation and subsequent elements and classroom practices (Danielson, 2002;
Marzano, 2003; McNulty, 2004; Millman & Darling-Hammond, 1989). This
framework also postulated that teacher classroom practices influence student
achievement and, subsequently, the overall school
improvement as reported by the Academic Performance Index (API) and
expectations for the Adequate Yearly Progress (AYP). Included in teacher
factors influencing student achievement were instructional strategies, classroom
management, and classroom curriculum design (Marzano, 2003; McNulty,
2004). Finally, it was presumed that school districts and educators are in
compliance with, and thus follow the intent and expectations of, Title II
(preparing, training, and recruiting high-quality teachers and principals) of
Public Law 107-110, enacted by the 107th United States Congress and signed
into law on January 8, 2002. This law has been commonly referred to as the No
Child Left Behind Act of 2001 (NCLB, 2004).
This study provided meaningful quantitative data to validate this
conceptual framework and to document clear perceptions among K-12 public
school site administrators and teachers that, to some degree, the process is little
more than a waste of time and resources (Danielson & McGreal, 2000; Heneman
& Milanowski, 2003; Johnson, 1999; Millman & Darling-Hammond, 1989;
Peterson, 2000). Not represented in the framework were outside/environmental
influences such as Collective Bargaining agreements, Teachers' Unions,
teacher-credentialing programs, teachers' background experiences, and school
population demographics, among others.
Research Questions Restated
In developing the following research questions, an extensive review of
the literature was undertaken relative to the above-referenced problem
statements. Using a quantitative research approach to the generation of new
knowledge, these questions were examined:
1. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving at or above an API
of 800?
2. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving at or above an API of
800?
3. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving below an API of
800?
4. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving below an API of 800?
5. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving more than 10,000
students?
6. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving more than 10,000
students?
7. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving 10,000 or less
students?
8. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving 10,000 or less
students?
9. What are administrators’, with more than five years experience, attitudes
or perceptions of teacher evaluations in public elementary schools?
10. What are teachers’, with more than five years experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
11. What are administrators’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
12. What are teachers’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
Design of Study
This research study sought to determine certificated respondents’
perceptions of the value of personnel evaluation in education. Specifically, this
quantitative study endeavored to examine particulars of the teacher evaluation
process. The survey instrument was well defined, and the sample size was large
enough to provide adequate response data for analysis and for the generalizability of
findings to K-12 certificated teachers and administrators in California's public
schools.
Questions on the survey were developed concerning utility, feasibility,
propriety, and accuracy as established by the Joint Committee on Standards for
Educational Evaluation in 1988. Explicit definitions of each of these standards
for educational evaluation were provided in detail in Chapter 1 - Definition of
Terms. In 1995, the American Evaluation Association added principles
governing systematic inquiry, evaluator competence, integrity/honesty, respect
for people, and responsibility to the general public welfare (Patton, 2002).
These standards were also given consideration when developing the survey
instrument.
The survey was cross-sectional as the data was collected at essentially
one time, and sent to all K-12 public school district superintendents throughout
the state of California. It was expected that the distribution of the survey and
data collection would be completed within one calendar month. Using a letter of
introduction distributed via the U.S. Mail and e-mail venues, survey participants
completed the questionnaire online. A commercial statistical survey service,
surveymonkey.com, was used to collect the data. Researchers then utilized the
Statistical Package for the Social Sciences (SPSS) version 12.0 statistical
software application to analyze responses and examine the independent and
dependent variables. The researchers made inferences from the responses. Such
generalizability would also occur as a result of the size of the sample population
(Creswell, 2002). Item # 36 enabled respondents to provide comments. As with
the entire survey questionnaire, participation in writing comments was optional.
The researchers analyzed the comments to identify themes, and aggregate the
data accordingly.
A letter of introduction was prepared to establish the merits of this
quantitative study. It asked interested, supportive superintendents to encourage
their site administrators and respective K-12 teachers to log on to a website
and complete the survey. Directions for completing the survey were available
on the web pages.
Names of the school districts and their superintendents were obtained
from the State of California Department of Education website,
http://www.cde.ca.gov/re/sd/index.asp, and are considered public information.
The survey questionnaire was accessible from any computer with web access.
Participation in the survey was strictly voluntary and required
approximately five (5) minutes for completion. Participation in the survey was
not tracked, and participants remained anonymous. In fact, respondents were
not asked their name, district, or postal/e-mail addresses.
The population for the study included certificated district and site
administrators, and teachers throughout the State of California representing
public schools in grades Kindergarten through 12. A minimum participation of
200 subjects was required representing elementary schools, middle and/or
intermediate schools, and high schools. A convenience sampling (Gall et al.,
2002) was considered to ensure a minimum of representative participation and
responses. Letters of introduction were subsequently distributed to all of the
superintendents in each of California's counties to adequately represent
statewide perspectives and permit inferences that the results might generalize
(Gall et al., 2002).
A Likert-type scale was used in the development of the survey to measure
teachers' and administrators' perceptions of the teacher evaluation process. The
responses were then compared to various independent variables, including
school size, gender, education level, SES, and years of experience as a
teacher and/or administrator, among others. Seventeen questions addressed
statements of standards, satisfaction/effectiveness, and issues pertaining to the
teacher evaluation process. An additional item permitted brief comments to be
added to the survey questionnaire.
Assumptions
This study made the following assumptions:
1. The respondents to the survey represented a generalizable sampling from
the targeted population as a whole.
2. The methods and procedures selected for this study were appropriate for
the subject being studied.
3. The responses to the survey questionnaire were accurate and provided
accurate and reliable data.
4. The respondents were truthful and sincere in their answers.
5. The researcher’s analysis of the data was accurate and represented the
perceptions and responses of the participants.
6. Study participants had sufficient knowledge of the teacher evaluation
process within their respective school site or school district.
Limitations
The following are limitations (Creswell, 2002) of the study:
1. The respondents to this study needed to have access to a computer with
Internet capability.
2. Some school districts offer year-round education. As such, some
potential respondents may have been off-track, and therefore unaware of
the opportunity to participate in this survey.
3. The study was limited to include only K-12 public school teachers and
administrators.
4. Although sent, not all of California’s public school district
superintendents received the letter of introduction and request to
encourage their administrators and teachers to participate in this study.
5. The data collected in this study was limited to the number of respondents
to the survey.
6. Only certificated K-12 public school administrators and teachers in
California were asked to participate in this study. Therefore, the results
may not be generalizable to similar participants in other schools and
districts outside the State of California. Further, any conclusions drawn
from the data are not necessarily generalizable to private schools here in
California, or elsewhere.
7. The responses to survey questions were anonymous, and therefore did
not provide the opportunity to ask follow-up questions or for clarification
to any voluntary comment(s). (See Survey item # 36)
8. Some beginning teachers and novice administrators participating as
respondents may not be able to answer all applicable survey questions
due to either a lack of experience or background knowledge relative to
particular items.
Delimitations
The following are delimitations (Creswell, 2002) of the study:
1. This study was quantitative in design and implementation. Interviews or
other methodologies germane to qualitative studies were not included,
with the exception of Item # 36 - comments.
2. Only certificated K-12 administrators and teachers serving in
California's public schools were asked to participate.
3. The study required the voluntary respondents to utilize Internet access to
complete the survey questionnaire online.
Instrumentation
The survey instrument included 36 total items spread over seven of nine
sections. (See Appendix A) The first section provided an introduction with a
link to the Letter of Information and approval from the University of Southern
California’s Institutional Review Board (IRB). Section II covered current
position, gender, length of experience, and personal education level. The third
section included seven school demographics questions, including size, API
score, and other key indicators. The remaining questions incorporated a Likert
Scale format, with a four-point response ranging from Strongly Disagree to
Strongly Agree, from left to right respectively. The theoretical median was 2.5,
with the choices noted as 1 - 4. Section IV covered local teacher evaluation
practices including the use of CSTP instruments, the frequency of teacher
observations and summative evaluations, and related collective bargaining
requirements.
Section V covered perceptions of statements of personnel evaluation
standards which were aligned with those established by the Joint Committee on
Standards for Educational Evaluation (Stufflebeam & Pullin, 1998; Stufflebeam,
1988). The data collected from this section provided insight(s) relative to how
well current evaluation practices follow these standards. The sixth section asked
respondents their perceptions covering statements of satisfaction and
effectiveness. A coefficient was determined and served as the dependent
variable against independent variables included in sections two and three. The
seventh section asked respondents if evaluation instruments and procedures
should differ for beginning and experienced teachers. The remaining questions
in this section included items relevant to evaluation protocols and perceptions
regarding the use of student input and standardized test scores in teacher
evaluations.
Responses to Sections V, VI, and VII (questions 19-35) were quantified,
analyzed, and presented in graphic form. The final item, number 36,
permitted each respondent to include any salient comments or provide additional
information on the survey.
Validity and Reliability
In order to demonstrate content validity, survey questionnaire items were
developed to represent content and standards in each of the seven survey
sections of the instrument. Item analysis was conducted to determine the
difficulty, clarity, validity, and reliability of each item on the survey instrument.
Analysis of variance (ANOVA) was used to determine whether differences between
the mean scores of positional groups on dependent variables were statistically
significant. Cronbach’s alpha coefficients measured the internal consistency and
reliability of survey items. In addition, before disaggregating the total responses
into elementary, middle, and high school subgroups, the database was divided
into two halves and the means for both halves were compared.
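For illustration only, the following minimal sketch (written in Python with pandas and SciPy rather than the SPSS procedures used in this study) shows a one-way ANOVA of this kind, comparing mean standards scores across positional groups; the column names and sample values are hypothetical.

```python
# Minimal sketch (not the study's actual SPSS syntax) of a one-way ANOVA
# comparing mean "standards" scores across positional groups.
import pandas as pd
from scipy import stats

def anova_by_position(df):
    """Return the F statistic and p-value for differences in mean
    'standards' scores across the positional groups."""
    groups = [g["standards"].dropna() for _, g in df.groupby("position")]
    f_stat, p_value = stats.f_oneway(*groups)
    return f_stat, p_value

# Example usage with made-up responses coded 1-4 (Strongly Disagree to Strongly Agree):
df = pd.DataFrame({
    "position": ["Principal", "Teacher", "Teacher", "Asst Principal", "Other Admin", "Teacher"],
    "standards": [3.25, 2.50, 2.75, 3.00, 3.50, 2.25],
})
f_stat, p_value = anova_by_position(df)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # a difference is significant if p < .05
```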
Computerized resources were utilized to collect data/evidence to support
the reliability of the responses and the validity of the inferences made by the
researchers from the data (Gall et al., 2002). Use of surveymonkey.com's
data collection services, along with the SPSS version 12 software application,
also reduced margins of error.
method throughout the State of California. Quantitative survey constructs
incorporated generally accepted standards and practices. Specific contents could
be used to replicate this study. The first eighteen items on the survey covered
individual facts and school or school district facts. As such, this information
was highly structured and likely to be valid (Gall et al., 2002). While individual
responses were of considerable importance, this study sought to average and
compare the responses of the total subgroups (administrators and teachers, as a
whole and those representing elementary, middle and high school settings) and
the population as a whole. This increased reliability, as well.
Data Collection
As had been discussed, data collection occurred utilizing an online-
computerized survey collection service, specifically surveymonkey.com. Once
predetermined minimum or suitable quantities of responses had been received,
the researchers exported the data into Microsoft Excel 2003 and SPSS v.12
formats to complete data analysis. It was anticipated that once distribution of
the letter of introduction to district superintendents had occurred, data collection
would be completed within thirty calendar days. Data collection included
aggregate totals for each of the survey items.
Analysis of Data
The surveys were analyzed both in terms of collective total responses to
each item, and by totals per each of the seven sections. The totals were then
separated into the three K-12 subgroups - elementary, middle, and high school
levels. This particular study focused on, or emphasized, the data collected for
the elementary school level principals and teachers. The data for the elementary
school level was also analyzed against the means for the responses to the middle
and high school subgroups, and the means for the totals. The SPSS version 12
statistical software was utilized to evaluate the collected data.
Cronbach's alpha coefficients determined the internal consistency of the
survey items (Gall et al., 2002; Maxwell, 2004). Descriptive statistics, including
frequencies, means, and standard deviations were analyzed. Significant
differences were identified between the respondents’ perceptions relative to the
independent variables. These included gender, current school site employment
(teacher or administrator), years of experience in either capacity, or both, and
their highest education level. These also included the size of their respective
school district, their particular school’s population size, their school’s most
recent API score, and the percentage of Title I students at their school site.
Finally, the percentage of English Language Learners (ELL) at their respective
school site, the type, or types of evaluation(s) processes that are utilized at their
school, and the frequency of summative evaluations at their respective school
site wrapped up the respective criteria. The last survey item, an open-ended
comment option, permitted respondents to make general statements or clarify
other items. The researchers disaggregated these comments and then regrouped
them by emerging themes relative to evaluation standards and satisfaction.
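For illustration only, the following minimal sketch (Python/pandas rather than the SPSS workflow used in the study) shows how per-item means and standard deviations of this kind can be computed overall and disaggregated by school level; the column names and values are hypothetical.

```python
# Minimal sketch of the descriptive analysis described above: per-item means and
# standard deviations overall and by school level (elementary, middle, high).
import pandas as pd

def descriptives_by_level(df, item_cols):
    """Mean and standard deviation for each Likert item (coded 1-4),
    overall and disaggregated by school level."""
    overall = df[item_cols].agg(["mean", "std"]).T
    by_level = df.groupby("school_level")[item_cols].agg(["mean", "std"])
    print("Overall:\n", overall.round(2))
    return by_level.round(2)

# Example with made-up responses for two items (V19, V20):
df = pd.DataFrame({
    "school_level": ["elementary", "elementary", "middle", "high", "middle"],
    "V19": [3, 2, 4, 3, 2],
    "V20": [2, 3, 3, 2, 2],
})
print(descriptives_by_level(df, ["V19", "V20"]))
```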
CHAPTER IV
FINDINGS
Introduction
This chapter presented the findings of this quantitative study in six
general sections. The first section provided the distribution results for the K-12
survey, the internal reliability, and reliability analyses. The second section
detailed the disaggregation of the data and descriptive statistics in two parts -
independent and dependent variables. Dependent variables included standards
for evaluation of educational personnel as established by the 1988 Joint
Committee under the leadership of Daniel L. Stufflebeam, and satisfaction
indices relative to essential elements of the teacher evaluation process in
schools. The third section presented the responses to the research questions and
analyses of the elementary data. This examination included descriptives,
analyses of variance (ANOVA), and multiple comparisons. The fourth section
included an analysis of the aggregated K-12 data, along with detailed
evaluations of key questions. The fifth section covered a qualitative analysis of
Item 36 - comments. Particular themes were identified that paralleled essential
questions, and exposed explicit viewpoints by teachers and administrators. This
empirical analysis served to add dimension to the quantitative outcomes. The
aggregated K-12 data in sections 4 and 5, and part of section six, was examined,
discussed, and summarized in conjunction with two co-researchers, one
representing the middle school data and findings, and one representing the high
school data and findings. The final section, the sixth, presented a summary of
findings first for the elementary school data, and then the K-12 data.
Distribution Results
The initial letter of introduction was distributed to California’s K-12
public school superintendents in October 2004. By e-mail, 921 requests for
participation were issued with the remaining 38 sent by regular U.S. mail.
Approximately twenty percent were returned as undeliverable because of a
faulty address. As these names and e-mail addresses had been obtained from the
State of California Department of Education's web site, it was noted that some
of the districts' servers had changed, while other messages had been rejected
because the individuals named no longer worked in their respective school
districts. After two weeks, reminder e-mails and letters were sent.
Initial feedback via replies indicated that seven superintendents were off
track or out of town at the time of the mailing. Eleven superintendents replied
with words of encouragement and confirmed that they had not only received the
letter of introduction, but would encourage their district’s certificated personnel
to participate. Three replied that they had retired, and provided the name and e-
mail address of their successor. Eight respondents replied that they would not
be able to participate for a variety of reasons, mostly the lack of time, or the fact
that they were already conducting surveys among their teachers and feared
overwhelming these staff. One respondent exclaimed, “What’s in it for me?”
At the close of the survey, 1,023 respondents elected to participate. Analysis
of the data was done within 45 days after the initial distribution, at which time
941 respondents had participated. Due to constraints, the data analysis was
conducted using these 941 responses. Review of the data indicated that the
additional 82 respondents’ input did not modify the percentages and mean
responses for any of the survey items. For item # 36 - comments, 194 responses
were evaluated and discussed in Section V. Complete responses have been
included as Appendix B.
Internal Reliability
The database was randomly divided into two halves and the means for
both halves were compared. For the descriptive statistics in Tables 1 and 2,
responses were set at 1, 2, 3, and 4 with choices as Strongly Disagree, Disagree,
Agree, and Strongly Agree respectively. The theoretical median for this range
was 2.5.
Table 1
DESCRIPTIVE STATISTICS FOR THE FIRST HALF OF THE DATA
Survey Items                 N     Mean    Std. Deviation
V19 Teacher Welfare          401   2.94    .676
V20 Utility                  402   2.66    .748
V21 Feasibility              400   2.63    .704
V22 Accuracy                 403   2.88    .665
V23 Clear Policies           398   2.84    .679
V24 Alignment                399   3.05    .626
V25 Training                 398   2.67    .739
V26 Sufficient Time          395   2.57    .791
V27 Resources                395   2.43    .795
V28 Teacher Practice         398   2.60    .731
V29 Prof. Development        397   2.93    .562
V30 Process Should Differ    398   3.33    .741
V31 Process Does Differ      393   2.27    .781
V32 Use Student Input        398   2.23    .765
V33 Uses Student Input       393   1.74    .535
V34 Use Test Scores          397   2.24    .883
V35 Uses Test Scores         395   1.88    .624
Standard                     405   2.776   .539
Satisfaction                 400   2.726   .541
Valid N (listwise)           372
Table 2
DESCRIPTIVE STATISTICS FOR THE SECOND HALF OF THE DATA
Survey Items                 N     Mean    Std. Deviation
V19 Teacher Welfare          407   2.91    .601
V20 Utility                  410   2.48    .724
V21 Feasibility              401   2.49    .686
V22 Accuracy                 407   2.86    .592
V23 Clear Policies           405   2.86    .602
V24 Alignment                403   2.95    .544
V25 Training                 404   2.42    .766
V26 Sufficient Time          404   2.33    .755
V27 Resources                404   2.31    .677
V28 Teacher Practice         404   2.44    .721
V29 Prof. Development        405   2.91    .549
V30 Process Should Differ    402   3.26    .683
V31 Process Does Differ      401   2.25    .667
V32 Use Student Input        402   2.15    .760
V33 Uses Student Input       400   1.81    .483
V34 Use Test Scores          401   2.19    .802
V35 Uses Test Scores         398   1.91    .536
Standard                     410   2.687   .502
Satisfaction                 407   2.604   .476
Valid N (listwise)           371
The differences between the standards and satisfaction means for the
first half of the data and the corresponding means for the second half were
within a margin of 5%, indicating no statistically significant difference between
the two halves.
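For illustration only, the following minimal sketch shows the kind of split-half check described above: respondents are randomly divided into two halves and the per-item means of each half are compared. It is written in Python rather than SPSS, and the column names and values are hypothetical.

```python
# Minimal sketch of the split-half check: randomly divide the responses into
# two halves and compare the per-item means of each half.
import numpy as np
import pandas as pd

def split_half_means(df, item_cols, seed=0):
    """Randomly split respondents into two halves and return each half's item means."""
    rng = np.random.default_rng(seed)
    mask = rng.permutation(len(df)) < len(df) // 2
    first, second = df[mask], df[~mask]
    return pd.DataFrame({
        "first_half_mean": first[item_cols].mean(),
        "second_half_mean": second[item_cols].mean(),
    }).round(3)

# Example with made-up 1-4 responses:
df = pd.DataFrame({"V19": [3, 2, 4, 3, 2, 3], "V20": [2, 3, 3, 2, 2, 3]})
print(split_half_means(df, ["V19", "V20"]))
```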
Reliability Analysis
Dependent Variable #1: Perception of Adherence to Standards
Cronbach’s alpha coefficients were computed in order to ascertain the
degree of internal consistency and reliability among the key survey items. A
response was used only if the respondent completed three of the four questions
in the section regarding standards for the evaluation of educational personnel.
The Cronbach’s Alpha of .772 indicated adequate reliability.
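For illustration only, the following minimal sketch shows a Cronbach's alpha computation of this kind, including a completion rule analogous to the three-of-four requirement described above. It uses Python/pandas rather than SPSS, and the column names and values are hypothetical.

```python
# Minimal sketch of Cronbach's alpha for a set of Likert items, keeping a
# response only when enough of the items in the section were answered.
import pandas as pd

def cronbach_alpha(items, min_answered=3):
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    # keep respondents meeting the completion rule; the simple formula below
    # then requires fully complete rows, so any remaining gaps are dropped
    kept = items[items.notna().sum(axis=1) >= min_answered].dropna()
    k = kept.shape[1]
    item_vars = kept.var(axis=0, ddof=1).sum()
    total_var = kept.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Example with made-up 1-4 responses to the four standards items (V19-V22):
df = pd.DataFrame({
    "V19": [3, 2, 4, 3, 2, 3], "V20": [3, 2, 3, 3, 2, 2],
    "V21": [2, 2, 3, 3, 2, 3], "V22": [3, 3, 4, 3, 2, 3],
})
print(round(cronbach_alpha(df[["V19", "V20", "V21", "V22"]]), 3))
```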
Table 3
Reliability Statistics
Cronbach's Alpha    Cronbach's Alpha Based on Standardized Items    N of Items
.772                .770                                            4
Table 4
Item-Total Statistics
Survey Items          Scale Mean if Item Deleted    Scale Variance if Item Deleted    Corrected Item-Total Correlation    Squared Multiple Correlation    Cronbach's Alpha if Item Deleted
V19 Teacher Welfare    7.99    2.844    .527    .302    .742
V20 Utility            8.35    2.294    .681    .497    .656
V21 Feasibility        8.36    2.512    .626    .435    .690
V22 Clear Policies     8.05    2.964    .473    .227    .767
Dependent Variable #2: Degree of Satisfaction
Cronbach’s alpha coefficients were computed in order to ascertain the
degree of internal consistency and reliability among the key survey items. A
response was used only if the respondent completed five of the seven questions
in the section regarding perceptions of satisfaction for identified items. The
Cronbach’s Alpha of .865 indicated very high reliability.
Table 5
Reliability Statistics
Cronbach's Alpha    Cronbach's Alpha for Standardized Items    N of Items
.865                .860                                       7
Table 6
Item-Total Statistics
Survey Items            Scale Mean if Item Deleted    Scale Variance if Item Deleted    Corrected Item-Total Correlation    Squared Multiple Correlation    Cronbach's Alpha if Item Deleted
V23 Clear Policies       15.84    10.028    .602    .420    .851
V24 Alignment            15.69    10.568    .512    .332    .862
V25 Training             16.15    8.794     .778    .645    .825
V26 Sufficient Time      16.23    8.667     .780    .695    .824
V27 Resources            16.31    9.172     .713    .588    .835
V28 Teacher Practice     16.16    9.536     .633    .419    .847
V29 Prof. Development    15.76    11.011    .430    .213    .870
Data Disaggregation
After verifying the validity of the survey results, the survey data was
disaggregated into three groups by school level. Of the 941 responses to the
survey, 885 were complete enough to be used. Of these, 460 (52.0%) came from
elementary school personnel, 233 (26.3%) from middle school personnel, and
192 (21.7%) from high school personnel. The data for the different groups were each analyzed
independently by different researchers. This study focused on the elementary
school data.
The following three pages present Tables 7 and 8, which provide
descriptive statistics for this segment of the total respondents. Table 7 focuses
on frequencies of responses for eighteen independent variables. Table 8 presents
the mean and standard deviation scores for seventeen dependent variables. A
brief discussion follows Table 8.
Descriptive Statistics
Table 7
Teacher Evaluation Perceptions - Frequency of Elementary Survey Responses
V1 Position: Principal 126/27.4%; Assist. Principal 21/4.6%; Teacher 282/61.3%; Other 31/6.7%
V2 Gender: Male 110/24.3%; Female 343/75.7%
V3 Teacher Exper.: 0-2 34/7.4%; 3-5 68/14.9%; 6-10 137/30.0%; 11+ 218/47.7%
V4 Admin Exper.: 0-2 221/50.5%; 3-5 53/12.1%; 6-10 47/10.7%; 11+ 42/9.6%; N/A 75/17.1%
V5 Education: BA/BS 35/7.6%; BA+45 136/29.6%; MA/MS 254/55.3%; EdD/PhD 34/7.4%
V6 District Size: <1,000 52/11.7%; 1,001-10,000 166/35.9%; 10,001-25,000 223/50.0%; >50,000 11/2.5%
V7 School Size: <500 188/41.3%; 501-1000 248/54.5%; 1001-2500 19/4.2%; >2501 0/0.0%
V8 School Level: Elem 392/85.2%; K-8 68/14.8%; Middle 0/0.0%; High 0/0.0%
V9 School API: <500 2/0.4%; 500-699 84/18.9%; 700-875 260/58.4%; >875 99/22.2%
V10 API >800: Yes 214/48.2%; No 230/51.8%
V11 Title I: Yes 199/44.4%; No 249/55.6%
V12 % ELL: 0-25% 264/58.7%; 26-50% 137/30.4%; 51-75% 36/8.0%; >75% 13/2.9%
V13 Eval. Instr. Used: Obsv. 66/15.1%; Obsv./CSTP 299/68.3%; Peer Eval. 0/0.0%; Portfolio 0/0.0%; Multiple Methods 73/16.7%
V14 Observe Separately?: Yes 316/73.1%; No 116/26.9%
V15 Observe Freq.: >3x/yr 40/9.2%; 1-2x/yr 267/61.1%; Alt. yrs 127/29.1%; 1x/5yr 3/0.7%; Never 0/0.0%
V16 Should Observe?: >3x/yr 28/6.4%; 1-2x/yr 23/5.3%; Alt. yrs 236/54.3%; 1x/5yr 142/32.6%; N/A 6/1.4%
V17 Eval. Frequency: 1 yr 80/18.5%; 2 yrs 333/76.9%; 3-5 yrs 10/2.3%; Less Often 10/2.3%
V18 Should Eval?: 1 yr 39/9.1%; 2 yrs 80/18.6%; 3-5 yrs 297/69.2%; Less Often 12/2.8%; N/A 1/0.2%
Table 8
The Mean and Standard Deviation Scores for the Responses to Survey
Statements 19-35
Items 19-35                  N     Mean    Std. Deviation
V19 Teacher Welfare          427   2.98    .671
V20 Utility                  428   2.60    .760
V21 Feasibility              424   2.56    .688
V22 Accuracy                 427   2.89    .640
V23 Clear Policies           426   2.87    .656
V24 Alignment                426   3.02    .615
V25 Training                 425   2.58    .764
V26 Sufficient Time          425   2.48    .807
V27 Resources                427   2.34    .761
V28 Teacher Practice         425   2.54    .719
V29 Prof. Development        425   2.99    .532
V30 Process Should Differ    428   3.34    .694
V31 Process Differs          425   2.29    .715
V32 Use Student Input        427   2.07    .712
V33 Uses Student Input       426   1.79    .497
V34 Use Test Scores          427   2.20    .838
V35 Uses Test Scores         427   1.92    .602
Standards Mean:              431   2.761   .537
Satisfaction Mean:           428   2.689   .530
Valid N (listwise)           400
* Minimum = Strongly Disagree; Maximum = Strongly Agree
For the descriptive statistics in Table 8, responses were set at 1, 2, 3, and
4 with choices as Strongly Disagree, Disagree, Agree, and Strongly Agree. The
theoretical median for this range was 2.5. While items scoring close to a 3.0 or
higher indicated general agreement with survey questions, the following nine
items were particularly noteworthy:
1. Utility - 2.60: Evaluations are informative, timely, and influential.
2. Feasibility - 2.56: Evaluations are easy to conduct and efficient.
3. Training - 2.58: Evaluators are sufficiently trained to evaluate teaching
performance accurately.
4. Sufficient Time - 2.48: Evaluators spend sufficient time to evaluate
teaching performance accurately.
5. Resources - 2.34: Sufficient resources are spent on evaluation.
6. Teacher Practice - 2.54: The evaluation process affects teacher practice.
7. Process Differs - 2.29: The evaluation process differs for beginning and
experienced teachers. (Compared to 3.34 for item V30 - the process
should differ for these same groups.)
8. Use Student Input - 2.07: Student input should be used in evaluating
teachers.
9. Use Test Scores - 2.20: Student achievement on standardized tests
should be used in evaluations.
Responses to Research Questions - Analysis of the Elementary Data
1. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving at or above an API
of 800?
2. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving at or above an API of
800?
Table 9
Administrators and Teachers - at or above an API of 800
Descriptives
IV DV N Mean Std. Deviation Std. Error
Stds: 1 Principal
58 2.871 .590 .078
2 Asst Principal
9 2.815 .422 .141
3 Teacher
144 2.675 .452 .038
4 Other Admin
5 2.900 .676 .302
Total
216 2.738 .502 .034
Satisf: 1 Principal
58 2.879 .565 .074
2 Asst Principal
9 2.683 .504 .168
3 Teacher
142 2.590 .470 .039
4 Other Admin
5 3.029 .666 .298
Total 214 2.683 .518 .035
Table 10
Administrators and Teachers - at or above an API of 800
ANOVA
Sum o f Squares d f Mean Square F Sig.
Standards: Between Groups 1.781 3 .594 2.404 .069
Within Groups 52.357 212 .247
Total 54.138 215
Satisfaction: Between Groups 4.059 3 1.353 5.352 .001
Within Groups 53.093 210 .253
Total 57.153 213
Table 11
Administrators and Teachers - at or above an API of 800
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variables    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards: 1 Principal 2 Asst Prin .056 .178 .754
3 Teacher
.196(*) .077 .012
4 Other
Admin
-.029 .232 .899
2 1 -.059 .178 .754
3
.140 .171 .413
4 -.085 .277 .759
3 1 -.196(*) .077 .012
2 -.140 .171 .413
4
-.225 .226 .320
4 1
.029 .232 .899
2 .085 .277 .759
3 .225 .226 .320
Satisfaction: 1 Principal 2 Asst Prin
.197 .180 .276
3 Teacher
.289(*) .078 .000
4 Other
Admin
-.149 .234 .525
2 1 -.197 .180 .276
3 .093 .173 .593
4 -.346 .281 .219
3 1 -.2 8 9 0 ) .078 .000
2 -.093 .173 .593
4
-.439 .229 .057
4 1 .149 .234 .525
2 .346 .281 .219
3 .439 .229 .057
* The mean difference is significant at the .05 level.
The ANOVA test indicated no significant difference between
administrators' and teachers' perceptions of adherence to standards in the
teacher evaluation process in California's public elementary schools with an API
at or above 800. There was, however, a significant difference between
administrators' and teachers' perceptions of satisfaction with the teacher
evaluation process in these schools. The multiple comparison analyses of means
indicated significant differences between principals and teachers on the
dependent variables representing both standards and satisfaction, with
administrators indicating much more agreement than teachers did.
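For illustration only, the following minimal sketch shows the logic of the Fisher LSD pairwise comparisons reported in the tables above: each pair of group means is tested with the pooled within-group mean square from the one-way ANOVA. It is written in Python rather than SPSS, and the group names and values are hypothetical.

```python
# Minimal sketch of Fisher's LSD pairwise comparisons: unadjusted t tests on
# each pair of group means using the pooled within-group mean square (MS_within).
import numpy as np
from scipy import stats

def lsd_pairwise(groups):
    """Return (group_i, group_j, mean difference, std. error, p) for every pair."""
    all_vals = np.concatenate(list(groups.values()))
    k, n_total = len(groups), len(all_vals)
    # pooled within-group mean square from the one-way ANOVA
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in groups.values())
    ms_within = ss_within / (n_total - k)
    names, results = list(groups), []
    for a in range(k):
        for b in range(a + 1, k):
            gi, gj = groups[names[a]], groups[names[b]]
            diff = gi.mean() - gj.mean()
            se = np.sqrt(ms_within * (1 / len(gi) + 1 / len(gj)))
            p = 2 * stats.t.sf(abs(diff / se), df=n_total - k)
            results.append((names[a], names[b], round(diff, 3), round(se, 3), round(p, 3)))
    return results

# Example with made-up satisfaction scores:
groups = {
    "Principal": np.array([3.0, 2.8, 3.2, 2.9]),
    "Teacher": np.array([2.5, 2.6, 2.4, 2.7, 2.6]),
    "Other Admin": np.array([3.1, 3.0, 3.3]),
}
for row in lsd_pairwise(groups):
    print(row)
```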
3. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools in California, achieving below an API of
800?
4. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving below an API of 800?
Table 12
Administrators and Teachers - below an API of 800
D escriptives
IV DV N Mean Std. Deviation Std. Error
Stds: 1 Principal
66 2.886 .557 .069
2 Asst Principal
12 2.799 .454 .131
3 Teacher
117 2.705 .559 .052
4 Other Admin
20 2.900 .709 .159
Total
215 2.784 .571 .039
Satisf: 1 Principal
66 2.767 .484 .060
2 Asst Principal 12 2.679 .538 .155
3 Teacher
116 2.611 .526 .049
4 Other Admin
20 2.963 .733 .164
Total 214 2.696 .544 .037
Table 13
Administrators and Teachers - below an API of 800
ANOVA
Sum o f Squares df Mean Square F Sig.
Standards: Between Groups
1.691 3 .564 1.745 .159
Within Groups 68.170 211 .323
Total
69.861 214
Satisfaction: Between Groups 2.593 3 .864 3.005 .031
Within Groups
60.402 210 .288
Total
62.995 213
Table 14
Administrators and Teachers - below an API of 800
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variables    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards:      1 Principal      2 Asst Prin      .088       .178    .623
                                 3 Teacher        .181(*)    .088    .040
                                 4 Other Admin    -.014      .145    .925
                2 Asst Prin      1 Principal      -.088      .178    .623
                                 3 Teacher        .093       .172    .588
                                 4 Other Admin    -.101      .208    .626
                3 Teacher        1 Principal      -.181(*)   .088    .040
                                 2 Asst Prin      -.094      .172    .588
                                 4 Other Admin    -.195      .138    .158
                4 Other Admin    1 Principal      .014       .145    .925
                                 2 Asst Prin      .101       .208    .626
                                 3 Teacher        .195       .138    .158
Satisfaction:   1 Principal      2 Asst Prin      .088       .168    .602
                                 3 Teacher        .155       .083    .062
                                 4 Other Admin    -.197      .137    .153
                2 Asst Prin      1 Principal      -.088      .168    .602
                                 3 Teacher        .067       .163    .679
                                 4 Other Admin    -.285      .196    .148
                3 Teacher        1 Principal      -.155      .083    .062
                                 2 Asst Prin      -.067      .163    .679
                                 4 Other Admin    -.352(*)   .130    .007
                4 Other Admin    1 Principal      .197       .137    .153
                                 2 Asst Prin      .285       .196    .148
                                 3 Teacher        .352(*)    .130    .007
* The mean difference is significant at the .05 level.
The ANOVA test indicated no significant difference between
administrators' and teachers' perceptions of adherence to standards in the
teacher evaluation process in California's public elementary schools with an API
below 800. There was a significant difference between the perceptions of
administrators and teachers in schools
with an API below 800 regarding satisfaction with the teacher evaluation process
in California's public elementary schools.
Analysis of the multiple comparisons indicated significant differences in
the mean perception scores in standards between principals and teachers. There
was also a significant difference in the mean perception scores in satisfaction
between teachers and other administrators. Overall, findings indicated a higher
perception of agreement with both standards and satisfaction of the teacher
evaluation process by administrators in schools with an API below 800, than the
perceptions by teachers in similar schools.
Between groups, when API was a qualifier, there was no statistical
significance in the difference in perceptions (Standards: Questions 1 and 2 —
2.74 and Questions 3 and 4 — 2.78; Satisfaction: Questions 1 and 2 - 2.68 and
Questions 3 and 4 — 2.70 respectively).
5. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving more than 10,000
students?
6. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving more than 10,000
students?
Table 15
Administrators and Teachers - Student Enrollment above 10,000
Descriptives
IV DV N M ean Std. Deviation Std. E rror
Stds: 1 Principal
59 3.004 .456 .059
2 Asst Principal 10 2.808 .533 .169
3 Teacher
155 2.661 .511 .041
4 O ther A dm in 3 2.833 .289 .167
Total
227 2.759 .516 .034
Satisf: 1 Principal
59 2.901 .467 .061
2 A sst Principal 10 2.614 .646 .204
3 Teacher
155 2.562 .491 .039
4 Other A dm in
3 3.048 .165 .095
Total 227 2.659 .511 .034
Table 16
Administrators and Teachers - Student Enrollment above 10,000
ANOVA
Sum o f Squares df Mean Square F Sig.
Standards: Between Groups 5.069 3 1.690 6.851 .000
Within Groups 55.002 223 .247
Total 60.071 226
Satisfaction: Between Groups 5.377 3 1.792 7.466 .000
Within Groups 53.537 223 .240
Total 58.914 226
Table 17
Administrators and Teachers - Student Enrollment above 10,000
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variables    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards: 1 Principal 2 Asst Prin .196 .170 .250
3 Teacher
.343(*) .076 .000
4 Other
Adm in
.171 .294 .562
2 1
-.196 .170 .250
3
.147 .162 .365
4 -.025 .327 .939
3 1
-.343(*) .076 .000
2
-.147 .162 .365
4
-.172 .290 .553
4 1
-.171 .294 .562
2 .025 .327 .939
3 .172 .290 .553
Satisfaction: 1 Principal 2 Asst Prin .286 .168 .089
3 Teacher .339(*) .075 .000
4 Other
A dm in
-.147 .290 .613
2 1 -.286 .168 .089
3 .052 .160 .744
4
-.433 .323 .180
3 1 -.339(*) .075 .000
2 -.052 .160 .744
4
-.486 .286 .091
4 1 .147 .290 .613
2
.433 .323 .180
3
.486 .286 .091
* The m ean difference is significant at the .05 level.
The ANOVA test indicated significant differences between groups of
administrators and teachers in schools within districts serving more than 10,000
students in their mean perceptions of both adherence to standards and
satisfaction with the teacher evaluation process in
California’s public elementary schools. Analysis of the multiple comparisons
indicated significant differences in the mean perception scores in standards and
satisfaction between principals and teachers. Overall, findings indicated a
considerably higher perception of agreement with both standards and
satisfaction of the teacher evaluation process by administrators in schools
within districts serving more than 10,000 students, than were the perceptions
by teachers in similar schools.
7. What are administrators’ attitudes or perceptions of teacher evaluations
in public elementary schools within districts serving 10,000 or less
students?
8. What are teachers’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving 10,000 or less
students?
Table 18
Administrators and Teachers - Student Enrollment below 10,000
Descriptives
IV DV N M ean Std. D eviation Std. Error
Stds: 1 Principal
63 2.773 .649 .082
2 Asst Principal
11 2.803 .338 .102
3 Teacher 98 2.726 .497 .050
4 Other A dm in
22 2.909 .730 .156
Total 194 2.767 .571 .041
Satisf: 1 Principal
63 2.756 .570 .072
2 Asst Principal
11 2.740 .372 .112
3 Teacher
95 2.656 .502 .052
4 O ther A dm in 22 2.967 .754 .161
Total
191 2.729 .557 .040
Table 19
Administrators and Teachers - Student Enrollment b e l o w 10,000
ANOVA
Sum o f Squares df M ean Square F Sig.
Standards: Between Groups
.625 3 .208 .634 .594
W ithin Groups
62.383 190 .328
Total 63.008 193
Satisfaction: Between Groups
1.797 3 .599 1.960 .122
W ithin Groups 57.167 187 .306
Total 58.964 190
Table 20
Administrators and Teachers - Student Enrollment below 10,000
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variables    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards: 1 Principal 2 Asst Prin -.029 .187 .876
3 Teacher
.048 .093 .607
4 Other
Admin
-.135 .142 .342
2 1 .029 .187 .876
3 .077 .182 .674
4 -.106 .212 .617
3 1
-.048 .093 .607
2 -.077 .182 .674
4
-.183 .135 .178
4 1
.135 .142 .342
2 .106 .212 .617
3 .183 .135 .178
Satisfaction: 1 Principal 2 Asst Prin
.015 .181 .933
3 Teacher
.100 .090 .268
4 Other
A dm in
-.211 .134 .125
2 1 -.015 .181 .933
3 .085 .176 .631
4 -.226 .204 .269
3 1
-.100 .090 .268
2
-.085 .176 .631
4
-.311(*) .131 .019
4 1 .211 .137 .125
2 .226 .204 .269
3 .311(*) .131 .019
* The mean difference is significant at the .05 level.
The ANOVA test indicated no significant differences in total mean
perceptions between groups of administrators and teachers in schools within
districts serving 10,000 or less students with respect to either adherence to
standards or satisfaction with the teacher evaluation process in California's
public elementary schools. Specific mean scores indicated a
lower agreement with the statements of satisfaction by teachers in schools
within districts serving 10,000 or less students.
Analysis of the multiple comparisons indicated significant differences in
the mean perception scores in satisfaction between teachers and other
administrators. Overall, findings indicated a considerably higher perception of
agreement with satisfaction of the teacher evaluation process by other
administrators in schools within districts serving 10,000 or less students, than
the perceptions by teachers representing similar schools.
Between groups, when district size (research questions 5, 6 and 7, 8) was
a qualifier, there were no statistically significant differences in the total means
regarding perceptions in either standards or satisfaction. Overall, findings also
indicated a higher perception of agreement with both standards and satisfaction
of the teacher evaluation process by administrators in schools (regardless of
size) than were the perceptions by teachers in similar schools.
9. What are administrators’, with more than five years experience, attitudes
or perceptions of teacher evaluations in public elementary schools?
10. What are teachers’, with more than five years experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
Table 21
Administrators and Teachers - with more than five years of experience
Descriptives
DV IV N Mean Std. Deviation Std. Error
Stds: 1 Principal 86 2.8808 .60843 .06561
3 Teacher
192 2.6897 .50909 .03674
4 Other Admin
20 3.0125 .63076 .14104
Total
298 2.7665 .55655 .03224
Satisf: 1 Principal
86 2.8707 .54925 .05923
3 Teacher
189 2.6043 .49345 .03589
4 Other Admin
20 3.1286 .59338 .13268
Total
295 2.7175 .54062 .03148
Table 22
Administrators and Teachers - with more than five years of experience
ANOVA
Sum o f Squares df Mean Square F Sig.
Standard: Between Groups
3.467 2 1.734 5.777 .003
Within Groups
88.528 295 .300
Total 91.995 297
Satisfaction: Between Groups
7.820 2 3.910 14.617 .000
Within Groups
78.108 292 .267
Total
85.928 294
Table 23
Administrators and Teachers - with more than five years of experience
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variables    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards:      1 Principal      3 Teacher        .191(*)    .071    .008
                                 4 Other Admin    -.133      .136    .334
                3 Teacher        1 Principal      -.191(*)   .071    .008
                                 4 Other Admin    -.323(*)   .129    .013
                4 Other Admin    1 Principal      .132       .136    .334
                                 3 Teacher        .323(*)    .129    .013
Satisfaction:   1 Principal      3 Teacher        .266(*)    .067    .000
                                 4 Other Admin    -.258(*)   .128    .046
                3 Teacher        1 Principal      -.266(*)   .067    .000
                                 4 Other Admin    -.524(*)   .122    .000
                4 Other Admin    1 Principal      .258(*)    .128    .046
                                 3 Teacher        .524(*)    .122    .000
* The mean difference is significant at the .05 level.
The ANOVA test indicated significant differences between groups of
administrators and teachers with more than five years of experience in their
mean perceptions of both adherence to standards and satisfaction with the
teacher evaluation process in California's public elementary schools. On both
dependent variables, the teachers indicated significantly less agreement with the
survey statements. With a range of 1 - 4 and a theoretical median of 2.5,
teachers with more than five years of experience indicated a mean of 2.6897 on
the statements of standards and a mean of 2.6043 on the statements of
satisfaction.
Analysis of the multiple comparisons indicated significant differences in
the mean perception scores in standards and satisfaction between principals and
teachers with more than five years experience, and similar differences in
perception by other administrators and teachers with more than five years
experience.
11. What are administrators’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
12. What are teachers’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
Table 24
Administrators and Teachers - with five or less years of experience
Descriptives
IV DV N Mean Std. Deviation Std. Error
Stds: 1 Principal
38 2.875 .482 .078
2 Asst Principal 21 2.806 .430 .094
3 Teacher
67 2.705 .469 .057
4 Other Admin 4 2.500 .913 .456
Total
130 2.765 .485 .043
Satisf: 1 Principal 38 2.703 .447 .073
2 Asst Principal 21 2.680 .511 .112
3 Teacher
67 2.596 .507 .062
4 Other Admin 4 2.316 .988 .494
Total 130 2.632 .508 .045
Table 25
Administrators and Teachers - with five or less years of experience
ANOVA
Sum o f Squares df M ean Square F Sig.
Standards: Between Groups 1.015 3 .338 1.455 .230
W ithin Groups 29.283 126 .232
Total 30.298 129
Satisfaction: Between Groups .730 3 .243 .943 .422
W ithin Groups 32.508 126 .258
Total 33.238 129
Table 26
Administrators and Teachers - with five or less years of experience
Multiple Comparisons
Least Significant Difference (LSD)
Dependent Variable    (I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
Standards: 1 Principal 2 Asst Prin .069 .131 .597
3 Teacher .170 .098 .085
4 Other
.375 .253 .141
Admin
2 1 -.069 .131 .597
3 .100 .121 .407
4 .306 .263 .248
3 1 -.170 .098 .085
2 -.100 .121 .407
4 .205 .248 .410
4 1 -.375 .253 .141
2 -.306 .263 .248
3 -.205 .248 .410
Satisfaction: 1 Principal 2 Asst Prin .023 .138 .870
3 Teacher .107 .103 .300
4 Other
.388 .267 .149
Admin
2 1 -.023 .138 .870
3 .085 .127 .506
4
.365 .277 .190
3 I -.107 .103 .300
2 -.085 .127 .506
4 .280 .261 .286
4 1 -.388 .267 .149
2 -.365 .277 .190
3 -.280 .261 .286
* The mean difference is significant at the .05 level.
The ANOVA test indicated no significant differences between groups of
administrators and teachers with five years or less experience in their mean
perceptions of either adherence to standards or satisfaction with the teacher
evaluation process in California's public elementary schools. On both
dependent variables, the other administrators indicated the least agreement with
the statements. With a range of 1 - 4 and a theoretical median of 2.5, other
administrators with five years or less experience indicated a mean of 2.500 on
the statements of standards and a mean of 2.316, indicating disagreement, on the
statements of satisfaction. Analysis of the
multiple comparisons indicated no significant differences between groups, as
well.
Overall, findings indicated a higher perception of agreement with both
standards and satisfaction of the teacher evaluation process by administrators
and teachers with more than five years experience than the perceptions by
administrators and teachers with five years or less experience. In both cases
however, the differences in total means were not statistically significant.
Factorial ANOVA: Standards - Tables 27-33
Table 27
Between-Subjects Factors - Elementary
Independent Variables                          N
V1 Position: 1 Principal                       122
             2 Asst Principal                  21
             3 Teacher                         229
             4 Other Admin                     23
V3 Years of Exp - Teacher: 1 0-2               28
                           2 3-5               53
                           3 6-10              121
                           4 11 or more        193
V4 Years of Exp - Admin: 0 N/A                 198
                         1 0-2                 46
                         2 3-5                 42
                         3 6-10                40
                         4 11 or more          69
V6 District Size: 1 1,000 or less              39
                  2 1,001-10,000               138
                  3 10,001-50,000              210
                  4 50,001 or more             8
V10 School's API above 800: 1 Yes              194
                            2 No               201
Table 28
Tests of Between-Subjects Effects - Elementary
Dependent Variable: Standards
Source                         Type III Sum of Squares    df     Mean Square    F           Sig.
Corrected Model                8.449(a)                   14     .603           2.153       .009
Intercept                      445.138                    1      445.138        1588.205    .000
V1 Position                    .844                       3      .281           1.004       .391
V3 Years of Exp - Teacher      .643                       3      .214           .765        .514
V4 Years of Exp - Admin        3.249                      4      .812           2.898       .022
V6 District Size               .354                       3      .118           .421        .738
V10 School's API above 800     .047                       1      .047           .169        .681
Error                          106.505                    380    .280
Total                          3113.611                   395
Corrected Total                114.954                    394
a. R Squared = .073 (Adjusted R Squared = .039)
Table 29
VI: Position - Elementary
Dependent Variable: Standards
V1 Position         Mean    Std. Error
1 Principal         2.877   .085
2 Asst Principal    2.745   .135
3 Teacher           2.630   .117
4 Other Admin       2.787   .139
Table 30
V3: Years of Experience: Teacher - Elementary
Dependent Variable: Standards
V3 Years of Experience - Teacher    Mean    Std. Error
1 0-2                               2.801   .125
2 3-5                               2.675   .093
3 6-10                              2.763   .074
4 11 or more                        2.799   .070
Table 31
V4: Years of Experience: Administrators - Elementary
Dependent Variable: Standards
V4 Years of Exp - Admin    Mean    Std. Error
0 N/A                      2.743   .116
1 0-2 2.948 .106
2 3-5 2.633 .112
3 6-10 2.630 .123
4 11 or more 2.843 .105
Table 32
V6: District Size - Elementary
Dependent Variable: Standards
V6 District Size      Mean    Std. Error
1 1,000 or less       2.823   .094
2 1,001-10,000        2.747   .065
3 10,001-50,000       2.797   .061
4 50,001 or more      2.672   .197
Table 33
V10: School’s API above 800 - Elementary
Dependent Variable: Standards
V10 School's API above 800    Mean    Std. Error
1 Yes                         2.748   .078
2 No                          2.771   .072
Tables 27 through 33, representing the factorial ANOVA, demonstrated findings similar to the outcomes found in response to the stated research questions. When the mean scores for five independent variables were examined against survey statements 19 - 22 concerning standards, the only statistically significant between-subjects effect was years of experience as an administrator (see Table 28).
Table 29 highlights, in terms of position, that teachers indicated the lowest mean perceptions towards standards, while principals noted the highest mean, or most positive, perceptions. Table 30 represents the independent variable of years of experience for teachers and perceptions towards the statements of standards for evaluation. Beginning teachers (0-2 years) indicated the highest perception towards standards (Items 19-22). Teachers with 3-5 years of experience indicated the lowest response rating of 2.675. Perceptions among teachers having taught more than five years improved progressively, but never reached the mean score of novice teachers.
Table 31 represented the perceptions of school administrators towards the standards for evaluation. Like their neophyte teacher counterparts, administrators with two years or less experience indicated the highest perceptions, with means declining through year ten before rising again for administrators with 11 or more years of experience.
When district size was examined as an independent variable (see Table 32), the survey data indicated that the highest positive perceptions (2.823, with 3.0 indicating agreement and 2.5 the neutral midpoint) were held in the smallest school districts, those with 1,000 or fewer students. Perceptions were generally lower in larger districts, with the lowest mean in districts of 50,001 or more students.
When the data for perceptions of standards of teacher evaluation were disaggregated by the independent variable of API score, respondents from schools with an API score below 800 gave a slightly more favorable rating towards standards than did their counterparts in schools that achieved an API score above 800.
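The between-subjects model summarized in Table 28 can also be sketched outside of SPSS. The short Python example below fits a main-effects-only factorial ANOVA with statsmodels; it is offered only as an illustration of the technique, not as the study's actual procedure, and the DataFrame elem and its column names are assumed, hypothetical labels.

import statsmodels.api as sm
from statsmodels.formula.api import ols

# `elem` is assumed to be a pandas DataFrame with one row per respondent and
# a composite 'standards' score plus the five categorical factors below.
model = ols(
    "standards ~ C(position) + C(years_teacher) + C(years_admin)"
    " + C(district_size) + C(api_above_800)",
    data=elem,
).fit()

# Type III sums of squares, analogous to the "Tests of Between-Subjects
# Effects" table; note that Type III results depend on the contrast coding.
print(sm.stats.anova_lm(model, typ=3))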
Factorial ANOVA: Satisfaction - Tables 34-40
Table 34
Between-Subjects Factors - Elementary
Independent Variables                                      N
V1 Position                   1 Principal                122
                              2 Asst Principal            21
                              3 Teacher                  227
                              4 Other Admin               23
V3 Years of Exp - Teacher     1 0-2                       28
                              2 3-5                       53
                              3 6-10                     119
                              4 11 or more               193
V4 Years of Exp - Admin       0 N/A                      196
                              1 0-2                       46
                              2 3-5                       42
                              3 6-10                      40
                              4 11 or more                69
V6 District Size              1 1,000 or less             38
                              2 1,001-10,000             137
                              3 10,001-50,000            210
                              4 50,001 or more             8
V10 School's API above 800    1 Yes                      193
                              2 No                       200
Table 35
Tests of Between-Subjects Effects
Dependent Variable: Satisfaction
Source                         Type III Sum of Squares    df    Mean Square           F     Sig.
Corrected Model                15.673(a)                  14    1.120             4.401    .000
Intercept                      431.173                     1    431.173        1694.914    .000
V1 Position                    .110                        3    .037               .144    .934
V3 Years of Exp - Teacher      .785                        3    .262              1.028    .380
V4 Years of Exp - Admin        6.397                       4    1.599             6.287    .000
V6 District Size               .759                        3    .253               .994    .396
V10 School's API above 800     .115                        1    .115               .450    .503
Error                          96.160                    378    .254
Total                          2939.358                  393
Corrected Total                111.834                   392
a. R Squared = .140 (Adjusted R Squared = .108)
Table 36
V I: Position - Elementary
Dependent Variable: Satisfaction
V1 Position         Mean    Std. Error
1 Principal         2.762   .081
2 Asst Principal    2.686   .129
3 Teacher           2.689   .111
4 Other Admin       2.728   .133
Table 37
V3: Years of Experience: Teacher - Elementary
Dependent Variable: Satisfaction
V3 Years of Experience - Teacher    Mean    Std. Error
1 0-2 2.710 .119
2 3-5 2.689 .088
3 6-10 2.685 .071
4 11 or more 2.780 .066
Table 38
V4: Years of Experience: Administrators - Elementary
Dependent Variable: Satisfaction
V4 Years of Exp - Admin    Mean    Std. Error
0 N/A 2.557 .110
1 0-2 2.854 .101
2 3-5 2.546 .107
3 6-10 2.678 .117
4 11 or more 2.945 .100
Table 39
V6: District Size - Elementary
Dependent Variable: Satisfaction
V6 District Size      Mean    Std. Error
1 1,000 or less       2.854   .091
2 1,001-10,000        2.693   .062
3 10,001-50,000       2.703   .059
4 50,001 or more      2.615   .188
Table 40
V10: School’s API above 800 - Elementary
Dependent Variable: Satisfaction
V10 School’s API above 800 Mean Std. Error
1 Yes 2.734 .074
2 No 2.698 .068
The factorial ANOVA between subjects for the dependent variable of satisfaction with the teacher evaluation process (Tables 34-40) produced results similar to those found for standards. Years of experience as an administrator was again the only independent variable with a statistically significant between-subjects effect.
Table 36 presented data indicating that assistant principals and teachers were the least satisfied with the teacher evaluation process, paralleling the findings for the standards of evaluation (Table 29). A similar pattern was noted among teacher respondents (Table 37): beginning teachers (0-2 years of experience) and veteran teachers with 11 or more years of experience held the highest agreement with the statements of satisfaction (V23 through V29).
Administrators with three to five years of experience indicated the least favorable perceptions towards the statements of satisfaction of any respondent group, not just among fellow administrators (see Table 38). Their mean score of 2.546 was significantly different from that of their peers. This group was also among the least favorable in its perceptions of the standards for evaluation.
Table 39 indicated that administrators and teachers in school districts with a student population of 1,000 or fewer held the highest perceptions of satisfaction with the teacher evaluation process, with a general decline in perceptions among teachers and administrators in larger districts. The mean difference between the perceptions of this smallest district group and those of their peers in each of the other district size groupings was statistically significant, with the least favorable response noted by teachers and administrators in school districts with a student population of 50,001 or more. This finding mirrors the lowest and highest scores indicated towards the statements of standards (see Table 32).
When the data for the perceptions of administrators and teachers were examined against their schools' API scores, the finding for the statements of satisfaction was the opposite of that for the statements of standards. Certificated personnel working in schools that achieved an API score above 800 indicated more favorable perceptions than did their peers in schools that did not meet the benchmark API score of 800.
Figure 2: Elementary School Level
Histogram - Responses to Dependent Variables: Standards
[Histogram of composite Standards scores (x-axis: Standards, 1.00-4.00; y-axis: Frequency). Mean = 2.7612, Std. Dev. = 0.53749, N = 431.]
While Figure 2 represents a slightly more favorable overall response to the standards of teacher evaluation, a closer examination of specific responses by teachers and administrators indicated more divergent means, as evidenced by the spikes in the response frequencies noted above.
Figure 3: Elementary School Level
Histogram - Responses to Dependent Variables: Satisfaction
[Histogram of composite Satisfaction scores (x-axis: Satisfaction, 1.50-4.00; y-axis: Frequency). Mean = 2.6892, Std. Dev. = 0.53049, N = 428.]
Similar to the previous figure, Figure 3 indicated a disparate distribution of responses. When matched against the factorial ANOVA for satisfaction, teachers and administrators indicated dissimilar mean scores for the statements of satisfaction towards the teacher evaluation process. The histogram also demonstrated responses considerably greater and less than the scale midpoint of 2.5 on a scale of 1 to 4, with one indicating strong disagreement and four indicating strong agreement.
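Because the original SPSS charts did not reproduce cleanly, the following brief matplotlib sketch shows how histograms comparable to Figures 2 and 3 could be regenerated from the composite scores. The DataFrame elem, the column names, and the bin settings are assumptions for illustration, not the original chart specifications.

import matplotlib.pyplot as plt

# `elem` is assumed to hold composite 'standards' and 'satisfaction' scores,
# each the mean of its Likert items on the 1-4 scale.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, column in zip(axes, ["standards", "satisfaction"]):
    scores = elem[column].dropna()
    ax.hist(scores, bins=12, range=(1, 4), edgecolor="black")
    ax.set_title(f"{column.title()}: mean = {scores.mean():.2f}, N = {len(scores)}")
    ax.set_xlabel(column.title())
    ax.set_ylabel("Frequency")
plt.tight_layout()
plt.show()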
Analysis of the K-12 Data
Comparison of the Means of Different School Levels
Table 41 provides an analysis of the means of each of the three school
levels: elementary, middle, and high. Analysis of individual survey items
provides the background to understand identified trends among mean scores.
For standards statements 19 - 22, a distinct trend was noted. The collective mean scores of perceptions were highest among administrators and teachers at the elementary school level. The overall mean scores progressively declined for the middle and high school levels. With a range of 1 - 4 and a scale midpoint of 2.5, the overall K-12 standards mean score was 2.73, representing 815 valid responses. Agreement with the statements of standards required a score of 3.0 or higher.
The only exception in this section of the survey results was V21 - Feasibility, for which administrators and teachers at the middle school level perceived higher agreement than their counterparts at the elementary level. That is, they indicated that the evaluation system at their site was as easy to implement as possible, efficient in the use of time and resources, adequately funded, and otherwise viable.
Among satisfaction statements 23 - 29, a distinct trend was also noted. As with the standards statements, the mean scores were highest among administrators and teachers at the elementary school level and progressively lower at the middle and high school levels, with the exception of V27 - Resources at the middle school level. This aberration skewed the overall mean satisfaction score for the middle school level. With a range of 1 - 4 and a scale midpoint of 2.5, the overall K-12 satisfaction mean score was 2.66, representing 807 valid responses. Agreement with the statements of satisfaction required a score of 3.0 or higher.
Table 41
Analysis of the K-12 Data
Comparison of the Means of Different School Levels
* Minimum of 1 = Strongly Disagree; Maximum of 4 = Strongly Agree
Questions 19 - 35                   N-Elem   Mean   Std Dev   N-Middle   Mean   Std Dev   N-High   Mean   Std Dev   N-K-12   Mean   Std Dev
19) Teacher Welfare 427 2.98 0.67 210 2.89 0.581 169 2.84 0.621 808 2.93 0.639
20) Utility 428 2.60 0.76 212 2.57 0.709 170 2.49 0.740 812 2.57 0.741
21) Feasibility 424 2.56 0.69 209 2.66 0.669 167 2.44 0.733 801 2.56 0.698
22) Accuracy 427 2.89 0.64 211 2.89 0.583 170 2.79 0.643 810 2.87 0.629
23) Clear Policies 426 2.87 0.66 207 2.85 0.601 169 2.81 0.654 803 2.85 0.641
24) Alignment 426 3.02 0.62 208 3.00 0.506 167 2.95 0.604 802 3.00 0.588
25) Training 425 2.58 0.76 208 2.53 0.742 168 2.43 0.779 802 2.54 0.762
26) Sufficient Time 425 2.48 0.81 207 2.46 0.708 167 2.37 0.810 799 2.45 0.782
27) Resources 427 2.34 0.76 204 2.43 0.688 166 2.33 0.750 799 2.37 0.739
28) Teacher Practice 425 2.54 0.72 208 2.53 0.735 168 2.43 0.747 802 2.52 0.729
29) Professional Development 425 2.99 0.53 208 2.84 0.567 168 2.86 0.582 802 2.92 0.555
30) New Tchrs - Should Differ 428 2.34 0.69 207 3.22 0.703 164 3.27 0.776 800 3.30 0.713
31) New Tchrs - Process Differs 425 2.29 0.72 205 2.20 0.757 164 2.28 0.714 794 2.26 0.726
32) Use Student Input 427 2.07 0.71 207 2.16 0.745 165 2.55 0.807 800 2.19 0.763
33) Uses Student Input 426 1.79 0.50 205 1.68 0.553 162 1.85 0.480 793 1.78 0.51
34) Use Test Scores 427 2.20 0.84 205 2.10 0.807 165 2.37 0.864 798 2.21 0.843
35) Uses Test Scores 427 1.92 0.60 206 1.83 0.566 160 1.91 0.519 793 1.90 0.581
Standards Mean: 431 2.76 0.54 209 2.75 0.466 170 2.64 0.522 815 2.73 0.522
Satisfaction Mean: 428 2.69 0.53 211 2.66 0.488 168 2.60 0.524 807 2.66 0.512
Valid N (listwise): 400 191 151 630
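The level-by-level summary in Table 41 amounts to computing the N, mean, and standard deviation of each survey item within each school level. A minimal pandas sketch of that computation follows; the DataFrame k12, its 'level' column, and the item column names (v19-v35 plus the two composites) are hypothetical stand-ins for the study's actual variable names.

import pandas as pd

# Item columns assumed present in a hypothetical respondent-level DataFrame `k12`.
item_cols = [f"v{i}" for i in range(19, 36)] + ["standards", "satisfaction"]

# N, mean, and standard deviation per school level (elementary, middle, high).
by_level = k12.groupby("level")[item_cols].agg(["count", "mean", "std"]).round(3)
print(by_level)

# The K-12 columns of Table 41 are the same aggregation without grouping.
print(k12[item_cols].agg(["count", "mean", "std"]).round(3))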
Table 42
K-12 Mean Responses to Independent Variables: Standards
Descriptives
Standards Variables (V19 - V22)
V1 Position         N     Mean    Std. Deviation    Std. Error
1 Principal         176   2.847   .518              .039
2 Asst Principal    68    2.683   .459              .056
3 Teacher           510   2.674   .517              .023
4 Other Admin       59    2.915   .545              .071
Total               813   2.729   .521              .018
Table 43
K-12 Mean Responses to Independent Variables: Standards
ANOVA
Standards Variables (V19 - V22)
                   Sum of Squares    df    Mean Square    F        Sig.
Between Groups     6.187               3   2.062          7.789    .000
Within Groups      214.183           809   .265
Total              220.370           812
Table 44
K-12 Mean Responses to Independent Variables: Standards
Multiple Comparisons
Least Significant Difference (LSD)
(I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
1 Principal        2 Asst Prin         .164(*)                 .074          .026
                   3 Teacher           .173(*)                 .045          .000
                   4 Other Admin      -.069                    .077          .375
2                  1                  -.164(*)                 .074          .026
                   3                   .009                    .066          .893
                   4                  -.233(*)                 .092          .011
3                  1                  -.173(*)                 .045          .000
                   2                  -.009                    .066          .893
                   4                  -.242(*)                 .071          .001
4                  1                   .069                    .077          .375
                   2                   .233(*)                 .092          .011
                   3                   .242(*)                 .071          .001
* The mean difference is significant at the .05 level.
The descriptives, ANOVA, and multiple comparisons for the dependent variable of standards (Tables 42-44) for the evaluation of educational personnel indicated significant differences between respondent groups. In addition, the total mean indicated general disagreement with statements concerning teacher welfare; evaluations being informative, timely, and influential; evaluations being easy to conduct and efficient; and evaluations being linked to objective data.
Table 45
K-12 Mean Responses to Independent Variables: Satisfaction
Descriptives
Satisfaction Variables (V23 - V29)
V1 Position         N     Mean    Std. Deviation    Std. Error
1 Principal         175   2.793   .510              .039
2 Asst Principal    67    2.560   .462              .056
3 Teacher           504   2.607   .490              .022
4 Other Admin       59    2.874   .624              .081
Total               805   2.663   .512              .018
Table 46
K-12 Mean Responses to Independent Variables: Satisfaction
ANOVA
Satisfaction Variables (V23 - V29)
                   Sum of Squares    df    Mean Square    F         Sig.
Between Groups     7.830               3   2.610          10.324    .000
Within Groups      202.496           801   .253
Total              210.327           804
Table 47
K-12 Mean Responses to Independent Variables: Satisfaction
Multiple Comparisons
Least Significant Difference (LSD)
(I) V1 Position    (J) V1 Position    Mean Difference (I-J)    Std. Error    Sig.
1 Principal        2 Asst Prin         .232(*)                 .072          .001
                   3 Teacher           .185(*)                 .044          .000
                   4 Other Admin      -.081                    .076          .285
2                  1                  -.232(*)                 .072          .001
                   3                  -.047                    .065          .473
                   4                  -.313(*)                 .090          .001
3                  1                  -.185(*)                 .044          .000
                   2                   .047                    .065          .473
                   4                  -.266(*)                 .069          .000
4                  1                   .081                    .076          .285
                   2                   .313(*)                 .090          .001
                   3                   .266(*)                 .069          .000
* The mean difference is significant at the .05 level.
The descriptives, ANOVA, and multiple comparisons for the dependent variable of satisfaction (Tables 45-47) for the evaluation of educational personnel indicated significant differences between respondent groups. In addition, the total mean indicated general disagreement with the statements of satisfaction regarding the effectiveness of the evaluation process.
The K-12 mean responses to standards and satisfaction indicated general disagreement. Principals and other administrators indicated higher ratings of agreement than did assistant principals and teachers (see Tables 42 and 45). The mean differences between these position groups were statistically significant, as indicated in Tables 44 and 47. These differences persisted across the range of independent variables, including gender, years of experience, education level, district size, school size, and API score, among others.
Qualitative Analysis of Item 36: Comments
During the course of the questionnaire data collection, 194 comments
were noted by respondents. Certain themes emerged from disaggregating the data
and then filtering the comments. Seventeen comments were off topic, and
identified as N/A. Twenty-five additional comments were of a general nature and
not relevant to the topic of K-12 teacher evaluation. The following themes
categorized meaningful comments:
• Evaluation Process
• Collective Bargaining
• Administrative Training Issues
• Impact on Teacher Practices
• Beginning and Experienced Teachers
• Preference for Alternative Evaluation Methods
• Use of Student Achievement
• Use of Student Input
• Value Added Measures
Evaluation Process
The theme of the evaluation process included a considerable number of comments indicating three subtopics. Fourteen comments considered the teacher evaluation process effective. Forty-six comments deemed the teacher evaluation process not effective. Five comments indicated that the teacher evaluation process is punitive towards teachers, whether specific to the respondent or to teachers in general. Nine individuals wrote poignant comments along the lines that the evaluation process is not effective.
Representative of this view, one respondent commented, “Good luck in finding
something that is truly fair and God-like.” Another respondent quipped,
Evaluations seem to be a waste of time since there is no follow up or even
money to help us with professional development. I cannot afford to do it
all on my own. As it is, I tutor just to afford to live somewhat near my
school. Tuition at the local college or rent to offset high gas prices - more
education will have to wait.
“As a ‘seasoned’ educator, I have never found the time taken for the principal to evaluate or the teacher to fill out appropriate paper work to be effective.” This matter-of-fact comment reflected the preponderance of comments deeming the evaluation process not effective rather than effective.
A common view of respondents shared that, “Teacher evaluations are just
another hoop to jump through. They have little relevance to actually improving
education.” Still, another wrote that,
We are teachers because we want to teach. We are so needlessly bogged
down by paperwork, including these ridiculous evaluations of ourselves
that our time to plan and prepare is highly limited... After all, we are so
tired at the end of our days, and many days have meetings and classes we
choose to take on our own time as well. Someone needs to realize that we
often wake up in the middle of the night wondering how to make things
better. We would do this even without evaluations. Evaluations are such
a waste of our time.
The perceived futility of teacher evaluation was captured in the comment, “Most teachers don't see the process as a learning tool; it's just something they go through the motions to get done.”
As noted above, fourteen respondents did express support for the evaluation process as being effective. One respondent, evidently an administrator, remarked, “I take evaluation very seriously and find it to be a great tool to guide professional growth and development.” Another respondent commented, “I really
feel that the new process of evaluation with the state standards is highly
effective.” A third respondent (a teacher) commented that,
As long as administrators are sufficiently trained in the evaluative process,
and student performance (a notoriously poor indicator) is not taken into
account, I feel comfortable being observed and evaluated. I appreciate the
input. There's always room for improvement, and accountability should
occur at every level, even for veteran teachers.
Surprisingly, four respondents were compelled to indicate the potential for
the teacher evaluation process to be quite punitive. One comment shared that, “I
believe the evaluation process is too punitive and too regulated by the unions to
be effective. Litigation scares the administration and little changes because of it.”
Another respondent indicated that,
Evaluations at our school are a farce! Recent administrators have used
them as rewards for those they like and punishments for those they want to
torment. They miss the whole point of the process — to monitor and
improve teaching at their campus.
In alignment with this statement one respondent wrote, “Evaluations are
often subjective at my site. Many times, it is also punitive.”
One final comment reflected a real concern over the use of evaluations in
her district. She noted that,
All too often, I have found evaluation to be a tool to "keep teachers in
line" rather than its original purpose. It is unfortunate. During a recent
labor dispute, for instance, my administrator required three full hours of
observation to alleviate her "concerns" about my instruction. Though I
had been mentor teacher for many years, have delivered district staff
development on instructional content and delivery, and received positive
evaluations every year — including the ones she wrote, she kept finding
another reason to return to evaluate. The staff considered it a joke and the
credibility for the process was lost. Did I mention that I was the CTA
Chapter president?!?
Collective Bargaining
The theme of collective bargaining garnered twenty-seven comments,
predominantly indicating that teacher contracts and tenure have an adverse impact
on teacher practices and teacher evaluations.
The formal evaluation at the end of the year is contrived, the result of
collective bargaining. It is but a slice of the entire year. Evaluation
should be ongoing, aimed at improving the teacher in a variety of unique
areas as well as rewarding the teacher for a job well done.
This comment reflects a common belief among respondents.
Regarding the overly protective nature of tenure, several respondents
(representing teachers and evaluators) expressed opinions. One stated,
I am a veteran teacher. I must say that tenure protects weak teachers.
Fear of conflict, retribution, and even lawsuits prevents qualified
administrators from making observations and evaluations worth the time
and energy. Bad teachers get a free ride, while skilled teachers have to
carry more than their share of responsibility to compensate for negligent
peers. My union has considerable value, but tenure the way it presently
works hurts us all, including students much more than it protects.
An administrator indicated that, “Until tenure is reworked and aligned
more closely to competency and effectiveness, there will always be lots of bad
teachers in the public school system...” Another commented that,
The evaluation system in Calif, is strongly tied to Ed. Code and collective
bargaining agreements. The use of data should be a strong factor, but
current Ed. Code and coll. agreements prohibit. This needs to be
addressed if the process is going to have validity and useful with the
teacher and administrator.
Still another wrote, “Teacher tenure is too poor and allows bad instruction
and teachers to damage children.”
Additional comments exposed the challenge that tenure creates,
In our district, we are still contractually bound to the limitations of the
STULL Bill. CSTP are available as an option but teachers rarely choose
them, the union blocks any change to these meaningless evaluation
procedures for fear of being challenged to change practice, and because
they don't understand the need for veteran teacher growth and change.
Many teachers believe they should not be evaluated by any administrator,
but they would never allow a peer system either. The current system is a
dead end leading to no help for improving instruction for students. As a
school administrator in California, I think the contractual controls on
teacher evaluation practices are one of the ugliest secrets we keep from the
public. Job protection, which is linked to these evaluations, is unknown to
the average person. NO ONE has these kinds of job protections and
insulations from meaningful performance feedback; why would they?
They lead to what we have created in California; a license to coast.
Respondents voiced that, “With the concept of tenure always lurking,
teacher evaluation has VERY LITTLE connection to change in practice.” Still
another said, “However, those teachers who really need to benefit from the
evaluation process don't. It seems that is too difficult to correct or dismiss tenured
teachers.”
“I think that tenure is a necessary component of teacher evaluations and is
an antiquated concept that can potentially breed complacency,” was written by
one respondent. Another added,
I see tenured teachers who pass their evaluations and have no business
being in the teaching profession. They are lazy and do practically nothing
all year and the teachers receiving their students the following year, have
to work harder at getting these kids up to where they should be. I do not
believe teachers should be automatically tenured. (I have been a teacher
for 32 years.)
But, the tenure system makes it difficult to affect change in recalcitrant
teachers and the attorney costs are prohibitive in trying to release poor
teachers. I really believe student growth and value-added assessment
should become part of the evaluation process and pay should be dependent
on results of student growth under a "value-added" type of system. Right
now, it is easy to nurture non-tenured teachers, but the success rate in
improving the outcomes for poor teachers is dismal, exclaimed one
administrator.
Collective bargaining has an inherent binding nature that, respondents expressed, inhibits any substantive influence by teacher evaluations, whether formative or summative. “In a utopian world, you could really dismiss
ineffective teachers quickly regardless of tenure” represents this view, as does,
“Our evaluation process is driven by our negotiated contract that does not allow
administrators to utilize effectively the California Standards for effective
teaching.”
One particular respondent summed up a recurring thought representative
of others:
My current district uses a model from 1978 that is cumbersome and
irrelevant to current standards and practice; that is why my answers vary.
I know and understand from my own professional experience what good
evaluations should be; the local union is vastly out-of-touch with the
CSTP and present teacher training programs.
Administrative Training Issues
The third theme relates to administrator practices, namely that additional evaluator training is required. These comments also suggested that gaps and time constraints exist which negatively influence the efficacy of teacher evaluation.
The quantitative data indicated that fifty-five percent of respondents disagreed
with the statement that evaluators spend sufficient time to evaluate teaching
performance accurately. Forty-eight percent of the respondents challenged the
statement that evaluators are sufficiently trained to evaluate teacher performance
accurately. Comments corroborated these findings. The qualitative responses
garnered forty-six of the 194 total comments. One comment strongly asserted,
“I do not feel that administrators are adequately trained to be good instructional
observers, coaches, or evaluators. Not enough focus is put on administrators
being in the classroom.” A similar comment voiced, “More training and emphasis
needs to be implemented in order to assist administrators in this process.” One
administrator related,
I truly think that evaluation and observation of teachers is one of the
largest components of our job. If we are effective, we are then able truly
to work with teachers on an individual basis to challenge them, as well as
provide resources for them as needed (especially new teachers). Only if
we are consistently observing what is going on in the classroom, can we
then see the need for effective staff development opportunities.
Unfortunately, this is an area (that) I wish I had more time. Although it is
a priority for me, I often spend a good portion of my day balancing my
other responsibilities!
The lack of qualified experience by evaluators was noted in the following
comments. A Dean of Students wrote,
I am a first year Dean of Students in a middle school. I found that some of
the questions were difficult to answer as a dean. I have received little, if
any formal clinical training for teacher observation, supervision, and
evaluation outside of my Tier I program.
A teacher remarked,
When my AP observes me, I have my RSP students doing routine work.
She would not know how to recognize standards-based instruction. She
was assigned to our school after flaming out at the high school. I am self
motivated to learn and apply new strategies with my students. My AP is
not qualified to effectively evaluate and mentor me. I am not alone in
these feelings.
Another assistant principal was described in a teacher’s statement,
Some of my administrators are very qualified to evaluate teachers. One
has a background in PE in private schools. She would not know what
great teaching was if it hit her in the head. She is no more qualified to
observe and evaluate than our custodians.
Time constraints were the focus of critical comments. One respondent
noted,
We have 45 classroom teachers and only one administrator. There is no
way our administrator can be in our rooms often enough to really know
what's going on and to make any sort of holistic evaluation of our teaching
and still do the other required parts of the principal’s job. I wish our
principal really knew what was going on in my room. I'm proud of what I
do and wish those in charge knew the ins and outs of what goes on in the
classrooms a lot more than they do. I do not want to be watched like a
hawk - but I would like to be supported from a place of true experience
observing me in action.
Another commented,
The intent of the evaluation process is good. Nevertheless, people are too
overworked (ADMINISTRATORS AND TEACHERS) to make it a
process that really enhances ones teaching ability. It appears (as) another
"paper item" (that) we need to complete. I think it is often only used to
fire or not rehire teachers.
In a few instances, a blistering indictment of the gap in
administrator/evaluator efficacy can be summed up with this remark: “My
principal never comes in my room except to get my signature on my evaluation!”
The challenge of administrator turnover was noted as a recurring comment
relative to this theme. A respondent stated,
The biggest problem for me has always been that there is little
communication beforehand. Moreover, we have a high turnover rate for
admin personnel, so every other year, I feel like another administrator is
trying to turn me into their protege.
Another example of this perspective was seen as,
I have had four principals in nine years. Their experiences vary
considerably — especially when it comes to skill, interest, and emphasis on
evaluations. The district does not appear to have any sense of training for
them. I am a tenured teacher; the evaluations seem as pretty much of a
waste of time. Instead, we should spend money and time towards
ongoing, meaningful, professional development. We never go back and
revisit the end of the year evaluations -- and the discussions of
professional development in the final conference. In that sense,
evaluations for tenured teachers are a joke. I want to grow as a teacher.
Evaluations are not the means for me to do so — even with a
knowledgeable, sincere principal. Their hands are tied by the policies and
process.
The competency and bias of evaluators were highlighted by several
comments. In one, the respondent exclaimed, “Evaluations are only as good as
the desired outcome of the evaluator.” Another replied, “These answers would
vary widely with a change (again) in administration. The subjective factor is
huge. I'm on my third principal at this school and (while I've done well with
each) their evaluations are all very different.” Still another stated that, “If
evaluations were performed by competent administrators or teachers then the
feedback could be invaluable. (Nevertheless) sadly, most evaluations are done by
the site administrator, whether or not they have experience in an academic
classroom.” Frustration was pointed out through the comment,
The evaluation process at my site is inconsistent. The administrators put
one thing in writing and do not follow through with it. However, they
expect the teacher to maintain all records, which they then review at the
end of the year, making their evaluation from those materials. Often they
observe the classroom once and base an entire year of teaching on that
ONE observation.
Impact on Teacher Practices
The fourth identified theme concerned the influence of teacher evaluation on teachers’ classroom practices. Forty-six respondents commented that evaluations have little, if any, such influence. In the quantitative survey, respondents were evenly split: fifty percent disagreed with the statement that the evaluation process affects teacher practice, while the remaining fifty percent agreed that it does. This may appear somewhat benign; however, the large sample size, coupled with very strong opinions, collectively indicates a negative trend. Of the twenty-seven comments, all spoke to the negative effect. One commented that,
For the most part, my evaluation experiences have been similar to doing
jury duty. Once in a while, an alternative evaluation is nice for a change.
Now and then, evaluators give people a bad time. Overall, it does not
affect my teaching.
Another respondent voiced,
Our evaluation document has satisfactory as the highest-level one can
receive. Our document is based on the CSTP. With almost all of our
teachers being satisfactory or above the document is of little value. If our
document had a fourth level (exceeds standard) the document would be a
more valuable tool. It is very difficult to find teachers in our district that
are not satisfactory. (It’s) just one more task to complete before the end of
the year. Section six, Developing as a Professional Educator is probably
the most valuable of the standards when it comes to improving instruction.
Asserting his or her strong opinion, one respondent wrote:
Evaluations have provided me little substance, without any follow-through
or support for professional development. I see the time we spend as
wasted. My growth experiences come as a result of my own commitment
to make a bigger difference in the lives of my second graders.
Another commented that, “Although the CSTP’s and various models for
teacher evaluation based on them are a good idea, and Peer Assistance and
Review have been written into contracts, the actual practice does not come at all
near to the promise.” Presenting a different view, a respondent recorded,
After 32 years of teaching French and Spanish, I motivate myself to
improve as a teacher. My evaluations every two years do nothing more
than waste paper. Sure, it's nice to hear that I am doing a good job, but the
evaluation doesn't really get me to change the way I teach - my students
do.
One final comment regarding the influence of evaluations upon teacher
practices adds:
Evaluations, in particular goals and objectives, are virtually useless. Not
to appear totally negative, but they really have not changed my teaching
practices, nor of any teachers (that) I work with, to my knowledge. They
are just more hoops for teachers to jump through. The money and effort
spent on goals and objectives could be far better spent reducing class size,
buying supplies, and in general making teaching easier and more
enjoyable.
These opinions indicate that much can be done to improve instructional
practices.
Beginning and Experienced Teachers
The fifth theme emerged from twenty-four comments addressing the need for differentiated evaluation procedures and processes for beginning teachers compared to experienced teachers. While the comments generally followed up on survey questions 31 and 32, they also paralleled comments noted under collective bargaining and the impact of evaluation on teacher classroom practices. Teachers expressed opinions that evaluations have little value for experienced or veteran teachers. Other veteran teachers indicated that, while they have tremendous concern for instructional improvement and increased student achievement, factors noted in the other themes inhibit the effectiveness of their evaluation experiences. Teachers and administrators also indicated that beginning teachers warrant uniquely tailored support and evaluation practices.
Preference for Alternative Evaluation Methods
The sixth theme, Preference for Alternative Evaluation Methods, included
nine comments. One respondent wrote, “In order for evaluations to be more
meaningful...a peer chosen by the teacher should be involved. Many teachers are
defensive about criticism...even if it is constructive. A peer could really help,
especially a beginning teacher.”
Use of Student Achievement
The seventh theme noted in analyzing the 194 comments covered the topic
of using student achievement in evaluating teachers. Eight respondents supported
such use while seventeen respondents opposed linking student achievement to
teacher evaluations. One respondent penned,
Student comments or test scores should never be used to judge teachers.
Students are the product of their environment, most of which the teacher
has little influence over. Tests are snapshots of a student's performance
and do not take into account family, or social concerns the student might
be more worried about at the time of the test.
Use of Student Input
The use of student input did not receive many additional comments. Nine respondents supported the use of such input. Most of these were teachers and administrators working in secondary settings. This was expected in light of the review of the literature and the reasoning that high school students can provide reliable data pertaining to their teachers’ instructional practices. Three respondents felt compelled to voice opposition to the use of student input, particularly due to the perceived inability of younger students to give bias-free input.
Value Added Measures
The final theme that emerged from the 194 total comments pertained to the need for a more refined approach to linking student achievement to teacher evaluations. Eight respondents commented along these lines. One wrote, “I think student "growth" should be used for evaluative purposes - but not just simply standardized test scores - multiple measures need to be considered.” Another
respondent commented,
No matter how wonderful a teacher is, some students cannot perform
above a certain level and other students will perform extremely well. If
someone assessed help at home, knowledge of English, IQ of students,
required after-school jobs to support family, and about 100 other criteria
that effects students' performance, then it might be wise to base a teacher's
abilities with test scores. I think it would be wiser to take schools of
similar demographics and have the more successful schools (test results
wise) give suggestions to other schools in that group.
Summary of Findings - Elementary Quantitative Data
1. Survey results demonstrated significant differences between principals and teachers in statements representing both standards and satisfaction in schools with an API at or above 800, with administrators indicating much more favorable agreement than did teachers.
2. Findings indicated a higher perception of agreement with both standards and
satisfaction of the teacher evaluation process by administrators in schools with
an API below 800, than the perceptions by teachers in similar schools.
3. When API was a qualifier, the mean scores of administrators and teachers indicated slightly favorable perceptions of the statements of standards (2.74 and 2.78 respectively, with a scale midpoint of 2.5). Similar perceptions demonstrating modest support for the statements of satisfaction were noted (2.68 and 2.70 respectively, with a scale midpoint of 2.5).
4. Findings indicated a considerably higher perception of agreement with both standards and satisfaction of the teacher evaluation process by administrators in schools within districts serving more than 10,000 students, as well as districts serving 10,000 or fewer students, than the perceptions by teachers in similar schools.
5. Administrators and teachers with more than five years of experience indicated differing perceptions of both sets of dependent variables. Teachers expressed significantly lower, almost neutral, views, whereas principals and other administrators indicated relative agreement with the statements of standards and satisfaction.
6. Regarding both sets of dependent variables, principals and assistant principals with five years or less experience expressed notably more support than did teachers and other administrators, who expressed neutral views or disagreement with the statements, although these differences were not statistically significant.
Summary of Findings - K-12 Quantitative Data
1. K-12 administrators and teachers indicated considerable disagreement
with the statement that teacher evaluations are informative, timely, and
influential, particularly at the secondary levels.
2. K-12 administrators and teachers indicated considerable disagreement
with the statement that teacher evaluations are easy to conduct and
efficient, particularly at the elementary and high school levels.
3. K-12 administrators and teachers indicated considerable disagreement
with the statement that evaluators are sufficiently trained to evaluate
teacher performance accurately, particularly at the middle and high school
levels.
4. K-12 administrators and teachers indicated strong disagreement with the
statement that sufficient resources are spent on evaluations, particularly at
the elementary and high school levels.
5. K-12 administrators and teachers indicated considerable disagreement with the statement that the teacher evaluation process affects teacher practice, particularly at the secondary levels. Teachers and administrators, on average, indicated a relatively neutral viewpoint at the elementary level.
6. K-12 administrators and teachers at all instructional levels indicated strong
agreement with the statement that the teacher evaluation process should
differ for beginning and experienced teachers.
7. K-12 administrators and teachers at all instructional levels indicated
considerable disagreement with the statement that the teacher evaluation
process does differ for beginning and experienced teachers.
8. K-12 administrators and teachers at all instructional levels indicated strong
disagreement with the statement that student input should be used in
evaluating teachers.
9. K-12 administrators and teachers at all instructional levels indicated strong
disagreement with the statement that student achievement on standardized
test scores should be used in evaluating teachers.
10. The K-12 factorial ANOVA completed for perceptions of evaluation standards and statements of satisfaction indicated similar outcomes. Each suggested statistically significant differences between position groups, particularly between principals and other administrators compared to assistant principals and teachers. The data for each subgroup indicated considerable disagreement with survey statements in both standards and satisfaction.
CHAPTER V
CONCLUSIONS, RECOMMENDATIONS, and IMPLICATIONS
Introduction
This study examined K-12 administrators and teachers’ perceptions of the
teacher evaluation process in California’s public schools, with particular focus on
the elementary school level. Evaluation of public K-12 teachers in the State of
California serves to enhance classroom practices and facilitate professional
development in compliance with state and local legislative and policy
requirements. In this era of high-stakes testing and calls for increased
accountability of public education for both student achievement and the use of
available resources, the process of evaluation of certificated personnel required
examination in terms of overall efficacy, and its role in maximizing the
achievement of desired goals.
Teacher evaluation has the potential of greatly influencing classroom
practices and student achievement. Teacher evaluation, when done well, has a
significant influence on a school’s culture (Danielson, 2002). Simply, culture refers to “the way things are done around here” (Bolman & Deal, 2003; Deal & Kennedy, 1982, 2000). Much can be said for the importance of organizations
working towards shared values and the success of the common good. Effective
evaluations provide stakeholders assurances of high standards for teacher
performance while promoting professional learning. In a particular school, a
culture of mutual respect and shared high expectations for students and staff can
yield positive outcomes and enhanced success (Danielson, 2002).
Despite considerable research, past practices, and state and local mandates addressing the process and functions of evaluations, individual teacher evaluations oftentimes serve to fulfill a requirement and then simply get filed away, never to be seen or considered again. Much time and energy on the part of evaluators, teachers, and occasionally support staff is seemingly wasted. Unless careful consideration is given to the perceptions of principals, other site evaluators, and teachers about evaluation and the evaluation process, time and resources might be better allocated to other important school functions and personnel. To date, consideration of such perceptions among evaluators and teachers in California’s K-12 public schools has been limited to qualitative site case studies, with teacher evaluation given little merit. A review of the literature suggested that some quantitative examination of evaluation practices has occurred outside of California. Such efforts were limited in scope and did not reflect current trends in school governance, accountability, and standards-based instructional practices.
Purpose of the Study
The purpose of this study was to survey elementary public school
administrators and teachers in California as to their perceptions of the teacher
evaluation process. This study also examined their perceptions regarding the
impact of teacher evaluation on actual classroom practices. Teachers’ voices
relative to emerging teacher evaluation trends have been missing. An increased
understanding of teachers’ perceptions relative to evaluation could provide vital
information on enhanced teaching and learning methodologies, in addition to
apparent weaknesses of newly implemented evaluation processes and systems
(Ovando, 2001). Additionally, this study analyzed differences in the perceptions of teachers and evaluators based on independent criteria such as years of experience in each respective role, the school’s Academic Performance Index (API), and school size, relative to the dependent variables. The dependent variables included items addressing the appropriateness of using students’ test scores in evaluating teachers, the support teachers receive from site and district administrators, the timeliness of feedback, and opportunities to link evaluation outcomes to professional development and other support services, among others.
Literature
A careful review of the literature regarding teacher evaluation generated
an extensive array of qualitative studies, case studies, and empirical data. Few
quantitative studies have taken place, and only four dealt with elements of
evaluation of personnel in education in California. Each of these was a doctoral
dissertation. The oldest of these was completed in 1995, with the remaining three
published in 2000. One of these focused on alternative methods or models of
teacher evaluation with the population of primarily school district
superintendents, and a small percentage of principals. One study surveyed K-6
special education teachers about their alternative evaluation experiences. The
third study surveyed three elementary schools in the California Central Valley,
asking their teachers nine question items about perceptions towards the fairness of
evaluations. The fourth and final quantitative study actually incorporated a
mixed-methods design. This study included 41 high school site administrators
and their collective 175 teachers. Two-thirds of the study’s population
represented private secular schools.
While findings, sample sizes, and methodologies varied greatly, the literature identified and discussed in the chapter helped contribute to a consensus of understanding, in addition to exposing gaps that warrant additional study. Some of the similarities and variances in findings could be attributed to unique characteristics and demographics of the populations (Marzano, 2003). Interestingly, studies completed in the 1970s have shared common threads of findings with studies conducted 20 years later in the 1990s.
The implementation of meaningful research and its implications for improved learning has been surprisingly slow. Calls for moving away from static summative evaluations focused on determining future employment have given way to a multitude of formative, professional growth models. Similar findings and recommendations for differentiated evaluation have been written for over thirty years, yet actual advances in evaluation practices have moved along the continuum quite slowly. Evaluation practices have evolved from checklists to incorporate contemporary standards for the profession. According to a considerable percentage of the literature, summative, end-of-year evaluations appear to hold little value for tenured and experienced teachers. Why, then, are time and resources allocated to these seemingly ritualistic events? A review of the literature provided extensive examination of emerging trends, exposed gaps in research, and provided the impetus for lasting change.
This review also served to guide researchers in understanding the history of personnel evaluation in education, the role of federal, state, and local requirements, broader instructional implications, and the potential impact that evaluation has had on both the affective and academic elements of a school setting. The historical review also highlighted the influence of collective bargaining and the evaluation process upon teacher practices and overall morale.
A review of the literature pertaining to the topic of teacher evaluation uncovered an extensive array of germane studies and materials. Many of the selected articles, studies, and highly regarded texts covering teacher evaluation incorporated state and local school districts’ efforts and reform movements from around the United States. Very few were specific to K-12 public schools within the state of California. Of those researched and written in California, most were specific to one aspect or model of the teacher evaluation process, and followed a case study or qualitative study paradigm.
Research Questions
1. What are administrators’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving at or above an API of
800?
2. What are teachers’ attitudes or perceptions of teacher evaluations in public
elementary schools in California, achieving at or above an API of 800?
3. What are administrators’ attitudes or perceptions of teacher evaluations in
public elementary schools in California, achieving below an API of 800?
4. What are teachers’ attitudes or perceptions of teacher evaluations in public
elementary schools in California, achieving below an API of 800?
5. What are administrators’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving more than 10,000
students?
6. What are teachers’ attitudes or perceptions of teacher evaluations in public
elementary schools within districts serving more than 10,000 students?
7. What are administrators’ attitudes or perceptions of teacher evaluations in
public elementary schools within districts serving 10,000 or less students?
8. What are teachers’ attitudes or perceptions of teacher evaluations in public
elementary schools within districts serving 10,000 or less students?
9. What are administrators’, with more than five years experience, attitudes
or perceptions of teacher evaluations in public elementary schools?
10. What are teachers’, with more than five years experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
11. What are administrators’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
12. What are teachers’, with five years or less experience, attitudes or
perceptions of teacher evaluations in public elementary schools?
Sample and Methodology
Analysis of the K-12 results was completed from 941 total responses from
administrators and teachers. The collected data was subsequently disaggregated
into three levels — elementary, middle and high school. The elementary school
population size used for data analysis included 460 respondents. Invitations to
participate were sent to superintendents in all public school districts throughout
the State of California. Superintendents were contacted via U.S. mail or
e-mail. A letter of introduction described this quantitative study and asked
willing superintendents to encourage their site administrators and respective
K-12 teachers to log on to a designated website
and complete the survey. Directions for completing the survey were available on
the website along with a three-page information sheet for non-medical research
approved by the University of Southern California’s Institutional Review Board.
Data collected via the online survey service was later extensively analyzed by the
researchers.
Names of the school districts and their superintendents were obtained from
the State of California’s Department of Education website:
http://www.cde.ca.gov/re/sd/index.asp, which is public information. The
survey questionnaire was available from any computer with web access.
Participation in the survey was strictly voluntary and required
approximately five (5) minutes to complete. Completion of the survey was not
tracked, and participants remained anonymous. In fact, respondents were not
asked their name, school district, or mail/e-mail address.
A Likert-type scale was used in the development of the survey to measure
teachers' and administrators' perceptions of the teacher evaluation process. The
responses were then compared across various independent variables, including
school size, gender, education level, SES, and years of experience as a teacher
and/or administrator, among others. Seventeen questions addressed statements of
standards, satisfaction/effectiveness, and issues pertaining to the teacher
evaluation process. An additional item permitted brief comments to be added to
the survey questionnaire. Both aggregated and disaggregated responses to the
survey were analyzed to allow the researchers to examine the data for their
respective grade levels. This study focused on the elementary school data, yet
provided comparative analyses of the aggregated K-12 data, as well.
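To make the scoring concrete, the short Python sketch below illustrates one way the four-point Likert responses could be coded numerically and combined into composite standards and satisfaction scores per respondent, then compared across position groups. This is an illustrative reconstruction, not the study's actual analysis code; the file name, the column names (q19 through q29, position), and the assignment of items to the two composites are hypothetical placeholders.

    import pandas as pd

    # Four-point Likert coding; the midpoint of the scale is 2.5.
    LIKERT = {"Strongly Disagree": 1, "Disagree": 2, "Agree": 3, "Strongly Agree": 4}

    # Hypothetical export of the online survey responses.
    df = pd.read_csv("elementary_responses.csv")

    # Illustrative split of survey items into the two dependent-variable composites.
    standards_items = ["q19", "q20", "q21", "q22"]
    satisfaction_items = ["q23", "q24", "q25", "q26", "q27", "q28", "q29"]

    # Convert the text responses to numeric codes.
    for col in standards_items + satisfaction_items:
        df[col] = df[col].map(LIKERT)

    # Composite scores: the mean of each respondent's item codes.
    df["standards"] = df[standards_items].mean(axis=1)
    df["satisfaction"] = df[satisfaction_items].mean(axis=1)

    # Compare group means, e.g., administrators versus teachers.
    print(df.groupby("position")[["standards", "satisfaction"]].mean())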
Data collection was conducted using an online survey service, specifically
surveymonkey.com. Once predetermined
minimum or suitable quantities of responses had been received, the researchers
exported the data into Microsoft Excel 2003 and SPSS v.12 formats to complete
data analysis.
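The factorial analyses of variance reported in the findings were completed in SPSS. As a rough illustration only, an equivalent two-way analysis (for example, position by district size on the satisfaction composite) could be run in Python with statsmodels, as sketched below; the column names are assumptions carried over from the previous sketch, not the study's actual variable names.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical export with the composite scores already computed.
    df = pd.read_csv("elementary_responses.csv")

    # Two-way (factorial) ANOVA: position x district size on the satisfaction composite.
    model = ols("satisfaction ~ C(position) * C(district_size)", data=df).fit()

    # Type II sums of squares, analogous to a standard factorial ANOVA table.
    print(sm.stats.anova_lm(model, typ=2))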
Selected Findings
Summary of Findings - Elementary Quantitative Data, Part I
Responses to Research Questions
1. Survey results demonstrated significant differences between principals and
teachers in statements representing both standards and satisfaction in
schools with an API at or above 800, with administrators indicating much
more favorable agreement than did teachers.
2. Findings indicated a higher perception of agreement with both standards
and satisfaction of the teacher evaluation process by administrators in
schools with an API below 800, than the perceptions by teachers in similar
schools.
3. When API was a qualifier, the mean scores of administrators and teachers
indicated slightly favorable perceptions of statements of standards (2.74
and 2.78 respectively, against a scale midpoint of 2.5). Similar perceptions
demonstrating modest support for the statements of satisfaction were
noted (2.68 and 2.70 respectively, against a scale midpoint of 2.5).
4. Findings indicated a considerably higher perception of agreement with
both standards and satisfaction of the teacher evaluation process by
administrators in schools within districts serving more than 10,000
students, as well as districts serving 10,000 or fewer students, than were
the perceptions by teachers in similar schools.
5. Administrators and teachers with more than five years of experience
indicated differing perceptions of both sets of dependent variables. The data
showed teachers expressing significantly lower, almost neutral, views than did
principals and other administrators, who indicated relative agreement with
the statements of standards and satisfaction.
6. Regarding both sets of dependent variables, principals and assistant
principals with five years or less of experience expressed significantly more
support than did teachers and other administrators, who expressed neutral
views or disagreement with the statements.
Summary of Findings - Elementary Quantitative Data, Part II
About 400 respondents, with teachers outnumbering administrators by a
margin of 9:7, participated in this survey. Further analysis of the elementary data
representing both administrators and teachers in factorial ANOVAs relative to
five additional questions yielded the following findings to postulated questions:
1. What were the overall findings for standards (Tables 27-33) and
satisfaction (Tables 34-40) for elementary administrators and teachers?
Overall, respondents for all four subgroups indicated moderate to
significant disagreement with statements of standards and satisfaction.
Principals and other administrators generally indicated greater agreement
for both sets of dependent variables than assistant principals and teachers.
2. In general, what was the nature of the difference between administrators
and teachers? Teachers indicated considerable disagreement with
evaluation standards, and ranked statements of satisfaction marginally
higher than did administrators with 3-5 years of experience. The
independent variables of district size, API scores, and years of experience
all factored into the identified differences.
3. The administrator and teacher differences in mean scores for standards and
satisfaction cannot be directly attributed to years of experience. Relative
to evaluation standards, administrators and teachers expressed similar
overall scores. Significant differences were noted in standards and
satisfaction with both subgroups noting mean scores lower for individuals
with 3-5 and 6-10 years of experience, than their peers with 0-2, and 11 or
more years.
4. Is the administrator and teacher difference effect attributed to district size?
The factorial ANOVAs indicated similar response mean scores regarding
standards and satisfaction. Both groups indicated the highest ratings
among districts serving 1,000 or fewer students, and secondly among districts of
10,000 - 50,000 students. Administrators indicated higher agreement for
standards and satisfaction than did teachers. Reasons for each of these
outcomes warrant additional research.
5. API score status did indicate greater support for statements of standards
and satisfaction by administrators than teachers did (See elementary
findings 1-3, Part I, above).
6. Concerning the qualitative findings, elementary teachers' comments were
critical of the overall evaluation process and of administrative training, and
expressed the need to differentiate the evaluation of beginning and
experienced teachers, a preference for alternative evaluation methods, and
strong opposition to the use of student achievement and student input.
Administrators wrote explicit comments expressing concern over the role
of collective bargaining, tenure, limited available resources, the need to
increase the impact of evaluation on teachers’ classroom practices, and the
potential for value-added measures to improve the efficacy of teacher
evaluation.
Summary of Findings - K-12 Quantitative Data
As noted in Chapter 4, much of the aggregated K-12 data was examined,
discussed, and summarized in conjunction with two co-researchers, one
representing the middle school data and findings, and one representing the high
school data and findings. The following K-12 Summary of Quantitative and
Qualitative Findings were prepared by the same three researchers, each
representing elementary, middle, and high school levels, respectively.
1. K-12 administrators and teachers indicated considerable disagreement
with the statement that teacher evaluations are informative, timely, and
influential, particularly at the secondary levels.
2. K-12 administrators and teachers indicated considerable disagreement
with the statement that teacher evaluations are easy to conduct and
efficient, particularly at the elementary and high school levels.
3. K-12 administrators and teachers indicated considerable disagreement
with the statement that evaluators are sufficiently trained to evaluate
teacher performance accurately, particularly at the middle and high school
levels.
4. K-12 administrators and teachers indicated strong disagreement with the
statement that sufficient resources are spent on evaluations, particularly at
the elementary and high school levels.
5. K-12 administrators and teachers indicated considerable disagreement
with the statement that the teacher evaluation process affects teacher
practice, particularly at the secondary levels. Teachers and administrators
as a mean indicated a relatively neutral viewpoint at the elementary level.
6. K-12 administrators and teachers at all instructional levels indicated strong
agreement with the statement that the teacher evaluation process should
differ for beginning and experienced teachers.
7. K-12 administrators and teachers at all instructional levels indicated
considerable disagreement with the statement that the teacher evaluation
process does differ for beginning and experienced teachers.
8. K-12 administrators and teachers at all instructional levels indicated strong
disagreement with the statement that student input should be used in
evaluating teachers.
9. K-12 administrators and teachers at all instructional levels indicated strong
disagreement with the statement that student achievement on standardized
test scores should be used in evaluating teachers.
10. The K-12 factorial ANOVA completed for perceptions of evaluation
standards and statements of satisfaction indicated similar outcomes. Each
suggests statistically significant differences between position groups,
particularly between principals and other administrators compared to
assistant principals and teachers. The data for each subgroup indicates
considerable disagreement with survey statements in both standards and
satisfaction.
Summary of Findings - K-12 Qualitative Data
The following meaningful themes were identified from explicit comments:
• Evaluation Process
• Collective Bargaining
• Administrative Training Issues
• Impact on Teacher Practices
• Beginning and Experienced Teachers
• Preference for Alternative Evaluation Methods
• Use of Student Achievement
• Use of Student Input
• Value Added Measures
As noted in detail in Chapter 4, these particular themes emerged, paralleling the
essential questions and communicating candid viewpoints from teachers
and administrators. This qualitative analysis also served to add dimension to the
quantitative outcomes. One theme, Collective Bargaining, was addressed relative to
instrumentation, evaluation functions, and the frequency of formal observations
and evaluations. In the quantitative analysis, 63% of respondents indicated use of
traditional observation protocols or rubrics linked to the California Standards for
the Teaching Professional. Seventy-one percent (71%) of respondents indicated
that classroom observations and summative evaluations are separate functions.
Comments on Item #36 indicated that the influence of teacher evaluations
upon classroom practices diminishes considerably once teachers
achieve tenure. Comments also noted that the training, professionalism, and
effectiveness of the evaluator greatly influence the perceptions of teachers being
evaluated, and subsequently their instructional practices.
Conclusions
As a result of this study, particular conclusions emerged from the
elementary level findings. Regardless of a particular school’s Academic
Performance Index (API), administrators and teachers have distinctly different
viewpoints as to personnel evaluation standards and specific statements of
satisfaction with the teacher evaluation process within California's K-12 public
schools. While administrators expressed greater agreement with both standards
and overall satisfaction, the data suggest that much can be done to improve
both the perceptions and the efficacy of evaluation practices and processes.
Careful examination of the data indicated that the size of the school
district did not influence the perceptions of either administrators or teachers.
When considering the perceptions of administrators and teachers with more than
five years of experience, the results showed considerably higher support for
current standards for teacher evaluations and in statements of satisfaction for the
teacher evaluation process. Other administrators and teachers with five years or
less of experience were clearly less supportive of the teacher evaluation process.
The K-12 quantitative data provided considerable information. The data
indicated that teacher evaluations are not informative, timely, and/or influential.
Nor are teacher evaluations easy to conduct or efficient. The data also
made clear the perception by administrators and teachers that evaluators are not
adequately trained to evaluate teacher performance accurately. The data
suggested that insufficient resources are spent on evaluation. Another conclusion
drawn from the data indicated that, in fact, the teacher evaluation process does not
influence or affect teacher practices, particularly for tenured faculty.
This survey also examined the perceptions of administrators and teachers
relative to items beyond the statements of standards and satisfaction. The data demonstrated
considerable support for differentiated evaluation practices for beginning and
experienced teachers. Currently, the data suggested, such is not the case. Little
support exists, even when disaggregating data for different school levels, for the
use of student input in evaluating teachers. Similarly, a lack of support exists for
linking student achievement on standardized test scores to teacher evaluations.
The data suggests that much can be done to improve the process of teacher
evaluation in California’s K-12 public schools. Conclusions can be drawn from
the perceptions of administrators and teachers to suggest that changes need to be
made. The data also provided quantitative evidence that such views are
representative of populations of teachers and administrators from school districts
of all sizes, with various percentages of English Language Learners, Title I status,
a range in years of experience, education levels, and student achievement as
evidenced by their particular school’s API score. Qualitative analysis of
respondents’ comments corroborated the quantitative data, and voiced similar, If
not more critical conclusions.
Recommendations
1. Amend state legislation and/or local school district policies to change the
teacher evaluation process to a clearly formative, professional growth
model in place of current, less effective, and disparate mechanisms.
2. Amend state legislation and/or local school district policies to align all
mandated teacher evaluations to incorporate the California Standards for
the Teaching Professional. Regardless of the instrument, the CSTPs
provide fair and uniform standards with which to evaluate classroom teachers'
instructional practices.
3. Amend state legislation, local school district policies, and Collective
Bargaining agreements to improve the effect of the teacher evaluation
process upon teacher practices.
4. Change evaluation practices to increase significantly their efficacy in
providing meaningful information, their timeliness, and their ability to
influence classroom instructional practices.
5. Change evaluation practices to streamline implementation and improve
efficiency for both evaluators and certificated teachers.
6. Implement systematic and rigorous evaluator training procedures and
practices to improve the accuracy of evaluating teacher performance.
7. Change evaluation practices to ensure that evaluators spend adequate time
to evaluate teacher performance accurately.
8. Continue to provide professional development suggestions during the
teacher evaluation process.
9. Provide on-going support and mutual accountability features for
professional development during and after the teacher evaluation process.
10. Develop and implement uniform and local policies for differentiated
evaluation practices for beginning and experienced teachers.
Implications for Further Research
1. Conduct research on schools currently using formative evaluation
instruments to determine the impact on classroom practices.
2. Conduct research on causal factors that impede transition to
implementation of CSTP evaluation instruments by out of compliance
school districts.
3. Conduct research on current Collective Bargaining elements that inhibit
the efficacy of evaluation practices on classroom performance.
4. Determine which changes to current evaluation practices would increase
their efficacy in providing meaningful information, their timeliness, and their
ability to influence classroom instructional practices.
5. Determine which changes to current evaluation practices would improve
the ease of conduct and efficiency of teacher evaluations for both
evaluators and certificated teachers.
6. Conduct research on how to improve evaluator training procedures and
practices to enhance the effectiveness and accuracy of evaluating teacher
performance.
7. Determine what changes in methods, resources, and support are required
to increase the time administrators spend in evaluating teacher
performance.
8. Conduct research on causal factors that impede transition to
implementation of CSTP Standard VI - Professional Development, by out
of compliance school districts.
9. Determine ways to incorporate professional development opportunities
and support into ongoing teacher evaluation practices.
10. Conduct research to determine which differentiated teacher evaluation
practices are most effective for beginning and experienced teachers and
how best to implement them.
REFERENCES
Bastarache, R. (2000). Purposes, methods, and effectiveness o f teacher
evaluation: Perceptions o f urban elementary teachers and principals.
Beerens, D. (2000). Evaluating teachers for professional growth: Creating a
culture o f motivation and learning. Thousand Oaks: Corwin Press.
Bolman, L., & Deal, T. (2003). Reframing organizations: Artistry, choice, and
leadership (Third ed.). San Francisco: Jossey-Bass, John Wiley & Sons,
Inc.
Booth, J. A. (2000). The use of additional evaluation and/or assessment criteria
to evaluate special education teachers in K-6 California public schools.
Boyd, R. (1989). Improving teacher evaluations. ERIC Digest no. 111 (071
Information Analyses— ERIC IAPs; 142 Reports— Evaluative). District of
Columbia: ERIC Clearinghouse on Tests, Measurement, and Evaluation,
Washington, DC.
Brandt, R. (1996). On a new direction for teacher evaluation: A conversation with
Tom McGreal. Educational Leadership, 53(6), 30-33.
Bridges, E. M., & Groves, B. R. (1999). The macro- and micropolitics of
personnel evaluation: A framework. Journal o f Personnel Evaluation in
Education, 13(4), 321-337.
Bryant, M., & Curtin, D. (1995). Views of teacher evaluation from novice and
expert evaluators. Journal of Curriculum and Supervision, 10(3), 250-261.
Callard, B. M. (2003). Superintendent's impact on the principal's role as teacher
evaluator. University of Southern California, Los Angeles.
Caracelli, V. J., Preskill, H., Henry, G. T., & Greene, J. C. (2000). The expanding
scope of evaluation use. New directions for evaluation (Vol. 88).
San Francisco: Jossey-Bass.
CDE. (1999). California standards for the teaching profession: Resources for
professional practice. In CCTC (Ed.) (Field Review Version ed., pp. 77).
Oakland: California Department of Education and Educational Testing
Service.
Clark, R. E. (2002). Turning research into results: A guide to selecting the right
performance solutions. Atlanta: Center for Effective Performance.
Colby, S. A. (2001). A comparison o f the impact o f state-mandated and locally
developed teacher evaluation systems. East Carolina University.
Creswell, J. (2002). Research design: Qualitative, quantitative and mixed method
approaches (2nd ed.). Thousand Oaks: Sage Publications.
Curry, S. L. (2000). A study of teachers' perceptions of the teacher evaluation
portfolio in regard to subjectivity, time constraints, and fairness.
University of La Verne, La Verne.
Danielson, C. (2001). New trends in teacher evaluation. Educational Leadership,
58(5), 12-15.
Danielson, C. (2002). Enhancing student achievement: A framework for school
improvement. Alexandria: Association for Supervision and Curriculum
Development.
Danielson, C., & McGreal, T. L. (2000). Teacher evaluation to enhance
professional practice. Alexandria: Association for Supervision and
Curriculum Development.
Davis, D. R., Ellett, C. D., & Annunziata, J. (2002). Teacher evaluation,
leadership and learning organizations. Journal o f Personnel Evaluation in
Education, 16(4), 287-301.
Deal, T. E., & Kennedy, A. A. (1982, 2000). Corporate cultures: The rites and
rituals o f corporate life (Second ed.). Cambridge: Perseus Publishing.
Desander, M. K. (2000). Teacher evaluation and merit pay: Legal considerations,
practical concerns. Journal o f Personnel Evaluation in Education, 14(4),
307-317.
Drake, T., & Roe, W. (2003). The Principalship (6th ed.). Upper Saddle River:
Merrill Prentice Hall.
EdSource. (2004, September 6). Teacher quality overview. Retrieved September
6, 2004.
Ellett, C. D., Annunziata, J., & Schiavone, S. (2002). Web-based support for
teacher evaluation and professional growth: The professional assessment
and comprehensive evaluation system (PACES). Journal of Personnel
Evaluation in Education, 16(1), 63-74.
Ellett, C. D., & Teddlie, C. (2003). Teacher evaluation, teacher effectiveness and
school effectiveness: Perspectives from the USA. Journal o f Personnel
Evaluation in Education, 17(1), 101-128.
Frase, L. E., & Streshly, W. (1994). Lack of accuracy, feedback, and commitment
in teacher evaluation. Journal o f Personnel Evaluation in Education, 5(1),
47-57.
Fullan, M. (2001). Leading in a culture o f change. San Francisco: Jossey-Bass.
Gall, M. D., Gall, J. P., & Borg, W. R. (2002). Educational research: An
introduction (7th ed.). Boston: Allyn & Bacon; Upper Saddle River:
Pearson Education [Distributor].
Glatthorn, A. A. (1997). Differentiated supervision (2nd ed.). Alexandria:
Association for Supervision & Curriculum Development.
Heller, D. A. (2004). Teachers wanted: Attracting and retaining good teachers.
Alexandria: Association for Supervision & Curriculum Development.
Heneman, H. G., Ill, & Milanowski, A. T. (2003). Continuing assessment of
teacher reactions to a standards-based teacher evaluation system. Journal
of Personnel Evaluation in Education, 17(2), 173-195.
Howard, B. B., & McColskey, W. H. (2001). Evaluating experienced teachers.
Educational Leadership, 58(5), 48-51.
Iwanicki, E. F. (2001). Focusing teacher evaluations on student learning.
Educational Leadership, 58(5), 57-59.
Johnson, B. L., Jr. (1997). An organizational analysis of multiple perspectives of
effective teaching: Implications for teacher evaluation. Journal o f
Personnel Evaluation in Education, 11(1), 69-87.
Johnson, B. L., Jr. (1999). Great expectations but politics as usual: The rise and
fall of a state-level teacher evaluation initiative. Journal of Personnel
Evaluation in Education, 15(4), 361-381.
Kelly, L. (1999). Teacher portfolios: Tools for successful evaluations. Education
Update, 41(2), 4.
Kimball, S. M. (2002). Analysis of feedback, enabling conditions and fairness
perceptions of teachers in three school districts with new standards-based
evaluation systems. Journal o f Personnel Evaluation in Education, 16(4),
241-268.
Lofton, G., Hill, F., & Claudet, J. G. (1997). Can state-mandated teacher
evaluation fulfill the promise of school improvement? Events in the life of
one school. Journal o f Personnel Evaluation in Education, 11(1), 139-
165.
Loofbourrow, S., & Duardo, J. (1996). Collective bargaining: Maximizing school
board leadership. West Sacramento: California School Boards
Association.
Loup, K. S., Garland, J. S., Ellett, C. D., & Rugutt, J. K. (1996). Ten years later:
Findings from a replication of a study of teacher evaluation practices in
our 100 largest school districts. Journal o f Personnel Evaluation in
Education, 10(3), 203-226.
Lowe, A. M. (2000). A study o f the evaluation o f secondary school teachers in
selected schools in southern California as perceived by secondary school
teachers and evaluators. Azusa Pacific University, Azusa, CA.
Manning, R. C. (1988). The teacher evaluation handbook: Step-by-step
techniques and forms for improving instruction. Hoboken: Jossey-Bass
Publishers; John Wiley & Sons.
Marzano, R. J. (2003). What works in schools: Translating research into action.
Alexandria: Association for Supervision & Curriculum Development;
Boulder: netLibrary [Distributor].
Maxwell, N. (2004). Data matters: Conceptual statistics for a random world.
Emeryville: Key College Publishing.
McNelly, T. A. (2002). Evaluations that ensure growth: Teachers portfolios.
Principal Leadership, 5(4), 55-60.
McNulty, B. (2004). School leadership that works: Understanding and applying
the research on leadership to school improvement, VIP School Leadership.
Camarillo: McREL.
Milanowski, A. T., & Heneman, H. G., III. (2001). Assessment of teacher
reactions to a standards-based teacher evaluation system: A pilot study.
Journal o f Personnel Evaluation in Education, 15(3), 193-212.
Millman, J., & Darling-Hammond, L. (1989). The new handbook of teacher
evaluation: Assessing elementary and secondary school teachers (2nd
ed.). Thousand Oaks: Corwin Press.
NCLB. (2004). No Child Left Behind Act of 2001. U.S. Department of Education:
http://www.ed.gov/policy/elsec/leg/esea02/index.html
Oakley, K. (1998). The performance assessment system: A portfolio assessment
model for evaluating beginning teachers. Journal o f Personnel Evaluation
in Education, 11(1), 323-341.
Ovando, M. N. (2001). Teachers' perceptions of a learner-centered teacher
evaluation system. Journal o f Personnel Evaluation in Education, 15(3),
213-231.
Painter, S. R. (2000). Principals' efficacy beliefs about teacher evaluation. Journal
o f Education Administration, 55(4), 368-378.
Painter, S. R. (2000). Principals' perceptions of barriers to teacher dismissal.
Journal o f Personnel Evaluation in Education, 14(3), 253-264.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. (3rd
ed.). Thousand Oaks: Sage.
Patton, M. Q. (2002). Qualitative research & evaluation methods (Third ed.).
Thousand Oaks: Sage Publications.
Peterson, K. D. (2000). Teacher evaluation: A comprehensive guide to new
directions and practices (2nd ed.). Thousand Oaks: Corwin Press.
Peterson, K. D., Kelly, P., & Caskey, M. (2002). Ethical considerations for
teachers in the evaluation of other teachers. Journal o f Personnel
Evaluation in Education, 16(4), 317-324.
Peterson, K. D., Stevens, D., & Mack, C. (2001). Presenting complex teacher
evaluation data: Advantages of dossier organization techniques over
portfolios. Journal o f Personnel Evaluation in Education, 15(2), 121-133.
Peterson, K. D., Wahlquist, Bone, K., Thompson, J., & Chatterton, K. (2001).
Using more data sources to evaluate teachers. Educational Leadership,
58(5), 40-43.
Ribas, W. B. (2000). Ascending the ELPS to excellence in your district's teacher
evaluation. Phi Delta Kappan, 81(8), 585-589.
Sando, J. P. (1995). Implementation o f teacher evaluation systems that promote
professional growth. University of Southern California, Los Angeles.
Searfoss, L. W., & Enz, B. J. (1996). Can teacher evaluation reflect holistic
instruction? Educational Leadership, 53(6), 38-41.
Shinkfield, A. J. (1996). Teacher evaluation: Guide to effective practice. Norwell:
Kluwer Academic Publishers.
Stiggins, R., & Duke, D. (1988). The case for commitment to teacher growth:
Research on teacher evaluation. State University of New York.
Stronge, J., & Tucker, P. (2003). Handbook on teacher evaluation: Assessing and
improving performance. Larchmont: Eye On Education.
Stronge, J. H. (2002). Qualities o f effective teachers. Alexandria: Association for
Supervision & Curriculum Development.
Stronge, J. H., & Tucker, P. D. (1999). The politics of teacher evaluation: A case
study of new system design and implementation. Journal o f Personnel
Evaluation in Education, 13(4), 339-359.
Stronge, J. H., & Tucker, P. D. (2000). Teacher evaluation and student
achievement. Annapolis Junction: National Education Association.
Stufflebeam, D. (1998). Conflicts between standards-based and postmodernist
evaluations: Toward rapprochement. Journal o f Personnel Evaluation in
Education, 12(3), 287-296.
Stufflebeam, D. (2001). Evaluation models: New directions for evaluation.
Hoboken: Jossey-Bass Publishers; John Wiley & Sons.
Stufflebeam, D., & Pullin, D. (1998). Achieving legal viability in personnel
evaluations. Journal o f Personnel Evaluation in Education, 2/(1), 215-
230.
Stufflebeam, D. L. (1988). The personnel evaluation standards: How to assess
systems for evaluating educators. Thousand Oaks: Corwin Press.
Sullivan, K. A., & Zirkel, P. A. (1998). The law of teacher evaluation: Case law
update. Journal o f Personnel Evaluation in Education, 22(4), 367-380.
Sullivan, K. A., & Zirkel, P. A. (1999). Documentation in teacher evaluation:
What does the professional literature say? NASSP Bulletin, 83(607), 48-
58.
Van Wagenen, L., & Hibbard, K. M. (1998). Building teacher portfolios.
Educational Leadership, 55(5), 26-29.
Wilson, B., & Wood, J. A. (1996). Teacher evaluation: A national dilemma.
Journal o f Personnel Evaluation in Education, 10(1), 75-82.
Wright, S. P., Horn, S., & Sanders, W. L. (1997). Teacher and classroom context
effects on student achievement: Implications for teacher evaluation.
Journal o f Personnel Evaluation in Education, 11(1), 51-61.
APPENDIX A
TEACHER EVALUATION PERCEPTIONS - SURVEY QUESTIONNAIRE
Teacher Evaluation Study
University of Southern California (USC)
I. Overview of the Study
Welcome, and thank you for participating in this study of K-12 teachers'
and administrators' perceptions of the teacher evaluation process and its impact on
classroom practices. The University of Southern California (USC) is sponsoring
this study through the Rossier School of Education. Your voluntary participation
is completely anonymous. No identifying information will be collected.
The purpose of this study is to determine the effectiveness of the teacher
evaluation process used in the State of California. This research will assist state
and school officials in understanding the perceptions of teachers and
administrators with this process.
For additional information regarding this study, please visit:
http://www.vcnet.com/~wildcats/Misc/infosheet.pdf
The survey was designed to be short and painless. The entire survey will
take about five minutes.
Again, thank you for participating in this research study. To begin the
survey, please scroll down this page and click on next.
II. Survey Questions — Career Experience
Anonymous Individual Facts
1. Current Position
• Principal
• Asst. Principal
• Teacher
• Other Admin
2. Gender
• Male
• Female
3. Years Of Experience As A Teacher
• 0-2
• 3-5
• 6-10
• 11 or more
4. Years Of Experience As An Administrator
• 0-2
• 3-5
• 6-10
• 11 or more
• N/A
5. Education Level
(Highest completed)
• BA/BS
• Bachelors plus 45 units
• MA/MS
• EdD/PhD
III. Survey Questions -- School Demographics
Information About Your School
6. District Size
Total number of students in the district
• 1,000 or less
• 1,001 - 10,000
• 10,001 - 50,000
• 50,001 or more
7. School Size
Total number of students enrolled in your school
• 500 or less
• 501 - 1,000
• 1,001 -2,500
• 2,501 and above
8. School Level
Student grade levels at your school
• Elementary
• K-8 only
• Middle/Intermediate
• High School
9. School's Annual Performance Indicator (API)
• Below 500
• 500-699
• 700 - 875
• 875 and above
10. Was Your School's API Above 800?
• Yes
• No
11. Is Your School Identified As Title I
• Yes
• No
12. Percentage Of Your School's Students Who Are Identified As English
Language Learners?
Percentage of ELL students enrolled in your school
• 0 - 25%
• 26 - 50%
• 51-75%
• Above 76%
IV. Teacher Evaluation Practices
Elements of the Teacher Evaluation Process
13. Type of Evaluation Process Used (*CSTP = California Standards for the
Teaching Professional)
• Traditional Observation/Rubric before CSTP
• Traditional Observation/Rubric with CSTP
• Peer Evaluation
• Portfolio
• Multiple Methods
14. Are Classroom Observations And End Of The Year Evaluations Separate
Functions?
• Yes
• No
15. How Often Are Teachers Formally Observed?
• 3 or more times per year
• 1-2 times per year
• Every other year
• Every five years
• Never
16. According To Your Collective Bargaining Agreement, How Often Should
You Be Formally Observed?
By contract, how often should you be formally observed?
• 3 or more times per year
• 1-2 times per year
• Every other year
• Every five years
• N/A
17. How Often Are Teachers Evaluated (Summative/End Of The Year Process)?
How often are teachers evaluated?
• Yearly
• Every two years
• 3-5 years
• Less often
18. According To Your Collective Bargaining Agreement, How Often Should
You Be Evaluated?
According to contract, how often should you be evaluated?
• Yearly
• Every two years
• 3-5 years
• Less often
• N/A
V. Questions Concerning the Evaluation Process
Perceptions of Evaluations
19. Evaluations Are Conducted With Regard For Teacher Welfare
Concern for teacher welfare
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
20. Evaluations Are Informative, Timely, And Influential
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
21. Evaluations Are Easy To Conduct And Efficient
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
22. Evaluations Are Linked To Objective Data
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
VI. Effectiveness of the Evaluation Process
These Questions Measure One's Satisfaction and Belief in the
Effectiveness of the Evaluation Process.
23. District Policy/Procedures For Teacher Evaluation Are Clearly Understood
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
24. Evaluation Documents Are Aligned With State And District Expectations For
Practice
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
25. Evaluators Are Sufficiently Trained To Accurately Evaluate Teaching
Performance
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
26. Evaluators Spend Sufficient Time To Accurately Evaluate Teaching
Performance
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
27. Sufficient Resources Are Spent On Evaluation
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
28. The Evaluation Process Affects Teacher Practice
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
29. Professional Development Suggestions Are Provided During The Evaluation
Process
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
VII. Questions Concerning Teacher Performance.
This Final Set of Questions Involves Teacher Experience, Student Input,
and Student Achievement.
30. The Evaluation Process Should Differ For Beginning And Experienced
Teachers
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
31. The Evaluation Process Differs For Beginning And Experienced Teachers
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
32. Student Input Should Be Used In Evaluating Teachers
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
33. Student Input Is Used In Evaluating Teachers
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
34. Student Achievement On Standardized Tests Should Be Used In Evaluations
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
35. Student Achievement On Standardized Tests Is Used In Evaluations
• Strongly Disagree
• Disagree
• Agree
• Strongly Agree
VIII. Respondent’s Comment(s)
This item is optional.
36. Final Comments:
IX. Untitled Page
Thank you for participating in this study. Your participation will make
a difference!
Done
APPENDIX B
TEACHER EVALUATION PERCEPTIONS
Survey Question #36 Respondents’ Comments
I truly feel that the evaluation process should be a powerful tool to shape
the experience of new and veteran teachers. Too often, the summative
process becomes a way of evaluating teachers. Reason? Could be time
and tradition. The evaluation process must be ongoing, with more
formative evaluation time spent with the teachers. This should be built
into the way administrators interact with their staff. Teachers want to
become better teachers, and this can only be done if they have ongoing
feedback about their direction that is candid and instructional. If we as
administrators try to fit that goal into a summative evaluation, we risk the
chance of sounding like critics instead of instructional leaders.
We have an alternative assessment, which allows teachers to make a goal
related to specific interests in their teaching field. I like this option as it
allows me to explore new frontiers and it counts for something.
I feel teachers with 7+ years who have been evaluated 4 times with
satisfactory or above evals should not be evaluated every 2 years, every 5
should be sufficient.
Interns and probationary teachers are evaluated 2x per year; tenured every
other year.
Evaluations, in particular goals and objectives, are virtually useless. Not
to appear totally negative, but they really have not changed my teaching
practices, nor those of any teachers I work with, to my knowledge. They're just
more hoops for teachers to jump through. The money and effort spent on
goals and objectives could be far better spent reducing class size, buying
supplies, and in general making teaching easier and more enjoyable.
As a BTSA support provider, veteran of 35 years in the classroom, and
pending National Board candidate, I am very interested in the results of
your research. In general, I feel my district is reasonably competent and
fair, but administrators are often struggling to meet their deadlines and
observations and evaluations become very superficial. If possible, I
would like to hear your results when published. I can be reached at
(ffiv.net or at @r.kl2.ca.us. Good luck with your work.
this is a new process as of this year. Tenured teachers are evaluated with
the observation/data method. Our district is easing into the new process.
Tenured teachers with over 10 years experience were offered the option
this year.
I don't think student achievement on standardized assessments should be
used in teacher evaluations until the state develops a system that
encourages student participation and accountability to do their best on the
tests. I would love for the state to imbed the CAHSEE in the Content
Standards/CAT6 tests. Students try harder on the CAHSEE because there
is something at stake for them, unlike the CST/CAT6 tests.
Probationary teachers are evaluated three times per year for two years.
Permanent teachers are evaluated every other year with an informal and a
formal observation. In looking at student achievement with regard to
teacher evaluation, I look at trends rather than whether or not a specific
class did well on a given assessment. If the teacher's classes consistently
do poorly, then I am inclined to believe it is the teaching rather than the
students.
The evaluation process at my site is inconsistent. The administrators put
one thing in writing and do not follow through with it. However, they
expect the teacher to maintain all records, which they then review at the
end of the year, making their evaluation from those materials. Often they
observe the classroom once and base an entire year of teaching on that
ONE observation.
Unions are battling items such as tying evaluations to student
performance. Teachers should be held accountable for the progress of
students, not necessarily whether they are proficient. Evaluations should
There is a difference between how we evaluate probationary and tenured
teachers. Policies for evaluation of tenured and new teachers would make
a difference in some of the rest.
I am at a high school that just opened on August 31, 2004, so we do not
have an API or an AYP score.
Most evaluations feel and appear to be mere formalities. The goal is to be
in compliance and nothing more. I feel as if the performance objectives
that I write at the beginning of the year go into a black hole never to be
seen again. Therefore, about ten years ago, I started using the same
objectives every year. No one has noticed yet.
The process is hindered by a lack of administrative time. The principal/vp
etc often make appointments to observe and then, have to cancel due to
emergencies, etc. This makes it unwieldy.
R. USD is just beginning to use the CSTP format. Before that, our
process was used for punitive purposes if at all. The feeling was that
teachers were being crushed through the evaluations to show why
students were not learning. Hopefully, the new format indicates a new
philosophy behind the evaluation process.
STANDARDIZED TESTS SHOULD NOT BE USED TO BASE
TEACHER, ADMINISTRATIVE OR SCHOOL PERFORMANCE.
THE VARIABLES BETWEEN SCHOOLS, COMMUNITIES, AND
SOCIOECONOMIC STATUS ARE FAR TOO GREAT TO BE
INCLUDED IN EVALUATIONS.
It is sad that administrators do not get a good cross view of the teachers’
abilities in the classroom. An entire year is based on 2 or 3 twenty-minute
observations; including any documentation that may be available to
substantiate different practices or projects being performed in the
classroom.
Evaluations mean nothing if there are no consequences for not achieving
goals or meeting standards. Too many teachers, old and new, never make
the changes needed to keep up with student needs. This implies a need
for stronger leadership, i.e., better trained administrators.
I think that student scores should be a part of the evaluation as a reflection
of the teaching process, and connect it to teacher growth. The teacher
should not be penalized unless recommendations have not been followed.
Evaluations should be based on areas of expertise. RSP/SDC should not
be evaluated on the same criteria as regular education teachers. Special
Education students should not be held to the same standards as the regular
ed students because by definition they are at least two years behind the
others, and have a disability. All teachers and all students should not be
treated the same!
Student pre-test & posttest can be used to evaluate effectiveness of
teaching. Overall achievement however, is not a true picture of
effectiveness due to the variety of students we serve in one class (ELL,
RSP, low-level GATE, etc.). One test in isolation does not show where you
need to improve as a teacher.
I feel strongly about NOT using students and student achievement data in
teacher evaluations due to: 1. inappropriate age/immaturity of students to
evaluate teacher performance without bias 2. too many factors affect
student achievement
Sorry I couldn't answer them all. I am a 1st year and I don't know all that
info yet. Some of the data has been told to me but I don't remember it off
Evaluations are often subjective at my site. Many times, it is also
punitive.
None of the teachers at my school has received an evaluation that merits
evaluation every 5 years. Our students are reading at grades 5-7. How
can you expect me to bring them to grade level during the 2004-2005
school years? We are working towards improving our student’s level of
reading through Renaissance Reading, but the bottom line is the desire
has to come from the student.
Not very well done.
Never should standardized testing be used in a teacher's
evaluation!!!!!!!!!!!!!!!!!!
Probationary teachers are observed formally multiple times each year
whereas tenured teachers are observed once yearly.
As long as administrators are sufficiently trained in the evaluative
process, and student performance (a notoriously poor indicator) is not
taken into account, I feel comfortable being observed and evaluated. I
appreciate the input. There's always room for improvement, and
accountability should occur at every level, even for veteran teachers.
Sometimes (often) the person evaluating performance is so far from the
classroom experience, or from the level of instruction being offered that
the evaluation loses some of its impact. It would be beneficial for the
evaluator to spend time periodically in the grade levels and classrooms
being evaluated as an educator. Only so much can be understood from
reading and training.
I evaluate teachers every other year or three times per year depending on
their status. We use the teacher standards for the evaluation summative.
It is very powerful. Standards have increased our ability to expect
positive outcomes. I have teachers who are on PAR peer assistance and
are not doing their job well.
Evaluations at our school are a farce! Recent administrators have used
them as rewards for those they like and punishments for those they want
to torment. They miss the whole point of the process- to monitor and
improve teaching at their campus.
Summative Evaluation should deal with continued employability. The
effective support based evaluation process has to be formative and
relationship based. IHO
There is not enough time to do this job properly. New teachers in
particular need a great deal of support, while tenured teachers, for the
most part, do not need to have the same process.
It is difficult at times to provide the time it would take to do observations
and conferences that would make the eval process highly effective. Even
in a small school like mine, I am pulled in lots of directions at once. It
helps when a school has VP's or other assistants to free the principal to do
high quality observations and conferences.
For formal evaluation to be effective, the site administrators need to be
adequately trained. Administrators should be instructional leaders well
versed in effective instructional practices.
Good luck assembling your data.
I believe that the evaluation process is a joke. I am sorry for my view, but
it is how I truly feel. At my school, there are five or six evaluators. They
all have different ideas on how things should be done. If you are friends
with the evaluator, you may only see them once or twice in a year for 5-10
minutes at a time. I also feel that after a teacher has been teaching for 10-
12 years they should not be evaluated. They have already established an
effective teaching style or they wouldn't have made it through 10 years of
teaching.
Question #28 asked if the evaluation process affects teacher practice. I
don't see how teacher practice is affected when little, if anything is done
with the evaluation. It is strictly a formality for tenured teachers.
Getting test and student input will not work with Kindergarten teachers.
Unless all teachers are fairly evaluated, it is a slippery slope to do so with
some, and not all.
While I am not afraid of constructive input from my students, I don't
know how you could screen for immature and biased statements.
Teacher evaluations do not have much impact on tenured teachers for
obvious reasons.
When my AP observes me, I have my RSP students doing routine work.
She wouldn't know how to recognize standards-based instruction. She
was assigned to our school after flaming out at the high school. I am self-
motivated to learn and apply new strategies with my students. My AP is
not qualified to effectively evaluate and mentor me. I am not alone in
these feelings.
My special education/resource students couldn't provide clear input for
my evaluation. Also, my principal never comes in my room except to get
my signature on my evaluation!
Some of the questions were difficult for me to answer, especially API,
Title I, and the percent of ELL. The questions about the impact of teacher
evaluations were right on target.
How can you tie in student achievement scores to teacher evaluations
when you teach content areas that are not assessed? The idea is a good
one, but it's too difficult to carry out.
Our evaluation process is driven by our negotiated contract that doesn't
allow administrators to utilize effectively the California Standards for
effective teaching.
Teachers are evaluated throughout the year; sometimes informally,
sometimes formally; sometimes by administrators, sometimes by students,
sometimes by parents, sometimes by board members. The formal
evaluation at the end of the year is contrived, the result of collective
bargaining. It is but a slice of the entire year. Evaluation should be
ongoing, aimed at improving the teacher in a variety of unique areas as
well as rewarding the teacher for a job well done.
I selected disagree when asked if student achievement should be used to
evaluate a teacher's performance. I don't feel that we should use a single
score, but I do feel that an examination of a matched cohort grouping
score comparison over time would be beneficial in evaluating a
teacher's performance. (Example: A 3rd grade teacher finding the mean
of the CAT6 scores of the students entering his/her class from their 2nd
grade test and comparing it to the mean score of their 3rd grade test. In
essence, what they walked in with to what they left with.)
I feel that the administration should constructively evaluate experienced
teachers. We appreciate the input of what we are strong in and areas that
we can improve. Teaching is an ongoing learning experience for the
teacher also.
We have 45 classroom teachers and only one administrator. There is no
way our administrator can be in our rooms often enough to really know
what's going on and to make any sort of holistic evaluation of our
teaching and still do the other required parts of the principal’s job. I wish
our principal really knew what was going on in my room. I'm proud of
what I do and wish those in charge knew the ins and outs of what goes on
in the classrooms a lot more than they do. I don't want to be watched like
a hawk - but I would like to be supported from a place of true experience
observing me in action.
I use the clinical psychology method of supervision advanced by
Sergiovanni. It is a process that I am told by teachers is extremely
meaningful. Unfortunately, it seems like each administrator differs in
his/her approach to the process. This extreme variation could be an
interesting subject of a study. By the way, nice survey. Good luck with
your dissertation.
I believe that teachers need to implement a "regular" lesson on evaluation
day - not a dog and pony show because they are being observed (when the
lesson is atypical of the day-to-day events)
We are teachers because we want to teach. We are so needlessly bogged
down by paperwork, including these ridiculous evaluations of ourselves
that our time to plan and prepare is highly limited...After all we are so
tired at the end of our days, and many days have meetings and classes we
choose to take on our own time as well. Someone needs to realize that we
often wake up in the middle of the night wondering how to make things
better. We would do this even without evaluations. Evaluations are such
a waste of our time.
Although the CSTPs and various models for teacher evaluation based on
them are a good idea, and Peer Assistance and Review have been written
’too contracts, -I'cuctuzl practice does rxA come a; all near to tr.c promise.
It's hard to agree with standardized tests or even the STAR when teachers
are not responsible for all of the growth (or lack of) of a student.
Teaching High School PE made me answer questions differently than my
wife who teaches English. I like the idea of student comments. Most of
them are realistic and helpful. Standardized test scores would be of little
value for my department and me unless they were fitness type
measurements.
62. After 32 years of teaching French and Spanish, I motivate myself to
improve as a teacher. My evaluations every two years do nothing more
than waste paper. Sure, it's nice to hear that I am doing a good job, but
the evaluation doesn't really get me to change the way I teach - my
students do.
I am a high school ceramics/art teacher. The evaluation process in my
district has little, if any, impact on my teaching. Most administrators at
my school rely upon input from my department chair or students'
comments to evaluate me. Each May, my AP comes in with a piece of
paper and says, "Sign here." I welcome student input, but can see where
student achievement on standardized tests may not work. My wife who
teaches third grade would not find student input to be helpful or realistic
in judging her teaching. Good survey questions.
I have taught for eight years after making a career change and stopping to
raise kids. Evaluations have provided me little substance, without any
follow-through or support for professional development. I see the time
we spend as wasted. My growth experiences come as a result of my own
commitment to make a bigger difference in the lives of my second
graders.
I am a veteran teacher. I must say that tenure protects weak teachers.
Fear of conflict, retribution, and even lawsuits prevents qualified
administrators from making observations and evaluations worth the time
and energy. Bad teachers get a free ride, while skilled teachers have to
carry more than their share of responsibility to compensate for negligent
peers. My union has considerable value, but tenure the way it presently
works hurts us all, including students much more than it protects.
Our principal is too busy to evaluate more than once a year for 30
minutes. That quick snapshot of performance isn't enough. Students
aren't asked their opinions. STAR results aren't looked at either. Need
more time evaluating new teachers and assisting with ideas and
recommendations. Just starting BTSA at our school, hope it helps...
Would like portfolio too.
Some of your questions do not allow for a "true" picture - I think student
"growth" should be used for evaluative purposes - but not just simply
standardized test scores - multiple measures need to be considered.
Although student achievement data is used on teacher evaluations, it is
only used if the teacher has used it in setting up their goals for the year.
We always use a measurable goal, but it doesn't have to be from a
standardized test. A teacher would never be evaluated only as a result of
the achievement of that goal or not. I would ask them to analyze the
results. In my opinion this analysis of instruction would tell me as much
about the teacher as the results of the goal.
Regarding use of CST data - with caution; it should be looked at globally as
only part of the process.
Teachers need to be held accountable for student achievement. Tenure
needs to go away! Expectations need to rise for all educators, just as they
are for students.
72. Our evaluation document has satisfactory as the highest level one can
receive. Our document is based on the CSTP. With almost all of our
teachers being satisfactory or above, the document is of little value. If our
document had a fourth level (exceeds standard) the document would be a
more valuable tool. It is very difficult to find teachers in our district that
are not satisfactory. Just one more task to complete before the end of the
year. Section 6, Developing as a Prof. Educator, is probably the most
valuable of the standards when it comes to improving instruction.
Some of the questions could have been answered in a couple of different
ways. First and second year teachers, for instance, are observed and
evaluated differently than tenured teachers. Most first and second year
teachers are also involved in BTSA programs.
Some of the questions do not allow refinements. For example, the same
evaluation form and observation forms (formal/informal) are used for
beginning and tenured teachers. As a site administrator, you don't have
the same expectations for a teacher in the first or second year of teaching
that you have of experienced teachers. The focus is always on student
achievement, what's good for kids, continual improvement, and effective
I believe in the goal of evaluation alignment to the CSTP. What I have
experienced are administrators without the experience/expertise to utilize
correctly an objective, supportive, and growth-filled evaluation tool. I
would like to see administrators trained adequately in this (and other)
roles. I was a private school teacher for many years. I felt a great deal of
professionalism and support and success from all those I encountered in
private education. It has only been recently that I have entered the public
school arena. I have been so very disappointed in the lack of skill and
professional practice of the administration I have encountered. Help in
the field of public education is sorely needed.
I am continually frustrated by the fact that, as an administrator, teachers
have difficulty thinking of me as a coach who can affect instruction for
the better. I would prefer a process that would include non-evaluative
coaching for veteran teachers around goals the teachers set. The other
frustration is surrounding time - how little I have, and how much this
process demands. It's a constant struggle to "escape" to do observing.
Until tenure is reworked and aligned more closely to competency and
effectiveness, there will always be lots of bad teachers in the public
school system...
I am a superintendent; some of the questions don't apply.
For the most part, my evaluation experiences have been similar to doing
jury duty. Once in a while, an alternative evaluation is nice for a change.
Now and then, evaluators give people a bad time. Overall, it does not
affect my teaching.
In theory, a principal/instructional leader should be in the classroom
weekly for informal observations. In reality, they have limited time.
Should informal observations be used for evaluations? I would think so,
but others say only formal, planned-ahead observations should be.
When teachers are tenured at our school district, they are evaluated every
other year.
83. If evaluations were performed by competent administrators or teachers
then the feedback could be invaluable. But, sadly, most evaluations are
done by the site administrator, whether or not they have experience in an
academic classroom. All they hold is their administrative credential.
How can we feel like they have some real constructive suggestions to help?
Some of the questions were hard to respond to because they didn't exactly
fit a multi-choice selection, i.e. standardized testing. I would have rather
had to comment on these sections vs. marking a multiple-choice answer.
Standardized testing by itself is not sufficient to evaluate a teacher, but all
assessment of student progress might be, and how that teacher uses that
assessment to measure student achievement towards standards. There
were several other items I struggled with because m-c answers were not
fitting.
The question about evaluations being done for the welfare of the teacher
is unclear in its meaning. The question about how often teachers are
observed does not really cover the differences between tenured and not
tenured staff, which is distinctly different in our district. Also, the new
law allowing for extensions beyond the every other year for veteran
teachers has just been introduced to the mix, which makes the answer I
gave of "every other year" just applicable for some teachers and not
others. Lastly, I'd like to say that although I do not feel our evaluation
forms improve instruction, the conversations held during the process do
improve classroom instruction. I would like to see an updating of my
district's actual system. Thank you.
I teach Alternative Education. Student test scores are important, but
sometimes outside of the control of our staff. We have students who are
sometimes too far behind. We are then held responsible for past failures
from other learning inadequacies (student or teachers). Some teachers
have fallen into ruts, and are no longer effective. There needs to be a
better, more effective process to force these teachers to improve or leave
the field.
For the most part, evaluations are a tool to get rid of teachers who have
made it into the system and shouldn't have, at one extreme, and at the other
to assist teachers to become better at their job. Evaluations more often
end up being used as a power tool. The State's standards are nonsense,
but I suppose are driven by the NCLB. Tangible results of student
achievement should be the evaluation tool and forget all the other waste of
everyone's time.
Our district used to evaluate teachers every other year. Last year we
bargained to reduce this process to every 5 years for permanent teachers
as long as both parties (principal and teacher) agree.
89. Formal evaluations may take place every ... years but only with teachers
with 10 years experience and yearly agreement between evaluator and
evaluatee. New in contract this year. Difficult to indicate on the
multiple-choice questions.
I am a first year Dean of Students in a middle school. I found that some
of the questions were difficult to answer as a dean. I have received little,
if any, formal clinical training for teacher observation, supervision, and
evaluation outside of my Tier I program. I relied more upon my
experiences as a teacher to answer the opinion questions. Good luck.
I do not have the API score here at home but our school has done well and
raised our score every year, so that is why it hasn't become a "direct"
issue in regard to evaluation. I think the process is something that works
for our school, a small (145 students) K-8 rural school. These types of
surveys do not often reflect our situation.
Though some of the questions are simple in statement, the implications
and ramifications are great.
Standardized test results should not be used in the evaluation process, but
Content Standard achievement should be!
In our district, we are still contractually bound to the limitations of the
STULL Bill. CSTP are available as an option but teachers rarely choose
them; the union blocks any change to these meaningless evaluation
procedures for fear of being challenged to change practice, and because
they don't understand the need for veteran teacher growth and change.
Many teachers believe they should not be evaluated by any administrator,
but they would never allow a peer system either. The current system is a
dead end leading to no help for improving instruction for students. As a
school administrator in California, I think the contractual controls on
teacher evaluation practices are one of the ugliest secrets we keep from
the public. Job protection, which is linked to these evaluations, is
unknown to the average person. NO ONE has these kinds of job
protections and insulations from meaningful performance feedback; why
would they? They lead to what we have created in California: a license to
coast.
Annually, parents are asked for their input on the school year. They can
comment on teachers, administration, and policies. Feedback is usually
positive, but we have received input on things we need to work on as
well. It's been helpful for all staff members.
None
97. We have switched over to a new streamlined model for teacher
evaluations using the State Standards. It is much easier doing
conferences with teachers.
None
Our job as administrators is to support our teachers and offer ideas and
suggestions that enhance learning. We have a vested interest and
obligation to our students to have the best teachers in their classrooms.
Our district is currently working on aligning our forms and procedures. I
feel there is a strong need for a rubric and that teachers should be
evaluated on all areas of the teaching standards.
All too often, I have found evaluation to be a tool to "keep teachers in
line" rather than its original purpose. It’s unfortunate. During a recent
labor dispute, for instance, my administrator required 3 full hours of
observation to alleviate her "concerns" about my instruction. Though I
had been a mentor teacher for many years, have delivered district staff
development on instructional content and delivery, and received positive
evaluations every year, including the ones she wrote, she kept finding
another reason to return to evaluate. The staff considered it a joke and the
credibility for the process was lost. Did I mention that I was the CTA
Chapter president?!?
I feel we have a very effective evaluation process. I do however contend
that observing a teacher for 45-55 minutes twice a year doesn't tell the
whole story as to whether he/she is an effective teacher. We need to take
in consideration where they started and how he/she consistently
progressed. The evaluation tool needs to be examined to make sure we
are being fair to our teachers. Teachers are under more pressure these
days then ever before because of The No Child Left Behind (NCLB)
requirements. The days of taking student's homework home and isolating
themselves so that they could effectively evaluate their students
performances has become an endangered species. They instead are
engaged with the internet taking online courses, are on a college campus
sometimes four (4) nights a week in an effort to fulfill their "Highly
Qualified" requirements. The days of teaching for enjoyment, mentoring,
thought sharing, and all the important things that are needed in our efforts
to address student needs are quickly becoming a thing of the past. So
until these concerns are met, the evaluation of classroom instruction will
remain just a routine procedure that site administrators are required to do.
What is measured is what is done in the classrooms. Clear understandings
between the evaluator and evaluatee are critical for success as well as
clear district wide standards for process and content of evaluations.
104. Good luck with this survey.
I had some confusion with this survey because we would read a statement
and mark an opinion. In many cases, one can speak to the statement
because it is district policy and I follow the rules. The problem with
strongly disagree, etc., is that I may disagree with the policy I must
follow.
107. There needs to be an additional response of "it depends" regarding some of
the above questions. Teacher practice, I believe, can be influenced through
the evaluation process, but for some it is not. I also do not believe that
principals of Title I schools have the administrative support to be in the
classrooms as much as is needed to sufficiently support the evaluation
process. Inequities in workload among principals may need to be
looked at in terms of this study to see if there is a difference. I believe
while Title I schools may have additional funding and more challenges,
that these principals need to be even more diligent in getting into the
classroom. However, as a Title I principal, I feel there are too many
"emergencies" and needs to be as diligent as we should be while also
doing the managerial responsibilities of running a school.
We are redesigning our process to include peer evaluation when
acceptable by administrator, and we are also going to tie the process to
state standards for teachers.
I really feel that the new process of evaluation with the state standards is
highly effective. I feel that because of this, tenured teachers do not need
to be evaluated every other year. Once every five would be sufficient
along with the three times a year meetings on progress that our district
does. Evaluations could be done more often if there were problems with a
teacher.
I believe the evaluation process is too punitive and too regulated by the
unions to be effective. Litigation scares the administration and little
changes because of it. I don't think how students score on standardized
tests is a fair evaluation of what happens in the classroom. I teach 12th
graders so we are not evaluated by test scores.
The intent of evaluations is good, but everyone is too
overworked (ADMINISTRATORS AND TEACHERS) to make it a
process that really enhances one's teaching ability. It appears to be
another "paper item" we need to complete. I think it is often only used to
fire or not rehire teachers.
The section regarding frequency of evaluation is not applicable to our
district policy. We are evaluated every three years.
114. There were a couple of questions that could have had more than one
interpretation.
Some of the questions require additional explanation. Not all teachers
understand the different elements of teacher evaluation and the union
contract. I have served on the negotiating team in our district for many
years. Evaluations have been a priority, but other critical issues seem to
capture our attention, namely money and benefits. With the question
regarding the use of student input, it's okay at the high school level. I
don't see support at the elementary and middle school levels. Teacher
evaluations could really be changed to better support the needs of tenured
teachers. Administrators need to be better trained, as well.
Interesting questions. Good luck!
Evaluations and the process can be improved, even with CFASST and
BTSA. Tenured teachers are a slam-dunk. I don't have enough time to
provide much depth to the evaluations. My schedule is already so tight.
I have had four principals in nine years. Their experiences vary
considerably, especially when it comes to skill, interest, and emphasis
on evaluations. The district does not appear to have any sense of training
for them. I am a tenured teacher; the evaluations seem like pretty much a
waste of time. Instead, we should spend money and time towards
ongoing, meaningful, professional development. We never go back and
revisit the end of the year evaluations and the discussions of
professional development in the final conference. In that sense,
evaluations for tenured teachers are a joke. I want to grow as a teacher.
Evaluations are not the means for me to do so, even with a
knowledgeable, sincere principal. Their hands are tied by the policies and
process. Good luck!
Our school's API score was 912 last year. We are being pushed so hard to
get our scores higher that I feel crazy about it. Our district is in a wealthy,
educated community and the parents want both 999 API scores and lots of
enrichment. When will someone say to parents, look we have a score of
912, why isn't that good enough?!!!! I can't give your child all the
enrichment in things like art history, poetry, etc... if I need to have a
score of 999 on "The Test". Either tell me to teach to the test and forget
all the rest or get off my back! Thank you for letting me vent. :)
I think that tenure is a necessary component of teacher evaluations and is
an antiquated concept that can potentially breed complacency.
Thank you for providing time for my input.
The most intensive observation and evaluation might be in the first two
years of teaching if the evaluation process is to have meaning. Veteran
teachers may need intensive, meaningful professional development. A
more cooperative approach to evaluation might be productive. A deletion
of the "top down" idea might be productive. Peer evaluation is one
approach but it leaves out the hiring process. If teachers hire teachers
then they may adequately evaluate teachers but to have administrators hire
and teachers evaluate with admin cooperation doesn't work. Someone
always has an agenda in this scenario. One might run a school like a
business. Use the Japanese model of corporation shareholders.
Administrators rise from the ranks by teacher consent based on
productivity. A different job with same pay. If one doesn't produce then
one isn't paid. Thanks
My school administrator has used his observations as a tool for coercion
and force/threat. He has told me that he personally dislikes my
homework, classwork, and testing practices and uses derogatory
comments on my review to keep me on an annual review schedule. As a
chemistry teacher, I have a difficult subject to teach and deal with parent
complaints. I feel that his importance is to satiate these parents
I see tenured teachers who pass their evaluations and have no business
being in the teaching profession. They are lazy and do practically nothing
all year and the teachers receiving their students the following year have
to work harder at getting these kids up to where they should be. I don't
believe teachers should be automatically tenured. (I have been a teacher
for 32 years.)
Thank you for the opportunity for comments. I have been in this district
one year, and revamping the whole evaluation process is one of our top
negotiations goals for this year, in line with continuous improvement
strategies.
and not to uphold the teaching of the state's standards and expectations.
He has told me that 1 hour of homework 3-4 days per week is
unacceptable and that he can achieve greater results with his students with
only 20 minutes of homework per night. I find his lack of science
background and respect to be disheartening. As a college preparatory
class, chemistry should be taught with high regard to quality and rigor. I
expect my grade to represent something. I expect colleges to respect my
grades as well. This will not be done by lowering or weakening my
expectations or goals. I say this because the review process we participate
in ranks our areas of performance on a 1-3 scale. I have received 1's in
the homework category and these scores can lead to discipline if the
subsequent reviews are so noted. Our school is heavily involved in sports
and athletics; because of this I am constantly competing for time. The
parents of the athletes are very vocal and the result is the lowering of
expectations for homework. I have been told by other teachers that this is
becoming more common. It is not just a condition of this school. You
ask about the adherence to state standards, but I would also like you to
consider the adherence to expectations for college preparatory courses.
What is done to be sure that a student’s grade is an honest assessment that
can be trusted by a college entrance counselor? My comment here is
geared towards academic honesty. The review process that I complain
about is being used to coerce me to be less challenging. My administrator
has told me that he has a prejudice towards my methods (Socratic) and I
feel that he now uses the review process to harass me. He has told me
that he can "make an AP student in Spanish 3 with only 20 minutes of
homework per night". I only wish that I could take three years to teach
chemistry. His 20 min. would be plausible. As to your question about the
use of state standardized testing to help in the review of a teacher, I feel
that my administrator does not use them. My scores on state standardized
tests have been very good. In 2002-3 only 3% of my juniors scored in the
bottom 2 categories. Yet he chooses to review me in this environment.
By way of comparison, in my four 9th grade Earth Science classes I have
never had good test results. Twenty-five percent of my students in this
class score in the bottom 2 categories. My administrator has never
commented on these classes nor observed them. His attention is given to
the classroom with the college bound student, and his comments have
been that it is too difficult. He plays favorites and uses the review process
to achieve his means, not to build a more effective and successful
education environment. One of the things that has energized me about
this subject is the following. Two years ago three of our students were
admitted to CSU Chico. These three students were placed in a remedial
math class after taking placement exams at the college. Each of these
students had passed our Algebra II / Trigonometry class. Each of these
students was not allowed to re-enroll for the following semester before
passing their remedial math class. I was told that one of them did not
understand negative numbers. This is extremely disturbing. Classes
should be assessed by using scores from state standardized tests, and
courses should be taught in a manner that fulfills the expectations of the
area of study. Having teachers lower their standards, or giving grades to
students that are not earned, is a severe problem which lowers the respect
earned and expected by teaching professionals. This topic is a heated one
for me, and I can go on. But class begins. I hope that I didn't ramble too
much. Please contact me if you have a question, comment, or other. **
At this moment, before I write a final comment, I would have liked to
have been able to view my questionnaire in its entirety and not by pushing
the "back" button several times. Some of the questions didn't seem to be
specific enough for our district since we use BTSA's CFASST process as
the 2-year new teacher induction program.
Student comments could somewhat be taken into account at the secondary
level if students would be truly constructive in their comments. Teachers
who have taught over 10 years on a constant basis should be observed on
a different schedule than beginning teachers. Every other year, only once
a year, seems more reasonable and practical.
There were a few questions I would have liked to have qualified, but the
survey was well done. I believe as long as teachers always strive to get
better at their craft, we have few problems with administration evaluations
at our school. Thanks. Good luck on your research.
While I think that student comments and testing results should be used in
teacher evaluations, I believe they should be used to validate the
administrator's observation, not be the source of the evaluation.
No matter how wonderful a teacher is, some students cannot perform
above a certain level and other students will perform extremely well. If
one assessed help at home, knowledge of English, IQ of students, required
after-school jobs to support family, and about 100 other criteria that
affect students' performance, then it might be wise to base a teacher's
abilities on test scores. I think it would be wiser to take schools of
similar demographics and have the more successful schools (test results
wise) give suggestions to other schools in that group.
" ‘ ~ ~ — ' . ■ ' . .
The subjective factor is huge. I'm on my third principal at this school and
(while I've done well with each) their evaluations are all very different.
Also, in 1st grade it's very hard to have the students evaluate a teacher.
And finally, in a Title One/RSP/Mainstreaming/etc. classroom, evaluation
by testing is not effective.
Teacher evaluations are just another hoop to jump through. They have
little relevance to actually improving education. Peer observations and
peer coaching could be valuable tools in improving education if they
could be used in a non-threatening (to the teacher) manner.
Evaluations are a waste of time. There is no way an administrator who
has been out of the classroom for years can come in for 20 min. and
accurately judge a teacher's performance. Student comments or test
scores should never be used to judge teachers. Students are the product of
their environment, most of which the teacher has little influence over.
Tests are snapshots of a student's performance and do not take into
account family or social concerns the student might be more worried
about at the time of the test.
Although using student input, progress on standards and standardized
tests could and maybe should be used, it must be used judiciously. It is
often the case that a teacher has a class that is stacked with students that
are challenged to make the kind of progress that other students can make.
Other teachers may have classes that predominantly have a high percentage
of students for whom reaching these standards is easy. It can't be the only
measurement of the teacher's worth as a teacher or his/her effort to do the
job.
As a first year principal it is difficult to answer whether I think the
evaluation process is efficient. I think if a teacher has an obvious need, it
is something that is reflected upon during observations and evaluations.
But teachers that are perceived as capable seem to get little or no benefit
from the evaluation process; it is more of a rubber stamp procedure. It is
a fine line where teachers take suggestions as helpful or as criticism and
an administrator has to be very careful in how he/she presents even the
most minor of suggestions. If teachers were more open to wanting to
improve, the process might be more worthwhile. In addition, districts
should properly train administrators to get the most out of the process;
what we learn in our Masters courses does not nearly help us to help
teachers be more successful.
I believe that teacher evaluations are a useful tool. The evaluation process
at my school is extensive and requires a lot of paperwork. It is a good
process but takes a lot of time. It would be easier, as teachers become
more experienced, if the process was not as extensive.
We are currently working on our teacher evaluation packet. A good
evaluation always takes time. The load is a factor that makes us less
I think that the evaluation process is necessary and useful, but I think its
overall effectiveness is very limited due to the VERY limited resources
and time available to teachers and administrators. The current evaluation
process seems only to be useful to administrators reviewing highly
unqualified/inept teachers. Otherwise, for most teachers, it's a "hoop
jumping" exercise. It is valuable to look at evaluations and the teacher
evaluation process; however, teacher support, training, and retention
should be more highly prioritized.
OUR STULL EVALUATIONS ARE NOTHING MORE THAN A
HOOP THAT NEEDS TO BE JUMPED THROUGH EVERY THREE
YEARS. THERE IS NO FOLLOW THROUGH, AND THE
TEACHERS THAT NEED TO BE MONITORED MORE
FREQUENTLY ARE SET ADRIFT, WHILE THOSE WHO COULD
SURVIVE WITH FEWER VISITS ARE TREATED THE SAME. I
DON'T KNOW THE ANSWER, BUT THE WAY WE ARE DOING IT
IN OUR DISTRICT IS NOT VERY PRODUCTIVE.
Teacher evaluation is not really a process to evaluate a teacher's ability to
teach, but rather an indicator of whether the teacher will be retained for
employment purposes. I feel that a teacher's experience and routine can
prohibit change within their respective classes. They have a system that
works for them and their class. For example, a 20-year math teacher may
be resistant to using PowerPoint even though it may offer a visual that
could assist some students with learning.
This evaluation study does not allow for N/A responses. I disagree or
agree does not show if any question does not apply for this district. Also,
I have been evaluated several times throughout my teaching career and I
feel that the evaluation process does not accurately reflect my teacher
performance or enhance my teaching. I believe instead of observations by
an administrator once or three times every other year, principals and
veteran teachers should evaluate classroom practice on a monthly basis.
There should also be student and parent input. Then I think that teachers
who need help should be paired with a veteran teacher to guide and
instruct on the best practices used in the classroom. Staff Development
should also be available on a monthly basis for those teachers who wish to
attend.
Evaluations that include suggestions for change with some follow-through
are effective. For example, in-service time, grade level collaboration and
conferences with extra principal input. Evaluations can reinforce
effective practices and boost teacher confidence and morale. I would like
to see a distinction between new and veteran teachers with different
protocols used for failing teachers. More emphasis should be spent on the
new and failing teachers. In a utopian world, you could really dismiss
ineffective teachers quickly regardless of tenure.
I want to add a caveat to my answer that student opinion should be used in
evaluation. My belief that student opinions should be used is contingent
upon the WAY in which they are used. If a student thinks a teacher's
class is fun, interesting, or challenging; that may be useful in some
aspects of evaluation. However, a student who has just received a bad
grade on a test or been disciplined should not have their sour grapes
&
Congratulations!
I work at two elementary schools; I will do another survey based on that
school. Kindergarten students are a prime example of why student input is
not used at the elementary schools. It is stretching it to ask 6th graders to
evaluate their teachers.
New teachers are formally observed four times during the year with two
formal evaluations. Permanent teachers are evaluated once with 2 formal
observations. I visit every classroom a minimum of once per week. I also
meet with every teacher three times a year for 45 minutes to set, update,
and evaluate Professional Performance Plans. I take evaluation very
seriously and find it to be a great tool to guide professional growth and
development.
You should consider adding an option of "do not know" to some of the
questions because I truly do not know some of the answers you are
asking.
Evaluations are only as good as the desired outcome of the evaluator.
149. Good luck.
formal evaluative process. No teacher would want to teach at low
performing schools, take the ELD class, etc., and many conflicts would
arise in setting up classes as teachers would want all the high achieving
students.
Several questions don't apply accurately for administrators: #18: they
have no contract that covers admins. And other questions asked about the
size of the school a person completing the survey is assigned to, when some
completing the survey aren't assigned to schools. If only site
administrators should complete the survey, then that should be clarified
and don't ask about their "Collective Bargaining Agreement".
Our district has 4 formal observations for non-tenured teachers every
year, and two formal observations for tenured teachers every other year.
We do informal observations weekly. But, the tenure system makes it
difficult to effect change in recalcitrant teachers and the attorney costs are
prohibitive in trying to release poor teachers. I really believe student
growth and value-added assessment should become part of the evaluation
process and pay should be dependent on results of student growth under a
"value-added" type of system. Right now it is easy to nurture non-tenured
teachers, but the success rate in improving the outcomes for poor teachers
is dismal.
153. Experienced teachers shouldn't have to be evaluated every year.
It puzzles me when I see an administrator who has taught for 4 or 5 years
evaluating a teacher who has taught 15 years. I am not saying they aren't
qualified, but I wonder what the teacher thinks of the evaluator's input.
The teacher evaluation process should be an aid to professional
development, not a disciplinary process. A viable follow-up and access
to educational techniques for improvement should be available to teachers.
The biggest problem for me has always been that there is little
communication beforehand. Moreover, we have a high turnover rate for
admin personnel, so every other year, I feel like another administrator is
going to turn me into their protege.
The age of the student should be considered in your question about
student input in the eval process. Upper grade students may be able to
have valid input. Teachers who consistently have parent and student
complaints about "being mean" need to address that in the eval process,
and it is not considered at this point. Good luck with your project.
**
As a "seasoned" educator I have never found the time taken for the
principal to evaluate or the teacher to fill out appropriate paperwork to be
effective.
Best of Luck!
Rather than focus on what the teacher may not be doing that the
administration/state/god has decreed they should be doing, the evaluation
process should honor the fact that teaching is an art, that we have different
but nevertheless effective teaching styles. The process should focus on
our own goals and aspirations. What do we want to work on this year,
where do we want to be 10 years from now, how will we get there, what
can the school/administration/district/state/god do to help us reach our
own professional goals. No one has ever asked me what I would like to
do next, what is the next professional development that I desire. Such
considerations would make the evaluation process a worthwhile and
interesting exercise in self-reflection for the teacher, and, therefore, worth
doing and taking seriously.
Future surveys should indicate the number of questions before the survey
begins so those completing it can determine how much is left. Some
interesting questions, but there should have been some about formative and
summative conferences, etc., in relation to evaluations - ties to the school's
WASC Action Plan and Single Plan, etc. Often evaluations are just an
isolated, unrelated process that should tie in directly to student
achievement (not just API scores, etc.) but actually instruction - how do
we know students are learning? Thank you! Good luck!
Every teacher is evaluated each year; it is a load on the principals, but I
think it has helped us stay on top of things. We also have the ability to
"freeze" teachers on the salary schedule if they receive a less than
satisfactory evaluation. Very helpful!
Some answers I didn't know, like percentage of ELD or ESL students in
our school and our API index. There wasn't an "I don't know" box so I
just guessed.
A few questions were difficult to answer in your format. For example, I
believe that beginning teachers should have more formal and informal
visitations than experienced teachers, so in this way the evaluation process
should be different. However, the evaluation criteria should be the same
for both groups; therefore the criteria is the same. I feel that standardized
test data should influence a person's evaluation in terms of value added. I
would not look at an overall score but are kids moving ahead or beyond
while in their care. I wish you well in your study.
Although I can see the value of evaluation observations for new teachers
or for those who seem to be having a difficult time with teaching, I think
it is a waste of time for an experienced teacher who has proven his/her
capabilities over the test of time to have to put together a dog and pony
show when that time could be better used in other preparations.
While I am relatively new to teaching, I work hard to learn and grow as a
teacher even without much support from my district and principal. My
principal is so busy, and can't be everywhere. Evaluations seem to be a
waste of time since there is no follow up or even money to help us with
professional development. I can't afford to do it all on my own. As it is, I
tutor just to afford to live somewhat near my school. Tuition at the local
college or rent to offset high gas prices — more education will have to
wait.
169.
In order for evaluations to be more meaningful...a peer chosen by the
teacher should be involved. Many teachers are defensive about
criticism...even if it is constructive. A peer could really help, especially a
beginning teacher.
For the most part, excellent teachers are self-motivated and
conscientious. They will do a great job regardless of the evaluation
process. They are the ones that take the recommendations to heart, and
actually strive to perfect their teaching. However, those teachers who
really need to benefit from the evaluation process don't. It seems that it is
too difficult to correct or dismiss tenured teachers.
Some of my administrators are very qualified to evaluate teachers. One
has a background in PE in private schools. She would not know what
great teaching was if it hit her in the head. She is no more qualified to
observe and evaluate than our custodians.
I didn't know the answers to a lot of the questions.
I am an administrative assistant. I'm not sure if you want my survey
answers in your data or not. I am connected with the process as far as
scheduling and processing written evals and the process is much too
exhaustive for our school that has just one administrator who wears many
different "categorical" hats, Lucidly, we are fortunate that our site
administrator coaches several days per month in the classrooms so lie is
able to view the teachers more informally* which I believe strengthens
staff development i.e. helps staff become better teachers.
This last section might ask respondents to explain or share their most
effective evaluation, evaluation process, and what the attributes of that
evaluation entailed. That does mean that one would need to define or
interpret "effective."
My current district uses a model from 1978 that is cumbersome and
irrelevant to current standards and practice; that is why my answers vary.
I know and understand from my own professional experience what good
evaluations should be; the local union is vastly out-of-touch with the
CSTP and present teacher training programs. I follow the district's
guidelines, but incorporate the CSTP on my own to make the
evaluation/observation meaningful.
176. Some of the questions were not applicable or were too broad. Good luck!
177. Good Luck.
178.
Timelines prevent use of year-end data - collective bargaining contract
requires me to meet with the teacher for final evaluation before year-end
classroom assessments are done, let alone STAR results. The district
instrument for teachers’ goals matches Stull requirements, but our final
evaluation instrument matches the Standards for The Teaching Profession
- it's apples and oranges. As a working principal evaluating 21 teachers
this year, I never have enough time to do as many observations as I really
want to do - I'm hard pressed to just do the minimum required by contract.
For me, it's an opportunity to offer teachers guidance, but I'm not sure
they all see it that way. I would like to be able to work intensively with
them for the first five years of their careers - not put them on the alternate
cycle the first year they get tenure. After that, habits are set and unless
the teacher is unusual, I don't see much ongoing growth.
179. I truly think that evaluation and observation of teachers is one of the
largest components of our job. If we are effective, we are then able truly
to work with teachers on an individual basis to challenge them, as well as
provide resources for them as needed (especially new teachers). Only if
we are consistently observing what is going on in the classroom, can we
then see the need for effective staff development opportunities.
Unfortunately, this is an area that I wish I had more time to do - although
it is a priority for me; I often spend a good portion of my day balancing
my other responsibilities! Thanks for the opportunity! Good luck to all.
180. Through my experience in special education, Principals and Assistant
Principals are rarely equipped to evaluate special education personnel.
Especially the resource specialist program and itinerant services such as
speech and vision.
Teacher evaluations should reflect the teaching standards, curriculum
implementation, and data from standardized tests.
Our process is quite different for probationary compared to permanent
teachers...thus the questionnaire was difficult to answer. Also, for a
non-school-site admin, some of the questions don't fit.
The evaluation process is critical to the welfare of education. More
training and emphasis needs to be implemented in order to assist
administrators in this process.
184. With the concept of tenure always lurking, teacher evaluation has VERY
j LITTLE connection to change in practice.
185. Administrators, as do teachers, have many concerns when student input is
considered as part of teacher evaluations. Following are a few examples
of those concerns: 1. Age appropriate - too young; 2. Disciplinary
problem students - anger; 3. Subject matter - student achievement in
class is poor.
186. New administrators are not thoroughly indoctrinated in the evaluation
process utilized by our district. Permanent certificated staff are only
required to have one formal observation each year and then a summative
evaluation. I don't feel that this gives a true and accurate picture of what
is really occurring in the classroom. Most teachers don't see the process
as a learning tool; it's just something they go through the motions to get
done.
187. Teacher tenure is too protective and allows bad instruction and teachers to
damage children.
The position I am in is Superintendent/Principal. I am supposed to be
evaluated annually by the board and it is a difficult process for them, too.
As a site administrator, I find it difficult to find the time necessary to
evaluate teachers effectively. We use a rubric with the CSTP and it can
be effective if it would drive staff development based on needs and results.
The evaluation system in Calif. is strongly tied to Ed. Code and collective
bargaining agreements. The use of data should be a strong factor, but
current Ed. Code and coll. agreements prohibit it. This needs to be
addressed if the process is going to have validity and be useful with the
teacher and administrator.
I believe most administrators would agree the current evaluation system
lacks a process for truly informing and improving classroom instruction.
I feel the California Standards for Teaching should be the basis of
evaluation and the specifics of teaching methodology and instructional
practices should be addressed in detail including the effect on students
and their performance and research-based prescriptions for improvement.
Unfortunately, I also believe the vast majority of administrators are not
equipped with sufficient knowledge to address these specifics.
Excellent survey.
194. I do not feel that administrators are adequately trained to be good
Cultural influences at a school or in a district are a strong factor in
whether or not evaluations are of value in changing practice in instruction.
It is also complicated by the fact that many administrators are still not
easily able to confront teachers who need improvement, but with strong
PAR and BTSA processes in place, that has begun to change. Until there
is some strong pressure for teacher accountability in the state and federal
systems (currently focused on principals and superintendents primarily,
but may change with the new District PI legislation resulting from the
Williams Lawsuit) the value of evaluation may not be much different than
it has been in the past. I do believe classroom practice is influenced by
the behaviors of principals as they are active in monitoring instruction in
classrooms combined with grade level peer action, planning and
expectations, but I have never believed teachers truly lived in fear of
evaluations if their practice was not viewed as acceptable - and the
evaluation in fact did not put any pressure on them to alter their practices
and behaviors. If the connection is there between fair evaluation that ties
to what has been emphasized and stated as expectations for teacher
behavior, the process will likely become more valuable for the teachers
who were already serious about their work. I still don't know about those
teachers who basically believe that they can do as they wish and not have
to play if they don't want to - and unfortunately there are still many of
those around - albeit in the minority, at least in my district.
instructional observers, coaches, or evaluators. Not enough focus is put
on administrators being in the classroom.
** Blank items were deleted to protect confidentiality