DIGITAL TEACHER EVALUATIONS: PRINCIPAL EXPECTATIONS OF USEFULNESS &
THEIR IMPACT ON TEACHING
by
Jonathan Brandon Schild
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2013
Copyright 2013 Jonathan Brandon Schild
Dedication
I would like to acknowledge and thank my wife, Melissa, who stuck with me through this
process and who provided constant encouragement that helped me accomplish my goals. I
would also like to thank my parents, Ken and Gail, and my sister, Debbie, who offered support
and motivation when I needed it. I would also like to thank my friends who texted or emailed
me during the process. You gave me a good laugh when I needed that mental break from the
stresses of life. All of you played important roles in this three-year pursuit. I am extremely
grateful for all of your encouragement and support.
Acknowledgements
I would like to acknowledge several people who were instrumental in my completion of
this study. First and foremost is my wife, Melissa. Without her emotional support and constant
encouragement, none of this would be possible. Her association with the University helped
provide the financial incentive to pursue this degree. But it was her love for a husband who
sought to better himself intellectually that was the most motivating and encouraging factor in the
completion of this study. We welcomed a new addition to the family in February. The birth of
our first child provided additional incentive to see this through in a timely manner so as to enjoy
and celebrate the expansion of our family.
My family has always instilled in me the desire to seek out knowledge. My father is
probably the smartest person I know, and without his example as a role model to aspire to, I
would not have pursued this degree. My mother has always been the emotional cornerstone of
our family. Her constant support and push to work hard gave me what I needed to see this
through. My sister and I have been close throughout our
childhood and will continue to be close. Our competitive drive may have originated through
athletic competition, but became cooperative over the years as we developed common interests
and began to drive each other towards excellence instead of competing against one another. She
and I desire the best for each other and her support and encouragement through this process was
invaluable. I cherish our relationship. I am excited for the opportunities for both our new
families to develop similar bonds and create supportive environments for our children. We will
make excellent role models for our children, and we will instill in them the same values of
success as our parents did in us.
Finally, I would like to acknowledge my Committee members: Dr. Castruita, Dr. Batsis,
and Dr. Garcia. Dr. Castruita’s motivational words of wisdom kept me on track through this
entire process. I am indebted to him for his persistence and the time he devoted to my frequent questions.
I am grateful for his thorough review of my progress and desire to see me finish on time. Dr.
Batsis has been my mentor for a long time. He has offered me wisdom and guidance throughout
my career in education. I will always seek his advice and listen with open ears. Dr. Garcia also
provided encouragement and levity throughout this process.
Table of Contents
List of Tables
List of Figures
Abstract
Chapter One: Introduction
    Statement of the Problem
    Purpose of the Study
    The Importance of the Study
    Limitations
    Definition of Terms
Chapter Two: Literature Review
    History of Teacher Evaluations
    Teacher Evaluation Framework
    Teacher Evaluation Process and the Role of Technology
    Summary
Chapter Three: Methodology
    Design of the Study
    Sample and Population
    Instrumentation
        Survey Design Considerations
        Interview Protocol Considerations
        Conceptual Framework Considerations
    Data Collection
    Data Analysis
    Summary
Chapter Four: Results
    Demographic Data
    Principal Descriptor Data
    Research Question #1
    Research Question #2
    Research Question #3
    Research Question #4
    Summary
Chapter Five: Discussion
    Summary of Findings
    Implications for Practice
    Future Research
    Conclusions
References
Appendix A
Appendix B
Appendix C
List of Tables
Table 1: Relationship of Data Sources to Research Questions
Table 2: Quantitative Surveys: Response Rate
Table 3: Qualitative Interviews: Response Rate
Table 4: Principal Longevity
Table 5: Principal Age
Table 6: Time Using Digital Teacher Evaluations
Table 7: Use Traditional Method Prior to Digital Evaluation
Table 8: Evaluation Rating Scale
Table 9: Number of Times Faculty are Evaluated a Semester
Table 10: Evaluation Provides Accurate Information about Performance
Table 11: Evaluation Tracks Data over Time
Table 12: The Domains Used are Important Qualities of Instruction
Table 13: The Evaluation Provides Information about Performance
Table 14: Data Gathered Provides Information about School Growth Needs
Table 15: Effective Teaching Domain Comparisons
Table 16: Quality of Feedback
Table 17: Teacher Appreciativeness of the Feedback
Table 18: The Feedback Improves Teacher Effectiveness
Table 19: The Digital Teacher Evaluation Feedback is More Effective than Traditional Methods
Table 20: Digital Teacher Evaluation Affordability
Table 21: Digital Teacher Evaluation is Worth the Cost
Table 22: Digital Teacher Evaluation as a Time Saver
Table 23: Digital Teacher Evaluation Ease of Use
Table 24: Comfort Levels Using the Digital Teacher Evaluation
Table 25: Digital Teacher Evaluation Process Time Saving
Table 26: Anticipated Time to get Comfortable Using Digital Teacher Evaluation
Table 27: Effectiveness of Measuring Teacher Planning and Preparation
Table 28: Effectiveness of Measuring Instruction
Table 29: Effectiveness of Measuring the Learning Environment
Table 30: Effectiveness of Measuring Professional Responsibilities
Table 31: Effectiveness of Measuring Subject Knowledge
Table 32: Recommend Digital Teacher Evaluation to a Colleague
Table 33: Digital Teacher Evaluation Compared to Previous Instrument
Table 34: Overall Rating of Digital Teacher Evaluation
Table 35: Difference between Traditional Instrument and Digital Teacher Evaluation
List of Figures
Figure 1: Digital Teacher Evaluations’ Ability to Report Information to Make Data-Driven Decisions
Figure 2: Breakdown of Principal Survey Responses Linking the Tracking of Data with Assistance in Making Data-Driven Decisions
Figure 3: Difference between Principals who Track Data Over Time and the Aid Digital Teacher Evaluations Provide to Make Data-Driven Decisions
Figure 4: Difference in Experience Level and Regard for Qualities of Instruction Used in Digital Teacher Evaluation
Figure 5: Multi-Rating System and Information Gained About Teacher Performance
Figure 6: Overall Value of Digital Evaluations at Providing Feedback to Improve Performance
Figure 7: Difference between Level of Experience and Feedback Provided
Figure 8: Difference between Number of Observations and Effectiveness of Feedback
Figure 9: Digital Teacher Evaluation is Worth the Cost
Figure 10: Digital Teacher Evaluation Ease of Use
Figure 11: Time Saving Benefits of Digital Teacher Evaluations
Figure 12: Principals with More Experience and Time Saving Benefits
Figure 13: Principals with More Experience and Comfort
Figure 14: Digital Teacher Evaluation Expense
Figure 15: Satisfaction Levels with Digital Teacher Evaluations’ Framework of Effective Teaching
Figure 16: Level of Satisfaction and Years as Principal
Abstract
Principals are charged with the responsibility of conducting teacher evaluations that lead to
improved instructional practices, as well as using information gathered during the process to
make informed, data-driven decisions about teacher performance. This study analyzes the impact
digital teacher evaluations have had on this process. Four research questions guided this
study: (1) How are digital teacher evaluations meeting Principal expectations for reporting of
information to make data-driven decisions about teaching? (2) How are digital teacher
evaluations meeting the need for Principals to provide feedback to teachers to improve
performance? (3) Do digital teacher evaluations meet Principal utility costs in terms of
affordability, ease of use, and time? (4) How do Principal satisfaction levels of digital teacher
evaluations compare to traditional paper and pencil evaluation practices? A mixed method
approach was utilized. Thirty high school Principals of a large, urban Catholic diocese
participated in the quantitative phase, and four in the qualitative phase. Findings from the study
show that Principals see value in the centralized information available through the use of digital
teacher evaluations. The findings further show that digital teacher evaluations save Principals
time; are effective alternatives for tracking data; and deliver teachers the immediate feedback
necessary to improve instruction. Future research is necessary because of the relative newness of
digital teacher evaluations. A case study on a specific instrument or school site will add to the
literature on the topic.
CHAPTER ONE: OVERVIEW OF THE STUDY
A major challenge for school Principals is to establish and execute objective teacher
evaluation practices that lead to positive changes in teaching. The role of teacher evaluations has
constantly been scrutinized and evaluated (Wise, Darling-Hammond, McLaughlin, and
Bernstein, 1984). Because teachers have the greatest direct impact on student performance
(Heck, 2009; Stronge and Tucker, 2000; Stronge, Ward, Tucker and Hindman, 2007; Tucker and
Stronge, 2005; Wenglinsky 2002), it is important for Principals to effectively and accurately
evaluate their staff.
Recent reauthorizations of the Elementary and Secondary Education Act, commonly
known as No Child Left Behind (NCLB, 2002), under Presidents Bush and Obama have added
increasing pressure on educational leaders to carry out evaluation practices aligned with value-
added modeling (VAM) (Harris, 2009; Martineau, 2006). Principals and district leaders are now
encouraged to make employment decisions utilizing accessible data. This data comes in the
form of both student assessment results and teacher observation data. The student assessments
utilized in VAM are most often standards-based exams (Braun, 2005; Darling-Hammond and
Prince, 2007). The federal government has made it clear that any school seeking federal
assistance (Race to the Top) for reform efforts must demonstrate how it will integrate value-
added models for retention and promotion of teachers.
This recent change in accountability practices has placed additional demands on the role
of a Principal. Principals are already viewed as the instructional leader of a school, but these
changes in accountability measures force them to utilize best practices and make data-driven
decisions (Peterson, 2004). Current research on the topic suggests that these decisions should
not be made based solely on student achievement results (Baker et al., 2010; Braun, 2005;
Darling-Hammond, 2007; Goe, 2008; Schochet and Chiang, 2010). It is therefore imperative
that as instructional leaders, Principals utilize information gained from teacher supervision and
observations to make these informed, data-driven decisions. These evaluation tools should not
only be aligned with state and school site standards, but should have the ability to provide useful
information to construct school-wide action plans.
A component of any useful evaluation tool is the ability to provide teachers with helpful
feedback. Researchers agree that this should be a component of any comprehensive performance
evaluation system (Weisberg, Sexton, Mulhern, and Keeling, 2009). In order for teachers to
provide students with the most effective teaching strategies, teachers need to receive frequent
feedback from their supervisors. In most cases this role falls upon the Principal. Teachers have
constantly cited the need and desire for feedback on their teaching practices and research
confirms the effectiveness feedback can have on student outcomes (Weisberg et al., 2009).
Much has been written on instructional leadership (Andrews and Soder, 1987; Heck,
2009). While definitions may vary, teacher supervision and evaluation are common threads
throughout most of the research. Despite the importance of this component of instructional
leadership, it has become increasingly difficult for Principals to devote adequate time to perform
evaluations. These evaluations can take many forms (videotaping, interviews, surveys, pop-ins,
and scheduled visitations), but because of the increased pressures placed on Principals, the
sheer volume of evaluations often makes it a challenge to reach consensus on a course of action.
Technology could be a great resource for Principals to turn to as a means to gather,
disaggregate and report on evaluations. However, little research has been conducted on digital
teacher evaluations and their effectiveness as a time saver. Recent advances in technology could
permit Principals to use digital methods to tabulate observation data and provide teachers with
visual tracking of improvement.
Statement of the Problem
What is unknown about this problem is whether technology and advances in computer-
based performance evaluations have had any impact on teacher effectiveness. Clearly there is a
demand for effective tools that can provide teachers with useful information about their
performance, as well as the ability to quickly consolidate multiple evaluations into meaningful
reporting information. Accountability practices such as the employment of value-added
measures place added pressure on Principals as well as the instrument they use to carry out
faculty evaluations. These increased demands require advances in evaluation tools to quickly
and accurately provide Principals with the necessary information to make informed decisions.
Principals are instructional leaders and therefore should be able to effectively
communicate with faculty about their practice. This has been a topic of conversation in recent
research and policy briefs (Weisberg, et al., 2009). Technology should provide Principals with
appropriate methods to carry out this piece of the accountability movement. This is why it is
important to investigate recent developments by companies to determine the impact their use
has on instructional leadership and teacher improvement.
Since digital evaluation tools are so new, it is also important to determine if these tools
are reliable alternatives to traditional evaluations. As schools adjust to 21st century demands, so
should Principals. It is therefore important to determine if these new evaluation tools are aligned
with research on effective teaching, foster professional dialogue about teacher effectiveness, and
lead to changes in teaching. Accountability measures require value-added decision making, and
Principals need the necessary tools to make informed decisions.
Purpose of the Study
The purpose of this study is to investigate the use of digital evaluation methods employed
by Principals as well as to ascertain how technology has assisted Principals in the evaluation
process. Hypotheses were developed for each research question. Salkind (2011) recommends
that hypotheses be developed as a process of translating research questions into forms “more
amenable to testing” (p. 127). The hypotheses include both directional and nondirectional
forms: the nondirectional hypotheses do not indicate the direction of the difference between
groups, while the directional hypotheses indicate both a difference and its direction (Salkind,
2011). A brief illustration of how the two forms map onto statistical tests follows the list below.
The research questions for this study and corresponding hypotheses are as follows:
1. How are digital teacher evaluations meeting Principal expectations for reporting of
information to make data-driven decisions about teaching?
Hypothesis 1.1. There will be a difference between Principals who track teacher
data over a period of time and the aid digital teacher evaluations provide
Principals to make data-driven decisions.
Hypothesis 1.2. There will be a difference between the level of experience of
Principals and their regard for the qualities of instruction used in the digital
teacher evaluation instrument.
Hypothesis 1.3. Digital teacher evaluations with a multi-rating system will
provide Principals with accurate information about teacher performance.
2. How are digital teacher evaluations meeting the need for Principals to provide feedback
to teachers to improve performance?
Hypothesis 2.1. There will be a difference between Principals who have greater
experience with digital teacher evaluations and the feedback they are able to
provide teachers.
Hypothesis 2.2. There will be a difference between the number of teacher
observations performed by Principals and the effectiveness of the feedback
provided by digital teacher evaluations to improve teacher effectiveness.
3. Do digital teacher evaluations meet Principal utility costs in terms of affordability, ease
of use, and time?
Hypothesis 3.1. Principals who have more experience using digital teacher
evaluations will save time.
Hypothesis 3.2. Principals who have more experience using digital teacher
evaluations will be more comfortable using them.
Hypothesis 3.3. Principals who use digital teacher evaluations will find them
expensive alternatives to paper and pencil evaluation instruments.
4. How do Principal satisfaction levels of digital teacher evaluations compare to traditional
paper and pencil evaluation practices?
Hypothesis 4.1. There will be a relationship with the level of satisfaction and the
years as principal.
Hypothesis 4.2. There will be a difference between traditional pencil and paper
evaluation instruments and digital instruments.
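As a concrete illustration of the distinction drawn above, a nondirectional hypothesis corresponds to a two-tailed significance test, while a directional hypothesis corresponds to a one-tailed test. The following minimal sketch in Python is not part of the study’s analysis; the group labels and ratings are hypothetical and serve only to show how a comparison like Hypothesis 1.1 could be tested in both forms.

# Illustrative sketch only: hypothetical 4-point ratings from two groups of
# Principals (e.g., those who track teacher data over time vs. those who do not).
from scipy import stats

trackers = [3.4, 3.8, 3.1, 3.6, 3.9, 3.3]      # hypothetical ratings
non_trackers = [2.9, 3.0, 3.2, 2.7, 3.1, 2.8]  # hypothetical ratings

# Nondirectional hypothesis (two-tailed test): the groups differ, in either direction.
t_stat, p_two_tailed = stats.ttest_ind(trackers, non_trackers)

# Directional hypothesis (one-tailed test): the first group scores higher.
# The `alternative` keyword requires SciPy 1.6 or later.
t_stat_dir, p_one_tailed = stats.ttest_ind(trackers, non_trackers, alternative="greater")

print(f"two-tailed p = {p_two_tailed:.3f}; one-tailed p = {p_one_tailed:.3f}")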
The Importance of the Study
The importance of this study is to contribute to the broad literature on instructional
leadership as well as to provide incentive for continued developments in teacher evaluations that
utilize 21st century tools. Research points out that Principals are central in the teacher evaluation
process (Peterson, 2004). This study will be valuable to Principals, as practitioners, in using
digital evaluations to make informed decisions. In addition, this study will provide Principals
with important information on the effectiveness of digital teacher evaluations in providing
teachers with meaningful feedback. Potential developers of teacher evaluations will benefit from
this study as well, since it can be used to design tools that utilize research-based evidence in the
design of digital evaluations.
Limitations
Schools that employ digital teacher evaluations were identified through a search for
digital evaluations in the Apple App Store. The search yielded an evaluation
application utilized by multiple school districts. Most of these districts were Catholic
archdioceses. As such, the researcher limited the research to an urban Catholic diocese in a large
metropolitan area. Because of the large number of schools in the diocese, the researcher limited
the study to include just the high schools. The population in this study was all Catholic schools.
No public schools were included. The results of this study were therefore limited and not as
widely generalizable.
In addition, because of the limited literature on digital teacher evaluations, it was difficult
to identify actual digital evaluations utilized by Principals. The study was further limited by the
researcher’s reliance on an online App Store to identify known evaluations. The search proved
fruitful, but the study remains limited in this regard.
Definition of Terms
Data-Driven Decision Making – collecting and analyzing various forms of data to make
informed decisions about teaching.
Digital Teacher Evaluation – teacher observation instrument used by a Principal, or designee,
which is executed on a mobile device. Digital teacher evaluations can be in the form of a
downloaded program or application that does not require pen or paper.
No Child Left Behind – Public Law 107-110 is an act to close the achievement gap with
accountability, flexibility, and choice, so that no child is left behind.
Teacher Effectiveness – the quality of instruction measured on multiple research-based domains.
Value-Added Models (VAM) – evaluations that incorporate both Principal observation reports of
teachers as well as student data on statewide exams. The weighted value of each may vary by
district. VAM takes into account multiple measures to rate teacher effectiveness.
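Because the weighting of the observation and student-growth components varies by district, a VAM composite can be understood as a simple weighted average. The short Python sketch below is purely illustrative; the weights and scores are hypothetical and are not drawn from any district cited in this study.

# Illustrative sketch of a VAM-style composite rating. All weights and scores
# are hypothetical; actual district formulas vary widely.
def vam_composite(observation_score: float,
                  student_growth_score: float,
                  observation_weight: float = 0.5) -> float:
    """Weighted average of a Principal observation rating and a student-growth
    measure, both assumed to be reported on the same scale."""
    growth_weight = 1.0 - observation_weight
    return (observation_weight * observation_score
            + growth_weight * student_growth_score)

# Example: a district weighting observations 60/40 against student growth.
rating = vam_composite(observation_score=3.5,
                       student_growth_score=2.8,
                       observation_weight=0.6)
print(f"Composite rating: {rating:.2f}")  # prints 3.22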
CHAPTER TWO: LITERATURE REVIEW
Teachers play perhaps the single most important role in improving student achievement
(Goldhaber, 2002; Goldhaber and Anthony, 2007). The difference can often be sizeable, as
Sanders and Horn (1998) found in their study on learning outcomes and teacher effectiveness in
Tennessee schools. Gordon, Kane, and Staiger (2006) found similar outcomes in their
investigation of teacher impact on student math achievement scores. Student achievement
results can vary by as much as 10 percentile points when students are assigned to an effective teacher. It is
therefore important for Principals to utilize a comprehensive evaluation program that accurately
assesses teacher effectiveness (Weisberg, et al., 2009). Recent changes in accountability
practices have placed even greater pressure on Principals to align this practice with value-added
models (NCLB, 2002; Harris, 2009; Martineau, 2006).
This review of literature is presented in three sections. The first is a brief history of the
evolution of the teacher evaluation system. The second is an exploration of the characteristics of
effective teachers. In order for Principals to conduct effective and accurate evaluations, they
should possess knowledge of what constitutes effective teaching. The third section is a review of
how Principals conduct and use evaluations. This will serve as a rationale for a greater emphasis
on the use of digital evaluations to assist Principals in making informed decisions.
History of Teacher Evaluations
Teacher evaluations have been part of education discussions dating back to the 1930s
(Danielson and McGreal, 2000). It was not until the 1980s, however, that teacher quality and
effectiveness became the primary focus of education reform. Under the Reagan administration,
the National Commission on Excellence in Education (1983) drew attention to teacher
evaluations with the publication of A Nation at Risk: The Imperative for Educational Reform.
Among the recommendations of this report was a call for school administrators to develop
systems to objectively evaluate teacher effectiveness. It also recommended that teachers who are
deemed ineffective through the evaluation process be dismissed. The Commission viewed
teacher improvement as the best means to improve the academic system as a whole.
A Nation at Risk’s focus on classroom instruction and teacher effectiveness was a
complete reversal of previous reform efforts that focused on curriculum and student competence
(Wise, Darling-Hammond, McLaughlin, and Bernstein, 1984). Teacher effectiveness, as
measured through an evaluation process, became the driving factor for improving student
learning. From A Nation at Risk came the focus on teacher evaluation. This focus on teacher
effectiveness as the greatest contributor to improved student achievement has since been
confirmed by research (Darling-Hammond, 1999; Stronge & Tucker, 2000; Wenglinsky, 2002;
Wallace Foundation, 2004).
In 2002, George W. Bush signed into law the No Child Left Behind Act of 2001
(Simpson, Lacava, and Graner, 2004). NCLB is the most recent and perhaps most aggressive
reform initiative since A Nation at Risk. NCLB dramatically changed accountability practices by
placing the emphasis on student achievement results (Harris, 2009). In stark contrast to A Nation
at Risk which called for the development of an objective teacher evaluation system, NCLB drew
focus away from the process of evaluation and shifted it instead to student outcomes. Teacher
effectiveness is then determined by how students perform on a standardized exam (Peterson,
2004).
Districts and Principals have since begun to tie student achievement data into the teacher
evaluation process. This process of utilizing student achievement data as part of the overall
teacher evaluation system is known as a value-added-model (VAM). VAM has become an
attractive alternative to the traditional teacher evaluation system and is often tied to a teacher’s
compensation (Darling-Hammond, 2009). Denver, Colorado, for example, ties teacher
compensation in the form of merit pay to the results of these state exams (Harris, 2009). The
District of Columbia is another school district that incorporates student achievement data in the
teacher evaluation process. In addition to the traditional Principal classroom observation,
District of Columbia public school teachers are rated based on student test performance,
commitment to the school community, and professionalism (District of Columbia Public
Schools, 2010).
While this approach is attractive and promising, researchers warn that it should not be a
replacement of an objective evaluation system. Darling-Hammond (2009), a leading researcher
in the area of teacher effectiveness, states multiple reasons why VAM should not be the primary
measure for teacher evaluation. She credits work done by other agencies (the RAND
Corporation and Henry Braun of the Educational Testing Service) and agrees that there are too
many external variables like student assignment along with threats to the validity of the test
results to rely solely on achievement results. Schochet and Chiang (2010) find similar errors in
their investigation of value-added models. They conclude that there is a significant amount of
random error that cannot be accounted for in the value-added approach to evaluation. In fact,
Schochet and Chiang (2010) state that up to 90 percent of the gains in student scores are due to
student-level factors beyond the control of the teacher. The literature therefore supports the
notion that Principals should execute an objective evaluation process to determine teacher
effectiveness and exercise caution when using student achievement data as a means of rating teacher
effectiveness (Baker, Barton, Darling-Hammond, Haertel, Ladd, Linn, Ravitch, Rothstein,
Shavelson, and Shepard, 2010).
Teacher Evaluation Framework
Because the literature supports the need for Principals to carry out objective teacher
evaluations, it is important to understand the framework used in the design of these evaluation
tools. These instruments are often tied to national and state teaching standards developed by
teaching professionals organized as a board.
One such national board is The National Board for Professional Teaching Standards
(NBPTS). The NBPTS was created in 1987 and developed a vision for accomplished teachers.
This vision of what teachers should know and be able to do was developed by fellow teachers.
NBPTS’ five propositions are that teachers are committed to students and their learning, have
knowledge of the subject they teach, manage and monitor learning, reflect on their practice, and
are members of learning communities (NBPTS, 1987). According to the NBPTS website, these
standards are periodically reviewed and revised if necessary.
The NBPTS is not the only organization that has developed a set of teaching standards.
Darling-Hammond, Amrein-Beardsley, Haertel, and Rothstein (2012) describe how a consortium
was created by a group of states calling themselves the Council for Chief State School Officers.
The consortium they created is called the Interstate New Teacher Assessment and Support
Consortium (InTASC). Through a review of literature and assistance from researchers at
Michigan State University, the consortium developed ten teaching standards for teachers. These
teaching characteristics have been adopted by over forty states (Darling-Hammond,
Amrein-Beardsley, Haertel, & Rothstein, 2012).
The Bill and Melinda Gates Foundation also recently funded a project to identify
characteristics of effective teaching that can be aligned with evaluation tools. This initiative
began in 2009 and is known as The Measures of Effective Teaching (MET) Project. The MET
characteristics of effective teaching will be defined through a review of professional standards,
an analysis of over 12,000 videotaped classroom sessions of over 3,000 teachers, and through the
use of student achievement results to test the validity of their findings (Cantrell & Scantlebury,
2011).
Charlotte Danielson (2007) developed the framework that is used in the MET Project and
that is used by Chicago public schools as the rubric for their teacher evaluations (Oliva, Mathers,
& Laine, 2009). Danielson clustered 22 components around four domains of effective teaching.
The four domains include planning and preparation, environment, instruction, and professional
responsibilities (Danielson, 2007).
Danielson’s (2007) domains are very similar to the broad categories in Wise et al.’s (1986)
research on the evaluation systems of 32 school districts. While the practices varied at each district,
the characteristics of an effective teacher centered on five factors. These were teaching
procedures, classroom management, knowledge of subject matter, personal characteristics, and
professional responsibility (Wise et al., 1986, p. 18).
Covino and Iwanicki (1996) conducted a study that not only identified the characteristics
of effective teachers, but sought to identify the cognitive processes used by these experienced
teachers as well. Through a review of literature they agreed on four excellent descriptors of
effective teachers. The four descriptors were that effective teachers concentrate on academics, are
effective presenters, engage in interactive decision-making, and monitor and assess classwork
and homework (Covino & Iwanicki, 1996, p. 327).
Within each of these descriptors were underlying cognitive processes that they wanted to
better understand. Surveys and factor analysis were used to draw out these factors from the
descriptors of effective teachers. It is Covino and Iwanicki’s (1996) summation that these ten
reliable factors should be part of any evaluation system of teachers. The ten factors were: 1)
monitors students’ understanding during instruction, 2) uses high-interest lessons, 3)
communicates to all students the expectation that they are to achieve their best, 4) adapts
teaching to students’ learning styles, 5) motivates students effectively, 6) provides opportunities
for problem solving, 7) uses homework effectively, 8) uses a variety of instructional materials
and techniques, 9) encourages students to take responsibility for their learning, and 10) uses
appropriate information to assess students’ learning needs (p. 350).
While most researchers utilized teacher surveys and classroom observations to identify
teacher characteristics to use in teacher evaluations, some researchers used student perspectives
to flesh out these characteristics. The rationale is that since students are the ones most directly
impacted by teachers, they could be just as helpful at identifying characteristics that make
teachers most effective. Schulte, Slate, and Onwuegbuzie (2008) conducted such a research
study. They surveyed 615 undergraduates from a Southwest College using a modified survey
with open-ended questions developed by Minor et al. (2002) and Witcher et al. (2001) to identify
effective high school teacher characteristics.
Not surprisingly, Schulte, Slate, and Onwuegbuzie’s (2008) results revealed similar
characteristics to those found by researchers who relied solely on teacher inputs and data.
Twenty-four common themes were uncovered through a process of frequency distribution and
factor analysis. The top five themes in order were knowledgeable, patience, caring,
understanding, and teaches well (p. 356). The descriptions of each of these themes are very
similar to those utilized in teacher evaluation instruments described above. Descriptions of
knowledge as an effective teacher characteristic, for example, include understanding of subject
material as well as student needs, and use of outside materials. This is very similar to what other
researchers (Danielson, 2007) would define as planning and preparation. Schulte, Slate, and
Onwuegbuzie (2008) also highlight the importance of teacher knowledge since it corresponds
with NCLB’s position that teachers should be highly qualified. It is Schulte, Slate, and
Onwuegbuzie’s (2008) hope that their investigation of student descriptors of effective teacher
characteristics will be used to make informed employment decisions. From this it can be inferred that
they too recommend that these identified characteristics be part of any teacher evaluation system
used to help Principals make informed decisions (p. 360).
The review of the literature on the behaviors and characteristics of effective teachers
illustrates that there appear to be some commonalities between the constructs developed by
the researchers mentioned above. Management, planning and preparation, monitoring,
and follow-up seem to be some of the common threads. Despite the multiple constructs and
definitions, it is clear that all the researchers agreed that any teacher evaluation system should be
aligned with research-based practices.
Teacher Evaluation Process and the Role of Technology
While there is sufficient literature on what effective teaching is and supporting evidence
that an evaluation process can lead to improved learning gains, there is little research on the role
technology plays in the evaluation instrument. Evaluation instruments are aligned with current
research on what constitutes effective teaching, but it is unclear from the literature whether these
instruments are structured in any fashion other than in the traditional sense of paper and pen
reports. There is evidence that videotaped sessions are used in some evaluation systems, but
purely as a means of reflection (Wright, 2010).
There is, however, evidence of evaluation practices and what the desired outcomes of the
evaluations are for both teachers and Principals. Knowing this will better assist in understanding
the role that technology can play in the evaluation process and whether technology based
evaluation instruments meet the needs addressed in the literature.
The most common method of evaluating teachers is through a systematic classroom
observation that utilizes an interactive rating system (Strong, Gargani, and Hacifazlioglu, 2011).
These rating systems are aligned with the theoretical frameworks described above. These rating
systems have recently come under considerable scrutiny. Weisberg et al. (2009) reported that in
five school districts that used a binary rating evaluation system (satisfactory or unsatisfactory),
99 percent of teachers received the satisfactory rating. Even districts that use a multiple rating
scale have an overwhelming majority of teachers receiving the highest mark: seventy percent of
teachers in five other districts received the highest mark (Weisberg et al., 2009).
This illustrates the importance for Principals and districts to not only utilize a multiple
rating system, but to have a clear understanding of each level. Glazerman, Goldhaber, Loeb,
Raudenbush, Staiger, and Whitehurst (2011) make the recommendation that better systems of
evaluations differentiate between teachers. Systems like those researched by Weisberg et al.
(2009) are useless if over 95 percent of teachers receive the same rating. This is a common flaw
identified by researchers and is an important consideration as Principals become increasingly
pressured to make employment decisions using a value-added measure (VAM) or are in a district
that has a structured merit pay program. Based on their findings, researchers (Wise et al., 1986;
Glazerman et al., 2011) suggest that rating systems used in teacher evaluations should be well
defined so that the true top teachers receive the highest ratings and are therefore properly
acknowledged for their efforts.
While it appears that multiple rating systems are preferred over binary ones (Weisberg et
al., 2009), the evaluation process itself becomes important to investigate. Principals are the primary
individuals who conduct teacher evaluations (Oliva, Mathers, and Laine, 2009; Peterson, 2004).
This is usually carried out in one of two ways. The first is a brief classroom observation that
lasts three to six minutes (Peterson, 2004; Strong, Gargani, and Hacifazlioglu, 2011).
During these unscheduled “walk-throughs,” Principals usually look for evidence of student
learning and provide the teacher with highlights of the evaluation at some point after the walk-
through. Peterson (2004) states: “walk-throughs can be more valuable data sources than formal
observations because they sample more reliably with a greater number of observations, are less
intrusive on actual ongoing instruction, and are more flexible in focusing on what makes a
difference in school functioning and student learning” (p. 62). It is not clear from the literature,
however, whether this feedback is provided electronically via an email report or a handwritten
report.
The other common procedure is for Principals to hold a pre-meeting with teachers prior
to the formal observation, followed by a post-observation discussion. Wise et al. (1986) describe this
procedure in their investigation of Toledo’s teacher evaluation system. A common evaluation
instrument is used with a follow-up discussion of the evaluation. The frequency of this more
formal evaluation procedure can vary immensely. Weisberg et al. (2009) found that in most of
the districts in their studies, teachers were evaluated based on two or fewer formal
observations. Oliva, Mathers, and Laine (2009) also found in their research that nontenured
teachers are evaluated twice a year while tenured teachers are evaluated once every two to five years (p. 18).
Wise et al. (1986), on the other hand, described how teacher discussions on observations occur
“several times a year” (p. ix).
While the process may vary by district and by Principal, the literature is congruent
regarding the purposes of teacher evaluations. Aside from providing principals with information
for employment (Wise et al., 1986), teacher evaluations provide teachers with feedback on how
to improve their practice, and Principals with information for future professional development.
Feedback in the form of recommendations on how to improve instruction is a major component
of an effective teacher evaluation. This is supported by multiple researchers (Wise et al., 1986;
Peterson, 2004; Darling-Hammond et al., 2012) who have concluded that this is perhaps the most
important outcome of the evaluation process. The MET Project (2009) indicates that teachers
could reach a plateau by the third or fourth year of teaching if feedback is not a regular part of
the evaluation procedure. Most of the literature does not indicate the form in which this
feedback takes place, but Darling-Hammond et al. (2012) state in their findings that feedback is
most often provided in written form.
While feedback is valued by teachers as a way to improve their practice, as well as by
Principals as instructional leaders seeking to be actively involved in learning (Andrews and
Soder, 1987), providing constructive feedback is a time-consuming process. This has long been
identified as a utility concern with teacher evaluations (Wise et al., 1986). Peterson (2004)
included in his findings that even though conducting evaluations and providing feedback were
among the greatest priorities of Principals, they reported having far too little time to carry them
out effectively.
Cost is another utility concern. The efficient use of resources is always a concern of
school leaders, but with the increased demand for visible outcomes (Oliva, Mathers, and Laine,
2009) it has become even more important for school districts and leaders to target resources like
teacher evaluations that will directly affect student outcomes. While advances in technology
appear to be viable solutions to both of these utility problems, the literature does not indicate
whether this has been applied to address them.
Another area where advances in technology could greatly assist in teacher evaluations is
the measurement of effectiveness over a period of time. Glazerman et al. (2011) suggest this in their
outline of better teacher evaluations. In order for Principals to make data-driven decisions, it is
important for them to identify teachers who have performed well over a period of time. The
literature does not state how this should be done, but it is intimated by other researchers that
electronic tracking of evaluations would assist in the decision making process. Only four of the
twelve districts studied by Weisberg et al. (2009) tracked evaluation results electronically (p. 20).
The other districts used a paper system and kept records at a central office. Again, it is unclear
from the literature whether this process is done through data input which could be time
consuming, or whether this tracking of data over time was a feature of the actual teacher
evaluation instrument used in the process.
Summary
There is a clear mandate for Principals to execute teacher evaluations in the most
effective manner possible. This includes having clear definitions for the multiple ratings being
used, along with using an instrument aligned with research-based characteristics of effective
teaching. The literature acknowledges that the Principal is the primary instructional leader of a
school best qualified to carry out teacher evaluations. Based on the findings in the literature, the
best method is for Principals to perform multiple “walk-throughs” during the year and provide
teachers with frequent feedback on ways to improve instruction (Oliva, Mathers, and Laine,
2009). The literature is unclear whether digital teacher evaluations are utilized for this task, but
it is suggested by some researchers that this would be a desired practice due to the limited
amount of time Principals have to conduct evaluations the way they would prefer (Peterson,
2004). Tracking of data over a period of time is another useful output of the teacher evaluation
process (Glazerman et al., 2011). The literature is unclear on how this is executed, but it is again
suggested that electronic tracking would provide Principals and district officials with useful
evaluation information to make data-driven decisions (Weisberg, et al, 2009).
Based on the findings in the literature, it becomes important to study the effects of digital
teacher evaluations. The literature suggests some clear benefits, but the only studies of how
technology is used in evaluations are through reflection and video recording of an observation
(Wright, 2010). It is suggested in the literature that some districts employ digital tracking
methods (Weisberg, et al., 2009), but not on the evaluation instrument itself. Time, cost,
feedback, professional development, and improved learning outcomes are important components
of an evaluation system, and digital advances in teacher evaluations could contribute to the
effectiveness of the process in each of these areas.
CHAPTER THREE: METHODOLOGY
The purpose of this chapter is to review the methodology used in the study. It will cover
the research questions, sampling and population characteristics, the instrumentation utilized to
carry out the research, data collection practices, and analysis techniques.
The purpose of this study is to investigate Principal expectations of digital teacher
evaluations. In particular, the study examines how digital teacher evaluations compare to
previous traditional practices and whether newer digital models meet Principal needs to improve
teacher performance. Four research questions are presented to address the topic.
Research question 1 centers on the ability of teacher evaluations to track teacher
performance data over a period of time to be used by principals in making data-driven decisions
about teaching. In addition to having a multiple instead of a binary teacher rating system
(Weisberg, et al, 2009), it is recommended that teacher evaluations track performance over a
period of time (Strong, Gargani, and Hacifazlioglu, 2011; Glazerman et al., 2011). The first
research question investigates how digital teacher evaluation rating and tracking systems help
Principals make informed decisions.
Research question 2 centers on feedback provided to teachers by Principals to improve
teaching practices. Several researchers (Wise, et al, 1986; Peterson, 2004; Darling-Hammond, et
al, 2012) agree that providing feedback to teachers is one of the most important roles carried out
by Principals. It is therefore important to investigate how effective digital teacher evaluations
are in providing teachers with meaningful feedback.
Research question 3 centers on utility costs. Time and cost of teacher evaluations have
been identified as important factors by researchers (Wise, et al, 1986; Oliva, Mathers, and Laine,
2009). No research was found on the ease of use of digital teacher evaluations, which serves as
another important component of the utility of the instrument. This question measured the
effectiveness of digital teacher evaluations in meeting these three specific utility costs of
Principals.
Research question 4 centers on Principal satisfaction of digital teacher evaluations
compared to traditional pen and paper practices. Technology has improved dramatically and is
recommended by some researchers (Glazerman, et al, 2011) as a viable alternative to traditional
practices. This question will ascertain the degree of satisfaction of digital teacher evaluations
from the perspective of the Principals who utilize them.
Design of the Study
Creswell’s (2005) three questions for a research study were taken into consideration in
the design of this study. These questions are:
(1) What knowledge claims are being made by the researcher, including a
theoretical framework?
(2) What strategies of inquiry will inform the procedures?
(3) What methods of data collection and analysis will be used?
These questions are addressed in the design of this study. In addition, the theoretical framework
was structured to successfully answer the research questions.
In order to address the research questions, a mixed methods approach was selected for
this research study. A mixed method research design employs both qualitative and quantitative
research (Patton, 2002). As Creswell (2005) states, “A mixed method research design is a
procedure for collecting both quantitative and qualitative data in a single study, and analyzing
and reporting this data based on a priority and sequence of information” (p. 594).
This approach was selected because of the need to enhance the study with a second
source of data. This is often referred to as triangulation (Patton, 2002). Because there is very
little research on the subject of the effectiveness of digital teacher evaluations, it was important to
complete Principal interviews to support the quantitative data from the survey.
This study will also follow the recommendations of Creswell (2005) to run two phases in
the collection of data. Quantitative data will be collected and analyzed in the first phase
followed by the collection and analysis of qualitative data in the second phase. This structure
will guarantee that the qualitative data will connect to the quantitative data (Creswell, 2005).
Creswell (2005) also outlined four strategies to strengthen a mixed methods design. The
first consideration is the implementation sequence of the quantitative and qualitative data
collection. The second concerns the priority given to the quantitative and qualitative data
collection and analysis. The third centers on when the findings will be integrated. The fourth is
whether a theoretical perspective will be used.
To address the collection sequence of the quantitative and qualitative data, this study will
implement a quantitative electronic survey through SurveyMonkey.com (2007) followed by semi-
structured follow-up interviews with principals. Because principals will be surveyed only once,
and the data produced will represent attitudes and beliefs about digital teacher evaluations at a
single point in time, the researcher will use a cross-sectional survey (Creswell, 2005).
The data collected from the quantitative and qualitative research will have equal priority.
However, the quantitative research will be conducted first through an electronic survey, and the
qualitative research will be conducted second through semi-structured interviews with principals.
The interview findings will clarify the quantitative findings.
In conclusion, a mixed methods research design will be used in this study. A cross-
sectional survey will be used for the collection of quantitative data. Semi-structured interviews
will be conducted for the collection of qualitative data. This approach is an appropriate design
for this study.
Sample and Population
Purposeful sampling was used in this study. Because of uncertainty about which schools
or school districts were utilizing digital teacher evaluations, a strategic search was used to
identify schools for the study. Apple’s App Store was used as a search tool because of its
well-constructed categories. A search for teacher evaluations in the education category resulted
in two digital teacher evaluations. For the purpose of this study, the evaluation identified will
be known as Evaluation X. Evaluation X was developed by a company that will be known as
Developer X. Developer X’s website listed over fifty school districts that utilized this particular
digital teacher evaluation. A Catholic Archdiocese in a large urban area in California in close
proximity to the researcher was among the listed districts. This diocese was used as the sample
population for this study. There are 136 schools in this particular district, of which ninety-six are
elementary schools (Grades K-8) and forty are high schools (Grades 9-12). For the purposes of
this study, only the forty high schools of this Catholic diocese were used.
An email was sent to the Superintendent of Secondary Schools to request permission for the
forty principals to participate in this study. Once approval from the Superintendent was secured,
a recruitment email was sent to the forty principals of the diocese. The Principal recruitment
email can be viewed in Appendix C.
Instrumentation
Two sources of data were used for this study. The first was the quantitative portion of the
mixed methods design, which collected data on the four research questions through an electronic
survey constructed through SurveyMonkey. The second source of data was the qualitative
portion, which involved follow-up, semi-structured interviews with principals. The survey and
interviews were conducted in two phases. The first phase was the distribution of the survey.
The survey was designed as a cross-sectional measurement since it ascertains Principal
perspectives at one particular point in time (Creswell, 2005). The second phase of the study
involved follow-up interviews with Principals. The interview protocol used was semi-
structured and intended to clarify the quantitative findings.
Any identifiable information obtained in connection with this study will remain
confidential and will be disclosed only with the permission of the individuals involved, or as
required by law. The data will be stored on the researcher’s password protected computer for
three years after the study has been completed and then destroyed. When the results of the
research are published or discussed in conferences, no identifiable information will be used.
Survey Design Considerations
Because there is little existing research on the topic, a new survey was developed. Several
considerations were made in its development, the first of which was respect for the
principal’s time. It was determined that the survey must take no longer than twenty
minutes to complete; otherwise, the respondent might experience cognitive overload and
ultimately skew the results.
The number of questions associated with each research question was considered in the
creation of the survey. A minimum of four questions per research question was determined to be
appropriate. Using at least four questions per research question allowed the researcher to
measure the internal consistency of the research questions. Salkind (2011) describes the process
of knowing whether items are consistent in representing one construct or area of interest as
internal consistency reliability.
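Internal consistency of a question cluster is conventionally summarized with Cronbach’s alpha, which compares the sum of the individual item variances with the variance of respondents’ total scores. The Python sketch below is illustrative only; the response matrix is hypothetical and is not data from this study.

# Illustrative sketch: Cronbach's alpha for one four-item question cluster.
# Rows are respondents, columns are items; the 4-point Likert responses
# shown here are hypothetical.
import numpy as np

responses = np.array([
    [4, 3, 4, 4],
    [3, 3, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [3, 2, 3, 3],
])

k = responses.shape[1]                          # number of items in the cluster
item_variances = responses.var(axis=0, ddof=1)  # sample variance of each item
total_variance = responses.sum(axis=1).var(ddof=1)

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # approximately 0.86 for these data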
Scales of measurement were also taken into consideration in the design of the survey.
Nominal levels of measurement were used to identify particular categories. Salkind (2011)
describes nominal levels of measurement as “the characteristics of an outcome that fit into one
and only one class or category” (p. 104). Questions 1-6 of the survey are all nominal levels of
measurement (See Appendix A).
Interval levels of measurement will also be used in the survey. These will be in the form
of a 4-point Likert Scale. This rank-order scale reflects differences in magnitude by having
Principals rate statements from Strongly Disagree = 1 to Strongly Agree = 4. Some survey
questions also ask for the degree of satisfaction, rated from Not Satisfied = 1 to Very
Satisfied = 4.
Survey questions were clustered together by research question to minimize confusion of
the respondent and for ease of data analysis. All questions in the survey were selected after
careful consideration of the research questions. Questions 7-11 are aligned with the first
research question. These five questions are about how principals use digital teacher evaluations
to make data-informed decisions and on the ability of the evaluations to track data over a period
of time. Questions 12-15 are aligned with the second research question. These four questions
center on feedback and the quality of feedback provided teachers through the use of the digital
teacher evaluation. Questions 16-22 are aligned with the third research question. These seven
questions relate to the utility costs associated with the digital teacher evaluation. Questions 23-
30 are aligned with the fourth research question. These eight questions are linked to the domains
of the digital teacher evaluation and overall degree of satisfaction of principals using this form of
instrument for observation and evaluation. See Appendix A to review the survey questions used
in this study.
Interview Protocol Considerations
The research questions guided the selection of the interview questions, which were also
selected to support the findings from the Principal Survey. The questions are open-ended rather
than dichotomous, permitting each Principal to take the response in whatever direction he or she
wished (Patton, 2002).
The Principal Interview Protocol was designed using the framework of Patton's (2002)
interview guide for qualitative research. A standardized open-ended interview approach was
selected for this study. This type of interview uses exact wording of questions determined in
advance. The strengths of this interview instrument are that all respondents answer the same
questions, interviewer effects and bias are reduced, and the interview is designed to facilitate the
organization and analysis of the data (Patton, 2002). The weakness of this instrument is that it
allows little flexibility in relating the interview to particular circumstances and may constrain the
naturalness of the exchange (Patton, 2002).
Rapport was established with each Principal who participated in the interview. This was
accomplished through neutrality in the delivery of the questions and in the researcher's
demeanor as responses were given. Patton (2002) describes rapport as establishing a level of
respect between the interviewer and the individuals being interviewed. The structured format of
the interview assisted in the establishment of rapport and neutrality since questions were
delivered sequentially. In addition, the researcher remained neutral after each question was
answered so that each Principal felt comfortable telling the researcher anything without
engendering either favor or disfavor with regard to the content of the response (Patton, 2002, p.
365).
Consent and confidentiality were highly regarded in the design of the Principal Interview
Protocol. Consent for the interview was established prior to the formal meeting through the last
question of the Principal Survey (See Appendix A). An opening statement clarifying how
information would be used during the interview, the length of time it would be kept, and the use
of pseudonyms was clearly spelled out at the beginning and end of the interview (See Appendix
B).
Conceptual Framework Considerations
While the research questions served as the preliminary guide for the development of the
survey and interview protocols, a conceptual framework was also employed. Four principles
guided the framing of this study. The first was to determine the parameters for what constitutes
exceptional teaching and how it should be measured. The work of Charlotte Danielson (2007)
was used as the conceptual framework to address this area of research. Danielson's framework
for effective teachers is the basis for evaluations in Chicago (Oliva, Mathers, & Laine, 2009), is
being used as the metric in the recent MET project (Bill & Melinda Gates Foundation, 2009),
and is therefore appropriate for investigation in the integration of digital teacher evaluations.
Questions 23-27 of the survey are aligned with this framework and address the reporting
component of the first research question. Questions 7-11 of the Principal Survey target how
Principals use the digital teacher evaluation to make informed decisions and also address the
first research question (See Appendix A). Interview questions 3, 4, and 7, along with the
follow-up questions for each, are also aligned with this framework (See Appendix B). They
clarify the findings in the quantitative portion of this study. In particular, the Principal responses
clarify survey responses on the reporting ability and the use of digital teacher evaluations to
make data-driven decisions about teaching.
The second guiding principle that assisted in the conceptual framework of the study was
the work of Darling-Hammond et al. (2012) and Peterson (2004) on the role of feedback. As
instructional leaders, Principals are charged with the responsibility to evaluate teacher progress
and offer feedback on how to improve. Peterson (2004) describes the process of providing
feedback to teachers as one of the greatest priorities of Principals. While the best delivery
method is unclear, Darling-Hammond et al. (2012) suggest it be done in written form. Based on
the value these researchers place on feedback in evaluations, the role of feedback and how
Principals use digital teacher evaluations to provide it became an important component of the
conceptual framework of this study. The second research question centers on the role of
feedback. Questions 12-15 of the Principal Survey were developed to ascertain the effectiveness
of digital teacher evaluations in assisting Principals in the delivery of feedback and whether this
has resulted in improved teaching (See Appendix A). In addition, question 6 of the Principal
Interview Protocol was included to elicit qualitative support on the concept of feedback.
The third guiding principle of the conceptual framework was the utility cost involved in
using digital teacher evaluations. The time and cost of using digital teacher evaluations are
important considerations for Principals, and the research supports this concept. Wise et al.
(1986) discussed the limited time Principals have to conduct thorough teacher evaluations.
Oliva, Mathers, and Laine (2009) discussed how budgets and cost weigh heavily on Principals;
both are therefore important concepts of investigation in this study. While time and cost are
clearly important considerations for Principals as instructional leaders, the literature is unclear as
to whether digital teacher evaluations are both cost effective and time saving. Questions 16-22
of the Principal Survey address the utility costs associated with digital teacher evaluations. In
addition, questions 3 and 5 of the Principal Interview Protocol elicit qualitative support on the
concept of the utility cost associated with digital teacher evaluations.
The fourth guiding principle of the conceptual framework concerned the ability of
Principals to use digital teacher evaluations to track data over a period of time to make
data-driven decisions. Glazerman et al. (2011) suggested that teacher observation data should be
recorded over time so Principals could make informed decisions regarding teacher performance.
Weisberg et al. (2009) further suggested that having this information in electronic form would be
more valuable to Principals. Both studies provided the conceptual framework for the portion of
this study that examined how digital teacher evaluations assist Principals in making data-driven
decisions about teacher performance. Questions 7-11 of the Principal Survey and questions 4
and 7 of the Principal Interview Protocol are associated with this concept.
Reliability and Validity
A test of reliability determines whether a measurement tool measures something
consistently (Salkind, 2011). An exhaustive search for existing survey instruments measuring
Principal uses of digital teacher evaluations to improve teaching proved fruitless. The questions
of the Principal Survey were developed after review by three Department Chairs of a school that
uses digital teacher evaluations. The researcher presented them with the questions and described
how they are linked to the research questions. Their input was taken into consideration in the
development of the final Principal Survey (See Appendix A). Test-retest reliability (Salkind,
2011) could not be determined because of the uniqueness of this study and because the Principal
Survey and Principal Interview Protocol had to be originated by the researcher. A Cronbach's
alpha was not computed to test the internal consistency reliability of the Principal Survey; the
researcher instead judged the survey questions to be consistent with one another based on the
feedback from the Department Chairs.
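For reference, had internal consistency been tested statistically, Cronbach's alpha could have
been computed directly from the coded item responses. The sketch below (in Python) is
illustrative only: the function implements the standard formula, and the item scores are invented,
not drawn from the actual survey data.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items matrix of coded Likert scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical scores (coded 1-4) for one four-item cluster from six respondents.
    scores = np.array([[3, 3, 4, 3],
                       [2, 2, 3, 2],
                       [4, 4, 4, 3],
                       [3, 2, 3, 3],
                       [1, 2, 2, 1],
                       [3, 3, 3, 4]])
    print(round(cronbach_alpha(scores), 2))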
The Principal Interview Protocol was also trial tested prior to final development. A
Department Chair at the same school site where the Principal Survey was tested participated in
the interview. The results were transcribed into Excel alongside the corresponding research
questions and were reviewed by a professor in the doctoral program at the University of
Southern California as part of an Inquiry Course assignment. The interview questions used in
the Principal Interview Protocol (See Appendix B) were chosen based on the feedback from this
professor and the responses of the Department Chair. Questions were adjusted to ensure
responses would not be binary and would relate to the research questions.
Because only one researcher was involved in this study, a measurement of interrater
reliability was not necessary. Salkind (2011) defines interrater reliability as "the measure that
tells you how much two raters agree on their judgments of some outcome" (p. 115).
Validity is “the property of an assessment tool that indicates that that tool does what it
says it does” (Salkind, 2011, p. 117). The content validity of both the Principal Survey and
Principal Interview Protocol were confirmed through the process described above. Because the
Department Chairs had familiarity with using digital teacher evaluations they served as content
experts on the topic and confirmed the content validity of the Principal Survey. The analysis of
the Principal Interview Protocol through the course work at USC confirmed the content validity
of the interview questions.
Data Collection
The quantitative phase of this study utilized SurveyMonkey (1999) to collect the data
from the Principal Survey. Questions were entered manually in the designated order and
delivered via an email link to the forty High School Principals of the diocese. The researcher
monitored the number of respondents after the first week of delivery of the Principal Survey. A
follow-up reminder email containing the link to the SurveyMonkey survey was delivered to all
High School Principals after one week had passed. A third reminder email was delivered two
weeks after the initial request. After this time, the researcher determined that additional
communication was not necessary to obtain an accurate representation from the data sample.
The responses from the Principal Survey were exported into SPSS. SPSS (http://www-
01.ibm.com/software/analytics/spss/products/statistics/) is a statistics package from IBM. SPSS
and Excel were used to run the quantitative statistics of this study.
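As an illustration of this step (the file name and column labels below are hypothetical, not the
study's actual export), responses downloaded from SurveyMonkey as a spreadsheet file can be
recoded before the statistics are run:

    import pandas as pd

    # Hypothetical SurveyMonkey export; column labels are assumptions.
    responses = pd.read_csv("principal_survey_export.csv")

    # Recode the 4-point Likert labels (see Survey Design Considerations) to 1-4
    # so interval-level statistics can be run in SPSS or Excel.
    likert = {"Strongly Disagree": 1, "Somewhat Agree": 2,
              "Agree": 3, "Strongly Agree": 4}
    likert_items = [c for c in responses.columns if c.startswith("Q")]
    responses[likert_items] = responses[likert_items].replace(likert)
    responses.to_csv("principal_survey_coded.csv", index=False)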
The qualitative phase of the study followed the recommendations of Patton (2002) with
regard to the collection of interview data. A review of the background information of the
Principals who indicated they would be interested in a follow-up interview was conducted to
ensure a diverse representation of qualitative data. Interviews were scheduled around the
availability of the participating Principals. A recording device was used in all interviews in
order to permit thorough analysis.
The recordings were then listened to and transcribed into an Excel spreadsheet, with the
interview questions running along the vertical axis and the Principal responses along the
horizontal. The interviews provided rich data. In order to organize and distinguish what was
collected, a process for coding was established (Patton, 2002). Patton (2002) describes this
process of interpreting qualitative data as identifying patterns and themes among the responses:
patterns are descriptive findings, while themes take on a more categorical form (p. 453). From
the review of the Principal responses, patterns worth mentioning developed that supported data
from the quantitative phase.
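As a minimal sketch of this organization (the theme labels are invented for illustration, and the
response fragments paraphrase Principal quotes reported in Chapter Four), coded responses can
be kept as records keyed by question and theme and then grouped to surface patterns across
Principals:

    from collections import defaultdict

    # One record per (interview question, Principal): the transcribed response
    # plus the theme codes assigned during review. All values are illustrative.
    coded_responses = [
        {"question": 3, "principal": "P2",
         "response": "It has streamlined the process.",
         "themes": ["time saving"]},
        {"question": 3, "principal": "P3",
         "response": "All the data at your fingertips.",
         "themes": ["data access"]},
    ]

    # Group responses by theme to surface patterns across Principals.
    by_theme = defaultdict(list)
    for record in coded_responses:
        for theme in record["themes"]:
            by_theme[theme].append(record["principal"])
    print(dict(by_theme))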
Each Principal interview occupied an independent worksheet, coded so the researcher
could associate it with the quantitative information gathered from the survey. This information
is stored on the researcher's password protected computer. The researcher is the only one with
access to this information unless otherwise required by law. Any identifiable information
obtained in connection with this study will remain confidential.
Table 1, titled Relationship of Data Sources to Research Questions, lists the number of
items in each instrument that were used to gather data to answer each research question.
Data Analysis
The first phase of data collection was the quantitative research component of the study.
SPSS statistics software and Excel were used to compute the quantitative statistics. Salkind’s
(2011) work on statistics guided the selection of the statistics to run in this study.
Descriptive statistics were used to organize and describe the characteristics of the data
gathered from the Principal Survey. Demographic information about the Principals was
organized as part of the descriptive statistics. Measures of central tendency were computed for
each survey question, including relationships between the nominal and interval levels of
measurement of the Principal Survey.
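As an illustration of this step (column names and values below are hypothetical), such
descriptive statistics can be computed directly from the coded responses:

    import pandas as pd

    # Hypothetical coded responses: one nominal item and one interval item.
    responses = pd.DataFrame({
        "years_as_principal": ["1-3", ">10", "1-3", "4-6"],  # nominal
        "Q8_tracks_data": [3, 4, 2, 3],                      # interval, coded 1-4
    })

    # Central tendency for an interval-level Likert item.
    print(responses["Q8_tracks_data"].mean(),
          responses["Q8_tracks_data"].median(),
          responses["Q8_tracks_data"].mode()[0])

    # Frequency counts for a nominal item.
    print(responses["years_as_principal"].value_counts())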
Table 1
Relationship of Data Sources to Research Questions

Research Question                                                      Phase I: Survey Items   Phase II: Interview Items
1. Reporting of information for data-driven decisions about teaching   10                      3
2. Providing feedback to teachers to improve performance               4                       1
3. Utility costs: affordability, ease of use, and time                 7                       2
4. Satisfaction compared to traditional paper and pencil practices     8                       4
Summary
In summary, a mixed methods approach was selected for this study. The works of
Creswell (2005), Patton (2002), and Salkind (2011) provided the framework for the development
of the methodology. The literature on digital teacher evaluations and effective teaching, along
with the research questions, provided the conceptual framework of the study. The first phase of
the study was the quantitative phase, conducted through the distribution of the Principal Survey
via SurveyMonkey (1999) (See Appendix A). The Principal Survey was developed after review
by Department Chairs to ensure the reliability and validity of the instrument.
The second phase of the study was the qualitative phase. The information gained from
this phase enhanced the results from the Principal Survey. Principals who indicated they would
like to participate in a follow-up interview were contacted to coordinate a time to conduct the
semi-structured interview (See Appendix B). Each interview was transcribed into an Excel
spreadsheet.
SPSS and Microsoft Excel were utilized to perform the quantitative statistics. Patton's
(2002) guide for the analysis, interpretation, and reporting of interviews was utilized for review
of the qualitative data. Patterns that supported information gained in the survey developed from
the interviews. Descriptive statistics were run on the data gathered from the survey and
interviews. The data analysis established relationships between the research questions and the
nominal and interval scales established in the Principal Survey. Through this analysis, the
researcher was able to ascertain the effectiveness of digital teacher evaluations to improve
teaching.
CHAPTER FOUR: RESULTS
The purpose of this chapter is to review the findings of the study. It provides a review of
the research questions, descriptive statistics of the Principals participating in the study, and the
results from both the quantitative and qualitative phases. In addition, this chapter includes a
discussion addressing the research questions and the hypotheses associated with each, as
previously outlined in Chapter One. The results from the quantitative and qualitative phases are
organized by research question.
The focus of this study was to determine the effectiveness of digital teacher evaluations
to assist Principals in the review of teachers and in the improvement process. It examined how
Principals used digital teacher evaluations to make data-driven decisions, as well as how digital
teacher evaluations measure important variables of instruction as described by the research on
exceptional teachers (Danielson, 2007). Because digital teacher evaluations are a relatively new
medium, utility costs were also reviewed. The following research questions guided this study:
1. How are digital teacher evaluations meeting Principal expectations for reporting of
information to make data-driven decisions about teaching?
2. How are digital teacher evaluations meeting the need for Principals to provide feedback
to teachers to improve performance?
3. Do digital teacher evaluations meet Principal utility costs in terms of affordability, ease
of use and time?
4. How do Principal satisfaction levels of digital teacher evaluations compare to traditional
paper and pencil evaluation practices?
The study was conducted using a mixed methods approach. Information was gathered
from both a quantitative and qualitative phase. Quantitative data was gathered through the
Principal Survey (Appendix A). Qualitative data was gathered through open-ended interview
questions (Appendix B). Information from the interviews provided support for the findings
gathered from the Principal Survey.
Demographic Data
The quantitative phase of this study involved distributing surveys to forty high school
Principals of a large, urban Catholic archdiocesan school district in California. Thirty of the
forty Principals completed the online survey. Four Principals who completed the Principal
Survey participated in the qualitative phase of the study. Table 2 and Table 3 display the
participation rates for both the quantitative and qualitative phases of this study.
Table 2
Quantitative Surveys: Response Rate

Measure      Invited to Participate   Participated   % Participated
Principals   40                       30             75%
Table 3
Qualitative Interviews: Response Rate

Measure      Completed the Survey and Responded to Interview Request   Responded Yes   % Participated in Interview
Principals   30                                                        4               13%
Attaining this percentage of participation required the assistance of the Superintendent of
Secondary Schools to follow up the initial request to complete the survey with additional emails
drafted by the researcher. In addition, paper copies of the survey had to be distributed to some of
the Principals due to problems opening the web link to the online survey.
Principals did not respond positively to the invitation to participate in the follow-up
interview on the topic. Only 13% of those who completed the survey volunteered to participate
in the interview (See Table 3). Despite the low percentage, the information gathered from the
qualitative phase provided the researcher with information to support the findings from the
quantitative phase.
The first six questions of the survey are all nominal levels of measurement (Salkind,
2011) designed to obtain a contextual overview of the Principals participating in this study.
Tables 4-9 show the results from these questions.
Principal Descriptor Data
Two-thirds of all Principals participating in the quantitative phase of this study have
either 1-3 years of experience (33.3%) or more than ten years of experience (30.0%) (See Table
4). The age of the Principals in the study varied greatly as well (See Table 5). Eleven of the
respondents (36.7%) were over 55 years of age and two (6.7%) were relatively younger at 25-30
years of age. Most of the Principals (90.0%) were thirty-six or older. Principals were fairly
inexperienced using digital teacher evaluations (See Table 6). Twenty-two (73.3%) responded
that they have been using digital teacher evaluations for two years or fewer. Five (16.7%)
responded that they have more than three years’ experience using digital teacher evaluation and
observation methods. Nearly all (96.7%) of the Principals had used a traditional paper and
pencil method prior to using a digital form (See Table 7). The digital teacher evaluation most
commonly used by these Principals uses a multiple rating scale (See Table 8). Twenty-three
(76.7%) Principals responded that the instrument they use has a multiple rating scale, and 7
(23.3%) reported using an instrument with a binary rating scale. The responses to the frequency
of either formal or informal observations varied as well (See Table 9). Most of the Principals
responded that they observe teachers 1-2 times a semester (60.0%). Six Principals (20.0%)
responded 3-4 times a semester, four (13.3%) responded 5-6 times a semester, and two (6.7%)
responded more than six times.
Table 4
Principal Longevity

Measure     1-3 Years    4-6 Years   7-10 Years   >10 Years   Omit       Total
Principal   10 (33.3%)   5 (16.7%)   4 (13.3%)    9 (30.0%)   2 (6.7%)   30 (100%)
Table 5
Principal Age

Measure     25-30      31-35      36-40       41-45       46-50       51-55      >55          Total
Principal   2 (6.7%)   1 (3.3%)   6 (20.0%)   3 (10.0%)   5 (16.7%)   2 (6.7%)   11 (36.7%)   30 (100%)
Table 6
Time Using Digital Teacher Evaluations

Measure     1 Year       2 Years     3 Years    >3 Years    Omit       Total
Principal   16 (53.3%)   6 (20.0%)   2 (6.7%)   5 (16.7%)   1 (3.3%)   30 (100%)
Table 7
Use of a Traditional Method Prior to Digital Evaluation

Measure     Yes          No         Total
Principal   29 (96.7%)   1 (3.3%)   30 (100%)
Table 8
Evaluation Rating Scale

Measure     Binary Scale   Multiple Rating Scale   Total
Principal   7 (23.3%)      23 (76.7%)              30 (100%)
Table 9
Number of Times Faculty are Evaluated per Semester

Measure     1-2 Times    3-4 Times   5-6 Times   >6 Times   Total
Principal   18 (60.0%)   6 (20.0%)   4 (13.3%)   2 (6.7%)   30 (100%)
Research Question #1: How are digital teacher evaluations meeting Principal expectations
for reporting of information to make data-driven decisions about teaching?
The first research question pertained to the ability of digital teacher evaluations to track
observation data over a period of time. This tracking of information allows Principals to make
informed, data-driven decisions about teacher performance. Several researchers recommend that
teacher evaluations track performance over a period of time (Strong, Gargani, &
Hacifazlioglu, 2011; Glazerman et al., 2011) and utilize a multiple rating system (Weisberg et
al., 2009). This research question investigated how digital teacher evaluation rating and tracking
systems help Principals make informed decisions.
The Principal Survey contained ten questions centered on this research question (See
Table 1). Tables 10-15 show the results for research question #1. The Principal responses to
these ten questions demonstrate support for the first research question: these Principals
acknowledge that digital teacher evaluations are meeting expectations for reporting information
to make data-driven decisions about instruction. The only survey questions with combined
agree or strongly agree results below 60% are numbers 23, 26, and 27, which are related to
Charlotte Danielson's (2007) framework for exceptional teachers. Survey question #23 has the
lowest positive response, with thirteen Principals agreeing or strongly agreeing that the digital
teacher evaluation is effective at evaluating the quality of planning and preparation teachers put
into their lesson plans (See Table 15). Survey questions #26 and #27 both have 53.3% favorable
responses (agree or strongly agree): sixteen Principals agree or strongly agree that the digital
teacher evaluation is effective at evaluating the professional responsibilities of a teacher, as well
as the teacher's knowledge of the subject (See Table 15).
Table 10
Evaluation Provides Accurate Information about Performance

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   2 (6.7%)            6 (20.0%)        18 (60.0%)   2 (6.7%)         2 (6.7%)   30 (100%)
Table 11
Evaluation Tracks Data over Time

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            6 (20.0%)        17 (56.7%)   6 (20.0%)        1 (3.3%)   30 (100%)
Table 12
The Domains Used are Important Qualities of Instruction

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit        Total
Principal   0 (0.0%)            6 (20.0%)        17 (56.7%)   4 (13.3%)        3 (10.0%)   30 (100%)
Table 13
The Evaluation Provides Information about Performance

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            7 (23.3%)        16 (53.3%)   7 (23.3%)        0 (0.0%)   30 (100%)
Table 14
Data Gathered Provides Information about School Growth Needs

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            10 (33.3%)       13 (43.3%)   5 (16.7%)        2 (6.7%)   30 (100%)
Table 15
Rating of Effective Teaching Domains

Domain                          Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Planning and Preparation        3 (10.0%)           12 (40.0%)       10 (33.3%)   3 (10.0%)        2 (6.7%)   30 (100%)
Instructional Practices         1 (3.3%)            8 (26.7%)        14 (46.7%)   5 (16.7%)        2 (6.7%)   30 (100%)
Learning Environment            2 (6.7%)            8 (26.7%)        12 (40.0%)   6 (20.0%)        2 (6.7%)   30 (100%)
Professional Responsibilities   3 (10.0%)           9 (30.0%)        12 (40.0%)   4 (13.3%)        2 (6.7%)   30 (100%)
Teacher Knowledge               3 (10.0%)           9 (30.0%)        12 (40.0%)   4 (13.3%)        2 (6.7%)   30 (100%)
Combining the overall responses of the Principal Survey that correspond with this
research question shows favorable agreement that the digital teacher evaluation used by
Principals is effective at providing them with information to make data-driven decisions about
teacher performance. Figure 1 combines all survey responses associated with this research
question into one graph. Principals who either agreed or strongly agreed with these questions
account for 187 of the 300 possible responses (62%).
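As a check on this arithmetic, the combined favorable percentage simply pools the agree and
strongly agree counts from Tables 10-15 across the ten questions:

    # Agree + strongly agree counts from Tables 10-14 (questions 7-11)
    # and the five Table 15 domains (questions 23-27).
    favorable_counts = [20, 23, 21, 23, 18, 13, 19, 18, 16, 16]
    total_possible = 30 * 10  # 30 Principals x 10 questions
    favorable = sum(favorable_counts)
    print(favorable, f"{favorable / total_possible:.0%}")  # 187 62%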
School Principal 2 stated during the interview that, compared to the previous evaluation
method, using digital teacher evaluations has “changed my willingness to get out of my desk and
go do it [in reference to an evaluation] because it has streamlined the process.” School Principal
3 made similar remarks and stated, “while I’m a bit old-school in that I still prefer the traditional
paper and pencil process, I see the value in having all the data at your fingertips.” Further,
School Principal 3 affirmed the benefit of the digital teacher evaluation in making informed
decisions by saying that “instead of going through my file cabinet to look for teacher records, I
can navigate through the website anywhere and take a look at an evaluation I did.”
Figure 1
Digital Teacher Evaluations' Ability to Report Information to Make Data-Driven Decisions
[Pie chart titled "Reporting of Information to Make Data-Driven Decisions": Strongly Disagree
5%; Somewhat Agree 27%; Agree 47%; Strongly Agree 15%; Omit 6%]
Hypothesis 1.1: There will be a difference between Principals who track teacher data over a
period of time and the aid digital teacher evaluations provide Principals to make data-driven
decisions.
Questions #8 and #10 of the Principal Survey pertain to Hypothesis 1.1. The survey
responses indicate that there is a difference between Principals who use the digital teacher
evaluation to track data over a period of time and the ability of the digital teacher evaluation to
provide Principals with information to assist in making data-driven decisions. Survey responses
were sorted by the questions associated with Hypothesis 1.1 to arrive at these results.
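As an illustration of this sorting step (the same procedure underlies the remaining hypotheses), a
cross-tabulation of the two survey items makes the breakdown explicit; the column names and
the handful of responses below are hypothetical:

    import pandas as pd

    # Hypothetical column names and illustrative responses.
    responses = pd.DataFrame({
        "Q8_tracks_data": ["Agree", "Strongly Agree", "Agree", "Somewhat Agree"],
        "Q10_aids_decisions": ["Agree", "Strongly Agree", "Agree", "Agree"],
    })

    # Rows: responses to question #8; columns: responses to question #10.
    print(pd.crosstab(responses["Q8_tracks_data"], responses["Q10_aids_decisions"]))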
Figure 2 illustrates the Principals' responses to questions #8 and #10 of the Principal
Survey, sorted by question #8, which inquires about the ability of the digital teacher evaluation
to track observation data over a period of time. Half (50%) of the six Principals who strongly
agreed with that statement also strongly agreed that the digital teacher evaluation aided them in
making data-driven decisions. Of the seventeen Principals who agree that the digital teacher
evaluation tracks data over time, two (12%) strongly agree, fourteen (82%) agree, and one (6%)
somewhat agrees that the digital teacher evaluation aids them in making data-driven decisions.
Of the six Principals who somewhat agree that the digital teacher evaluation tracks data over
time, one (17%) strongly agrees, two (33%) agree, and three (50%) somewhat agree that the
digital teacher evaluation aids them in making data-driven decisions. One Principal omitted a
response on the tracking question, but strongly agrees that the digital teacher evaluation aids in
making data-driven decisions.
Figure 2
Breakdown of Principal Survey Responses Linking the Tracking of Data with Assistance in
Making Data-Driven Decisions

Responses for Tracking of Data Over Time   Responses for Aid in Making Data-Driven Decisions   Percentage
6 x Strongly Agree                         3 x Strongly Agree                                  50%
                                           3 x Somewhat Agree                                  50%
17 x Agree                                 2 x Strongly Agree                                  12%
                                           14 x Agree                                          82%
                                           1 x Somewhat Agree                                  6%
6 x Somewhat Agree                         1 x Strongly Agree                                  17%
                                           2 x Agree                                           33%
                                           3 x Somewhat Agree                                  50%
1 x Omit                                   1 x Strongly Agree                                  100%
Twenty-three Principals strongly agree or agree that the digital teacher evaluation is
effective at tracking data over time. This group represents Principals who use the digital teacher
evaluation to track data over time. Combining these responses and comparing them to Principal
responses on the ability of the digital teacher evaluation to aid in making data-driven decisions
illustrates the difference in responses associated with Hypothesis 1.1. Figure 3 illustrates this
difference.
Figure 3
Difference between Principals who Track Data Over Time and the Aid Digital Teacher
Evaluations Provide in Making Data-Driven Decisions

Favorable (Agree or Strongly Agree)        Combined Responses for Aid in
Responses for Tracking of Data Over Time   Making Data-Driven Decisions    Percentage
23 x Strongly Agree or Agree               5 x Strongly Agree              21.7%
                                           14 x Agree                      60.9%
                                           4 x Somewhat Agree              17.4%
Figure 3 shows the results for the twenty-three Principals who responded favorably
(strongly agree or agree) to the Principal Survey question on the ability of the digital teacher
evaluation to track teacher observation data over a period of time. Five (21.7%) of these
Principals strongly agree that it aids them in making data-driven decisions. Fourteen (60.9%)
agree that it also aids them in making data-driven decisions. Four (17.4%) somewhat agree that
it also aids them in making data-driven decisions. Overall, nineteen (82.6%) of the twenty-three
Principals responded favorably (strongly agree or agree) both that the digital teacher evaluation
tracked data over time and that it aided in making data-driven decisions. The remaining four
Principals (17.4%), while responding favorably on tracking, only somewhat agree that it aided
in making data-driven decisions.
When asked about using the digital teacher evaluation for tracking observation data,
School Principal 1 stated that after uploading data into the computer, “I try to see; notice if there
are things that I didn’t see or look for before.” This is in reference to visual representation of the
observation data. Following up on the comment, School Principal 1 stated that the “most
valuable information I get is from the kids”, and that “the instrument doesn’t necessarily let you
pick up the vibe.”
Hypothesis 1.2: There will be a difference between the level of experience of Principals and
their regard for the qualities of instruction used in the digital teacher evaluation.
Questions #1 and #9 of the Principal Survey are related to Hypothesis 1.2. Question #1
inquires about the years of experience of the Principals and question #9 inquires about the
importance of the domains used in the digital teacher evaluation. Individual survey responses
were sorted by these questions to arrive at the results for Hypothesis 1.2.
Figure 4
Difference in Experience Level and Regard for Qualities of Instruction Used in the Digital
Teacher Evaluation

Principal Experience   Degree of Importance of the Teacher Domains   Percentage
10 x 1-3 Years         2 x Strongly Agree                            20%
                       5 x Agree                                     50%
                       2 x Somewhat Agree                            20%
                       1 x Omit                                      10%
5 x 4-6 Years          1 x Strongly Agree                            20%
                       4 x Agree                                     80%
4 x 7-10 Years         4 x Agree                                     100%
9 x >10 Years          1 x Strongly Agree                            11%
                       4 x Agree                                     44%
                       4 x Somewhat Agree                            44%
2 x Omit               2 x Omit                                      100%
Figure 4 shows the results for Hypothesis 1.2 sorted by Principal Survey questions #1 and
#9. Twenty-eight of the 30 Principals who participated in the Principal Survey provided their
years of experience as a Principal. Two omitted their response to this question. Of the ten
Principals with 1-3 years of experience as a Principal, two strongly agreed that the domains used
in the digital teacher evaluation are important qualities of instruction; five agreed; two somewhat
agreed, and one omitted a response to this question. Of the five Principals with 4-6 years of
experience as a Principal, one strongly agreed that the domains used in the digital teacher
evaluation are important qualities of instruction, and four agreed. All four Principals with 7-10
years of experience as a Principal agreed that the domains are important qualities of instruction.
Of the nine Principals with more than ten years of experience as a Principal, one strongly agreed
that the domains used in the digital teacher evaluation are important qualities of instruction; four
agreed; and four somewhat agreed. The two Principals who did not provide information on their
years of experience as a Principal also omitted a response on the domains used in the digital
teacher evaluation.
Nineteen of the thirty Principals had either 1-3 years of experience (33%) or over ten
years of experience (30%) as a Principal. Seventy percent of the Principals with 1-3 years of
experience strongly agreed or agreed that the domains used in the digital teacher evaluation are
important qualities of instruction, compared with 20% somewhat agreeing and 10% omitting a
response. Fifty-five percent of the Principals with over ten years of experience strongly agreed
or agreed that the domains used are important qualities of instruction, compared with 44%
somewhat agreeing. The next largest grouping of Principals is those with 4-6 years of
experience (16.7%). All (100%) of the Principals with 4-6 years of experience either strongly
agreed or agreed that the domains used are important qualities of instruction. The smallest
grouping of Principals is those with 7-10 years of experience (13.3%). As stated earlier, 100%
of the Principals in this group agreed that the domains used are important qualities of
instruction.
When asked about the domains of the digital teacher evaluation, School Principal 1
stated: “I think they’re good, but in the same token, I do resort back to the traditional because it
gives you a bigger picture. The teacher gives me a lesson plan that I can look at. There’s no
place to put those types of things, or I just don’t know how to use it.” School Principal 4 had
similar feelings and said:
They are similar to the ones I used before. They rate things like environment, bell to bell
teaching, level of questioning…things like that. I find them to be important things to
look at when evaluating a teacher and what’s nice is the digital evaluation is on my
computer and it’s not hard to see the different categories. They’re all right there and
they’re what I would expect out of a good evaluation tool.
The digital teacher evaluation used by School Principal 2 includes rating categories tied to the
School’s Mission. In describing the domains, School Principal 2 stated that “it looks for teacher
strengths and I list whatever those were, and if I saw something related to the school’s Catholic
identity, or ESLR’s [Expected School-Wide Learning Results] I would mark them.”
Hypothesis 1.3: Digital teacher evaluations with a multi-rating system will provide Principals
with accurate information about teacher performance.
Questions #5 and #7 of the Principal Survey are related to Hypothesis 1.3. Question #5
asks whether the digital teacher evaluation utilizes a binary rating system or a multiple rating
system. Question #7 asks the Principals to respond to a statement about how effective the digital
teacher evaluation is at providing them with accurate information about teacher performance.
Individual survey responses were sorted by these questions to arrive at the results for Hypothesis
1.3.
Figure 5
Multi-Rating System and Information Gained about Teacher Performance

Rating Scale    Provides Accurate Information about Teacher Performance   Percentage
7 x Binary      1 x Strongly Agree                                        14%
                4 x Agree                                                 57%
                2 x Somewhat Agree                                        29%
23 x Multiple   1 x Strongly Agree                                        4%
                14 x Agree                                                61%
                4 x Somewhat Agree                                        17%
                2 x Strongly Disagree                                     9%
                2 x Omit                                                  9%
Figure 5 shows the results for Hypothesis 1.3, sorted by Principal Survey questions #5
and #7. All thirty Principals who participated in the quantitative phase of the study provided an
answer on the type of rating scale used by the digital teacher evaluation they utilize. Seven
responded that the evaluation instrument uses a binary scale, with scales of measurement either
1 or 2, or meets or does not meet the standards. Twenty-three responded that the evaluation
instrument uses a multiple rating system; examples provided are scales of measurement ranging
from 1-5, or from low to high. Of the seven Principals who use a digital teacher evaluation with
a binary rating scale, one strongly agreed that the rating system is useful in providing them with
accurate information about teacher performance; four agreed; and two somewhat agreed. Of the
23 Principals who use a digital teacher evaluation with a multiple rating scale, one strongly
agreed that the rating system is useful in providing them with accurate information about teacher
performance; 14 agreed; four somewhat agreed; two strongly disagreed; and two omitted a
response to this question.
A majority of the Principals (77%) utilize a digital teacher evaluation with a multiple
rating scale, compared to 23% who utilize an evaluation with a binary rating scale. Hypothesis
1.3 states that a digital teacher evaluation with a multiple rating scale will provide Principals
with accurate information about teacher performance. Based on the responses to the Principal
Survey, 65% of the Principals who use a digital teacher evaluation with a multiple rating scale
strongly agree or agree that the rating system provides them with accurate information about
teacher performance. Seventeen percent somewhat agree that the multiple rating scale provides
them with accurate information about teacher performance. Nine percent strongly disagreed,
and another nine percent omitted a response.
When asked about the ability of the digital teacher evaluation to provide teacher
performance information, School Principal 3 stated: “It provides me with the same level of
information. I use the evaluation at the end of the year when talking to teachers and discussing
progress and areas to work on.” School Principal 4 uses the information more frequently and
stated: “I try to meet with the teacher, or email the review during the day so the teacher knows
what I saw, what was good, and what I think he needs to improve at. I always let the teacher
know that I am available to talk to them about it.” These statements from Principals confirm the
results from the Principal Survey about the ability of digital teacher evaluations to provide
accurate information about teacher performance.
Research Question #2: How are digital teacher evaluations meeting the need for Principals
to provide feedback to teachers to improve performance?
This research question centers on the effectiveness of the feedback given to teachers by
Principals to improve teaching practices. A number of researchers (Wise et al., 1986; Peterson,
2004; Darling-Hammond et al., 2012) concluded that one of the most important roles of a
Principal is to provide teachers with effective feedback from a class observation. This feedback
is to be used by the teacher to improve future lessons and should be provided by Principals on a
frequent basis.
The Principal Survey (See Appendix A) contains four Likert-scale questions on the topic
of feedback. Tables 16-19 illustrate the results for this research question.
Table 16
Quality of Feedback

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   1 (3.3%)            8 (26.7%)        13 (43.3%)   8 (26.7%)        0 (0.0%)   30 (100%)
Table 17
Teacher Appreciativeness of the Feedback

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            6 (20.0%)        17 (56.7%)   6 (20.0%)        1 (3.3%)   30 (100%)
Table 18
The Feedback Improves Teacher Effectiveness

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            10 (33.3%)       14 (46.7%)   6 (20.0%)        0 (0.0%)   30 (100%)
Table 19
The Digital Teacher Evaluation Feedback is More Effective than Traditional Methods

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit        Total
Principal   4 (13.3%)           11 (36.7%)       10 (33.3%)   2 (6.7%)         3 (10.0%)   30 (100%)
The results from the four Principal Survey questions on the value of the feedback
provided by the digital teacher evaluation to improve teacher effectiveness are overwhelmingly
positive. The only question for which fewer than 60% of the respondents either strongly agree
or agree is #15. This question asks Principals to weigh in on whether the feedback provided
using the digital teacher evaluation is more effective than previously used methods. Fifteen
(50%) of the Principals responded with somewhat agree or strongly disagree to this statement.
Only twelve (40%) responded with agree or strongly agree. Three (10%) Principals omitted a
response to this question.
Combining the overall responses of the Principal Survey that correspond with the second
research question shows favorable agreement that the digital teacher evaluation provides
teachers with effective feedback to improve performance. Figure 6 combines all survey
responses associated with this research question into one graph. Principals who either agreed or
strongly agreed with these questions account for 76 of the 120 possible responses (63.3%),
compared to 40 of the 120 (33.3%) responses of somewhat agree or strongly disagree. Four
(3.3%) responses were omitted.
Both School Principal 1 and School Principal 2 made positive remarks about how
feedback is provided and received by faculty through the digital teacher evaluation they utilize.
When asked how teachers are taking the feedback, School Principal 1 stated: “I think they’re
fine. I think it’s done in a constructive manner for the most part”. School Principal 2 stated: “I
think they appreciate it. They want you to come in and see what they’re doing because they
want you to think that they are doing a good job”. When asked about whether the feedback has
made a difference, School Principal 2 said:
I think with some of the newer ones, yes. You're reaffirming or affirming what they are
doing. With some of the older teachers, it gives them a kick in the pants. You can tell
them that your yellow lecture notes don't cut it anymore. With a few it does change
what they do.
Compared to the previous method used, School Principal 2 believes that the immediate feedback
provided by the digital teacher evaluation is an important quality. School Principal 2 stated that
“what they [in reference to faculty] do like is the immediate feedback.” School Principal 1 made
similar remarks about the comparison by saying: “I think it may make people more aware of
certain things they should be doing.” In addition there are components of the feedback that were
not thought of before. School Principal 1 continued by stating: “I never thought about discussing
bell-to-bell teaching before; so to me this was a help.”
Figure 6
Overall Value of Digital Teacher Evaluations at Providing Feedback to Improve Performance
[Pie chart titled "Effectiveness of Feedback": Strongly Disagree 4%; Somewhat Agree 29%;
Agree 45%; Strongly Agree 18%; Omit 3%]
Hypothesis 2.1: There will be a difference between Principals who have greater experience
with digital teacher evaluations and the feedback they are able to provide teachers.
Questions #3 and #12 of the Principal Survey correspond to Hypothesis 2.1. Question #3
inquires about the number of years responding Principals have used digital teacher evaluation
and observation methods. Question #12 inquires about the degree to which the digital teacher
evaluation is able to provide teachers with valuable feedback. Individual survey responses were
sorted by these questions to arrive at the results for Hypothesis 2.1.
Figure 7
Difference between Level of Experience and Feedback Provided

Principal Experience with Digital Teacher Evaluations   Valuable Feedback Provided to Teachers   Percentage
16 x 1 Year                                             2 x Strongly Agree                       12.5%
                                                        9 x Agree                                56.25%
                                                        5 x Somewhat Agree                       31.25%
6 x 2 Years                                             2 x Strongly Agree                       33.3%
                                                        2 x Agree                                33.3%
                                                        1 x Somewhat Agree                       16.67%
                                                        1 x Strongly Disagree                    16.67%
2 x 3 Years                                             1 x Strongly Agree                       50%
                                                        1 x Somewhat Agree                       50%
5 x >3 Years                                            3 x Strongly Agree                       60%
                                                        1 x Agree                                20%
                                                        1 x Somewhat Agree                       20%
1 x Omit                                                1 x Agree                                100%
Figure 7 shows the results for Hypothesis 2.1. Sixteen of the thirty Principals (53.3%)
who completed the Principal Survey have one year of experience with a digital teacher
evaluation. Eleven (68.75%) of these sixteen Principals responded either strongly agree or agree
that the evaluation instrument provides teachers with valuable feedback. Five (31.25%) of the
Principals in this experience group responded that they somewhat agree with this statement
about the value of the feedback provided to teachers.
The next largest group of Principals is those who have two years of experience using
digital teacher evaluations. Six (20%) of the Principals who completed the Principal Survey
have two years of experience using them. Four (66.7%) of the Principals with two years of
experience with digital teacher evaluations strongly agree or agree that the evaluation instrument
provides teachers with valuable feedback.
The third largest group of Principals is those who have used digital teacher evaluations
for more than three years. Five (16.67%) of the Principals who completed the Principal Survey
have more than three years of experience using them. Four (80%) of the Principals in this range
of experience with digital teacher evaluations strongly agree or agree that the feedback is
valuable to teachers.
Two (6.67%) Principals have exactly three years of experience using digital teacher
evaluations. These two Principals are split on whether they are able to provide valuable
feedback using digital teacher evaluations: one (50%) responded that they strongly agree, while
the other (50%) responded somewhat agree. One (3.33%) Principal omitted a response on years
of experience, yet agreed (100%) that they are able to provide teachers with valuable feedback
using the digital teacher evaluation.
Hypothesis 2.2: There will be a difference between the number of teacher observations
performed by Principals and the effectiveness of the feedback provided by digital teacher
evaluations to improve teacher effectiveness.
Questions #6 and #14 of the Principal Survey correspond with Hypothesis 2.2. Question
#6 asks Principals to report the number of times, on average, they observe each teacher per
semester. Question #14 asks Principals to rate the level to which they agree that the feedback
they provide teachers improves effectiveness. Individual survey responses were sorted by these
questions to arrive at the results for Hypothesis 2.2.
Figure 8 shows the results for Hypothesis 2.2. Of the eighteen Principals who observe
teachers 1-2 times a semester, 61.11% strongly agree or agree that feedback provided by the
digital teacher evaluation improves teacher effectiveness; another 38.89% somewhat agree. Of
the six Principals who observe faculty 3-4 times a semester, 66.66% strongly agree or agree that
the feedback provided by the digital teacher evaluation improves teacher effectiveness; another
33.33% somewhat agree. Of the Principals who observe teachers 5-6 times a semester, 75%
strongly agree or agree that the feedback provided by the digital teacher evaluation improves
teacher effectiveness; another 25% somewhat agree. Of the two Principals who observe teachers
more than six times a semester, 100% agree that the feedback provided by the digital teacher
evaluation improves teacher effectiveness.
Figure 8
Difference between Number of Observations and Effectiveness of Feedback

Times Observed per Semester   Feedback Improves Teacher Effectiveness   Percentage
18 x 1-2 Times                3 x Strongly Agree                        16.67%
                              8 x Agree                                 44.44%
                              7 x Somewhat Agree                        38.89%
6 x 3-4 Times                 2 x Strongly Agree                        33.33%
                              2 x Agree                                 33.33%
                              2 x Somewhat Agree                        33.33%
4 x 5-6 Times                 1 x Strongly Agree                        25%
                              2 x Agree                                 50%
                              1 x Somewhat Agree                        25%
2 x >6 Times                  2 x Agree                                 100%
Research Question #3: Do digital teacher evaluations meet Principal utility costs in terms
of affordability, ease of use, and time?
This research question centers on utility costs. Wise et al. (1986) and Oliva, Mathers,
and Laine (2009) identify time and the cost of teacher evaluations as important factors. No
research was found on the ease of use of digital teacher evaluations; this issue is investigated
through this research question.
The Principal Survey (See Appendix A) contains seven Likert-scale questions on the
topic of utility cost. Tables 20-26 illustrate the results for this research question.
Table 20
Digital Teacher Evaluation Affordability

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   6 (20.0%)           6 (20.0%)        18 (60.0%)   0 (0.0%)         0 (0.0%)   30 (100%)
Table 21
Digital Teacher Evaluation is Worth the Cost

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   5 (16.7%)           11 (36.7%)       14 (46.7%)   0 (0.0%)         0 (0.0%)   30 (100%)
Table 22
Digital Teacher Evaluation as a Time Saver

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            10 (33.3%)       15 (50.0%)   5 (16.7%)        0 (0.0%)   30 (100%)
Table 23
Digital Teacher Evaluation Ease of Use

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   1 (3.3%)            10 (33.3%)       15 (50.0%)   3 (10.0%)        1 (3.3%)   30 (100%)
Table 24
Comfort Levels Using the Digital Teacher Evaluation

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            10 (33.3%)       20 (66.7%)   0 (0.0%)         0 (0.0%)   30 (100%)
Table 25
Digital Teacher Evaluation Process Time Saving

Measure     Strongly Disagree   Somewhat Agree   Agree        Strongly Agree   Omit       Total
Principal   0 (0.0%)            8 (26.7%)        17 (56.7%)   5 (16.7%)        0 (0.0%)   30 (100%)
Table 26
Anticipated Time to get Comfortable Using the Digital Teacher Evaluation

Measure     Strongly Disagree   Somewhat Agree   Agree       Strongly Agree   Omit       Total
Principal   4 (13.3%)           12 (40.0%)       9 (30.0%)   5 (16.7%)        0 (0.0%)   30 (100%)
Questions #16 and #17 of the Principal Survey inquire about the cost associated with the
digital teacher evaluation. Both questions are constructed in the affirmative, so favorable
responses indicate that Principals rate the digital teacher evaluation as an affordable means for
evaluating teachers. Combining the results from both questions reveals only marginally
favorable opinions of the cost associated with the digital teacher evaluation. Eleven of the
combined responses (18.3%) strongly disagree with the statements associated with the cost of
the digital teacher evaluation. Seventeen of the combined responses (28.3%) somewhat agree
that the digital teacher evaluation is cost effective. Thirty-two of the combined responses
(53.3%) agree that the digital teacher evaluation is cost effective. Figure 9 shows the combined
results for the two Principal Survey questions associated with cost.
Questions #19 and #20 of the Principal Survey inquire about the ease of use of the digital
teacher evaluation. Both questions are constructed in the affirmative, so favorable responses
indicate that Principals rate the digital teacher evaluation as an easy to use instrument.
Combining the results reveals favorable opinions among participating Principals of the ease of
use of the digital teacher evaluation. Question #22 was not used in the combined responses
because it is associated with the anticipated time needed to get comfortable using the digital
teacher evaluation.
Figure 9
Digital Teacher Evaluation is Worth the Cost
[Pie chart titled "Digital Teacher Evaluations are Affordable": Strongly Disagree 18.3%;
Somewhat Agree 28.3%; Agree 53.3%; Strongly Agree 0.0%]
One combined response (1.67%) strongly disagrees that the digital teacher evaluation is
easy to use. Twenty combined responses (33.3%) somewhat agree with statements associated
with the ease of use of the digital teacher evaluation. Thirty-five combined responses (58.3%)
agree with ease of use statements about digital teacher evaluations. Three combined responses
(5.0%) strongly agree with ease of use statements about digital teacher evaluations. One
combined response (1.67%) was omitted. Figure 10 shows the combined results of the questions
associated with the ease of use of the digital teacher evaluation.
Questions #18 and #21 of the Principal Survey inquire about the time saving benefits of
the digital teacher evaluation. Both questions are constructed in the affirmative, so favorable
responses indicate that Principals rate the digital teacher evaluation as a time saver. Combining
the results reveals favorable opinions among participating Principals of the time saving benefits
of using digital teacher evaluations.
Figure 10
Digital Teacher Evaluation Ease of Use
[Pie chart: Strongly Disagree 1.7%; Somewhat Agree 33.3%; Agree 58.3%; Strongly Agree
5.0%; Omit 1.7%]
No participating Principal strongly disagreed with the ability of digital teacher
evaluations to save them time. Eighteen combined responses (30%) somewhat agree with
statements that digital teacher evaluations are time savers. Thirty-two combined responses
(53.3%) agree that digital teacher evaluations save time. Ten combined responses (16.67%)
strongly agree that digital teacher evaluations save them time. Figure 11 shows the combined
responses for the questions associated with the time saving benefits of digital teacher
evaluations.
When discussing the utility costs associated with the digital teacher evaluation, School
Principal 2 said that the digital teacher evaluation is “a great time saver because I’m not
scribbling notes and then going back to my office and transcribing them; and then running and
giving them to the teacher. I can do it right there and then and email it to the teacher. So this is a
much quicker process.” School Principal 1 stated: “I think I could utilize it better, but for now
it’s really not a time saver.” School Principal 3 made a similar remark about time saving
qualities, stating: "I'm just getting used to it. I think the more I play with it, the faster I'll get.
There’s still a lot I need to figure out, but I can see the value in it.”
Figure 11
Time Saving Benefits of Digital Teacher Evaluations
[Pie chart, "Digital Teacher Evaluations Save Time": Strongly Disagree 0.00%, Somewhat Agree 30.00%, Agree 53.33%, Strongly Agree 16.67%.]
None of the four interviewed Principals had strong opinions on the cost of the digital
teacher evaluation. School Principals 1, 3, and 4 stated that the evaluation was free as part of a
professional development opportunity presented to them. School Principal 1 stated: "It was a
freebie. They gave us the workshops, they gave us the iPads, so [we] went with it." The digital
teacher evaluation used by School Principal 2 is not a formal program, but the previous
observation tool converted to electronic form. When asked about how the evaluation was
chosen, School Principal 2 stated: "It was what I had as an informal observation and I just put it
on my iPad. It's not a formal program that I purchased from anybody. Actually, it was more of a
convenience than anything else."
Hypothesis 3.1: Principals who have more experience using digital teacher evaluations will
save time.
Questions #3 and #18 of the Principal Survey are associated with Hypothesis 3.1.
Question #3 asks Principals to indicate the length of time in years they have been using digital
teacher evaluations. Question #18 asks Principals to indicate their level of agreement with the
statement that digital teacher evaluations save them time. Responses to the Principal Survey
were sorted to arrive at the results for Hypothesis 3.1.
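The sorting described above amounts to a cross-tabulation of Question #3 (experience) against Question #18 (time saving). As a hedged sketch of that step, again in Python rather than the SPSS the study cites, the response pairs below are hypothetical stand-ins whose group counts reproduce those reported in Figure 12:

    from collections import Counter, defaultdict

    # Hypothetical (Q3, Q18) response pairs; the pairing of individual answers
    # is invented, but the per-group counts reproduce those in Figure 12.
    responses = (
        [("1 Year", "Agree")] * 11 + [("1 Year", "Somewhat Agree")] * 5
        + [("2 Years", "Strongly Agree")] * 3 + [("2 Years", "Agree")] * 2
        + [("2 Years", "Somewhat Agree")]
        + [("3 Years", "Strongly Agree")] + [("3 Years", "Agree")]
        + [(">3 Years", "Strongly Agree")] + [(">3 Years", "Agree")] * 2
        + [(">3 Years", "Somewhat Agree")] * 2
        + [("Omit", "Agree")]
    )

    by_experience = defaultdict(Counter)   # experience group -> rating tallies
    for years, rating in responses:        # the "sorting" step
        by_experience[years][rating] += 1

    for years, ratings in by_experience.items():
        group_size = sum(ratings.values())
        for rating, count in ratings.items():
            print(f"{years}: {count} x {rating} ({count / group_size:.2%})")
    # E.g., "1 Year: 11 x Agree (68.75%)" -- the within-group percentages
    # reported in Figure 12.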
Figure 12
Principals with More Experience and Time Saving Benefits

Years Using Digital       Using the Digital Teacher
Teacher Evaluations       Evaluation Saves Time         Percentages
16 x 1 Year               11 x Agree                    68.75%
                          5 x Somewhat Agree            31.25%
6 x 2 Years               3 x Strongly Agree            50%
                          2 x Agree                     33.33%
                          1 x Somewhat Agree            16.67%
2 x 3 Years               1 x Strongly Agree            50%
                          1 x Agree                     50%
5 x >3 Years              1 x Strongly Agree            20%
                          2 x Agree                     40%
                          2 x Somewhat Agree            40%
1 x Omit                  1 x Agree                     100%
Figure 12 shows the results for Hypothesis 3.1. The results are mixed for this hypothesis.
A majority of the Principals have been using digital teacher evaluations for two or fewer years
(73%). Only seven Principals (23.33%) have three or more years of experience with digital
teacher evaluations. Despite the disparity in the distribution of experience, the Principals with
three years of experience responded favorably that the digital teacher evaluation saves them
time. Both Principals (100%) with three years of experience using digital teacher evaluations
strongly agree or agree that the instrument saves them time. Three of the five Principals (60%)
with more than three years of experience with digital teacher evaluations strongly agree or agree
that the instrument saves them time.
Hypothesis 3.2: Principals who have more experience using digital teacher evaluations will
be more comfortable using them.
Questions #3 and #20 of the Principal Survey are associated with Hypothesis 3.2.
Question #3 asks Principals to indicate the length of time in years they have been using digital
teacher evaluations. Question #20 asks Principals to indicate their level of comfort using the
digital teacher evaluation. Responses to the Principal Survey were sorted to arrive at the results
for Hypothesis 3.2.
Figure 13
Principals with More Experience and Comfort

Years Using Digital       Comfortable Using Digital
Teacher Evaluations       Teacher Evaluation            Percentages
16 x 1 Year               10 x Agree                    62.5%
                          6 x Somewhat Agree            37.5%
6 x 2 Years               4 x Agree                     66.67%
                          2 x Somewhat Agree            33.33%
2 x 3 Years               1 x Agree                     50%
                          1 x Somewhat Agree            50%
5 x >3 Years              4 x Agree                     80%
                          1 x Somewhat Agree            20%
1 x Omit                  1 x Agree                     100%
Figure 13 shows the results for Hypothesis 3.2. The results are similar to those of
Hypothesis 3.1. A majority of the Principals have been using digital teacher evaluations for two
or fewer years (73%). Only seven Principals (23.33%) have three or more years of experience
with digital teacher evaluations. One (50%) of the Principals with three years of experience
using digital teacher evaluations agrees with the statement about feeling comfortable using it.
The second Principal with this much experience with the instrument somewhat agreed about the
comfort level using it. The Principals with more than three years of experience responded more
favorably about their comfort using digital teacher evaluations. Of the five with more than three
years using digital teacher evaluations, four (80%) responded that they agree that they are
comfortable using it.
Hypothesis 3.3: Principals who use digital teacher evaluations will find them expensive
alternatives to paper and pencil evaluation instruments.
Questions #4 and #17 of the Principal Survey are associated with Hypothesis 3.3.
Question #4 asks Principals to indicate whether they used a traditional paper and pencil
observation method prior to using a digital teacher evaluation. Question #17 asks Principals to
respond to a statement about the cost value of the digital teacher evaluation. Responses were
sorted to arrive at the results for Hypothesis 3.3.
Figure 14
Digital Teacher Evaluation Expense

Used Traditional      Digital Teacher Evaluations
Observation Tool      are Worth the Cost             Percentages
29 x Traditional      13 x Agree                     44.83%
                      11 x Somewhat Agree            37.93%
                      5 x Strongly Disagree          17.24%
Figure 14 shows the results for Hypothesis 3.3. Twenty-nine of the thirty Principals
(96.67%) who completed the Principal Survey indicated they used a traditional paper and pencil
observation instrument prior to using a digital teacher evaluation. Sixteen of the twenty-nine
Principals (55.17%) responded either strongly disagree or somewhat agree to the statement that
digital teacher evaluations are worth the cost. This indicates that a slight majority of Principals
believe digital teacher evaluations are expensive alternatives to traditional evaluation methods.
Thirteen Principals (44.83%) responded that they agree. The responses demonstrate agreement
with the hypothesis. Principals who use digital teacher evaluations find them to be expensive
alternatives to traditional paper and pencil observation tools.
Research Question #4: How do Principal satisfaction levels of digital teacher evaluations
compare to traditional paper and pencil evaluation practices?
Technology has improved dramatically and is recommended by some researchers
(Glazerman et al., 2011) as a viable alternative to traditional evaluation practices. As such, this
research question investigates the degree of satisfaction of Principals who utilize digital teacher
evaluations. Charlotte Danielson's (2007) framework for teaching provided the conceptual
framework for the teacher and instructional practice questions in the Principal Survey. Questions
#23-27 of the Principal Survey (See Appendix A) are associated with these characteristics of
effective teaching that should be part of an evaluation instrument (Bill & Melinda Gates
Foundation, 2009). Questions #28-30 (See Appendix A) of the Principal Survey center on
Principal satisfaction with the digital teacher evaluation they utilize. Tables 27-34 illustrate the
results from the Principal Survey associated with this research question.
Questions #23-#27 of the Principal Survey ascertain Principal satisfaction levels with the
digital teacher evaluation's ability to evaluate five major domains of effective teaching based on
Danielson's (2007) framework. Combining the results of these questions shows somewhat
favorable levels of satisfaction with the digital teacher evaluation's ability to evaluate qualities of
effective teachers. Twelve (8%) combined Principal responses strongly disagree with the ability
of the evaluation to evaluate quality teaching domains. Forty-six (30.67%) combined Principal
responses somewhat agree with the ability of the evaluation to assess quality teaching domains.
Sixty (40%) combined Principal responses agree with the ability of the evaluation to assess
quality teaching domains. Twenty-two (14.67%) combined Principal responses strongly agree
with the ability of the evaluation to evaluate quality teaching domains, and ten (6.67%) omitted
a response. Figure 15 (See Below) illustrates these results.
Table 27
Effectiveness of Measuring Teacher Planning and Preparation

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   3 (10.0%)          12 (40.0%)      10 (33.3%)  3 (10.0%)       2 (6.7%)  30 (100%)
Table 28
Effectiveness of Measuring Instruction

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   1 (3.3%)           8 (26.7%)       14 (46.7%)  5 (16.7%)       2 (6.7%)  30 (100%)
Table 29
Effectiveness of Measuring the Learning Environment

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   2 (6.7%)           8 (26.7%)       12 (40.0%)  6 (20.0%)       2 (6.7%)  30 (100%)
Table 30
Effectiveness of Measuring Professional Responsibilities

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   3 (10.0%)          9 (30.0%)       12 (40.0%)  4 (13.3%)       2 (6.7%)  30 (100%)
Table 31
Effectiveness of Measuring Subject Knowledge

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   3 (10.0%)          9 (30.0%)       12 (40.0%)  4 (13.3%)       2 (6.7%)  30 (100%)
Table 32
Recommend Digital Teacher Evaluation to a Colleague

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   3 (10.0%)          9 (30.0%)       14 (46.7%)  4 (13.3%)       0 (0.0%)  30 (100%)
Table 33
Digital Teacher Evaluation Compared to Previous Method

Measure     Strongly Disagree  Somewhat Agree  Agree       Strongly Agree  Omit      Total
Principal   3 (10.0%)          10 (33.3%)      15 (50.0%)  2 (6.7%)        0 (0.0%)  30 (100%)
Table 34
Overall Rating of Digital Teacher Evaluation

Measure     Not At All Satisfied  Somewhat Satisfied  Satisfied   Very Satisfied  Omit      Total
Principal   1 (3.3%)              12 (40.0%)          16 (53.3%)  1 (3.3%)        0 (0.0%)  30 (100%)
School Principal comments about satisfaction with using digital teacher evaluations were
generally positive. Comments by School Principal 2 about desired changes to the digital teacher
evaluation are very telling about the future of digital evaluations. School Principal 2 stated:
I would like to see it all on digital. I'd like for us to make comparisons and track teacher
performance; look at how you scored here and be able to identify needs. People are
more data driven now than they were in the past and I think it'd be great to show them
[faculty] a graph or chart, or Excel spreadsheet…something to show them how they've
been doing that says you're doing ok here…but in this area – maybe it's classroom
management – you need to do better.
School Principal 4 had similar positive remarks about the digital teacher evaluation, saying: "It
gives me a good picture of what this teacher is capable of doing." When asked about some of the
benefits of the digital teacher evaluation, School Principal 3 stated:
I really like it. I push teachers to use digital resources, so it only makes sense that I begin
using similar tools too. I like the one that I use, but I'm not aware of any other ones out
there. I'm sure they'll be like most other things…there'll be someone that makes another
one and then everyone will jump to it because it has some new feature that we didn't
think of before. I've only been using this one for a year, and can see that the future of
these things is bright. It's exciting. I guess the biggest benefit is the quickness of it and
that I can have everything in one place.
These comments, combined with the Principal Survey results, demonstrate a generally positive
overall outlook for the value of digital teacher evaluations.
Figure 15
Satisfaction Levels of Digital Teacher Evaluations Framework of Effective Teaching
[Pie chart, "Principal Satisfaction of Teaching Framework": Strongly Disagree 8.00%, Somewhat Agree 30.67%, Agree 40.00%, Strongly Agree 14.67%, Omit 6.67%.]
Hypothesis 4.1: There will be a relationship between the level of satisfaction and the years as
Principal.
Questions #1 and #30 of the Principal Survey are associated with Hypothesis 4.1.
Question #1 inquires about the years as a Principal. Question #30 asks Principals to rate their
overall level of satisfaction for the digital teacher evaluation they utilize. Responses were sorted
to arrive at the results for Hypothesis 4.1.
Figure 16 shows the results for Hypothesis 4.1. Seventeen of the thirty Principals
(56.67%) who completed the Principal Survey are either very satisfied or satisfied overall with
the digital teacher evaluation they utilize. Thirteen Principals (43.33%) are either somewhat
satisfied or not at all satisfied with their digital teacher evaluation.
There is a fairly even distribution of years of experience among the Principals who have
favorable overall views of the digital teacher evaluation. Of these seventeen Principals, six have
been Principal for 1-3 years, three for 4-6 years, three for 7-10 years, and five for more than ten
years.
No one group has an overwhelming majority. The two largest groups among the seventeen
favorable responses are Principals with 1-3 years of experience (35.29%) and those with more
than ten years of experience (29.41%).
Figure 16
Level of Satisfaction and Years as Principal

Overall Satisfaction of
Digital Teacher Evaluation    Years As Principal    Percentages
1 x Very Satisfied            1 x >10 Years         100%
16 x Satisfied                6 x 1-3 Years         37.5%
                              3 x 4-6 Years         18.75%
                              3 x 7-10 Years        18.75%
                              4 x >10 Years         25.0%
12 x Somewhat Satisfied       3 x 1-3 Years         25.0%
                              2 x 4-6 Years         16.67%
                              1 x 7-10 Years        8.33%
                              4 x >10 Years         33.33%
                              2 x Omit              16.67%
1 x Not At All Satisfied      1 x 1-3 Years         100%
There is also a fairly even distribution among the thirteen Principals who have
unfavorable overall views of the digital teacher evaluation they utilize. Of these thirteen
Principals, four have 1-3 years of experience, two have 4-6 years of experience, one has 7-10
years of experience, four have over ten years of experience, and two Principals omitted a
response. The two largest groups among the thirteen unfavorable responses are also Principals
with 1-3 years of experience (30.77%) and those with more than ten years of experience
(30.77%).
Hypothesis 4.2: There will be a difference between traditional paper and pencil evaluation
instruments and digital instruments.
DIGITAL TEACHER EVALUATIONS 79
Question #29 of the Principal Survey is associated with Hypothesis 4.2. The question
asks Principals to rate their level of agreement on whether the digital teacher evaluation is a
better instrument for evaluating teaching than the previous instrument used for teacher
observation and evaluation.
Table 35
Difference between Traditional Instrument and Digital Teacher Evaluation

The Digital Teacher Evaluation is a Better
Instrument than the Previous Instrument       Percentages
2 x Strongly Agree                            6.67%
15 x Agree                                    50.0%
10 x Somewhat Agree                           33.33%
3 x Strongly Disagree                         10.0%
Table 35 shows the results for Hypothesis 4.2. A majority of the Principals who
completed the Principal Survey had favorable views that the digital teacher evaluation is a better
instrument than the previous one used by Principals to observe and evaluate teachers. Seventeen
of the thirty (56.67%) Principals either strongly agreed or agreed with this statement. Thirteen of
the thirty (43.33%) either somewhat agreed or strongly disagreed with this statement.
Summary
Chapter Four reviewed evidence of Principal expectations of the usefulness of digital
teacher evaluations and their impact on teaching. Results from the Principal Survey provided
quantitative data for the four research questions on the topic. Principal interview data was
reviewed to provide connections to the findings of the Principal Survey. These interviews
elicited Principal responses to the four research questions with accompanying hypotheses.
Findings from the Principal Survey and interviews were analyzed and connections were made
with the Literature Review in Chapter Two. The findings indicate support for the ability of
digital teacher evaluations to track observations over a period of time to make informed, data-driven
decisions. While there is additional support for the ability of digital teacher evaluations to
provide teachers with more immediate feedback, the findings were mixed with respect to their
utility cost. Digital teacher evaluations save Principals time, but do not appear to be worth the
cost. Overall, Principals were marginally more favorable toward digital teacher evaluations than
previous methods, and seem to see the long-term benefit of transitioning to a digital device. A
summary of the study, conclusion, and implications are presented in Chapter Five.
DIGITAL TEACHER EVALUATIONS 81
CHAPTER FIVE: DISCUSSION
The purpose of this Chapter is to summarize the study. It provides a review of the
research questions, a summary of the findings, implications for practice, a discussion of future
research, and a conclusion. The summary of the findings will review the methodology as well as
the study’s guiding research questions. Implications for practice will review recommendations
for professionals. Future research will address recommendations needed as a result of the
findings of the study. Conclusions will review connections made to the background and purpose
of the study.
Principals are charged with the responsibility to establish and execute objective teacher
evaluations that lead to positive changes in teaching. Advances in technology should provide
Principals with appropriate means to carry out this piece of teacher accountability. The purpose
of this study was to investigate the effectiveness of digital teacher evaluations to assist Principals
in the evaluation of teachers, as well as to provide insight into the usefulness of digital teacher
evaluations.
Summary of Findings
A mixed-methods design was used in this study. Quantitative research consisted of a 32-
item survey distributed to forty High School Principals in a large urban Catholic diocese school
district; thirty Principals completed the survey. Qualitative research was conducted through
interviews consisting of ten primary questions and follow-up questions depending on the
responses. Four Principals participated in the qualitative phase of this study. The quantitative
and qualitative data of this study were collected based on four guiding research questions:
DIGITAL TEACHER EVALUATIONS 82
1. How are digital teacher evaluations meeting Principal expectations for reporting of
information to make data-driven decisions about teaching?
2. How are digital teacher evaluations meeting the need for Principals to provide feedback
to teachers to improve performance?
3. Do digital teacher evaluations meet Principal utility costs in terms of affordability, ease
of use, and time?
4. How do Principal satisfaction levels of digital teacher evaluations compare to traditional
paper and pencil evaluation practices?
Research question 1 investigated how digital teacher evaluations track data and assist
Principals in making informed, data-driven decisions about teacher performance. Research on
teacher evaluations illustrated the need for evaluations to track performance over a period of
time (Strong, Gargani, & Hacifazlioglu, 2011; Glazerman et al., 2011). Data from the
quantitative phase of the study provided evidence that supports the ability of digital teacher
evaluations to assist Principals in making data-driven decisions about teacher performance.
Comments by Principals during the qualitative phase of the study confirmed these
findings. Principals saw the value in utilizing digital teacher evaluations. While qualitative
comments were inconclusive regarding the tracking capabilities of the evaluations, Principals
highlighted the accessibility of the data and demonstrated a renewed willingness to perform
classroom observations because of the easier process of executing the evaluation.
Research question 2 centered on the role of feedback. Several researchers (Wise et al.,
1984; Peterson, 2004; Darling-Hammond et al., 2012) agreed that providing teachers with
meaningful feedback is one of the most important roles carried out by a Principal. This research
question investigated the effectiveness of digital teacher evaluations to provide teachers with
feedback that leads to these improved instructional practices. Results from the quantitative phase
of the study showed overwhelming support for the ability of digital teacher evaluations to
provide teachers with feedback that improves performance. These results were consistent across
the hypotheses despite relatively few Principals having more than two years of experience
working with digital teacher evaluations. The value of the feedback is viewed to be greatest
among the Principals who have the most experience using them. In addition, the survey findings
showed that feedback provided through the use of digital teacher evaluations improves teacher
effectiveness regardless of the number of times a Principal conducted an observation.
Comments by Principals paralleled the findings of the Principal Survey. One of the
greatest advantages of using digital teacher evaluations was the ability to provide immediate
feedback to teachers. Principals valued the ability to email reports immediately after an
observation and believed the reports make a difference in performance. Making teachers aware
of areas of improvement soon after a classroom observation, as well as discussing matters with
teachers, were constant themes in Principal reviews of the effectiveness of the digital teacher
evaluation feedback. A majority of the Principals who participated in the qualitative phase of the
study supported the notion that digital teacher evaluations have the ability to provide teachers
with more immediate feedback that will lead to improved instructional practices.
Research question 3 investigated the utility cost associated with using digital teacher
evaluations. Time and cost are important considerations when selecting an evaluation tool
(Wise et al., 1984; Oliva, Mathers, & Laine, 2009). This research question looked at the
effectiveness of digital teacher evaluations to meet these considerations along with the ease of
using them. Findings from the quantitative phase of the study showed marginal agreement that
digital teacher evaluations are worth the cost. Slightly above half of the surveyed Principals saw
digital teacher evaluations as cost effective. Survey results did, however, show strong evidence
that digital teacher evaluations save Principals time in the evaluation process. In addition, a
majority of the Principals surveyed felt that the digital teacher evaluation they utilize is easy to
use.
The qualitative phase of the study revealed that Principals are utilizing different types of
digital teacher evaluations. One Principal simply converted an existing evaluation into digital
form instead of utilizing an application like the other participating Principals. Despite the
difference in types of evaluations, the Principals acknowledged through the interviews that the
digital evaluation saved them time and that the more they use it, the better they will become at it.
This is consistent with the survey finding that very few Principals had more than two years of
experience utilizing digital teacher evaluations. The Principals with the most experience using
digital teacher evaluations are also the group most comfortable using them.
Research question 4 investigated Principals' overall levels of satisfaction using digital
teacher evaluations compared to previous observation instruments and their alignment with
effective teaching practices. Charlotte Danielson's (2007) framework for teaching provided the
conceptual framework for these instructional practices. The quantitative findings showed
marginally favorable results for overall satisfaction with digital teacher evaluations. There were
similar findings when comparing digital teacher evaluations to previous instruments used by
Principals. While more than half of the Principals participating in the survey felt that the digital
teacher evaluation is a better instrument than the previous one utilized, the margin was not
overwhelming. The quantitative phase showed similar findings for the five major domains of
effective teaching based on Danielson's (2007) framework. Principals again were only
marginally favorable about the ability of digital teacher evaluations to rate these qualities of
effective teaching.
Principal comments during the qualitative phase were generally positive. There seemed
to be an overwhelming consensus among the Principals that digital teacher evaluations will be
utilized more in the future. Some of the comments suggest that because Principals are
encouraging teachers to incorporate technology into their lessons, Principals feel compelled to
use technology themselves. In addition, there seemed to be agreement that there is great potential
in the development of digital teacher evaluations, and that the more they are able to show
teachers progress over a period of time, the more effective and specific Principals can become in
offering guidance and support.
Implications for Practice
This study adds to the literature regarding the ability of digital teacher evaluations to lead
to changes in instructional practices. Very little has been written on this topic or regarding the
utility cost associated with transitioning to a digital teacher evaluation. This study synthesizes
the literature and provides Principals with a framework to utilize when considering implementing
digital teacher evaluations. This study could also influence developers and programmers looking
for practitioner input on the effectiveness of digital teacher evaluations and areas to focus on for
building a better evaluation instrument.
This study addresses the need for effective evaluation instruments. Principals are
charged with making data-driven decisions based on objective observations. Digital teacher
evaluations can be a time saving solution for assisting Principals in this process. This study
demonstrates a willingness by Principals to utilize current technology, and shows that Principals
believe that digital teacher evaluations can serve them well in saving time while offering
teachers the feedback necessary to impact teaching.
Future Research
Recommendations for future research should include the following:
1. Include sub-domains of the teaching framework as part of the survey to distinguish specific
teaching traits that Principals believe are worth including in a digital evaluation
2. Compare evaluation data to student standardized test outcomes to further understand the
measurable impact of the feedback provided through the evaluation
3. Conduct a case study of one specific device or one specific school site to provide evidence
on its effectiveness in implementing a digital teacher evaluation
4. Include a broader sample of Principals in the qualitative phase to gain a more diverse
perspective to support quantitative findings
5. Compare findings with a public school to determine if there are any similarities or
differences between the findings in this study of a Catholic diocese
Conclusions
The responsibilities of a Principal are broad and highly demanding. Recent changes in
accountability practices have placed additional demands on Principals to execute objective
teacher evaluations that lead to improved teaching. In addition, the information gained through
the evaluation process should assist Principals in making data-driven decisions. This study
confirms the value of digital teacher evaluations in assisting Principals with this process. The
role of technology in this area is relatively new, but there is a clear desire among Principals to
adjust to 21st century demands. The role of the Principal will continue to be demanding. It is
clear that the use of digital teacher evaluations has a promising future in providing Principals
with the optimal instrument to execute effective evaluations that lead to positive gains in the
classroom.
References
Andrews, R. L., & Soder, R. (1987). Principal leadership and student achievement. Educational Leadership, 44, 9-11.
Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., Ravitch, D., Rothstein, R., Shavelson, R. J., & Shepard, L. A. (2010). Problems with the use of student test scores to evaluate teachers (Briefing Paper No. 278). Washington, DC: Economic Policy Institute.
Bill & Melinda Gates Foundation (2009). MET Project: Working with Teachers to Develop Fair and Reliable Measures of Effective Teaching. Seattle, WA: Bill & Melinda Gates Foundation.
Blase, J., & Blase, J. (1999). Principals' instructional leadership and teacher development. Educational Administration Quarterly, 35(3), 349-378.
Braun, H. (2005). Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models. Princeton, NJ: Educational Testing Service.
Cantrell, S., & Scantlebury, J. (2011). Effective teaching: What is it and how is it measured? Voices in Urban Education, 31, 28-35.
Covino, E. A., & Iwanicki, E. F. (1996). Experienced teachers: Their constructs of effective teaching. Journal of Personnel Evaluation in Education, 10, 325-363.
Creswell, J. W. (2005). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research, 2nd Edition. Thousand Oaks, CA: Sage Publications.
Danielson, C. (2007). Enhancing Professional Practice: A Framework for Teaching, 2nd Edition. Alexandria, VA: Association for Supervision and Curriculum Development.
Danielson, C., & McGreal, T. L. (2000). Teacher Evaluation to Enhance Professional Learning. Princeton, NJ: Educational Testing Service.
Darling-Hammond, L. (1999). Teacher Quality and Student Achievement: A Review of State Policy Evidence. Seattle, WA: University of Washington, Center for the Study of Teaching and Policy.
Darling-Hammond, L. (2007). Recognizing and enhancing teacher effectiveness: A policymaker's guide. In L. Darling-Hammond and C. D. Prince (Eds.), Strengthening Teacher Quality in High-Need Schools: Policy and Practice. Washington, DC: The Council of Chief State School Officers.
Darling-Hammond, L. (2009). Recognizing and enhancing teacher effectiveness. The International Journal of Educational and Psychological Assessment, 3, 1-24.
Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation: Popular modes of evaluating teachers are fraught with inaccuracies and inconsistencies, but the field has identified better approaches. Phi Delta Kappan, 93(6), 8-15.
District of Columbia Public Schools (2010). IMPACT: The District of Columbia Public Schools effectiveness assessment system for school-based personnel 2010-2011. Group 1 general education teachers with individual value-added student achievement data. Washington, DC: DCPS.
Gardner, D. P. (1983). A Nation at Risk. Washington, DC: National Commission on Excellence in Education.
Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., & Whitehurst, G. J. (2011). Passing Muster: Evaluating Teacher Evaluation Systems. Brown Center on Education Policy at Brookings. Retrieved from http://www.brookings.edu/~/media/Files/rc/reports/2011/0426_evaluating_teachers/0426_evaluating_teachers.pdf
Goe, L. (2008). Key Issue: Using Value-Added Models to Identify and Support Highly Effective Teachers. Washington, DC: National Comprehensive Center for Teacher Quality.
Goldhaber, D. (2002). The mystery of good teaching: Surveying the evidence on student achievement and teachers' characteristics. Education Next, 2(1), 50-55.
Goldhaber, D., & Anthony, E. (2007). Can teacher quality be effectively assessed? National board certification as a signal of effective teaching. The Review of Economics and Statistics, 89(1), 134-150.
Harris, D. N. (2009). Would accountability on teacher value added be smart policy? An examination of the statistical properties and policy alternatives. Education Finance and Policy, 4(4), 319-350.
Heck, R. H. (2009). Teacher effectiveness and student achievement: Investigating a multilevel cross-classified model. Journal of Educational Administration, 47(2), 227-249.
Martineau, J. A. (2006). Distorting value added: The use of longitudinal, vertically scaled student achievement data for growth-based, value-added accountability. Journal of Educational and Behavioral Statistics, 31(1), 35-62.
National Board for Professional Teaching Standards. (1987). What Teachers Should Know and Be Able to Do. Arlington, VA: The National Board for Professional Teaching Standards.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Oliva, M., Mathers, C., & Laine, S. (2009). Effective evaluation. Principal Leadership, 9(7), 16-21.
Patton, M. Q. (2002). Qualitative Research & Evaluation Methods. Thousand Oaks, CA: Sage Publications.
Peterson, K. (2004). Research on school teacher evaluation. NASSP Bulletin, 88(639), 60-79.
Quinn, D. M. (2002). The impact of principal leadership behaviors on instructional practice and student engagement. Journal of Educational Administration, 40(5), 447-467.
Salkind, N. J. (2011). Statistics for People Who (Think They) Hate Statistics, 4th Edition. Thousand Oaks, CA: Sage Publications.
Sanders, W., & Horn, S. P. (1998). Research findings from the Tennessee value-added assessment system (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247-256.
Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains (NCEE 2010-4004). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Schulte, D. P., Slate, J. R., & Onwuegbuzie, A. J. (2008). Effective high school teachers: A mixed investigation. International Journal of Educational Research, 47(6), 351-361.
Short, E. C. (1995). A review of studies in the first 10 volumes of the Journal of Supervision. Journal of Curriculum and Supervision, 11(1), 87-105.
Simpson, R. L., LaCava, P. G., & Graner, P. S. (2004). The No Child Left Behind Act: Challenges and implications for educators. Intervention in School and Clinic, 40(2), 67-75.
SPSS. (1998). IBM SPSS Statistics Base [Computer Software]. Available from: http://www-01.ibm.com/software/analytics/spss/products/statistics/
Strong, M., Gargani, J., & Hacifazlioglu, O. (2011). Do we know a successful teacher when we see one? Experiments in the identification of effective teachers. Journal of Teacher Education, 62(4), 367-382.
Stronge, J. H., & Tucker, P. D. (2000). Teacher Evaluation and Student Achievement. Washington, DC: National Education Association.
Stronge, J. H., Ward, T. J., Tucker, P. D., & Hindman, J. L. (2007). What is the relationship between teacher quality and student achievement? An exploratory study. Journal of Personnel Evaluation in Education, 20, 165-184.
SurveyMonkey. (1999). Survey Monkey [Computer Software]. Available from: http://www.surveymonkey.com
Weisberg, D., Tucker, P. D., & Stronge, J. H. (2005). Linking Teacher Evaluation and Student Learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The Widget Effect: Our National Failure to Acknowledge and Act on Teacher Effectiveness. New York, NY: The New Teacher Project.
Wenglinsky, H. (2002). The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12). Retrieved from http://epaa.asu.edu/ojs/article/view/291
Wallace Foundation. (2006). Leadership for Learning: Making the Connections Among State, District and School Policies and Practices. New York, NY: The Wallace Foundation.
Wise, A. E., Darling-Hammond, L., McLaughlin, M. W., & Bernstein, H. T. (1984). Teacher Evaluation: A Study of Effective Practices. Santa Monica, CA: Rand.
Wright, G. A. (2010). Improving teacher performance using an enhanced digital video. Learning and Instruction in the Digital Age, 3, 175-190.
Appendix A
Survey Instrument for Principals
The following instrument was used in this study. It was delivered through email to principals in
the sample population after approval from both the school superintendents of elementary and
secondary education of the district. SurveyMonkey was used in the construction of the survey.
Thank you for participating in this survey. The results will assist me in my research on the use
of digital forms of teacher evaluations by Principals. All information will be kept confidential
and respondent’s identity will remain anonymous. Please answer each question to the best of
your abilities.
1. How long have you been a Principal?
(A) 1-3 Years (B) 4-6 Years (C) 7-10 Years (D) >10 Years
2. How old are you?
(A) 25-30 (B) 31-35 (C) 36-40 (D) 41-45 (E) 46-50 (F) 51-55 (G) 56+
3. How long have you been using digital teacher evaluation and observation methods?
(A) 1 Year (B) 2 Years (C) 3 Years (D) >3 Years
4. Did you utilize a traditional teacher evaluation/observation instrument (paper and pencil)
prior to utilizing a digital instrument?
(A) Yes (B) No
5. The digital teacher evaluation I utilize has the following rating scale:
(A) Binary Scale (Ex: 1 or 2; or Meets or Does Not Meet)
(B) Multiple Rating Scale (Ex: 1-5; or Levels Ranging from Low to High)
6. On average, I observe (either formally or informally) every teacher the following number of
times a semester:
(A) 1-2 Times (B) 3-4 Times (C) 5-6 Times (D) > 6 Times
The following questions contain statements about digital teacher evaluation instruments. Please
respond to each statement based on your knowledge and use of the evaluation and to the best of
your abilities.
7. The rating system of the digital teacher evaluation is useful in providing me with accurate
information about teacher performance.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
8. The digital teacher evaluation assists me in tracking teacher observation data over the span of
a school year.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
9. The teacher domains of the digital teacher evaluation that I use are important qualities of
instruction.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
10. The digital teacher evaluation aids me in making informed decisions about teacher
performance.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
11. The data gathered from the evaluations provides me with enough information to identify
needs for school-wide professional development.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
12. I am able to provide teachers with valuable feedback using the digital teacher evaluation.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
13. Teachers are appreciative of the feedback I am able to provide using the digital teacher
evaluation.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
14. The feedback I provide improves teacher effectiveness.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
15. The feedback I provide using the digital teacher evaluation is more effective than what I
provided using traditional evaluation/observation methods.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
16. The digital teacher evaluation is an affordable instrument for observing and evaluating
teachers.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
17. The digital teacher evaluation is worth the cost.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
18. The digital teacher evaluation saves me time.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
19. The digital teacher evaluation is easy to use.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
20. I am comfortable using the digital teacher evaluation.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
21. Using the digital teacher evaluation saves me time in evaluating teachers.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
22. It took me more time than anticipated to get comfortable using the digital teacher evaluation.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
23. The digital teacher evaluation is effective at evaluating the quality of planning and
preparation teachers put into their lesson plans.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
24. The digital teacher evaluation is effective at evaluating the instructional practices of
teachers.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
25. The digital teacher evaluation is effective at evaluating the learning environment established
by the teacher.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
26. The digital teacher evaluation is effective at evaluating the professional responsibilities of a
teacher.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
27. The digital teacher evaluation is effective at evaluating the teacher's knowledge of the
subject.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
28. I would recommend the use of this digital teacher evaluation to a colleague.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
29. Overall, the digital teacher evaluation is a better instrument for evaluating teaching than the
previous instrument I used for teacher observation and evaluation.
(A) Strongly Disagree (B) Somewhat Agree (C) Agree (D) Strongly Agree
30. How would you rate your overall level of satisfaction with the digital teacher evaluation you
utilize?
(A) Not At All Satisfied (B) Somewhat Satisfied (C) Satisfied (D) Very Satisfied
31. Would you like to be contacted for a 30 minute follow-up interview?
(A) Yes (B) No
32. If you responded YES to the previous question, please provide a contact email in the field
provided.
Thank you for participating in this survey. The information will greatly assist me in my research
on the topic. All information will remain anonymous and confidential.
Appendix B
Principal Interview Protocol
The following is the interview protocol used in the qualitative portion of the study. The
interviews were semi-structured with open-ended questions and were conducted by the researcher.
Thank you for taking the time to sit with me for this interview. I am going to record this
conversation for later review. This will assist me in getting an accurate cross-section of data to
go with the survey results. Your anonymity will remain intact in this research. Pseudonyms will be
used for any direct quotes. I will hold onto this data for three years after completion of the study
and then destroy or delete any files associated with this study. This interview should last no
longer than thirty minutes.
1. Can you describe how you use the digital teacher evaluation?
a. How is this different than the previous process?
b. Is the Principal the only one to conduct evaluations?
i. If yes, have you calibrated the process in terms of what you look for when
observing a teacher?
2. How was the digital teacher evaluation chosen?
a. Was there consideration for the underlying framework in the selection process?
b. Was there any consideration for cost?
c. Was there training? If so, by whom?
3. Can you describe the domains or categories used in the digital teacher evaluation to rate
teachers?
a. How do you feel about these characteristics of teaching?
4. How frequently do you use the evaluation instrument?
a. How does this compare to the previous method?
5. How much time do you estimate you spend observing one teacher with this instrument?
a. How does this compare with the previous instrument?
b. How would you compare the two in terms of ease of use during a teacher
observation?
c. How would you rate the digital teacher evaluation in terms of a time saver?
6. Can you describe how feedback is provided to teachers?
a. How do you think teachers are taking the feedback?
b. Has this made a difference in teacher performance?
c. How does this compare to previous methods?
7. How do you use the digital teacher evaluation to track teacher observation data?
a. Has this assisted you in making informed decisions about teacher performance?
b. How does this compare to previous methods of tracking teacher observation data?
8. What have been some of the challenges you’ve encountered in using digital teacher
evaluations?
a. What have been some of the benefits you’ve encountered in using digital teacher
evaluations?
9. If there was one thing you would like to change about the digital teacher evaluation, what
would it be?
10. One final question. Is there any question that you think I should have asked that I didn’t
during the course of this interview?
Thank you very much for permitting me to come here today and conduct this interview. I value
your input and your time, and I believe that is all the time we have at this point. Again, all
information provided during this interview will remain anonymous. The data will be stored on
my password protected computer for three years and then destroyed. When the results of the
research are published or discussed in conferences, no identifiable information will be used.
Would you be interested in an Executive Summary of my findings? Thank you again. Have a
nice day.
Appendix C
Principal Recruitment Letter
Below is the text of an email that was sent to forty Principals in a large urban diocese. The
message invites the recipient to participate in the study. For purposes of anonymity, the name
portion of the message is left blank.
Dear ___________________,
My name is Jonathan Schild and I am a doctoral candidate in the Rossier School of
Education at University of Southern California. I am conducting a research study as part of my
dissertation, focusing on how Principals use digital teacher evaluations to improve teaching.
You are invited to participate in the study. If you agree, you will be asked to complete a survey
and a focus group interview.
The survey is anticipated to take no more than 20 minutes to complete; the focus group
interview is anticipated to last approximately 30 minutes and will be audio-taped.
Participation in this study is voluntary. Your identity will remain confidential. If you
have any questions or would like to participate, please contact me at: jschild@usc.edu, or (818)
720-5771.
Thank you,
Jonathan Schild
University of Southern California
Abstract
Principals are charged with the responsibility of conducting teacher evaluations that lead to improved instructional practices, as well as using information gathered during the process to make informed data-driven decisions about teacher performance. This study analyzes the impact digital teacher evaluations have had on this process. Four research questions guided this study: (1) How are digital teacher evaluations meeting Principal expectations for reporting of information to make data-driven decisions about teaching? (2) How are digital teacher evaluations meeting the need for Principals to provide feedback to teachers to improve performance? (3) Do digital teacher evaluations meet Principal utility costs in terms of affordability, ease of use, and time? (4) How do Principal satisfaction levels of digital teacher evaluations compare to traditional paper and pencil evaluation practices? A mixed-methods approach was utilized. Thirty high school Principals of a large, urban Catholic diocese participated in the quantitative phase, and four in the qualitative phase. Findings from the study show that Principals see value in the centralized information available through the use of digital teacher evaluations. The findings further show that digital teacher evaluations save Principals time.