The Continuous Failure of Continuous Improvement: The Challenge of Implementing
Continuous Improvement in Low-Income Schools
Jacare E. Thomas
Rossier School of Education
University of Southern California
A dissertation submitted to the faculty
in partial fulfillment of the requirements for the degree of
Doctor of Education
May 2024
© Copyright by Jacare Everett Thomas 2024
All Rights Reserved
The Committee for Jacare Everett Thomas certifies the approval of this Dissertation
Rudolph Crew
Stefanie Phillips
Maria Ott, Committee Chair
Rossier School of Education
University of Southern California
2024
Abstract
This study uses the Clark and Estes Gap Analysis Framework to investigate the challenges and
strategies related to implementing Continuous Improvement within a school context. As a best
practice study, it focuses on a school serving predominantly low-income Black and Latinx
students in a large, urban school district. The purpose is to explore how knowledge, motivation,
and organizational culture impact the effective implementation of Continuous Improvement.
Through interviews with teachers from a school selected based on exemplary growth in the year
before the COVID-19 pandemic, the study seeks to answer specific research questions related to
teacher knowledge, motivation for Continuous Improvement practices, and the impact of the
school's culture on implementation effectiveness. Findings reveal significant gaps in knowledge
despite high motivation and organizational support. The research recommends bridging these
knowledge gaps through ongoing and direct guided training on Continuous Improvement
practices, fostering intentional motivation by supporting teacher self-efficacy and creating a
conducive social environment, and organizing schools to consistently support improvement
needs. The study's results suggest that schoolwide collaboration in addressing these gaps is
pivotal for realizing the full potential of Continuous Improvement and improving both school
performance and student achievement.
Dedication
To the teachers who work tirelessly to create even the smallest opportunities for the young
people who are most often counted out. It is not over for us yet!
Acknowledgments
It was not until I was a young parent that I began to fully understand that my parents did
not enjoy saying I could not have all the frivolous things that I thought I wanted. The reality is
that they were doing everything that they possibly could to ensure that we were fed, clothed,
housed, and educated. Growing up as a Black male on the rougher side of town, my parents
shielded me from as much as possible while keeping me enrolled in some form of academic
enrichment. I recognize that this was not the reality for many of my peers and I recognize the
difference that it made then and could make for so many today.
I cannot begin to express the appreciation that I have for my parents and the stress they
placed on quality of education. As the third person in my familial history to walk this path, it is
my deepest desire that this dissertation catalyze improved opportunities for Black students who
are stuck in school systems that have yet to realize how to fully support their needs. Jack and
Nancy, you will always be the inspiration for my life’s work.
Secondly, to my wife, I promise that this is the last time you will have to suffer through
my obsession with continued education. As I make this promise, Charlette, I also appreciate that
you know that I will never resolve my obsession with continued education. None of this would
be possible without your encouragement, support, and affirmation. As in all things, we did this
together.
Additionally, I would like to thank my amazing children who are my reason for living
and my encouragement for attempting to be a fraction of all the things that I know you will
become. Earnest, you have reminded me of the beauty that can be found when you follow your
dreams. Your Black boy joy is infectious. Kazi, I hope to make you as proud of me as I am of
you. Ari, my namesake, I am inspired by your relentless insistence on fighting against injustice
in every space that you enter. Had it not been for you kids, I might not have been able to persist
and earn your title of Dr. Daddy.
I would be remiss not to acknowledge the many ways that “The Ladies,” Denise Little
and Wanda Washington, have impacted my life. Your faith and guidance allowed me to become a
leader in Continuous Improvement for our schools. Thank you, Batman and Robin. My mentor,
Venessa Woods, the first person in my family that I have seen graduate, has always been my
dependable champion in my continued education and career. I am forever grateful for your
support in all things.
I am beyond grateful for the support and encouragement of my dissertation committee.
Dr. Maria Ott, your early support of my topic and ongoing support of my research gave me the
confidence to address such an important topic with optimism. Dr. Stefanie Phillips and Rudolph
Crew, your expertise in public school systems was invaluable. I appreciate being pushed on the
application of my findings. Because of you, I am confident in my approach to supporting schools
using my research in the future.
Finally, I am especially grateful for my public school teachers who, like my parents,
made the most out of the little that we had. In particular, I am thankful to Mrs. Marcia Arrington
who made learning relevant for me. I continue to be inspired by everything that you are to so
many people and I continually strive to make you proud, Mama.
Table of Contents
Abstract...........................................................................................................................................iv
Dedication........................................................................................................................................v
Acknowledgments.......................................................................................................................... vi
Table of Contents..........................................................................................................................viii
List of Tables.................................................................................................................................. xi
List of Figures................................................................................................................................xii
Chapter One: Introduction to the Study........................................................................13
Context and Background of the Problem......................................................... 14
Purpose of the Project and Research Questions.................................................................16
Importance of the Study.....................................................................................................16
Overview of Theoretical Framework and Methodology................................................... 17
Definitions..........................................................................................................................18
Organization of the Dissertation.......................................................................20
Chapter Two: Review of the Literature......................................................................................... 22
Measuring Success in Schools...........................................................................................22
A Call for School Improvement.........................................................................................29
Exploring the History of Continuous Improvement.......................................................... 40
Why Continuous Improvement? Why Now?.....................................................................44
Best Practice Framework for Continuous Improvement Implementation......................... 47
Conceptual Framework......................................................................................................51
Summary............................................................................................................................54
Chapter Three: Methodology.........................................................................................................56
Research Questions............................................................................................................56
Overview of Design........................................................................................................... 56
Research Setting.................................................................................................................58
The Researcher...................................................................................................................59
Data Sources...................................................................................................................... 60
Validity and Reliability...................................................................................................... 64
Ethics..................................................................................................................................65
Limitations and Delimitations............................................................................................66
Chapter Four: Findings.................................................................................................................. 67
Results and Findings..........................................................................................................71
Findings: Research Question 1.......................................................................................... 72
Findings: Research Question 2.......................................................................................... 87
Findings: Research Question 3.......................................................................................... 94
Summary............................................................................................................................97
Chapter Five: Discussion and Recommendations........................................................................101
Findings........................................................................................................................... 100
Recommendations for Practice........................................................................................ 105
Limitations and Delimitations..........................................................................................110
Conclusion....................................................................................................................... 112
References....................................................................................................................................114
Appendix A: Interview Protocol..................................................................................................128
Appendix B: Interview Data Analysis......................................................................................... 133
Appendix C: Information Sheet for Exempt Research................................................................ 135
List of Tables
Table 1: Data Sources 54
Table 2: Years Teaching Versus Years Teaching at SLA 68
Table 3: Themes Aligned to Research Study Questions 69
Table 4: Knowledge of Continuous Improvement Means of Acquisition Indicator Count 70
Table 5: Indications of Hands-on Training with Feedback on Objectives 73
Table 6: Knowledge Assessment 76
Table 7: Motivation for Continuous Improvement Indicator Count 78
Table 8: Continuous Improvement Implementation Confidence 81
Table 9: Motivation Assessment 84
Table 10: Motivation Assessment 86
Table 11: Continuous Improvement Time Available versus Time Made Available 89
Table 12: Organization Assessment 91
List of Figures
Figure 1: Conceptual Map 51
Figure 2: Interview Population by Grade Band 67
CHAPTER ONE: INTRODUCTION TO THE STUDY
This study addresses the challenges faced by schools with high poverty rates and their
efforts to implement systems of Continuous Improvement as a method for driving school
improvement. Continuous Improvement is a systems-thinking approach to driving school
improvement by focusing data strategy on student experience and stakeholder engagement
(Elgart, 2018). Typically, Continuous Improvement involves a cyclical process of reviewing data
to understand the current state of a school, setting goals for where the school wants to be,
engaging in conversations to identify the root cause of the gap between the current state and the
set goals, designing a strategy to reach the set goals, and monitoring the progress of the
implementation of the strategy. Teachers at schools engaging in Continuous Improvement as an
approach to school improvement meet regularly to work on their cycles of inquiry using a
plan-do-study-act (PDSA) approach. In multiple studies completed in recent years, PDSA has
shown evidence of a positive impact on low-income schools as a result of testing small changes
and monitoring the effectiveness of the strategies involved using short cycles of inquiry
(Wagner et al., 2017). A 2018 study by a national student assessment agency found that in
schools identified as predominantly poor, the median reading and math performance against
national norms was in the 29th percentile, while performance was in the 73rd percentile for
schools considered more financially stable (Hegedus, 2018). The designation as a poor school
was based on the percentage of students who qualify for the school's federal free lunch program.
In 2019, on average, 44% of Black and Hispanic students attended high-poverty schools in the
United States, compared with 8% of White students (National Center for Education Statistics,
2019). These findings suggest that student performance is strongly correlated with poverty and
race. The failure to implement systems of Continuous Improvement in these schools is important
to address because Continuous Improvement is largely regarded by school districts as a critical
lever for school improvement. However, it is infrequently implemented with the fidelity
necessary to improve learning outcomes in poverty-stricken schools that desperately need
equitable learning solutions (Elgart, 2018).
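To make the cycle described above concrete, the following is a minimal illustrative sketch, in Python, of how one short cycle of inquiry might be recorded; the field names and example values are hypothetical and are not drawn from the study or from any cited framework.

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    """One illustrative plan-do-study-act inquiry cycle (hypothetical structure)."""
    goal: str            # where the school wants to be
    current_state: str   # what the data show today
    root_cause: str      # hypothesized cause of the gap
    strategy: str        # small change to test
    evidence: list = field(default_factory=list)  # checks gathered during "do"/"study"

    def study(self) -> bool:
        """Placeholder decision rule: did the collected evidence support the strategy?"""
        return len(self.evidence) > 0 and all(self.evidence)

    def act(self) -> str:
        """Adopt, or revise and rerun, based on the study step."""
        return ("adopt or scale the change" if self.study()
                else "revise the strategy and run another cycle")

# Example record a teacher team might keep for one short cycle (values invented):
cycle = PDSACycle(
    goal="75% of students proficient on the unit assessment",
    current_state="52% proficient on the most recent benchmark",
    root_cause="limited re-teaching time for students below benchmark",
    strategy="add a 15-minute small-group re-teach block three days a week",
)
cycle.evidence.extend([True, True, False])  # e.g., weekly progress checks
print(cycle.act())
```

In practice, of course, the study and act decisions are made collaboratively by teacher teams rather than by a simple rule like the one sketched here.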
Context and Background of the Problem
Lieberman and Miller (1988) note that since the 1955 publication of American Schools in
Transition by Paul Mort, regarded as the first publication on school improvement, researchers
have been attempting to understand how to package tools and practices that will improve
schools’ ability to realize the benefits of refining school curriculum and programs. Despite the
many efforts categorized as “school improvement” attempted in the almost 70 years since Mort’s
American Schools in Transition, most have contributed little to actual school improvement and
have had little to no sustainable impact. Hopkins (2001) indicated that school
improvement practices that have focused primarily on teacher practice and lack of resources,
without regard to school management, have ignored the most authentic solution to the issues
facing schools. In “It Can Be Done, It’s Being Done, and Here’s How,” Karin Chenoweth (2009)
highlighted two high poverty-high minority schools as examples of the type of improvement that
can be achieved when school management accounts for the need for teachers to collaborate.
Chenoweth found that the growth in student achievement at Graham Road and George Hall
could be attributed to restructuring that allowed teachers to spend substantial time in common
preparation periods, department meetings, and strategy meetings with their principals. Recently,
given the widening achievement gap between high- and low-income students, the importance of
identifying methods for improving student achievement has brought renewed attention to
educators who have long advocated formalized approaches to school improvement (Hopkins,
2001). Continuous Improvement
provides a holistic purpose to schools and empowers educators to drive change through collegial
conversations (Zmuda et al., 2004).
In the fifty years since its introduction into school systems, Continuous Improvement,
though recognized by educators as a school improvement practice, has yet to fulfill its promise
in systems with higher rates of poverty (Elgart, 2017). Meanwhile, more financially affluent
schools that practice Continuous Improvement benefit from the system while the performance
gap for financially strapped schools persists (Elgart, 2017). Research on Continuous Improvement
suggests that more successful schools adopt a holistic approach to implementing the system,
whereas less successful adopters tend to have difficulty balancing the focus on teacher practice,
student outcomes, and overall school culture (Elgart, 2018).
Not to be confused with the founding industrial principles of Continuous Improvement as
developed by George Robson for General Electric, Continuous Improvement in education is
specific to the complexities of the larger political influences in education and local challenges of
schools. Whereas the origins of Continuous Improvement theory streamlined practices by
making direct connections to systems processes, Continuous Improvement in education borrows the inquiry
aspects of the system while potentially creating space to acknowledge the human complexities of
race, gender, and power (Yurkofsky et al., 2020). The methods of Continuous Improvement
practiced in education create an opportunity for education to be examined through the lens of
impact on classrooms, teacher teams, whole schools, and the entirety of the school system.
Continuous Improvement in education creates a space for educators to peel back the onion for
deeper levels of understanding of the why of their classrooms and schools. When implemented
effectively, it departs from a broad approach of attempting to extract “what works” from a set of
prescribed best teaching approaches for academic skill development and allows educators to
examine the full scope of the variables impacting student and school performance with equity as
a factor for consideration.
Purpose of the Project and Research Questions
The purpose of this study is to better understand how some schools leverage knowledge,
motivation, and school organization to overcome challenges and benefit from the implementation
of Continuous Improvement. This study will examine how lower-income schools can maximize
their Continuous Improvement system effectiveness to improve school performance for students.
The following questions will be explored:
1. What are teachers' knowledge of and motivation for Continuous Improvement practices?
2. How do a school's culture and climate impact teachers' ability to implement
Continuous Improvement effectively?
3. What recommendations do teachers have around knowledge, motivation, and school
organization that may help other schools increase their likelihood of successfully
implementing Continuous Improvement systems?
Importance of the Study
This problem of failing to implement a sustainable system of Continuous Improvement in
low-income schools is important to study so that these students can have a more equitable
opportunity to compete socially and financially as adults. Though the achievement gap stalled in
the 1990s, today the standardized test score gap between affluent and impoverished students in
America has widened by 40% (Michelmore & Dynarski, 2017). Michelmore and Dynarski note
that students from lower socioeconomic backgrounds have scores that differ by one standard
deviation from those of students from more affluent backgrounds. These students attend college
at a 14% lower rate and earn 9% less in income by the time they are 28 years old. Establishing a
school improvement mechanism that will eliminate this gap is critical to the
life outcomes of these students.
Continuous Improvement is the logical improvement mechanism for businesses,
government agencies, and public institutions. While Continuous Improvement is not the sole
solution for school improvement, research shows that effective Continuous Improvement
practices result in improved school outcomes (Elgart, 2018). In AdvancED's 2017 review of over
250,000 classrooms, researchers found a direct correlation between effective Continuous
Improvement practices and high performance in schools (Elgart, 2017). While the literature has
produced insight into potential gaps in best-practice implementation of Continuous Improvement,
it does not explore how differences in the way entities prepare for implementation affect its
impact. Additionally, the literature points out that even the best implementation efforts are
subject to the distraction of external stakeholders who define outcomes solely as a quantitative
analysis of academic assessments (Elgart, 2018). Uncovering the best practices of schools that have been
able to implement Continuous Improvement effectively to the benefit of their low-income
students will provide similar schools with valuable insight into how they might replicate these
systems in their schools and improve outcomes for their students.
Overview of Theoretical Framework and Methodology
This research uses Clark and Estes’ Gap Analysis framework to explore how a school
with higher numbers of students living below the federal poverty threshold was able to
successfully implement a system of Continuous Improvement. Clark and Estes’ Gap Analysis
Framework is a practical and useful framework for this study because it provides a
comprehensible structure for attributing performance shortfalls to gaps in knowledge, motivation,
and organizational structure, and this bucketed structure is easily accessible to those seeking to
understand their own performance gaps. Understanding the reasons for their gaps in
implementation will create opportunities for educators to identify strategies that will improve the
implementation of Continuous Improvement as a school improvement tool. Conceptually, using
Clark and Estes’ framework, the research will engage with a school that has been successful in
overcoming the challenges of Continuous Improvement. The research will explore a school’s
ability to overcome gaps in understanding of Continuous Improvement, teacher motivation to
implement Continuous Improvement, and the school's organization of culture and climate for
Continuous Improvement.
This study takes a qualitative approach to collecting best practices from a school that has
successfully implemented Continuous Improvement with particular attention to how teachers
adapted to the change in demands of Continuous Improvement systems and how they define
successful implementation and outcomes.
Definitions
The following terms are defined to provide a shared context of the population and
strategy.
Continuous Improvement
When referred to in education, Continuous Improvement is an intentional, cyclical
systems-thinking approach to identifying issues and attempting to take incremental steps toward
solving those issues (Elgart, 2018). Using the Plan, Do, Study, Act method of inquiry, educators
test strategies that work best in different scenarios and then monitor and implement those
strategies as a route to school improvement (Tichnor-Wagner et. al., 2017).
School Improvement
School Improvement refers to the formalized systems approach to improving schools to
improve outcomes for students through deep interrogation of classroom practices and school
management with the intent of impacting teaching and learning (Hopkins, 2001).
Low Income
In education, low income is defined by family participation in the federal free lunch
program. Because free lunch program eligibility is determined by a family's household income
falling at or below 185% of the federal poverty line, it stands as a suitable proxy (Michelmore & Dynarski,
2017).
Data Inquiry/Strategy
Data inquiry and data strategy are the practices of engaging in a cyclical process of
reviewing data to understand the root cause of gaps in the current state of performance and the
desired future state of performance. The process is inclusive of the practice of developing
strategies to address root cause issues and then monitoring those changes for impact (Silverstein,
2014).
Achievement Gap
The achievement gap refers to the widening performance gaps between minority and
majority groups. Although the term is typically used in reference to gaps among students of
different nationalities, this study primarily focuses on gaps defined by students' economic
designations according to free lunch eligibility (Rothstein, 2015).
Plan-Do-Study-Act
Plan-Do-Study-Act (PDSA) is the term used to refer to the intentional cycle of inquiry
that involves planning for some impact, implementing the plan, studying the resulting impact,
and making some decisions about the next steps based on observations from the study. PDSA is
the action portion of Continuous Improvement Research (Christoff, 2018).
Continuous Improvement Research
Continuous Improvement Research (CIR) describes the approach to using research to
determine why practices in education are effective. In CIR, educators move beyond the ideas of
“what works” to understand why interventions work in different conditions so that practitioners
may know when to use which interventions and what customizations might be needed to increase
the likelihood of effectiveness (Sanchez & Blanco, 2016).
Organization of the Dissertation
This dissertation is organized into five chapters that collectively provide a comprehensive
review of the problem in question. Chapter One of this dissertation introduces the issue of the
challenges of low-income schools in effectively implementing sustainable Continuous
Improvement practices that impact student achievement for their students. Chapter One provides
some background on this problem and the purpose and importance of this research. Additionally,
the theoretical and conceptual frameworks for how the study has been approached to answer the
identified research questions are shared in Chapter One. Chapter Two provides an in-depth
literature review of previous works on the historical context, best practices, challenges, and
existing strategies for effectively implementing Continuous Improvement in schools, and it
describes the conceptual framework used in the study. Chapter Three is an overview of the
methodology of the study and includes an explanation of how the study was conducted, how
subjects were selected, how data were collected, and the limitations of the study. Chapter Four of
this dissertation shares the findings of the study in alignment with the study’s research questions.
The final chapter of this dissertation provides a summary of the findings as well as
recommendations for addressing the problem of practice that was researched and
recommendations for future research.
CHAPTER TWO: REVIEW OF THE LITERATURE
The objective of this literature review is to provide background information on the use of
Continuous Improvement in school improvement efforts. Continuous Improvement, as discussed
in these pages, is a scientific, inquiry-based approach to school improvement that has
grown substantially in popularity as a promising approach to improving academic outcomes for
all demographics of students (Szőköl, 2018). Continuous Improvement is a research- and
action-focused approach to intentionally improving schools that encompasses elements of many
of the school improvement initiatives of the last seventy years.
This chapter will begin by introducing the origin of the concept of measuring schools for
success which coincided with the call for improvement in American schools. Following the
examination of the multiple school improvement initiatives, the chapter will discuss the rise of
Continuous Improvement as a promising strategy. Next, Continuous Improvement will be
discussed in terms of its ability to address rising concerns for a need to respond to the impact of
COVID-19 and to address issues of equity in education. Then, the chapter will provide guidance
for best practice implementation of Continuous Improvement. Finally, this chapter will provide
background on the Clark and Estes (2012) theoretical framework and its knowledge, motivation,
and organizational gap analysis model that guides how the research will be approached.
Measuring Success in Schools
The idea of measuring school success was not always the prevalent conversation that it is
today (Haertel, 1986; Wade, 2001). Historical incidents have catalyzed education reform efforts,
pressed by the realization that countries that wish to remain competitive need a strong
educational core that supports the academic needs of the future (Topolovčan, 2019). To measure
the return on the investment in education, it has
become second nature to measure progress, achievement, and levers of potential impact. The
following text will explore the history of measuring success in education with a lens on why
success is measured, how success is measured, and the metrics used to measure success.
Why Success is Measured
Following the 1957 launch of the Sputnik satellite by the Soviet Union, America and
other nations became focused on their need to be competitive in industry, commerce, and
education. President Eisenhower responded to the fears that America might be lagging in
education with the development of the National Defense Education Act (NDEA) of 1958
(Hanushek, 2019). The NDEA marked the first time that the federal government acknowledged a
decline in American education and responded by setting expectations and making resources
available to schools (Harden, 1981; McGeehan, 2018). The resulting interest in measuring the
improvement of American schools led to monumental student gains in math and science. It
was also a turning point in the rationalization of why academic achievement should be measured.
Almost a quarter of a century after the National Defense Education Act was created to
make America more competitive by way of improving education, the nation was again
in academic distress. According to the National Commission on Excellence in Education, the
gains achieved in the years following the launch of Sputnik had been lost (Hanushek, 2019;
National Commission on Excellence in Education, 1983). The commission's report, A Nation at
Risk: The Imperative for Educational Reform, issued a call to action that would inspire
policy changes in the areas of academic content, time in school, teacher capacity, and
expectations. Among its recommendations for equitably improving the state of education
was the call to set higher standards and expectations as well as to set consequences for states
that did not meet the needs of students, specifically minorities and the socioeconomically
disadvantaged (Guthrie & Springer, 2004; Cochran-Smith, 2020).
In the 10 years before the release of A Nation at Risk, there were clear indicators that the
education system was in rapid decline and the price of educating students was increasing. In
Texas, education costs rose by 14% with little change in the number of students being educated,
yet 30% of students were not meeting the basic skill requirements for reading, math, and
science (Bessent & Bessent, 1984). In response, 25 school districts in the state organized
regular meetings to problem-solve, develop interventions, and hold each other
accountable. Though the group did not have the tools to measure the efficacy of their inputs, they
did see comparative improvements in student outcomes (Bessent & Bessent, 1984).
The last fifty years of education have seen the rise of many theories on what data to use
and when to use it, but the critical debate of why to use data remains. Beyond the general
understanding of the need to employ data to meet the needs of students and families, there is a
question of the relative preference for, and benefit of, achievement data versus accountability data (Skedsmo &
Huber, 2019; Steifel, 2013). While Lyndon B. Johnson’s Elementary and Secondary Education
Act served a purpose in providing needed federal funding to the school system, each of its
reauthorizations since 1965 has amplified its pressure on high standards and accountability
(Darling-Hammond, 2018; Herman et al., 2022; Paul, 2013).
How Success is Measured
The use of data to measure school efficiency and predict future outcomes has not always
been standard practice. Bessent and Bessent (1979) shared that the introduction of statistical
analysis in the education space, while foreign and complex to most educators, created new
opportunities to manage schools toward improvement. Later, the application of these and other
statistical methods created an opportunity to develop performance metrics for schools and school
districts (Färe et al., 1989). By the beginning of the twenty-first century, educators were able to
use historical data to make claims of which data were better tools for driving the process for
school improvement as well as the steps needed to collect and analyze the data (Aderet-German
& Ben-Peretz, 2020; Wade, 2001).
In 1972, David Irvine and Alan Stewart created a report called Measuring the
Performance of School Districts. The report provided a detailed process for how historical data
on student achievement in reading and math could be used in a regression analysis to create an
expected outcome score for school districts, as New York Public Schools was modeling with its
Performance Indicator in Education measures. Irvine and Stewart asserted that the use of data in
this way could be a useful approach to improving school districts by identifying schools and
school inputs that indicated where districts should invest resources. The approach, an
early use case for data in education, created a simple comparison of historical outcomes of
students to expected outcomes and indicated whether or not a school met the expected outcomes.
While the scores provided some insight into the school conditions that contributed to a school
meeting expectations set by the regression model, the scores were not an indicator of actual
school success (Irvine & Stewart, 1972).
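Purely as an illustration of the comparison Irvine and Stewart describe, a regression can be fit on historical inputs to produce an expected score for each district, which is then compared with the district's actual score. The sketch below uses invented numbers and variable names and is not reproduced from their report.

```python
import numpy as np

# Hypothetical district-level data: prior reading score, prior math score,
# and a resource input (e.g., a per-pupil spending index). All values are invented.
X = np.array([
    [48.0, 51.0, 0.92],
    [55.0, 57.0, 1.10],
    [42.0, 44.0, 0.85],
    [60.0, 62.0, 1.25],
    [50.0, 49.0, 1.00],
])
actual = np.array([50.0, 58.0, 41.0, 66.0, 52.0])  # current achievement score

# Fit an ordinary least-squares regression (with an intercept) to produce an
# "expected outcome score" for each district from its historical inputs.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, actual, rcond=None)
expected = design @ coef

# Districts are then flagged by whether they met the expectation the model set,
# mirroring the met/did-not-meet comparison described above.
for i, (a, e) in enumerate(zip(actual, expected)):
    status = "met" if a >= e else "did not meet"
    print(f"District {i + 1}: actual={a:.1f}, expected={e:.1f} -> {status} expectation")
```

As the paragraph above notes, such a comparison only signals whether a district met the expectation set by the regression model; it is not, by itself, an indicator of actual school success.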
A few years later, in 1979, researchers from the University of Texas expanded on the
efforts to determine if schools were reaching expected outcomes. Using Data Envelopment
Analysis, a statistical process used in economic theory, the researchers introduced a process for
measuring the efficiency with which inputs are converted to outputs as a way to evaluate the effectiveness of managerial
practices in schools (Bessent & Bessent, 1979).
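The following is a minimal sketch of the kind of efficiency calculation Data Envelopment Analysis performs, here using the standard input-oriented, constant-returns-to-scale formulation solved as a linear program with SciPy; the school inputs and outputs are invented, and the exact model specification Bessent and Bessent used may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: columns are schools (decision-making units).
# Rows of X are inputs (e.g., per-pupil spending, teachers per 100 students);
# rows of Y are outputs (e.g., reading and math proficiency rates).
X = np.array([[5.0, 6.0, 4.5, 7.0],
              [5.5, 6.5, 5.0, 6.0]])
Y = np.array([[60.0, 70.0, 55.0, 72.0],
              [58.0, 69.0, 50.0, 75.0]])
n_inputs, n_units = X.shape
n_outputs = Y.shape[0]

def efficiency(unit: int) -> float:
    """Input-oriented efficiency score for one unit: minimize theta such that a
    weighted combination of all units uses no more than theta times this unit's
    inputs while producing at least this unit's outputs."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n_units)
    c[0] = 1.0  # minimize theta
    # Input constraints: sum_j lambda_j * x_ij - theta * x_i,unit <= 0
    A_in = np.hstack([-X[:, [unit]], X])
    b_in = np.zeros(n_inputs)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_r,unit
    A_out = np.hstack([np.zeros((n_outputs, 1)), -Y])
    b_out = -Y[:, unit]
    bounds = [(None, None)] + [(0, None)] * n_units
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=bounds, method="highs")
    return res.x[0]

for j in range(n_units):
    print(f"School {j + 1}: efficiency = {efficiency(j):.3f}")
```

In this illustration, a score of 1.0 places a school on the efficiency frontier, while a lower score indicates how much its inputs could, in principle, be scaled down while still producing its observed outputs.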
Building on the work of Bessent and Bessent (1979), researchers combined statistical
approaches to using data in education to inform policy (Färe et al., 1989). The authors asserted
that methodologies relying on simple inputs and outputs do not provide enough information to
create actionable policies grounded in technical proficiency. These new sets of multiple metrics
would move education analysis beyond a narrow focus on school efficiency or accountability;
improved data analysis would give schools tools to be both more efficient and more accountable.
The early collection and use of data have paved the way for Big Data as a powerful tool in
measuring, predicting, and driving success in schools (Fischer et al., 2020). While early
iterations of data use were constrained by collection and human processing limitations,
technological improvements have made data more accessible and actionable for schools. Large
amounts of data can be processed and analyzed quickly at the student level. According to Fischer
et al., the introduction of student information systems and learning management systems has
allowed educators to analyze individual student data and assign learning interventions specific to
unique student needs. Using data to measure success has evolved from its accountability roots to
a student improvement-centered necessity (Bai et al., 2021; Horner, 2020).
Metrics Used to Measure Success
Though the importance of knowing what data to use is generally understood today, the
adoption of statistical analysis and data tools in education was still developing at the time that
A Nation at Risk was published in 1983. A better understanding of education data would have
revealed that the nation was not in the catastrophic downward spiral claimed in the report. A
simple misunderstanding of the difference between changes in overall average scores and the
growth of individual student subgroups led to changes in policy that have impacted schools for
more than four decades (Ansary, 2007). Fortunately, the policy changes that resulted have
mostly been necessary and have greatly amplified interest in bridging racial performance gaps
(Guthrie & Springer, 2004; Kamenetz, 2018). While these data mishaps show that imperfect data
can sometimes still be useful and informative, knowing what data to use remains critical.
Among the challenges of identifying the right data is the idea of using the information to
meet the needs of the user. Broadly, educational data serve two primary purposes. Edward
Haertel (1986) pointed out that one practice often taken for granted is collecting data to answer
the requests of politicians and policymakers. The challenge in merely collecting these data is
that users may begin to lose sight of why the data might be useful. Perhaps one of the most
important uses of data is to show how practice influences outcomes. In early data usage
scenarios, educators relied primarily on simple data that showed the results of a student's
performance. Examples of these data include grades, test scores, and attendance. Such data can
be useful in identifying areas of opportunity for a student; however, they may not be as
beneficial when attempting to identify replicable strategies that improve practice for students
(Haertel, 1986). President Bush's reauthorization of ESEA, No Child Left Behind, encouraged
educators to prioritize scores over growth and progress (Paul, 2013).
Typically, accountability data are considered the data that are shared with policymakers
via a school’s performance policy. Accountability data can also be data necessary to assess a
school or educator's ability to provide instruction that meets the needs of students. Among the
findings listed by the National Commission on Excellence in Education (1983) was the revelation
that a large portion of teachers were being hired from the bottom quartile of high school and
college graduates and that 45 states were filling math and science positions with unqualified teachers.
Though it was sometimes an unpopular position, some researchers began to question how data
are collected to assess teacher competency (Steifel, 2013). It stands to reason that an incompetent
teacher will not be able to provide students with the support that they need to reach their
academic goals. In the 1990s, the popular opinion was that teacher performance tests were
ineffective in determining how a teacher would perform in the classroom. Researchers introduced
performance tasks that assessed teachers’ practical application of necessary skills for driving
student improvement (Davey, 1991). Access to data reflecting the number of qualified teachers
in a school gives a school leader insight into where they may have the opportunity to improve the
likelihood of success for students. Similar to teacher performance tasks, there has also been an
effort by some educators to reduce their reliance on sometimes unactionable accountability
data for students. Educators advocating for student performance tasks against a set of learning
standards assert that there is lifelong learning growth potential in empowering students to assess
their own performance while equipping teachers with data that are indicative of how a student
can perform in the practical application of skills (Ferguson, 2004).
The array of data that can be used to drive school improvement is boundless and
constantly shifting. Historically, data usage has largely been driven by the urgent need to be
accountable and responsive to public inquiry. As educators and policymakers have become
savvier about education data, there has been growing interest in what data say about the
outcomes and quality of life for students. This sentiment is reflected in the most recent
reauthorization of ESEA, the 2015 Every Student Succeeds Act (ESSA). President Obama's
reauthorization prioritized actionable data that assessed the quality of implementation of
strategies like principal and teacher performance review plans, college and career readiness
plans, and school district plans for addressing the lowest-performing schools (Paul, 2013). Data
present a clear path for school improvement on behalf of students. More importantly, the policy
discourages resentment of data by incentivizing educators as opposed to punishing struggling
schools.
Along with emerging questions about measuring success in schools came efforts to
develop strategies that would result in the improvement of schools. The concept
of School Improvement is a specific strategy for driving school actions that result in improved
academic outcomes for students and requires schools to organize in teams to collectively drive
that improvement.
A Call for School Improvement
School Improvement Science, as defined by David Hopkins (2001), involves an
“...approach to educational change that has a relentless focus on the learning and achievement of
students, and on the establishing of a professional learning community within the school” (p.
200). The following sections will give context to the history of School Improvement as a
scientific approach as well as some best practice approaches to successfully implementing
School Improvement as a strategy.
Three Phases of School Improvement
Perhaps one of the most influential pieces of education legislation in the last 100 years, the
Elementary and Secondary Education Act (ESEA) of 1965 was designed to address quality of
life and financial inequities exacerbated by gaps in the education system (Paul, 2013). Embedded
in the act was a required 5-year review cycle of the policy which created an opportunity for the
effort to be refined and improved over time. This policy would become the foundation for
iterative School Improvement reform efforts for education in the United States. Though
approaches to School Improvement have shown varying levels of success, each strategic
approach to improving schools and the school system has been a stepping stone toward a
growing understanding of how to intentionally improve academic outcomes for students.
Hopkins and Reynolds (2001) chronicled many of the approaches to School Improvement since
the 1970s and introduced a segmented approach to understanding the development of the
discipline over time; however, it is the work of Harris and Chrispeels (Lim, 2007) that
reorganized Hopkin’s notes into three new distinct phases of School Improvement. Harris and
Chrispeel asserted that School Improvement strategies can be divided into the phases of
individual school improvement, classroom improvement, and program refinement.
Phase One of School Improvement
In the introduction of Improving Schools and Educational Systems, researchers Harris
and Chrispeels describe the first phase of School Improvement as emphasizing individual
schools’ ownership of their school improvement efforts (Lim, 2007). Marking the beginning of
the School Improvement movement in the early 1970s, School Improvement was born in
response to the call for school accountability raised by ESEA in 1965. ESEA attached funding
for schools to be used for professional development, parent engagement, and instructional
material. This order of allocation of funding led school districts to rationalize that the lever of
change was at the school level. Consequently, this reaction led to a phase of school improvement
that was not always fully thought out with outcomes and student success in mind (Lim, 2007). Of the many
school-based approaches to improvement, notable approaches include the Effective Schools
approach, the teacher as researcher approach, and the self-evaluation approach.
Shortly after the development of ESEA, a report was created by social scientist J.S.
Coleman. In the report, Coleman summed up the results of a correlational survey by stating,
“While schools may be primarily responsible for whether or not students function adequately in
school, the family is probably critical in determining whether or not students flourish in school”
(Lezotte, 2001, p. 1). The statement implied that schools could not be held responsible for
ensuring education because their efforts ultimately had little impact. The sentiment led many
schools to attempt to change the way students approached their education to fit the teaching
methods that schools found most convenient for themselves, as opposed to changing school
practices to suit the needs of students. In opposition to this inequitable approach to educating
students, what would become known as the Effective Schools movement focused on the inputs
that would result in a quality education for all students despite detracting factors external to the school.
Effective Schools placed ownership of School Improvement on the schools. The first step
of Effective Schools was to identify the commonalities in schools that had shown evidence of
effectiveness. Effective School researchers defined school effectiveness based on certain
premises that prioritized teaching and learning, student outcomes, alignment between schools
and school districts on desired outcomes, and demonstration of quality and equity for the overall
benefit of students (Johnson et al., 2018; Lezotte & Bancroft, 1985). Using these premises, the
Effective Schools movement sought to empower schools to hold themselves accountable for
organizing themselves to meet the needs of students, especially those who were most
disadvantaged (Jackson et al., 2020; Lezotte, 1991). Researcher Ron Edmonds rejected the claims
of Coleman in a 1982 paper entitled “Programs of School Improvement: An Overview,” in
which Edmonds first introduced the metrics, which he called correlates, that would later be used
by schools adopting the Effective Schools model. These schools would form school
improvement teams, which were charged with rating themselves against the set of correlates and
design strategies to address their insufficiencies in instructional leadership, understanding of
instructional focus, school environment, high expectations for student achievement, and relevant
data analysis (Hopkins, 2020; Lezotte, 2001).
The Effective Schools approach to School Improvement reinforced the belief that
teachers should own and be accountable for individual student improvement. A widely held
opinion has been that because teachers were so close to the practice of educating students, they
should consider themselves as the researchers who would solve the local and systemic challenges
of academic achievement. The opposing line of thought is that because teachers are so close to
practice they cannot be contributors to mainstream bodies of research as their research and
understanding are often difficult to transfer to other schools (Kershner et al., 1998; Stoll, 2014).
This claim suggests that the impact of teacher research at the local level has been difficult to
validate in a broader context and thus cannot be comfortably relied upon (Edmonds, 2020; Ellis
& Castle, 2010). Models of School Improvement including the Effective Schools model have
resulted in the development of learning communities that have interrogated teaching and learning
practices beyond the limitations of accountability goals set in federal and state policies like
ESEA. Despite efforts to discount the importance and impact of teacher research
developed as a practice in the first phase of School Improvement, teacher research continues to
be a practice in schools and has shown evidence of success (Ellis & Castle, 2010; Malik, 2016).
As mentioned above, the accountability requirements of ESEA created a series of
theories about the origin of the gaps in student academic achievement. Whereas the intention was
not solely to place the blame on teachers, the scramble to identify the levers of change eventually
fell largely on teachers and teacher practice. The prevailing rationale was that teachers were the
primary lever of influence on student achievement and if teacher practices were ineffective then
students would not achieve. Furthermore, the lack of teacher evaluation systems created an
opportunity for teacher ineffectiveness to exist unchecked (Rafiq et al., 2022; Stronge, 2006).
Consequently, systems of teacher evaluation external to schools created distrust among teachers,
leading to disengagement from opportunities to identify areas for improving teacher practice.
Stronge (2006) referenced Michael Fullan's description of a phenomenon known as
fragmentation. Fragmentation describes the common situation in education reform where
policy, even well-meaning policy, works at cross-purposes with its potential for impact. This was
the case with federal and district mandates of teacher evaluation.
Nonetheless, teachers did see the value of self-evaluation and professional development
at the school level. Efforts like Effective Schools created an opportunity for teachers to gather in
groups to discuss the impact of their teacher practices on their larger goals of providing quality
education to all students. Whereas teacher evaluations were a discrete exercise to assess teacher
ability, teacher self-evaluation was an iterative process of self-improvement (Harks et al., 2014;
Iwanicki & McEachern, 1983). In school improvement teams of phase one, schools engaged in
this practice of self-evaluation as a strategy to improve local capacity at their sites.
Phase Two of School Improvement
According to Harris and Chrispeels (Lim, 2007), the second phase of School Improvement
emphasized schools and classrooms. Whereas phase one was largely strategically unorganized
and often not guided by an overarching theoretical framework, the second phase of school
improvement was guided by prepackaged, research-based models for school improvement. In
phase two, notable approaches to School Improvement included an emphasis on student learning
and the Accelerated Schools Model.
According to Hopkins and Reynolds (2001), in all phases of School Improvement,
schools have struggled to incorporate teaching and learning conceptually and practically in
classrooms (Muijs et al., 2014). Early approaches to school improvement emphasized the need
for school achievement and test scores to improve, and they focused primarily on organizational
strategies for facilitating that improvement. In phase two, educators turned to classroom
practices to determine whether teaching was leading to student learning. Before this shift,
educators had focused on the theory of instruction; however, there was also a need to focus on
the practice of delivering lesson studies effectively (Brendefur et al., 2014). While many schools
have organized improvement teams, instructional leadership teams, and teacher teams, the
challenges of finding time in the school day and of building willingness to critique peer practice
remain.
For school leaders who were able to establish effective teaching- and learning-focused
teams, tools were developed to support the assessment of teaching quality. Developed in 1996,
the Danielson Framework for Teaching became a very popular tool. The framework created an
opportunity to improve school teaching and learning by focusing instruction on the
four domains of planning and preparation, instruction, classroom environment, and professional
responsibilities (Alvarez & Anderson-Ketchmark, 2011; Hunzicker, 2017). Additional tools were
available like the Teaching and Learning School Improvement Framework which expanded
Danielson’s view on specific teacher practice to assess the quality of teaching and learning in a
whole school setting (Masters, 2010; Muir, 2018).
The second phase of the School Improvement movement included the emergence of
comprehensive programs. Developed in 1986 by Dr. Henry Levin, the Accelerated Schools
program was designed to address the student learning needs of at-risk students specifically. The
program was developed to combat the assumption that students who struggled academically
should be given remedial tasks as opposed to being challenged to progress rapidly as students
labeled as gifted were (Levin, 1989; Levin, 2017). The Accelerated Schools program
acknowledged that many at-risk students had challenges at home and school and required a
multilevel approach of setting high expectations for school staff, engaging families, and
engaging in constant cycles of inquiry to ensure that progress was made within specific amounts
of time (Hopfenberg, 1990; Levin, 2017). Central to the vision of Accelerated Schools was the
idea of creating learning communities where there was unity in the vision, positive relationships,
and shared culture (Biddle, 2002; Roig-Vila & Alvarez-Herrero, 2022). This whole school
transformation approach bridged the gaps between homes and schools and empowered schools to
break away from the federal oversight of ESEA by asserting that schools knew what was best for
the students in their care. The rigorous structure of the Accelerated Schools design made the
agreement to grant schools control more palatable (Gaziel, 2001; Pijanowski, 2019).
Phase Three of School Improvement
The third phase of School Improvement marks a transitional period extending from the
1990s to the present. It is in the third phase that researchers and practitioners acknowledge that
while School Improvement efforts have not consistently shown the results that were hoped for,
there is evidence that, with refinement, School Improvement strategy is on the right path (Lim,
2007). In response to these shortcomings, the third phase of School Improvement has sought to
redirect efforts using the lessons learned in phases one and two. The following sections discuss
the refinement of programs, systems thinking, and whole school design.
During this phase, practitioners have acknowledged that efforts by vendors to provide
blanket programs to schools have largely been ineffective overall, and especially so for the
marginalized students that federal policy was most intended to support. Typically, these
programs, adopted as a way to bridge gaps and provide complementary services, leave staff
divided and frustrated after working to learn, adapt, and implement incoherent
programs that ultimately fail to meet expectations (Darling-Hammond et al., 2016; Newman et
al., 2001). Where previous phases may have haphazardly selected programs, phase three
promotes the idea of program coherence. Program coherence means that when selecting multiple
academic programs, the curriculum, instruction, assessments, and standards must all be in
alignment.
Along the lines of promoting program coherence, the third phase again advocates for a
more thoughtful implementation of School Improvement by promoting systems thinking. In prior
phases, practitioners did not fully articulate processes for inquiry and data review. In a systems
thinking model, schools ensure that improvement efforts are constantly engaged in short-cycle
reviews of strategy, evaluation of the effectiveness of strategy, and realignment of resources to
ensure that the goals of the strategies are being met and supported (Shaked & Schechter, 2017;
Thornton et al., 2007). This systems thinking approach aligns school staff and administrators to
allow information to be shared across the organization. Successes and failures are interrogated to
reduce the likelihood of repeat failure, thereby conserving resources and shortening the path to
improvement.
The No Child Left Behind initiative of ESEA is often discussed as having a negative
impact on School Improvement due to its punitive accountability measures. One punitive
measure that did show promise for School Improvement was the requirement that schools failing
to make adequate yearly progress (AYP) implement a whole-school reform plan. Whole School
Improvement Design, or Comprehensive School Reform, suggests that schools reassess
curriculum, school organization, patterns for meeting, and instructional programming to create a
coherent plan for how the school operates for success. Though fewer than 50% of the service
providers recommended by the federal government demonstrated success, the gains experienced
by minority and disadvantaged students
showed tremendous promise. Their positive impact can be attributed to more frequent federally
required oversight of service providers to ensure that these schools followed guidance for
whole-school improvement design (Husband & Hunt, 2015; Kidron & Darwin, 2007).
Best Practices of School Improvement
School Improvement efforts across the three phases beginning in the 1970s have shown
varying levels of success depending on the context of their application. One of the most valuable
lessons learned from these successes is the importance of differentiating what works for whom,
as opposed to what best fits the constraints of government policy (Harris, 2000; Nelson &
Campbell, 2019). There are many opinions about the best approach to implementing School
Improvement as a practice; however, there is no single best practice. Lim (2007) pointed out that the best
practice of School Improvement is the practical application of elements of School Improvement
taken from common practices across the three phases. Promising practices for the next era of
school improvement can be found in the Comprehensive School Reform model, root cause
inquiry practice, school improvement planning, and school leadership.
Comprehensive School Reform (CSR)
It is important to note that Title I of the ESEA of 1965 designated funding primarily to
low-income, high-minority schools with evidence of a Black-White achievement gap. Since its
inception, billions of dollars have been allocated to this effort. While the concept of
Comprehensive School Reform is not solely an ESEA program, it was an incentivized program
of the NCLB reauthorization of ESEA in 2001. In a telling example of fragmentation, the federal
Comprehensive School Reform Act ended only a few years later (Corey, 2009; Peterson, 2016).
Considering that School Improvement's primary focus is the elimination of performance gaps for
the most marginalized and disadvantaged students, CSR must be regarded
as a core best practice of School Improvement Research as no Title I initiative before or since
has had any sustainable level of impact on the Black-White achievement gap.
The failure of external organizations to implement CSR is less an indication that CSR is
ineffective than a roadmap to effective CSR. For CSR to be effective, it must be acknowledged
that quality implementation is both critical and difficult. Comprehensive School
Reform requires a cohesive alignment of its core elements: coordination of resources,
research-based methodologies, stakeholder alignment, staff capacity building, goal setting and
monitoring, supportive environments, community engagement, external support, and a
commitment to continuous improvement (Borman et al., 2002; van Elk & Kolk, 2016). Borman
et al. pointed out that perhaps most critical to its success is internal buy-in from the staff of
schools that seek to employ this strategy.
Root Cause Inquiry
Like the elements of CSR, many school improvement initiatives and tools are presented
as if they were straightforward solutions that simply need to be implemented with fidelity;
however, not enough credit is given to the complexities of the many approaches and moving
parts of these strategies (Feldholf et al., 2016). In fact, successful implementation of these
initiatives requires a substantial investment of time and financial resources if practitioners are
to reap the benefits of critical processes like root cause analysis in schools.
A 2021 study claims to be the first to explore the practice of Root Cause Analysis (RCA)
in education. The researchers assert that while root cause analysis is commonly used in other
disciplines, it is infrequently used by educators and even less frequently used in effective
alignment with strategic planning (Meyers & VanGronigen, 2021). Meyers and
VanGronigen suggested that schools, for lack of quality training and comprehension of RCA,
commonly fail to engage in the short cycle planning that would allow them to adjust the RCA
and strategies in real time. The ability to effectively benefit from School Improvement Planning
as a strategy is stunted without the use of RCA in short-cycle plan reviews.
School Improvement Planning
NCLB also required schools not making AYP to implement a School Improvement Plan
(SIP). While the sentiment of creating a plan to drive improvement is logical, NCLB failed to
provide the necessary oversight to ensure that SIPs were of the quality necessary to drive
improvement (Huber & Conway, 2015). The
guidance on these mandated SIPs did not include the level of specificity needed by schools to
address their specific challenges or the organizational processes that needed to be put in place for
school improvement. Nonetheless, schools continue to develop SIPs as a guide to drive
continuous improvement efforts. When incorporated with quality short-cycle inquiry that
includes RCA and opportunities to refine strategy, SIPs can be effective tools for equity.
School Leadership
As described by Feldholf et al. (2016), the complexities of School Improvement require
organization and leadership to manage effectively. While instructional leadership is often
elevated as the most important aspect of School Improvement, to manage the complexities of the
School Improvement process, leaders must engage various approaches to leadership including
shared leadership (Shaked, 2023). In How Organizational Management Supports Instructional
Leadership, Haim Shaked referenced a statement made by Grissom and Loeb (2011), “A more
holistic view of school leadership as necessitating skills across multiple dimensions, in
instruction but also in the management of the school as an organization, is important for
identifying the ways that principals can promote school improvement” (p. 61). This statement
perfectly captures the responsibility of school leaders to be strong across multiple dimensions of
leadership and the reality that leadership should be shared to be most effective.
School Improvement Research introduced the idea that strategies could be used to drive
improvement in schools. Although no single approach to School Improvement can be regarded
as the absolute solution for all schools, the common elements that have been most effective
across approaches share many of W. Edwards Deming’s key principles of Continuous
Improvement. Of Deming’s 14 industry-rooted principles, the most applicable and inspiring for
Continuous Improvement in education is to “improve constantly and forever the system” (Knouse
et al., 2009; Lohr, 2015).
Exploring the History of Continuous Improvement
The first signs of Continuous Improvement strategies in education appeared in the early
1990s as a method to assess progress, re-evaluate the effectiveness of programs against goals,
and organize for ongoing engagement in this type of improvement process.
Rather than fully engaging in Continuous Improvement as described by Deming and Shewhart,
education improvement practitioners gradually introduced concepts of Continuous Improvement
that had proven to be successful in industry and other social service sectors (Coburn & Penuel,
2016; Temponi, 2005). The following text provides a brief history of Continuous Improvement
in its industrial origins, adaptation in other social service sectors, and application in education.
Industrial Continuous Improvement
Considered the father of Continuous Improvement, Dr. W. Edwards Deming was a quality
management expert who believed improvements could be achieved by intentionally engaging in
iterative processes that moved companies toward their goals while minimizing the waste of time
and resources (Bhuiyan & Baghel, 2005; Temponi, 2005; Vinodh et al., 2021). Perhaps the most
notable success of Continuous Improvement was its adaptation in the Japanese automotive
industry. After World War II, Japan realized that it could not outproduce America in quantity, so
it recruited Deming to structure the industry around producing the best quality. Several years
later, Japan had not only established itself as a leader in the automobile industry but had also
become a leading practitioner of what it referred to as kaizen (Singh &
Singh, 2014). Singh and Singh described the kaizen approach to Continuous Improvement as a
never-ending journey toward improvement, a scaffolded approach to innovation, and a
participative practice.
The primary driver of Continuous Improvement success across industries was its
grounding in the scientific method and engagement of stakeholders. Central to any Continuous
Improvement effort modeled to maximize innovation, quality, and growth is the use of a cycle of
Plan-Do-Study-Act (PDSA) while taking employee input and experience into account (Knouse et
al., 2009; Temponi, 2005; Tichnor-Wagner et al., 2017). As opposed to a top-down approach to
management, Continuous Improvement requires a bottom-up approach that increases
opportunities for success by engaging employees at every level of the organization. The rationale
is that the answers to the challenges of an organization will be most visible to those on the front
lines (McLean & Antony, 2014).
Police Continuous Improvement
The success of Continuous Improvement in industry and manufacturing eventually led to
the adoption of this strategy for policing in the early 1990s. The ideology of Problem Oriented
Policing (POP) was based on the understanding that identifying problems in crime would be key
to developing strategies to reduce crime. Herman Goldstein developed the philosophy that crime
could be reduced if it were studied and a process similar to Plan-Do-Study-Act was practiced
(Halle et al., 2014; McGarrell et al., 2007; Peterson, 2005). According to Goldstein’s version of
PDSA, “analysis, study, and evaluation are at the core of problem-oriented policing” (Peterson,
2005, p. 10). This was the beginning of Continuous Improvement in policing.
Problem-Oriented Policing became part of a larger strategy for policing referred to as
Intelligence-Led Policing (ILP). ILP was an approach to policing that included a collection of
strategies based on the idea that effective policing to reduce crime would engage the community,
center solving problems, and commit to a cycle of inquiry for improvement (Lum & Koper,
2015; McGarrell et al., 2007). Intelligence-Led Policing sought to deliberately apply business
practices to policing, resulting in what is commonly referred to as Compstat. Though New York
is most notorious for the Compstat program, the first application of this Continuous
Improvement approach to policing was in New Orleans and Newark in 1997. Both cities led the
nation in crime reduction during that first year of implementation. Compstat's lever for success
was a four-step approach that included accurate and timely intelligence; rapid deployment that is
concentrated, synchronized, and focused; effective tactics; and relentless follow-up and
assessment (Halle et al., 2014).
Medical Continuous Improvement
Similar to the approach used by police to drive improvement, the medical system has also
adopted the scientific and business methods used in industry to improve the quality of care for
patients (Kilbourne et al., 2018; Kilo et al., 1998). Although Kilo et al. favored an emphasis on
systems necessary to ensure that improvement was not accidental, other practitioners have
asserted that the key to successful medical based Continuous Improvement relies most on the
connection to humanity and the engagement of medical staff (Chowanec, 1994; Stelson et al.,
2017). Though medical Continuous Improvement emerged first with an association with Total
Quality Management (TQM), it redefined the focus on quality products to an understanding that
their product was the well-being of patients (Chowanec, 1994; Salmond & Echevarria, 2017).
Appropriately, in the adaptation of medical Continuous Improvement, practitioners were
hyper-aware of the humans involved. In What Drives Continuous Improvement Project Success
in Healthcare, the authors point out that effective implementation of the strategy requires
professionals to place less emphasis on the business tools necessary for scientific engagement
and more on the need to build buy-in from staff, to build a culture that understands the mission
to improve, and to engage staff in the cycles of inquiry that inspire solutions to problems
(Chowanec, 1994; Stelson et al., 2017).
Continuous Improvement in Education
Like the adaptations of Continuous Improvement in other social science sectors,
education reform efforts do not stray far from the lessons of Deming’s industrial approach.
Though Continuous Improvement has been practiced for decades, the term itself is relatively new
in education spaces. Educators have essentially been practicing continuous improvement without
knowing it. Bush-Mecenes (2022) describes Continuous Improvement as the umbrella term that
encompasses elements of most of the strategies practiced to improve schools since the 1960s.
Specifically, Continuous Improvement encompasses the strategies that have attempted to reform
schools by engaging school communities in cyclical problem-solving processes designed to
address challenges more quickly with decreased repetition of failure as part of a school’s regular
routine and operations (Bryk, 2015; Park et al., 2013).
Continuous Improvement has been regarded as the most promising approach to school
reform thus far; however, there are clear challenges to implementing Continuous Improvement
that have created skeptics (Bush-Mecenes, 2022). The complexity of a system that requires
educators to engage collaboratively, using tools that may be foreign to their already time- and
resource-strapped environment, has made adoption challenging in ways not dissimilar to those
experienced by industrial practitioners (Bryk, 2009; McLean & Antony, 2014). Continuous
Improvement success is highly dependent on renewable resources to support the introduction of
Continuous Improvement to school stakeholders, the culture that is built around the mission to
improve, and the ability to build the capacity of teachers and administrators (Yurkofsky et al.,
2020).
Though Continuous Improvement has been considered a viable School Improvement
strategy for the last three decades, it is now revealing itself to be a promising practice for
addressing issues of equity and the response to the COVID-19 pandemic (Bush-Mecenes, 2022).
Why Continuous Improvement? Why Now?
In 2020, the COVID-19 virus created a worldwide health pandemic that exposed the
realities of educational and social inequities in the United States. Across the world, as a
preventative measure to slow the spread of the virus, schools closed for students. For
students already struggling to reach their potential against the threat of inescapable learning loss,
their struggles were exacerbated (Doyle, 2020; Simmons, 2021). Amid social unrest and calls to
address racial inequity, many citizens in the United States have expressed a desire for solutions
to these issues. In general, Continuous Improvement presents an opportunity to shift the focus
from accountability to impact; however, it also creates an opportunity to address the challenges
of equity and the impact of the COVID-19 pandemic.
Why Continuous Improvement?
The constant improvement focus of Continuous Improvement makes it a valuable
tool that educators can employ under any set of circumstances. While Continuous Improvement
cannot always predict the unpredictable, it does create a structured process for educators to
follow when new challenges emerge. Using the process, teachers can come to a common
agreement about the issues that they want to tackle, the outcomes they expect to see if they
address the issue, how they will measure success, the strategies they will use to address a specific
problem, the process they will use to assess if their strategy was successful, and adjustments they
can make to strategies in real-time (Shakman, 2020).
Why Now?
While programs designed to address specific challenges are limited to their narrowed
scope, Continuous Improvement provides a framework to be applied to an unlimited number of
challenges. The events that unfolded in 2020 demonstrate the value of having Continuous
Improvement as a resource when accountability metrics became less important than human
well-being. The top-down approach to managing educators became a serious point of contention
as teachers began to question the legitimacy of mostly White male leaders in the country making
critical decisions for students in communities that were foreign to them (Herman & Gribbons,
2001; Simmons, 2021). Continuous Improvement favors a bottom-up approach to identifying
challenges, creating solutions, and monitoring impact led by educators on the front lines.
Equity
If a school desires to address inequities in its learning environment and dares to set a goal
to eliminate them, the Continuous Improvement Framework provides a process for the school to
identify the root causes of the inequity, develop a strategy to address the root cause, set goals for
how and when inequities will be eliminated, and continuously monitor its progress toward equity
using data. In The Equity Imperative in Collective Impact, Kania and Kramer (2015) made a case
for how the use of Continuous Improvement can translate statements of intent into real actions
by breaking down improvement efforts into micro-processes that aggregate into actions that
address inequality.
The directness and transparency of Continuous Improvement provide a vehicle to address
inequity in any policy, program, initiative, or project, regardless of its intent. The 1965
Elementary and Secondary Education Act and each update through the most recent
reauthorization, the 2015 Every Student Succeeds Act (ESSA), were intended to address the
issues of inequity in America; however, in action, they are evidence of the entrenchment of
racism in the fabric of American society. The fragmentation of American education policy has
created an accountability-driven educational system that has consequently helped maintain
achievement, wealth, and access gaps (Bush-Mecenes, 2022; Herman & Gribbons, 2001).
Bush-Mecenes promoted a change in policy that favors an interest in Continuous Improvement.
She notes that under Obama's ESSA, a third of the states in the country have selected Continuous
Improvement as the primary tool they will use to reduce the opportunity gap.
COVID-19
According to Congressman Robert Scott of Virginia, in the aftermath of the COVID-19
pandemic, the federal Department of Education issued $31 billion in relief with guidance for
distribution to marginalized communities; however, elected representatives diverted those
resources from low-income public schools to wealthier private schools. At the same time that
teachers were reporting a 20% difference in access to technology between low-income,
predominantly Black schools and high-income, predominantly White schools, resources
necessary to prevent learning loss were being withheld from those most in need (Bozkurt, 2022;
Kazouh et al., 2020; US House of Representatives, 2020).
The gravity of the impact of COVID-19 on education can be better understood by
considering the extent of the uncertainty that now exists. NWEA, the testing company that sets
grade norms for many states, issued a statement in December 2022 that recent scores did not
reveal the full impact of COVID-19 due to the many unknowns. The uncertainty is so great that
the testing agency recommended assessing impact as much as possible now and then assessing it
again later (Bozkurt, 2022; Kuhfeld et al., 2022). Policy Analysis for California Education issued a brief
report in 2020 warning state educators of the risks of uncertainty and the need to avoid top-down
management. Myung and Kimner (2020) stated that “...rapid cycles of improvement are
essential for identifying approaches to address student needs. Even in such conditions, the
enabling organizational conditions for continuous improvement—shared purpose, mutual trust,
structures and resources that foster collaborative work, and the preparation and mobilization of
improvement capacities—are needed to facilitate improvement and organizational learning” (p.
1).
While the issues of inequity are not new but rather existing conditions that have been
exacerbated by the COVID-19 pandemic, Continuous Improvement has risen to become a
recognized and promising approach to improving opportunities for all students. Understanding
that the needs of all students are different, it is reasonable to assume that the needs of schools are
also different. Previous school improvement efforts have shown that there is no one-size-fits-all
solution; however, with guidance from past experience, the final section of this chapter shares an
adaptable framework for a best approach.
Best Practice Framework For Continuous Improvement Implementation
A survey of professionals working in Norway’s automotive industry, which has thrived
and remained competitive because of Continuous Improvement, concluded that there are
different approaches to implementing Continuous Improvement well (Holtskog, 2013).
According to Holtskog, the best approaches to Continuous Improvement vary from organization
to organization, department to department, and individual to individual. Schools across America
vary in much the same way, and in “Learning to Improve: How America’s Schools Can Get
Better at Getting Better,” the authors present Six Core Principles of Improvement as a best
practice, adaptable framework for developing and implementing Continuous Improvement in
schools (Bryk et al., 2015). The following section
examines each principle as a guide for exploring best practice opportunities for Continuous
Improvement.
Principle One
The first principle of improvement as described by Bryk et al. (2015) is to ensure that the
improvement effort is focused on solving the root of problems with the stakeholders involved at
the center of the problem-solving process (Wolcott et al., 2021). At times, individuals rely on
prior experience to assume the solution to a problem and skip the root cause analysis step to
confirm that the right issue has been identified before proposing a strategy. Good strategies are
developed by intentionally considering all of the contributing factors to the issue at a given time
and in a given situation (Zsambok & Klein, 1997). That is not to say that prior experience is not
relevant, but that prior knowledge is relevant when taken in context (Darling-Hammond, 2015;
Leithwood & Steinbach, 1992). Leithwood and Steinbach described a powerful model for
addressing problems that includes the steps of (1) interpreting the problem within the context of
the situation and stakeholders, (2) setting clear goals for the ideal outcome when the problem is
solved, (3) checking the goals against common principles and values like equity, (4)
acknowledging constraints, (5) engaging in a process for considering possible solutions, and (6)
assessing the effect.
Principle Two
The second principle of improvement suggests that the core of strategy and data analysis
involves a process of examining variation in outcomes to understand what works for whom and
under what circumstances (Bryk et al., 2015). In the early 2000s, the Data Wise movement
advocated for and supported the establishment of teacher teams using protocols to compare
student performance across and between grade levels (Bocala & Boudett, 2015; Boudett et al.,
2006). Engaging in this collaborative process of examining differences can help teachers broaden
their minds to the possibilities that exist to solve a problem (Runesson, 2006; Sun et al., 2015).
The comparison of data extends beyond teacher teams to school districts and the school system
across the country. As opposed to only comparing data vertically and horizontally, having access
to data over time creates the opportunity to better understand trends at a broader level with more
possibilities for solutions (Edchat, 2016).
Principle Three
The third principle of improvement states that educators should “See the system that
produces the current outcomes” (Bryk et al., 2015). This principle is somewhat more difficult to
implement on the ground where real-life challenges are more unpredictable and complex. This
especially becomes problematic when policymakers create policy without relevant
boots-on-the-ground research to back their assumptions (Slavin, 2002). Not only is there a gap in
the relevance of research to set the most effective policy, but there is also a gap in the research to
assess the impact of less informed policy on students at the local level (Kyriakides et al., 2015).
Whereas developing a strategy with proper insight from teachers who are most knowledgeable of
their particular teaching and learning circumstances can be inspirational to school
transformation, choosing an ill-informed strategy can have a fragmented impact on school
culture, teacher motivation, and the ability to effect change (Fullan, 2011; Molloy et al., 2020;
Nir & Ben Ami, 2005).
Principle Four
The fourth principle of improvement asserts that what cannot be measured cannot be
improved. This can become particularly challenging when educators are focused on
accountability metrics more than relevant local metrics. Often, accountability-based performance
metrics are set because access to them is simpler and more convenient from a systems
standpoint, even though they do not necessarily point to the actual issue in which they seek to
encourage improvement (Colyvas, 2012; Darling-Hammond et al., 2016). The result of focusing on
these metrics ranges from implementing an ineffective strategy to creating methods to improve
the accountability metric without creating the targeted improvement. The University of Chicago
created a new set of metrics that were better predictors of the outcomes that some of these
accountability metrics were intended to capture. The Five Essentials created effective drivers of
impact by measuring the
quality of school leadership, teacher collaboration, family engagement, environment, and
instructional intent (Spain & McMahon, 2016). Locally, these kinds of metrics are more
actionable for school improvement.
Principle Five
Bryk et al. (2015) described the fifth principle of improvement as a commitment to
implementing consistent short cycles of data inquiry rooted in the scientific approach of
Plan-Do-Study-Act (PDSA). Doing away with the practice of adopting and abandoning new
initiatives before properly interrogating their potential for impact is the path to reducing waste
and fully understanding what can and cannot work (Rohanna, 2017). While there is a challenge
to commit the time and effort to refine the process of PDSA, the practice of relentlessly engaging
in the cycles builds the capacity of schools to become more efficient and effective to the point of
realization of their efforts (Rohanna, 2017; Zangwill, 1988). The primary challenge of will and
capacity is overcome by the rewards created through engagement in short-cycle inquiry when
educational leaders and policymakers support the efforts with time and resources
(Tichnor-Wagner et al., 2017).
Principle Six
The final principle acknowledges that educators can move forward better and faster
together. The long-standing practice of establishing local Professional Learning Communities
(PLCs) has shown evidence of teacher capacity improvement, however, it has failed to show
significant evidence of school and systems improvement. It is in the connection of smaller PLCs
to larger Networked Improvement Communities (NICs), often research institute-supported
networks of schools and districts, that larger levels of improvement can be achieved (Pregner et
al., 2017; Proger et al., 2017). Networked Improvement Communities consist of educators,
administrators, and researchers who can collaborate using data on larger sets of problem cases.
The collaborative nature of sharing challenges, designing strategies, and analyzing results allows
NICs to learn and improve much more quickly.
Conceptual Framework
This study will use Clark and Estes’ (2008) gap analysis model as its conceptual
framework. The basis of the gap analysis model directly correlates with the ultimate goal of this
study, which is to provide a promising practice for performance improvement in public schools.
Clark and Estes’ gap analysis model asserted that organizational improvement can be achieved
by optimizing controllable inputs of knowledge (K), motivation (M), and organization (O)
(Clark & Estes, 2008). Referred to collectively as KMO, these inputs are the root of a model of
performance maximization that identifies opportunities to adjust levels of knowledge,
motivation, and organizational efficiency. In the case of this study, KMO is the framework used
to assess best practices used by a school that has shown significant improvement as a result of
optimization of employee knowledge of Continuous Improvement, motivation to improve and
implement improvement strategies, and organizational effectiveness.
According to Clark and Estes (2008), if an organization desires performance
improvement, there must be an understanding of where the opportunity for improvement is. The
gap analysis model categorizes those opportunities into three manageable buckets for
observation. In observing the gap that exists between the current performance level of
individuals or an organization and expressed goals for the future, utilization of the gap analysis
model creates an opportunity to implement strategies that will bridge those gaps. In the gap
analysis process, knowledge, motivation, and organization are the constants used to guide the
investigation.
When referring to knowledge, the framework operates with the understanding that
employees are not capable of implementing what they do not understand. Regardless of the
reason for the lack of knowledge, the learning demands of individuals must be met in order for
them to reach the expected level of performance. Clark and Estes (2008) suggested that there are
steps that can be taken to enhance understanding and skills. Similarly, this study will seek to
understand how a high-performing school has approached the challenge of meeting the learning
demands of staff. Specifically, how has the school leadership shared information, provided job
aids, and provided training and/or education?
The gap analysis model also suggests that a lack of employee motivation may be a
contributing factor to challenges in performance. According to Clark and Estes, it is not
unnatural for employees to experience barriers to motivation; however, if improvement is
desired, there must be an effort to understand the nature and origin of the motivational challenge. The
effort to address issues of motivation should take into account ways that individuals motivate
themselves as well as ways that organizations can motivate employees. In this study, motivation
is examined with heightened sensitivity to how organizations have responded to the culture that
often leads to the disengagement of educators.
While knowledge and motivation are factors that individuals are primarily responsible for
shifting, the processes, procedures, and resources supporting the change are more directly the
responsibility of the organization. Organizational processes that do not support employees can
have a deep impact on the culture of an organization and diminish the opportunities for success.
The structure of an organization can have a psychological impact on how employees perceive the
culture of the organization and, in turn, their willingness to promote a positive climate (Schneider et
al., 1996). This study also focused on understanding how the daily practices and policies of the
school create a culture of willingness to engage in school improvement efforts.
The conceptual map below (Figure 1) illustrates how the use of the gap analysis
framework begins with the assessment of a school’s current state relative to its goals. Using the
KMO model, the differences between the school’s current state and desired future states are
examined for improvement opportunities. Based on this gap analysis, interventions are designed
and implemented and performance improvement occurs.
Figure 1
Conceptual Map
Summary
This promising practice study provides an example of a low-income school that has used
Continuous Improvement strategy to improve the academic performance of its students. Though
Continuous Improvement is not entirely new, this literature review provides historical context to
its emergence as a preferred school improvement approach for schools and school districts that
seek to improve their approach to support their most vulnerable students. Given that Continuous
Improvement is a collection of best practice approaches to school improvement, this chapter
begins with the origin of school improvement itself by defining the what and how of measuring
improvement in schools. The research then dives deep into the policy that has shaped and
impacted the School Improvement Research movement. As Continuous Improvement is a
strategy that has been borrowed from the industrial sector, the literature review examines the
origins of Continuous Improvement as well as adaptations of Continuous Improvement in other
sectors. Then the research explores why Continuous Improvement is more important now, in
light of the COVID-19 pandemic and the racial uprisings that exacerbated issues of inequity in
education. Before explaining how the Clark and Estes gap analysis model was used to frame this
study, a guide for best practice implementation of Continuous Improvement was shared.
CHAPTER THREE: METHODOLOGY
This chapter reviews the methodology used to study the insights from a school that has
successfully implemented Continuous Improvement. As a best practice study of a low-income
public school that has shown significant improvement based on academic assessments, this study
will take a qualitative approach to capture data that can be used to understand how school
teachers approached and responded during the implementation of Continuous Improvement. This
chapter will provide an overview of the research questions, the methodology used for data
collection, and the setting of the research. Additionally, this chapter will provide insight into the
researcher, the research subjects, and ethical considerations.
Research Questions
1) What is teacher knowledge and motivation for Continuous Improvement practices?
2) How does a school’s culture and climate impact teachers’ ability to implement
Continuous Improvement effectively?
3) What recommendations do teachers have around knowledge, motivation, and school
organization that may help other schools increase their likelihood of successfully
implementing Continuous Improvement systems?
Overview of Design
This research project was conducted as a qualitative study focused on teachers in
low-income schools who were implementing continuous improvement successfully. As a best
practice study intended to provide recommendations for similar schools so that they too could
improve, teachers were selected as the best subjects for this study. Teachers could provide
first-hand insights that would be useful for other teachers. As a qualitative study, nonprobability
sampling was used to delve deeper into the topic with specific experiences (Merriam & Tisdell,
2016). Because the goal of this research was to gather useful insight from a specific scenario in
low-income schools, this was a purposeful sampling (Merriam & Tisdell, 2016).
The primary method of research was conducted using teacher interviews. Considering the
Clark and Estes Gap Analysis Framework, the questions asked were structured around the
concept that performance improvement could be viewed through the lens of knowledge,
motivation, and organizational influences (Clark & Estes, 2008). Being the case, interview
questions were segmented into three primary categories intended to collect the interviewee's
perspectives of Continuous Improvement in their schools within the framework of KMO.
- Knowledge: the first set of questions captured perspectives on how knowledge of
Continuous Improvement was transferred to teachers and how it was received
- Motivation: the second set of questions captured perspectives on what kept teachers
motivated to implement Continuous Improvement
- Organization: the final set of questions captured the understanding of how the school’s
structure supported the implementation and sustainability of Continuous Improvement
Table 1
Data Sources

Research Questions | Method 1 (Interviews)
[RQ1] What is teacher knowledge and motivation for Continuous Improvement practices? | X
[RQ2] How does a school’s culture and climate impact teachers’ ability to implement Continuous Improvement effectively? | X
[RQ3] What recommendations do teachers have around knowledge, motivation, and school organization that may help other schools increase their likelihood of successfully implementing Continuous Improvement systems? | X
Research Setting
The setting for this research study was a school in a large public school district in
the United States. The study focused on a school located in an urban area with a diverse
population of students; however, the school studied had a larger population of low-income,
Black, and Latino students. Given that school improvement efforts emerged from the Elementary
and Secondary Education Act of 1965, which focused on improving efforts to support
marginalized students, the school used in this study had at least 60% of its student population
identified as Black, Latino, and/or low-income. The racial achievement gap had again widened
by as much as 20% in recent years and was closely associated with income achievement gaps
(Baker et al., 2016). This study examined the impact of Continuous Improvement on student
performance in schools that reflected the challenges of poverty on students of color.
The school used in this study was a member of a school district that had actively
attempted to implement Continuous Improvement in its schools. The school had shown evidence
of measuring the degree to which Continuous Improvement had been implemented by teachers.
The participants of this study had been teachers at the school during the period when the degree
of Continuous Improvement implementation had been tracked. The schools that were considered
for participation in this study had at least eight teachers who met this qualification. Additional
information about the participant group is provided in the participant description section
below.
The Researcher
At the outset, it was important to note that, as the primary instrument for data collection, I
might have had some bias. My objective was not to eliminate the bias that naturally existed due
to my positionality in this study but to monitor my bias to ensure that it did not interfere with my
interpretation of the research or findings in data (Merriam & Tisdell, 2016). As a Black educator
with a public, inner-city, elementary, and high school education, I did have a perspective on the
quality of education in public schools. Additionally, having directed much of the effort to
implement Continuous Improvement in a large public school district, I had some pre-existing
notions on best practices for implementing Continuous Improvement. While I would naturally
like to believe that my prior knowledge was correct, I had come to terms with the notion that
because I had seldom seen the quality implementation of Continuous Improvement, my prior
knowledge might have largely been irrelevant to this study. As an educator who was deeply
motivated to see the impact of Continuous Improvement on African American students, I was
mindful of my potential bias toward or against findings and remained mindful of my potential to
bend toward my desires.
In my interactions with fellow educators, it was common for my expertise and knowledge
of Continuous Improvement to be recognized. As the first hire in the large-scale Continuous
Improvement investment of the ten largest school districts in the country, it was important for me
to limit the insertion of my prior experience in my interactions with interviewees as a means of
reducing discomfort (Whalen, 2020). As mentioned by Whalen, my calm demeanor had been an
asset to building trust and buy-in. I intended to continue to use this asset to build trust with my
interviewees. I was careful to build the kind of rapport with interviewees that was necessary for
them to be open and honest. I remained cognizant of my positionality and how it could
potentially influence subjects to be open or to be guarded. My objective was to assure subjects
that my experience did not equate to having answers. My rapport-building reinforced my
dependence on them as the experts in the telling of their authentic experiences in the
implementation of Continuous Improvement in their school.
As a former data strategist for a large public school district, I had deep
knowledge of the workings of schools and school districts. Having served hundreds of schools in
their implementation of Continuous Improvement and data readiness, I had a thorough
understanding of the education space and a higher likelihood of understanding the experiences of
the school and teachers that I studied. This prior experience and my master’s level education in
Predictive Analytics served well in the analytical portion of this study. In addition to engaging
my skill in analyzing data, I also had colleagues in the field who could serve as resources to
ensure the accuracy and validity of the study.
Data Sources
The primary source of data for this study was interviews. The data that was captured via
interviews was aligned with the research questions of this study. Responses were analyzed and
categorized as factors for the Knowledge, Motivation, or Organizational indicators of the Clark
and Estes Gap Analysis model. The following sections provide details on the interviews,
participants, data collection, and analysis processes of this study.
Interviews
This study interviewed teachers from a school that had experienced improvement due to
the implementation of Continuous Improvement as a standardized practice of inquiry. All
participants were asked the same (or very similar) questions based on the interview questionnaire
developed in advance of the interviews. The interview protocol used for this study was a
digital-based document that included space to record notes and quotes for each question in the
interview. Interviews were recorded using Zoom to capture the transcription of the interview;
however, the video feature was not engaged. The interview itself was semi-structured to allow
for flexibility in the questions with consideration that each subject might have had different ways
of processing information and telling their story (Merriam & Tisdell, 2016).
The interview questions were directly aligned with the Clark and Estes Gap Analysis
Framework. Consisting of 12 items, the questions helped guide subjects in structuring their
responses to the KMO conceptual framework. This made it easier for the researcher to code
responses as contributions to the theory that best practices aligned with KMO would produce the
desired outcome. The questions were primarily open-ended to allow subjects to be flexible and
personal in their responses (Patton, 1990).
Participants
As a qualitative study, nonprobability sampling was used to delve deeper into the topic
with the specific experiences of the participants (Merriam & Tisdell, 2016). While the population
of teachers at schools showing improvement due to the use of Continuous Improvement would
have been highly informative, the participants in this study represented a sampling of the
population. As the goal was to gather useful insight from a specific scenario in low-income
schools, this was a purposeful sampling (Merriam & Tisdell, 2016). Because the research was a
best practice study intended to provide recommendations for similar schools so that they too
could improve, teachers at these target schools were the best subjects as they could provide
first-hand insights that would be useful for other teachers.
Recruitment for these interviews followed a convenience sampling approach. Recruitment began
by identifying a pool of schools that had experience with Continuous Improvement before the
COVID-19 pandemic and that had experienced academic improvement according to published
performance data. From this pool, schools with at least ten qualifying teachers who could be
included in the study were identified. The final selection of the school was
based on the largest levels of improvement among schools agreeing to participate in the study.
Instrumentation
In keeping with the guidance of Patton (2002), the interview protocol used for this study
followed the interview guide approach. In this approach, the researcher had guiding questions that were
structured to capture all pertinent information. The structure of the questions in this interview
protocol allowed for data collection to be structured and consistent. The protocol that was used in
this study allowed the interview to remain conversational as the researcher had a guide but was
also able to jump around in the protocol to keep the conversation feeling natural. Given the
potential sensitivity of the teachers being interviewed, the interview protocol needed to remain
non-threatening. For this reason, the range of the types of questions asked in the protocol was
diverse. The protocol followed the guidelines of the six types of questions that were useful in
stimulating the desired type of response (Patton, 2002).
The interview protocol was segmented into five distinct sections that aligned with the
qualitative research guidance of Merriam and Tisdell (2016). The first section of the interview
protocol was conversational and nonconsequential to data collection for analysis; however, this
portion of the protocol was essential to capture information about the interviewee and to build
rapport. The next three sections of the protocol were the primary data sources for the study.
These three sections consisted of four main questions each, capturing information about the
interviewees’ Knowledge, Motivation, and Organization experiences. The final
section of the protocol created an open-ended opportunity for the researcher to re-engage with the
interviewee if needed.
Data Collection Procedures
The interviews for this study were conducted in person at the site of the school being
researched. Zoom was used to capture the transcription of each interview; however, the
transcription was not retained as an artifact for this study once the data were coded
and analyzed. The interviews were intentionally conducted at the convenience of the
interviewees, as teachers tended to be quite busy. While Weiss (1994) pointed out that interviews
should be no less than half an hour and may run as long as 8 hours, the interviews in this study
fell between 30 and 45 minutes. This ensured that the interviewees had enough time to
adequately answer the questions without encroaching too much on their teacher preparation time.
The transcriptions of the interviews were saved on my private computer. Interviewees were
informed that the recorded data would not be shared without their permission and would
be destroyed after the data were collected and analyzed. At the interviewees' discretion, Zoom
could be used to conduct the interview remotely for their comfort and convenience.
Data Analysis
The questions asked during the interview process were designed to provide information
that could be used to assess the degree to which the Continuous Improvement model was being
implemented effectively. When implemented effectively, there were supposed to be positive
indicators of efforts to improve the acquisition of knowledge, the activation of motivation, and
the building of personal and group confidence in the support from the organization (Clark &
Estes, 2008).
Clark and Estes (2008) provided guidance on how to assess effective knowledge sharing with employees
via a knowledge and skills solution requirements checklist. This checklist was reformatted to
capture the experience of the interviewee during the Continuous Improvement implementation
process at their school. The checklist assessed the quality of information and expectations
shared, job aids provided, and the impact of coaches and training. In total, this portion of the data
analysis tool provided six data points for review. Motivation was assessed using guidance
provided by Clark and Estes on the work environment elements that destroy work motivation,
elements for increasing motivation, individual confidence builders, and team confidence
builders. This portion of the data analysis tool provided seven data points for review.
Organizational influence was evaluated using Clark and Estes’ five types of support necessary
for most organizational change processes.
The 18 data points found in Appendix B of this text were used to provide metrics for each
element of KMO, but they were also used to provide an overall assessment of the performance of
the Continuous Improvement implementation system according to guidance from Clark and
Estes (2008). The data points were useful in identifying the specific gaps that could be targeted
for improvement.
Validity and Reliability
I used several strategies to assure consumers of this research of the validity and reliability
of the study. The first strategy was ensuring that the design of the research aligned with a trusted
process (Merriam & Tisdell, 2016). In this case, I ensured that the interview questions were
aligned with a known theoretical framework. Aligning the interview questions to a known
framework created a standard for how the collected data would be examined consistently.
Additionally, in my analysis of the collected data, I used triangulation to ensure validity. The
decoding of the data could potentially have been somewhat subjective, so I engaged multiple
unbiased data analysts with experience in education to assist in ensuring that the process for
decoding the data was accurate and consistent. As opposed to collecting data from one source
(one interviewee), I collected from multiple interviewees with multiple truths (Merriam &
Tisdell, 2016). Additionally, I was overly cautious in my practice of reflexivity. While I did have a
vested interest in the improved implementation of Continuous Improvement in schools, I
remained supportive of teacher and school challenges in their ability to implement Continuous
Improvement. My research was done with a preference toward literature that provided useful
insight for those who were interested in supporting teachers and schools in their improvement
efforts. In my effort to avoid being solipsistic, I was very cognizant of my positioning and
self-checked my bias by placing the larger need for improvement above my ego (Merriam &
Tisdell, 2016).
Ethics
Though the public institutions being studied and the institution sponsoring this study had
strict ethical guidelines, it was primarily my responsibility to ensure that this research study was
conducted ethically (Merriam & Tisdell, 2016). For me, it was important that teachers continued
to feel safe and supported in conducting their Continuous Improvement work and sharing their
experiences. Because there was a delicate balance between the trial and error of Continuous
Improvement and the accountability of school improvement mandates, teachers might have
naturally felt that any inquiry into their Continuous Improvement efforts could be used against
them at some point. For this reason, the first communication with participants in this study
reinforced the commitment to the anonymity of participants. The study excluded any indicators
of school districts, schools, participants, and students. Participants were assured that their
participation was voluntary, and they had the option to end their participation in part or in total at
any point in the study.
Limitations and Delimitations
The primary limitation of this study was the pool of participants. Since the COVID-19
pandemic, many teachers have opted to leave the field of education. This study sought to capture
the teachers who were teaching in school before the pandemic, so there might have been some
school personnel changes that caused me to adjust school participant selection decisions. Also,
following the COVID-19 pandemic, school performance data might have been skewed. This might
have had an impact on assessing how ongoing school and student improvement was defined.
Additionally, there might have been some limitations to the way that participants responded to
questions. Due to some potential fears about accountability, some teachers might have been
consciously or subconsciously motivated to inflate positive responses on behalf of themselves
and/or their schools.
The delimitations of this research study were related to the questions that were asked.
Questions were carefully asked to elicit open responses that could be interpreted by the
researcher to limit instances of inflated responses. While I did believe I was skilled and
experienced enough to adequately interpret responses, I had to acknowledge my own bias to
ensure that I did not inflate my interpretations. Additional delimitations included the school
selection process. Few schools had evidence that they had effectively been implementing
Continuous Improvement, so I intentionally limited my selection of schools to school districts
and schools that could show some evidence that they had been implementing and measuring their
degree of implementation of Continuous Improvement.
CHAPTER FOUR: FINDINGS
This best practice study intends to identify how urban schools serving predominantly
low-income, Black, and Latinx students achieved incremental and consecutive years of academic
growth before the COVID-19 health pandemic relative to schools of similar demographics. The
study examines a single school, Shabazz Leadership Academy (SLA), in a large urban school
district. The school name, Shabazz Leadership Academy, is a pseudonym used to protect the
identity of the school and its staff. Growth is measured according to publicly
accessible school reports that consolidate standardized assessment and qualitative measures of
performance in attendance, reading attainment, math attainment, reading growth, math growth,
teacher perception, parent perception, and student perception. The study intends to support an
improved understanding of how this school leveraged Continuous Improvement practices to
drive achievement for students as a means for using this new understanding to support similar
schools in driving achievement for their students. Using Clark and Estes’ gap analysis
framework, this study examines the use of Continuous Improvement knowledge, motivation, and
school organization as critical levers for driving achievement. This chapter first reviews the
stakeholders included in this study and their positions at Shabazz Leadership Academy. The
chapter then reviews the results of the interviews conducted with teachers and the interview
analysis completed using Clark and Estes’ KMO assessment tools. These findings will be shared
in alignment with their relationship to the following research questions:
1) What is teacher knowledge and motivation for Continuous Improvement practices?
2) How does a school’s culture and climate impact teachers’ ability to implement
Continuous Improvement effectively?
3) What recommendations do teachers have around knowledge, motivation, and school
organization that may help other schools increase their likelihood of successfully
implementing Continuous Improvement systems?
The research questions above were the basis for the interviews that were conducted with
teachers at Shabazz Leadership Academy. The 10 primary interview questions were designed to
capture responses specific to the interviewees’ perception of knowledge, motivation, and
organization as it relates to Continuous Improvement practices in their school. Additional
probing questions were designed to capture additional qualitative evidence of the degree to
which Continuous Improvement knowledge was transferred, motivating factors that supported
the implementation of Continuous Improvement, and how the school operations were adjusted to
support the demands of Continuous Improvement. Eight teachers participated in interviews at
Shabazz Leadership Academy. The eight teachers interviewed included teachers directly and
indirectly involved in the maintenance of the school's Continuous Improvement
systems across grade bands and disciplines. The interviews took approximately 45 minutes to
complete and were conducted in person.
Stakeholders
The stakeholders included in this study represented a sampling of teachers at Shabazz
Leadership Academy. The school has a total population of 26 full-time teachers providing
instruction for prekindergarten to eighth grade. Shabazz Leadership Academy currently serves
289 students, a decline from the 419 students served before the pandemic in the 2019
school year. Of the 289 students at Shabazz Leadership Academy, 97.2% of the students identify
as Black or African American while 2.4% identify as Hispanic or Latinx. 73.7% of students at
SLA are categorized as low-income and 22.5% of students are categorized as having additional
learning needs. Shabazz Leadership Academy is among the 16% of schools in its school district
that met the qualifications for inclusion: uncommon performance growth, a predominantly Black
and Latinx student population, and a low-income student population. This study limits participation to schools that
showed growth before the pandemic to limit performance abnormalities related to post-pandemic
interventions and data availability. Study participants were employed by the school but may not
have been operating in a teaching capacity. The next section will provide additional information
about teachers who participated in interviews.
Interview Participants
The qualitative interviews were conducted with teachers at Shabazz Leadership Academy
who were employed by the school at some point before 2020 and who were employed as
teachers at the school for at least two years since 2020. Eight teachers were interviewed to
capture a significant sample of teacher sentiment. The teachers included in this study represented
a diverse sampling of Shabazz Leadership Academy. Continuous Improvement implementation
is often managed by grade-level cohorts, therefore, the study sought to capture the experiences of
teachers from different grade bands. Figure 2 shows the population included in the study by the
grade bands of prekindergarten to second grade (37.5%), third to fifth grade (37.5%), and sixth to
eighth grade (25%).
Figure 2
Interview Population by Grade Band
Teachers who participated in the study had diverse teaching backgrounds in terms of
years spent teaching overall and the number of years spent at Shabazz Leadership Academy.
Table 2 shows the length of time each teacher has been teaching in their career and the
number of years each has been a part of the SLA community. Teacher names have
been coded for primary, elementary, and middle school grades. Primary grade teachers included
in this study have an average of 12 years of teaching overall and 17 years of experience at SLA.
Elementary teachers have an average of 13 years of teaching overall and 10 years at SLA.
Middle School teachers have an average of 15 years of teaching overall and 15 years at SLA.
The average number of total years teaching was 13 and the average number of total years
working for SLA was approximately 14 years. The minimum number of years teaching overall
was 3 years, and the maximum number of years at SLA was 25 years.
Table 2
Years Teaching Versus Years Teaching at SLA
Teacher Years teaching Years at SLA
P1 15 8
P2 18 18
P3 3 25
E1 3 5
E2 12 9
E3 25 15
M1 4 6
M2 25 24
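
The grade-band and overall averages reported above can be reproduced directly from the values in Table 2. The short sketch below is illustrative only; it assumes the narrative reports simple arithmetic means rounded half-up to whole years, which matches the figures given.

# Illustrative sketch (not part of the study's analysis): reproduces the
# grade-band averages reported in the narrative from the Table 2 values,
# assuming simple arithmetic means rounded half-up to whole years.
years = {
    "P1": (15, 8), "P2": (18, 18), "P3": (3, 25),   # primary grades
    "E1": (3, 5),  "E2": (12, 9),  "E3": (25, 15),  # elementary grades
    "M1": (4, 6),  "M2": (25, 24),                  # middle school grades
}

def round_half_up(x):
    return int(x + 0.5)

def mean(values):
    return sum(values) / len(values)

for band, prefix in [("Primary", "P"), ("Elementary", "E"), ("Middle", "M")]:
    rows = [v for k, v in years.items() if k.startswith(prefix)]
    teaching = round_half_up(mean([r[0] for r in rows]))
    at_sla = round_half_up(mean([r[1] for r in rows]))
    print(f"{band}: {teaching} years teaching, {at_sla} years at SLA")

print("Overall teaching:", round_half_up(mean([r[0] for r in years.values()])))  # 13
print("Overall at SLA:", round_half_up(mean([r[1] for r in years.values()])))    # 14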
Results and Findings
The results and findings section reports on the findings of the interviews and interview
analysis protocols. The results and findings are divided into three sections aligned with the
research questions of this study. This study found that three unique themes related to
knowledge, motivation, and organization emerged from the first two research questions. Table 3 shares
these themes and the conceptual framework that will be used as a lens for exploring the themes.
These themes and their subthemes will be explored more deeply in the first and second research
question sections.
Table 3
Themes Aligned to Research Study Questions

Title    Theme                                Research question                                                                                                  Conceptual framework
Theme 1  Learned or Led Knowledge             What is teacher knowledge and motivation for Continuous Improvement practices?                                    KMO
Theme 2  Intrinsic and Extrinsic Motivation   What is teacher knowledge and motivation for Continuous Improvement practices?                                    KMO
Theme 3  Learning to Lead Environment         How does a school's culture and climate impact teachers' ability to implement Continuous Improvement effectively? KMO
Findings: Research Question 1
What is teacher knowledge and motivation for Continuous Improvement practices?
Knowledge
Richard Clark and Fred Estes (2008) assert that increasing knowledge is the key factor in an
organization's ability to drive improvement. This study seeks to understand how a large
urban school serving predominantly Black and Latinx students increases knowledge of
Continuous Improvement practices as a lever for growth in their school. The interview questions
associated with knowledge ask teachers at Shabazz Leadership Academy to consider how
Continuous Improvement was transferred to them and their peers. As seen in Table 4, an analysis
of the interviews with teachers of SLA showed that 100% of teachers expressed that there was a
limited amount of direct training provided on Continuous Improvement, and the majority of their
knowledge transfer was a result of hands-on engagement in Continuous Improvement practices.
Three-fourths of the interviewed teachers expressed that they benefited from peer collaboration.
Thematically, Shabazz Leadership Academy faces the question of whether knowledge is best
transferred via direct learning opportunities or whether teachers should be led through immersion
in Continuous Improvement activities. In the interviews, as evident in teachers' discussion of their
limited direct training versus their hands-on and peer collaborative learning opportunities,
teachers often found the experience to be overwhelming. Five of the
eight teachers interviewed used the word “overwhelming” specifically to describe their learning
experience. One teacher said their initial reaction to their Continuous Improvement learning was
like “they dropped a bomb.”
Table 4
Knowledge of Continuous Improvement Means of Acquisition Indicator Count
Learned or Led
Teachers Limited training Hands-on Peer collaboration
P1 4 3 2
P2 1 2 n/a
P3 4 3 1
E1 7 3 1
E2 4 1 1
E3 3 1 n/a
M1 3 2 1
M2 4 4 1
Limited Training
Clark and Estes (2008) make the point that for knowledge to be transferred successfully, training must be
provided in a way that makes clear to employees how to execute expectations in the sequence
in which the job is to be performed. Additionally, goals and objectives should be clear,
exemplars should be provided, and feedback should be shared. Based on the interviews with
teachers at Shabazz Leadership Academy, this approach was inconsistent. All teachers expressed
that they had received limited direct training that met these criteria. Based on the interviews, there
were three reasons why training was limited: insufficiently trained leadership, varied entry points into
Continuous Improvement teams, and limited time available for training.
Interviewees felt that training was limited because school leadership lacked the capacity
to provide sufficient training to teachers. The transfer of knowledge of Continuous
Improvement practices was primarily led by the school administration. Teachers expressed that
the administration was given limited training themselves with the expectation that they would
then lead their school teams in implementing Continuous Improvement through strategic plan
development, grade-level team meeting design, and Instructional Leadership Team (ILT)
improvement management with no resources beyond exemplars of other school work plans.
These plans did not include guidance for strategy implementation, cadence for review, or
guidance for evaluation of impact.
Teachers interviewed from Shabazz Leadership Academy had varying degrees of
experience with Continuous Improvement training and engagement. While all teachers are
expected to engage in the Continuous Improvement processes at SLA, two of the eight teachers
interviewed had not received any degree of direct training on Continuous Improvement as they
had not served on the Instructional Leadership Team. The other six interviewees had served on
the ILT at some point in their tenure at SLA, though they may have had varying degrees of
knowledge of Continuous Improvement based on the point at which they were engaged with
Continuous Improvement as a member of the Instructional Leadership Team. One teacher
expressed that they had not received formal training while they were on the ILT but that they
were sure that current members received more formal training which had not been transferred
effectively to others in the school.
A large factor contributing to the limited training on Continuous Improvement at Shabazz
Leadership Academy was the challenge of time dedicated to improvement strategy. When
discussing their initial reaction to Continuous Improvement, teachers expressed that they felt
overwhelmed largely due to the magnitude of the effort and the amount of time that is needed
outside of classroom instruction. Teachers expressed that their time was already limited and time
specifically on Continuous Improvement felt overwhelming. One teacher stated, “I am a firm
believer that the reality is that education was 100% built on the backs of like, unpaid [teachers],
after contract work hours." They also stated that, given the time they would have to commit to the
effort, they would not support the school's established improvement efforts if they were not
compensated.
Hands-On Training
While direct training is regarded as a critical component to transferring knowledge,
Shabazz Leadership Academy used a hands-on approach to transfer knowledge while building
systems of Continuous Improvement. All of the interviewees indicated that their Continuous
Improvement training was largely hands-on. According to Clark and Estes (2008), this is more
typical with smaller training groups and dependent on the level of unconscious expertise held by
the trainer. In the case of SLA, evidence from interviews shows that the trainer opted for a
trial-and-revise approach rather than a more structured cognitive task analysis approach. As seen
in Table 5 below, seven of the eight interviewees expressed that training was mostly hands-on
and there were opportunities to receive feedback for corrective action. In one teacher’s
description of Continuous Improvement training, they stated, “I hate to say it this way, but it was
like, you know, you just get thrown in and you got to, you know, catch up. You got to catch on,
you know, you got to. You know…and that was pretty much it.” The teacher described a training
situation that was too busy for “hand-holding.” They did state that if they needed additional
assistance, they knew they would be supported and they knew the process for getting support,
however, they ended their discussion of the matter with, “You just kind of gotta fit in where you
can get in. And that's just how it is.”
Table 5
Indications of Hands-on Training with Feedback on Objectives
Teachers Hands-on Received feedback
on objectives
P1 x x
P2 x x
P3 x x
E1 x x
E2 x x
E3 x n/a
M1 x x
M2 x x
While feedback was offered to teachers from the school administration, teachers still
considered the training process to be frustrating. One teacher referred to the process as being a
“trial by fire,” referring to an experience of having to learn about the Continuous Improvement
process while actively writing the school’s improvement plan. Other teachers experienced some
doubt in their ability to lead improvement due to their uncertainty of the process. One teacher
opted to disengage from the school’s improvement efforts completely, citing a loss of faith in
their ability to deliver on the goals of improvement given the complicated and shifting practices
of the schools. The same teacher did make a point that the challenge was not a loss of faith in the
school administration as much as it was distrust for the school district’s requirements for
improvement practices without suitable resources to support the school’s learning demands.
Three teachers interviewed attributed their ability to expand their knowledge of Continuous
Improvement to the collaborative learning communities that they created with their peers.
Peer Collaboration
The interviews conducted supported the notion that in the absence of explicit training for
teachers, learning could be led by peer collaboration. One teacher describes the role they were
given as one member of a five-member team tasked with leading learning cycles for their peers.
They stated, “I'm in charge, and then I bring it back to the team and we discuss, okay, ‘What
changes can we make?’ And then we go back to the teachers, and say, ‘What if we try it this
way?’ So like, it's just a big process of different steps.” These teachers are leading their own
learning through a peer-led, hands-on, collaborative, trial-and-error process. As seen in Table 4,
though not explicitly asked in the interviews, six of the eight teachers offered that learning was
often facilitated by peer collaboration. Teachers identified collaboration in planning, supporting,
and developing structures for Continuous Improvement.
As stated in the Six Core Principles of Improvement, networked communities can move
forward faster when they move forward together (Bryk et al., 2015). This approach can be seen
in the ways that teachers collaborated at SLA to increase their knowledge of Continuous
Improvement. One teacher expressed that while they were frequently uncertain about their
implementation of improvement practices, they were certain that they could rely on their peers to
provide feedback from their understanding and experience. This allowed them to bridge the gaps
between what they learned hands-on and what they had not experienced. Teachers were able to
plan together and design strategies that had previously been tested by their peers. In addition to
planning within the SLA environment, teachers shared that they were also relying on teachers
from other schools to check their understanding during the planning phase for their school.
Though this network across schools was formal, there were also internal systems that were
beneficial in building teacher knowledge.
One of the less experienced teachers shared, “Now, in the event that, you know, I had
questions or I didn't know what to do. I knew that I had a support system here.” The teacher went
on to express that having a knowledge gap due to not having a robust training experience was not
a concern to them because they knew that they could depend on their peers to support their
learning needs. This informal training network provided some means for new teachers to keep up
with the learning demands of Continuous Improvement, however, teachers still expressed
concerns about not being able to lead improvement in a timely manner. Despite the support
available from more experienced teachers, there was still a need for trial and error in learning as
“you had to fit in where you could get in” (as one teacher put it).
This informal system of support did provide teachers with some comfort, however, the
more formal meeting structures are where teachers saw the most benefit to their improvement
efforts. Shabazz Leadership Academy did have a formal structure for grade-level team meetings
and Instructional Leadership Team meetings that provided additional learning opportunities. As
one teacher described, the school benefited most from its formal structures that built on
Continuous Improvement knowledge from year to year.
Knowledge Assessment
Richard Clark and Fred Estes (2008) determined that when gaps in knowledge and
skills are visible, solutions can then be created to bridge those gaps. With that understanding, a
knowledge checklist was developed to assist practitioners in assessing where gaps can be
identified. For this study, the researcher amended the aforementioned checklist to align with the
need to assess knowledge transfer and gaps in the context of schools attempting to implement
Continuous Improvement. The Knowledge Assessment, completed by the researcher using
interview data, captures the degree to which efforts to transfer knowledge measure up to the
intent. The Knowledge Assessment considers five indicators and rates the evidence on a scale
from one to four (least evident to most evident). The assessment then provides a final score that
can be used as an indicator of potential gaps in school knowledge of Continuous Improvement.
Table 6 shows that the overall assessment of knowledge transfer at Shabazz Leadership
Academy was 2.0. Based on interviews with eight teachers from SLA, the indicators show that
overall there was low evidence of effective knowledge transfer. According to this Knowledge
Assessment, there is an opportunity for Shabazz Leadership Academy to bridge the gap in
knowledge transfer and potentially improve the impact of their Continuous Improvement efforts.
It should also be noted that the lowest average score was for the job aids indicator. According to
interview responses from six of the interviewees, “how to” guides were not readily available for
onboarding or throughout their implementation of Continuous Improvement at their school.
Table 6
Knowledge Assessment
Least Evident = 1, Low Evidence = 2, Some Evidence = 3, Most Evident = 4

Indicator                                                                                        Score
"How to" aids provided with practice opportunity                                                 1.3
Learning material is shared with clear performance objectives and goals                          1.5
Trainings are exact replicas of real-life application of skills                                  1.6
Performance is monitored and feedback is provided to ensure understanding and application        2.6
Opportunity to apply training is presented in a timely manner after training (within 72 hours)   1.5
OVERALL                                                                                          2.0

Motivation
Knowledge of Continuous Improvement is a critical factor impacting teachers' ability to
drive growth in their school; however, even with knowledge and skill, a teacher may be
challenged to establish and maintain growth if not properly motivated. Clark and Estes (2008)
make the point that gaps in motivation, while sometimes difficult to identify, often negate
improvement efforts if not identified and addressed. This study seeks to understand the indicators of
motivation that hinder or accelerate Shabazz Leadership Academy teachers’ ability to implement
practices that result in overall school improvement. The interview questions associated with
motivation ask teachers at Shabazz Leadership Academy to consider their motivation for implementing
Continuous Improvement in their school. As seen in Table 7, an analysis of the interviews with
teachers of SLA showed that 100% of teachers expressed that they or their peers were
intrinsically and extrinsically motivated to engage in Continuous Improvement practices.
Without being probed, early in the interviews, seven of the teachers interviewed volunteered that
they were generally motivated to work on behalf of “the kids.” Taking a student-centered
approach to teaching, each teacher provided indicators that they had an internal drive to improve
practice in their classrooms and school to create better outcomes for their students. When probed
for specific motivation around Continuous Improvement practices, all teachers indicated they
were driven by aspirations for students more than their career advancement. Upon further
probing for external factors of motivation, 100% of teachers directly credited the school
administrators as a critical factor in their motivation for Continuous Improvement. Additionally,
while not explicitly citing recognition and celebration, six of eight interviewees expressed that at
some point in their careers, they had been celebrated and recognized for their contributions to
Shabazz Leadership Academy’s Continuous Improvement efforts.
Table 7
Motivation for Continuous Improvement Indicator Count
Intrinsic and extrinsic motivation
Teachers Student-centered Administration
supported
Recognition and
celebration
P1 7 2 1
P2 4 2 n/a
P3 2 3 2
E1 6 1 2
E2 1 3 2
E3 3 3 n/a
M1 1 1 1
M2 2 3 3
Student-Centered Motivation
Consistent across all interviews, teachers expressed an internal desire to meet the needs
of students at their school. In an almost rehearsed fashion, one after the other, teachers
at SLA responded to questions of motivation with “for the kids.” The overwhelming majority of
teachers expressed that they had an obligation to ensure that the students who attended SLA
would be valuable contributors to society. As one teacher described their motivation, “...my main
motivation is investing to make sure they are part of a productive society.” This was echoed
throughout interviews with teachers. While there was a motivation that was self-gratifying,
teachers also seemed to attach a goal to their motivation. Intrinsic motivation is more
concentrated for impact when attached to a goal that is either self-imposed or provided
(Mavrogordato et al., 2023). Whether or not this goal was imposed could not be determined,
however, the goal of developing “productive citizens” was a consistent expression of the teachers
interviewed.
Administration-Supported Motivation
Based on the interviews with teachers, there seemed to be a strong desire to improve
based on personal, internal motivators. Of the teachers interviewed, two of the teachers explicitly
stated that they were only self-motivated. Upon further inquiry, it was evident that all teachers
were also being externally motivated by the school administration. Analysis of the interviews
revealed that the school administrators had been deliberate in making efforts to motivate their
teachers around Continuous Improvement. Across interviews, there was evidence that teachers
were being engaged in team-building activities to maintain positive morale, the administration
had cultivated a positive environment, and the administration was intentional about ensuring
teachers felt heard.
Teachers shared that the school administration regularly engaged them in team-building
exercises that made the task of improvement seem more attainable as a collective effort. Teachers
spoke of team-building activities being built into the structure of their weekly improvement
meetings as well as off-site activities that were coupled with improvement meetings. In addition
to team-building activities, the administrators focused on boosting the morale of teachers. As one
teacher stated, “We’d have a meeting and then we would go do something fun.” The teacher
noted the importance of coupling the stress of improvement planning with building strong
relationships and trust among each other.
In addition to building strong relationships and boosting morale, the school
administration of Shabazz Leadership Academy also cultivated a positive environment that
motivated teachers to want to engage in Continuous Improvement. Teachers expressed that they
felt motivated to engage in the Continuous Improvement process because they felt confident in
their ability to implement Continuous Improvement. The administrators complimented their staff
regularly on their efforts and ensured that teachers felt confident in themselves and each other.
Table 8 below shows the frequency with which teachers expressed personal confidence and
perceived confidence in their peers. While all of the teachers felt that they or their peers were
confidently motivated, seven of the eight expressed confidence in themselves and six expressed
confidence in their peers. When the teacher who expressed that they were not confidently
motivated for Continuous Improvement was probed for further clarity, it was revealed that their
lack of confidence was not due to a lack of administrator support but rather a lack of faith in,
and willingness to engage with, the school district. Nonetheless, they did feel that their peers
were confidently motivated. Of the two teachers who did not express confidence in their peers,
one did not assert that their peers were unmotivated; they simply declined to speak on their
peers' behalf. The other questioned the motivation of all teachers because of teacher turnover
during the pandemic.
Table 8
Continuous Improvement Implementation Confidence

Teachers  Self-confidence  Peer-confidence
P1        x                x
P2        n/a              x
P3        x                x
E1        x                x
E2        x                n/a
E3        x                x
M1        x                x
M2        x                n/a

The administrators intentionally established systems that extrinsically motivated their
teachers; however, they also ensured that they were responsive to teacher needs and requests.
Teachers expressed a willingness to engage in Continuous Improvement because they felt that
they would be supported by their school administration at least to the extent that the
administration could provide support. When teachers were asked to recall a time when they
asked the school administrators for support with an improvement issue, seven of the teachers
were able to share a narrative of an issue that they had and how the administration responded
creatively to keep them motivated. One teacher recalled a challenge with their ability to
implement an improvement strategy and the administrator's response of pairing the teacher with
another teacher who was having success in implementing the strategy. They expressed that they
did not feel judged but were motivated to continue. Another teacher spoke about how the
administration empowered teachers to be creative in addressing improvement challenges. When
asked if they could recall a situation when they shared a challenge with the administration and
felt heard and the administration responded to help, the teacher responded with, “Yeah, I mean, it
happens all the time.” While discussing challenges to a strategy focused on improving the
reading ability of younger students, the teacher shared, “And last year, when I went to them
because I had a whole group of kids that weren't reading, we created a day where the older kids
were reading with the younger kid.” The teachers expressed that they felt heard, supported, and
confident that they could create change because, in instances like these, the administration
allowed them to think outside of the box. Though there were not enough adults available to read
to the younger students, the administration kept teachers motivated by allowing one day a week
when older students in the school would fill that void by reading to younger students.
At a simpler level, but not to be excluded, school administrators established
systems of recognition and celebration to motivate teachers. Though evidence shows that
tangible non-monetary incentives have little to no impact on school performance (Ngasi, 2020),
Shabazz Leadership Academy has established a culture that recognizes and celebrates teacher
commitment to Continuous Improvement. Teachers report that they are regularly
acknowledged as part of the meeting structure. As part of the administration's approach to
motivating teachers, six of the eight teachers interviewed indicated that they experienced
recognition as part of the regular processes of the school. Teachers who reported that they have
not personally been celebrated still see recognition as a motivation tactic. One teacher compared
the recognition given to more advanced teachers to the experience of students in classrooms that
overly reward star pupils to the detriment of students who need extra motivation to achieve.
Teachers identified administrators' efforts to motivate them in newsletters, weekly meetings, and
daily intercom announcements.
Motivation Assessments
Similar to the position on knowledge, Richard Clark and Fred Estes (2008) determined that when
gaps in motivation are visible, solutions can then be created to bridge those gaps. In that
understanding, Clark and Estes also provided a list of factors that could be used as pressure
points for identifying gaps in choice, persistence, and effort. For this study, the aforementioned
factor list was amended to align with the need to assess potential motivational gaps in the context
of schools attempting to implement Continuous Improvement. The Motivation Assessment,
completed by the researcher using interview data, captures the degree to which motivation for
Continuous Improvement exists at a school. The Motivation Assessment considers seven
indicators and rates the evidence on a scale from one to four (least evident to most evident). The
assessment then provides a final score that can be used as an indicator of potential gaps in school
motivation for Continuous Improvement.
Table 9 shows that the overall assessment of motivation at Shabazz Leadership Academy
was 3.5. Based on interviews with eight teachers from SLA, the indicators show that, overall,
there was a significant amount of evidence for motivation for Continuous Improvement
implementation. According to this Motivation Assessment, Shabazz Leadership Academy is
positioned well in terms of teachers' intrinsic motivation and the school leadership’s efforts to
maintain motivation for teachers. Most notable of the factors motivating teachers is the
administration's practice of ensuring teachers’ problems and concerns are heard and considered.
Table 9
Motivation Assessment
Least Evident = 1, Low Evidence = 2, Some Evidence = 3, Most Evident = 4

Indicator                                                                               Score
There is personal and team confidence                                                   3.3
Positive feelings about organizational and environmental barriers to achieving goals    3.3
Positive emotional climate                                                               3.3
Positive feelings about personal and team values for goals                               3.4
Challenging but achievable goals assigned                                                3.6
Acknowledged for achievement                                                             3.8
Problems are heard and considered                                                        4.0
OVERALL                                                                                  3.5
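
If the overall assessment score is taken to be the simple mean of the individual indicator ratings, an assumption not stated explicitly in the adapted assessment, the OVERALL value in Table 9 can be reproduced as in the brief sketch below. The indicator labels are shortened for readability.

# Illustrative sketch: assumes the OVERALL motivation score is the simple
# mean of the seven indicator ratings in Table 9, rounded to one decimal place.
motivation_indicators = {
    "personal and team confidence": 3.3,
    "feelings about organizational and environmental barriers": 3.3,
    "positive emotional climate": 3.3,
    "feelings about personal and team values for goals": 3.4,
    "challenging but achievable goals assigned": 3.6,
    "acknowledged for achievement": 3.8,
    "problems are heard and considered": 4.0,
}

overall = round(sum(motivation_indicators.values()) / len(motivation_indicators), 1)
print(overall)  # 3.5, matching the OVERALL row in Table 9

The same aggregation also reproduces the Organization Assessment overall of 3.4 reported later in Table 12.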
Findings: Research Question 2
How does a school’s culture and climate impact teachers’ ability to implement
Continuous Improvement effectively?
Organization
Even teachers and schools that have effectively addressed the learning demands and
motivational factors for Continuous Improvement have not closed their gaps if the culture and
climate needs of their organization have not been properly addressed. According to Clark and
Estes (2008), organizing for success includes consideration of work processes, value chains or
value streams, and material resources. The availability and alignment of work processes,
resources, and value chains for the mission, goals, and objectives of an organization impact how
employees experience the organization. Their experience shapes their understanding of the
culture of the organization relative to their ability to perform and improve (Clark and Estes,
2008).
This study seeks to understand the indicators that impact the culture at Shabazz
Leadership Academy and how they help or hinder teachers’ ability to realize the Continuous
Improvement objectives of the organization. The interview questions associated with school
organization and culture ask teachers at Shabazz Leadership Academy to consider how well the
school is organized for improvement. Table 10 shows an analysis of the interviews with teachers
around SLA culture. As seen in the table, interview responses suggest that SLA created a culture
that encouraged teachers to internalize improvement practices by learning to lead through
collaborative meetings and distributed leadership opportunities. Of teachers interviewed, 100%
specifically indicated that collaborative meetings were created to facilitate Continuous
Improvement practices. Six teachers described events that demonstrated the SLA administrator’s
distribution of leadership opportunities to lead Continuous Improvement initiatives. Additionally,
six teachers provided indications that administrators worked to remove barriers to allow
Continuous Improvement efforts to persist.
Table 10
Culture for Continuous Improvement Indicator Count
Learning to lead
Teachers Collaborative meeting Distributed leadership Removing barriers
P1 6 2 2
P2 2 n/a n/a
P3 1 2 3
E1 3 n/a 3
E2 3 1 3
E3 3 1 n/a
M1 2 1 2
M2 5 7 4
Collaborative Meetings
As previously stated, effective work processes that align with the improvement objectives
are essential to successful Continuous Improvement systems. Shabazz Leadership Academy, in
the absence of robust learning systems, developed collaborative meeting opportunities for
teachers to lead SLA Continuous Improvement processes. All of the teachers interviewed,
whether knowingly engaged in Continuous Improvement efforts or not, described the weekly
all-staff meetings that emerged to support this work. One teacher described these meetings as the
specific catalyst for schoolwide engagement in improvement practices and a point of elevation in
their efforts. The teacher suggested that previous plans were ineffective because staff were planning but not
meeting to ensure plans were being implemented. The teacher discussed the point when this
practice changed. They stated, “I actually brought it up. ‘Like, are we gonna have follow through
from all the plans in the summer?’ So then we started meeting every month about that. And then
all that stuff was falling into place.” A portion of their weekly meeting was dedicated to progress
monitoring. These meetings were the designated space where work processes were reinforced
and schoolwide data were reviewed. A portion of these meetings was dedicated to teacher
engagement in data reviews, collaborative planning, and mission alignment. One teacher stated
that these meetings begin with a restating of the school mission and then teachers are allowed an
opportunity to state how their improvement efforts have aligned to the mission. Often, this is an
opportunity to celebrate accomplishments and provide feedback on processes that are working or
that need refinement.
In addition to the weekly all-staff meetings, as one teacher stated, the grade-level teacher
teams and Instructional Leadership Team (ILT) met more frequently than they had previously.
These teams are specifically organized around the objectives of the school’s Continuous
Improvement work plan. As a work process, these meetings serve as a consistent working
meeting to assess the progress toward the school’s goals and an opportunity to course correct if
necessary. Similar to the networked communities described in Chapter 2 of this text, these
collaborative meetings created opportunities for teachers to share effective practices locally and
improve faster.
Distributed Leadership
Clark and Estes (2008) describe the value chains and value streams as a deep
understanding of how processes make an organization work in relation to the ultimate goal of the
organization. The goal of value streams is to understand how the organization operates. Value
chains, on the other hand, are more focused on how goals are achieved. Given Shabazz
Leadership Academy’s limited financial resources, a creative approach was implemented to
capture this information while increasing teacher capacity and potential for sustainable Continuous
Improvement engagement. The Distributed Leadership Team was created to observe the rigor of
instruction; however, the team's work stretches beyond instruction and has had additional
usefulness in interrogating the school's improvement efforts.
Evidence from interviews shows that teachers felt that the school was more organized for
Continuous Improvement because the rigor walks conducted by this team provided them an opportunity to actively
participate in process management. One of the Distributed Leadership Team members stated,
“It’s almost like the principal gives us autonomy to kind of lead what we think we need, what we
think the improvement should be.” Though the principal provided some guidance and support,
teachers were largely allowed to learn through leading. This process allowed teachers to improve
their understanding of the systems of improvement at their school. Along with increasing buy-in
to the Continuous Improvement process, teachers also noted their improved understanding of the
systems being used to support Continuous Improvement efforts.
Removing Barriers
Aligning resources to an organization’s improvement practice expectations is a critical
practice for ensuring there are fewer barriers to implementation. As it relates to school
Continuous Improvement, the primary resource necessary for implementing improvement
practices is time. Teachers at Shabazz Leadership Academy frequently reported that
additional time was needed to drive school improvement planning and initiatives.
In previous years, SLA’s school district provided compensation to teachers for engagement in
Continuous Improvement training and regular meetings via federal stimulus resources
(Confidential, 2009). One teacher reported that they chose to no longer engage in Continuous
Improvement planning because they no longer had the time to invest in the effort. That teacher
was unaware that Shabazz Leadership Academy’s administration was removing this barrier by
again compensating teachers for engaging in Instructional Leadership Team meetings. While
63% of teachers interviewed expressed some issue with the amount of time they had available to
dedicate to activities outside of classroom instruction, Table 11 shows that nearly all teachers felt
the administration provided time and opportunity to contribute to Continuous Improvement. This
is evidence that the school administration acknowledged the resource issue and made a
reasonable attempt to remove that barrier. The teacher who shared that they were being
compensated also expressed that they would not be on the ILT otherwise. This same teacher,
however, also shared, “I do think they lessened up on expectations for lesson plan. So like, it still
needs to be there, but maybe not as in-depth.” The teacher, while identifying resource issues, also
speaks to ways in which the school administration has removed some of the issues with time and
how the administration has prioritized the use of resources to support Continuous Improvement
engagement.
Table 11
Continuous Improvement Time Available versus Time Made Available
Teachers Concerns about
available time
Felt administration
provided time
P1 n/a x
P2 x x
P3 x x
E1 n/a n/a
E2 n/a x
E3 x x
M1 x x
M2 x
When teachers were asked to recall if the school administration provided any additional
resources to support the Continuous Improvement efforts, all of the teachers reported that the
administration created new and consistent opportunities for them to engage in Continuous
Improvement. Interestingly, the two teachers who claimed that they were not engaged in
Continuous Improvement acknowledged that more time was being allotted to the entire school to
implement Continuous Improvement. One teacher shared that 80% of their weekly schoolwide
meeting was dedicated to reviewing data and that guides were provided to simplify the process.
Additional evidence that resources were provided to address issues of time could be found in
another teacher’s statement that the school frequently provided classroom coverage so that
teachers could participate in Continuous Improvement meetings during the school day.
Organization Assessments
Similar to the position on knowledge and motivation, Richard Clark and Fred Estes (2008)
determined that when gaps in organization and organizational culture are visible, solutions can
then be created to bridge those gaps. Beginning in the 1990s, Nancy Dixon published a
series of articles and books on analyzing organizational structures as a lever for developing
learning opportunities for improvement. While Clark and Estes developed their own guidance for
knowledge and motivation analysis, they were inspired by Nancy Dixon's work on
organizational change models to present six supports necessary for organizational change
processes. For this study, five of the six supports are used to assess the organizational change
process supporting Continuous Improvement at Shabazz Leadership Academy. The Organization
Assessment, using interview data, captures the degree to which organizational practices support
Continuous Improvement. The Organization Assessment considers five indicators and rates the
evidence on a scale from one to four (least evident to most evident). The assessment then
provides a final score that can be used as an indicator of potential gaps in school organization for
Continuous Improvement.
Table 12 shows that the overall assessment of organization at Shabazz Leadership
Academy was 3.4. Based on interviews with eight teachers from SLA, the indicators show that
overall there was a significant amount of evidence that the school supported the demands of
Continuous Improvement with shifts in organization and culture. While 80% of the indicators
show evidence of strong organizational support, there was little evidence in support of efforts to
ensure teachers had adequate knowledge, skills, and motivation support for improvement.
Table 12
Organization Assessment
Least Evident = 1, Low Evidence = 2, Some Evidence = 3, Most Evident = 4

Indicator                                                                            Score
Clear vision, goals, and ways to measure progress                                    3.8
Aligned the structures and processes of the organization with goals                  3.4
Communicated constantly and candidly to those involved about plans and progress      3.3
Top management continually involved in the improvement process                       3.9
Adequate knowledge, skills, and motivational support for everyone                    2.5
OVERALL                                                                              3.4
Findings: Research Question 3
What recommendations do teachers have around knowledge, motivation, and school
organization that may help other schools increase their likelihood of successfully implementing
Continuous Improvement systems?
The question above intends to collect data that may be useful to support schools similar
to Shabazz Leadership Academy in their Continuous Improvement journey. The performance
growth experienced by SLA in the years before the COVID-19 health pandemic was not typical
of schools serving low-income, Black, and Latinx students in their school district. During the
interviews with teachers from SLA, teachers were directly asked to provide recommendations
from their experience in attempting to implement Continuous Improvement that might be helpful
to other schools. Overwhelmingly, teachers referred to the need to collaborate and to be
consistent.
Collaboration
A theme that was noticeable across interviews was the willingness of teachers at Shabazz
to collaborate. In their attempt to meet the learning needs of Continuous Improvement, even in
the absence of specific guidance, teachers and administrators collaborated to develop and
implement a Continuous Improvement program. In their attempts to remain motivated, even
when they felt that the demands of Continuous Improvement were overwhelming, teachers and
administrators collaborated. In the absence of financial resources to support the organizational
demands of Continuous Improvement, teachers and administrators collaborated. As stated by a
veteran teacher at SLA, “Schools should try to improve by making improvement a team effort
and not just leaving improvement to school administrators.”
The sentiment that collaboration was the strongest lever for implementing Continuous
Improvement was echoed by six of the teachers interviewed. Though administrators were not
interviewed, evidence from the teacher interviews indicated that the school administration
supported the notion of collaboration amongst staff. When examining the structures supporting Continuous
Improvement, it is evident that an intentional effort was made to foster opportunities for teachers
to collaborate, thereby increasing the opportunities for teachers to buy in to the process. One
teacher discussed the challenges some teachers had engaging in Continuous Improvement
depending on whether or not they had bought into the strategy. Teachers who were not
members of the ILT or who had not attended meetings were most likely to lose focus on
Continuous Improvement at some point in the school year. Conversely, when speaking about the
importance of collaboration, another teacher stressed the importance of collaboration as a means
for cultivating buy-in. The teacher stated, “The way we have it set up here…when you have input
from all aspects of a school…you have to have that. And I think that was my buy-in. You know,
having not just one mindset. You have to have everybody's point of view to come up with
something that's beneficial for the whole school.” The school’s summer planning, grade-level
team meetings, Instructional Leadership Team meetings, Distributed Leadership Team meetings,
and weekly all-staff meetings each provided opportunities for teacher collaboration and
whole-school engagement.
Consistency
Though not discussed at the same length as collaboration, three of the eight teachers
asserted that a key factor in the successful implementation of Continuous Improvement was
consistency. As discussed frequently throughout the interviews, teachers felt that Shabazz
Leadership Academy had increased the frequency of their meetings for Continuous Improvement
and embedded the mission and goals for improvement as the focal point for every meeting. One
teacher expressed that though some strategies may seem to fail, remaining consistent creates an
opportunity to adjust the strategy and to try again until something works. At times, teachers may
feel defeated but if they have not remained consistent in their improvement practices then they
have not committed the effort necessary to see the change that they seek.
Additional Recommendations
The themes of collaboration and consistency were the primary recommendations for
schools struggling to implement systems of Continuous Improvement, however, there were a few
other recommendations that may be helpful. Insight from teachers' interviews also suggested that
trust was a critical element in successful Continuous Improvement efforts. Teachers mentioned
the importance of feeling comfortable with sharing their failures as well as their successes. One
teacher said that a culture had been developed at SLA where they felt comfortable critiquing
each other's strategies and impact without feeling attacked. This level of comfort is a result of
consistent collaboration. Another recommendation was that schools must make their school’s
mission and goals visible for everyone to see and internalize. Shabazz Leadership Academy
made a point to embed the mission and goals in every meeting and atop every meeting agenda.
One of the interview questions asked teachers if they could talk about their school mission and
goals and 75% of teachers responded, almost verbatim, with the same vision as well as how that
vision was represented in the school’s improvement plan goals.
Summary
This study sought to gather information and understanding of how a predominantly
low-income school serving mainly Black and Latinx students could achieve exceptional growth in the
years before the COVID-19 health pandemic. The study explored the extent to which knowledge,
motivation, and school culture and organization factored into their implementation of Continuous
Improvement. Specifically, the study’s objective was to identify ways that the school addressed
the gaps in knowledge, motivation, and organization as a framework for recommendations that
can be used by similar schools as a model for how they might gain traction in their Continuous
Improvement efforts.
The study consisted of a series of interviews with teachers who experienced Continuous
Improvement at Shabazz Leadership Academy. Interview questions were framed around the
concepts of knowledge, motivation, and organization as defined by Richard Clark and Fred Estes
(2008). In addition to interviews conducted with teachers, the researcher used context from the
interviews to complete an analysis of knowledge, motivation, and organization according to
adaptations of assessments provided by Clark and Estes. In addition to aligning the interview
questions to Clark and Estes’ Gap Analysis Framework, the interviews provided evidential
responses to the following research questions:
1) What is teacher knowledge and motivation for Continuous Improvement practices?
2) How does a school’s culture and climate impact teachers’ ability to implement
Continuous Improvement effectively?
3) What recommendations do teachers have around knowledge, motivation, and school
organization that may help other schools increase their likelihood of successfully
implementing Continuous Improvement systems?
From the interviews, three themes emerged. The path to gaining knowledge was less a direct,
traditional learning path than an activity of being led through learning. Teacher motivation was
highly intrinsic, as one might expect; however, extrinsic motivation from the school
administration was an intentional lever for improvement. SLA's organizational needs for
Continuous Improvement were embedded in a culture that stretched resources by encouraging
teachers to learn to lead.
Though exact data cannot be shared to protect the anonymity of the school and school
district, Shabazz Leadership Academy showed exceptional growth by exceeding district
performance norms for two consecutive years. Despite this academic performance and
feedback-based achievement, evidence from the interviews conducted with teachers showed that
there were large gaps in the knowledge systems supporting Continuous Improvement.
Interviewees attributed the gaps to the ineffectiveness of the school district's train-the-trainer model
that prepared the school administration to transfer knowledge to their teachers. The model did
not provide tangible resources to be used during or after training. The model did not provide
exemplars of what a cycle of Continuous Improvement should look like in practice throughout
the school year and in meetings. Also, the model did not account for continued training
opportunities for previously trained teachers as well as new teachers. Shabazz Leadership
Academy’s knowledge system was largely hands-on and dependent on trial and error. Teachers
found this process to be overwhelming and unnecessarily complicated. Analysis of the
knowledge domain at SLA showed that there were gaps and growth opportunities that could be
used to boost school performance even further.
Evidence from interviews revealed that teachers at Shabazz Leadership Academy are
largely intrinsically motivated. When asked what motivated them most, all teachers expressed
that they were motivated to have an impact on the lives of the young learners in their school.
When probed about extrinsic motivators, nearly all teachers expressed their appreciation for the
efforts that the school administration makes to motivate them through recognition and
celebration of their impact. Additionally, teachers acknowledged that the administration made
efforts to establish systems of support that encouraged teachers to actively engage in the
Continuous Improvement initiative at SLA. Evidence strongly suggests that Shabazz Leadership
Academy had very few gaps in motivation.
Interviews also revealed that Shabazz Leadership Academy had very few gaps in
organization or organizational culture. The school administration made efforts to ensure that the
demands of Continuous Improvement were supported by the resources and processes necessary.
Though financial resources were not unlimited or beyond those of similar schools, the
administration was creative in developing sustainable systems that encouraged collaboration,
provided time for planning and implementation, and established realistic processes. The
organization analysis also supported the evidence from interviews that there were few gaps in the
school organization or culture for Continuous Improvement.
The ultimate goal of this study was to uncover best practice evidence as well as
recommendations from teachers at SLA that could be used by similar schools so that they too can
experience exceptional school and student performance. The recommendations provided by
teachers were not specific to knowledge, motivation, or organization, however, they can be
considered when bridging the gaps in knowledge, motivation, and organization. Collaboration
was presented as a critical lever for the learning process, maintaining motivation, and organizing
for Continuous Improvement. Additionally, schools should remain consistent in practice, despite
adversity, if they desire change.
CHAPTER FIVE: DISCUSSION AND RECOMMENDATIONS
This chapter provides a discussion of findings emerging from the study on how
knowledge, motivation, and organizational culture influence a school’s ability to effectively
implement Continuous Improvement as a strategy for school improvement. It begins with a
review of the findings connected to the three phases of school improvement outlined in Chapter
Two’s literature review. The discussion of findings is followed by recommendations for practice
aligned with the Clark and Estes Gap Analysis Framework and observations from research at
Shabazz Leadership Academy. Subsequently, the limitations and delimitations of this study are
reviewed, followed by recommendations for future research. Finally, this chapter concludes with
a brief discussion of insights from the study and its importance.
Findings
As this was a best practice study of a school experiencing exceptional growth in
comparison to like schools, the expectation was that when examining the knowledge, motivation,
and organization influences on Continuous Improvement strategy there would be evidence of
few to no gaps observed. That was not the case. While Shabazz Leadership Academy did show
evidence of high motivation and removal of barriers to the implementation of Continuous
Improvement, the gaps in knowledge for Continuous Improvement were larger than expected as
reported by study participants. Despite the gaps in traditional means of knowledge transfer
according to Clark and Estes, SLA continued to show year-over-year growth. As expected,
growth could be attributed to how knowledge, motivation, and organizational influences are
connected to Continuous Improvement practices.
Continuous Improvement has been the umbrella term attached to best practices of school
improvement since its inception (Kim, 2007). Across the three phases of Continuous Improvement,
some practices emerged as impactful and transferable from one phase to the
next. One such practice suggested that schools adopt an Effective Schools mindset. In the
absence of resources and adequate support, Shabazz Leadership Academy prioritized organizing
Continuous Improvement efforts around teaching and learning, student outcomes, and
demonstration of quality and equity (Lezotte & Bancroft, 1985; Johnson et al., 2018). This focus
allowed SLA to equip teachers with the knowledge, motivation, and culture to support these
priorities.
Participants in this study referenced the administration-led practice of developing the
capacity of teachers to interrogate the teaching and learning quality in their classrooms and
amongst their peers. This practice, as part of the distributed leadership model of SLA, bridged
knowledge gaps that existed in these areas. Taking a hands-on approach to learning, teachers
became knowledgeable of the practices being used by their peers to impact learning for students.
This process also motivated teachers by signaling the administrators' trust that teachers were
capable of taking ownership of improvement in their school. Teachers shared that they felt
confident in their ability, and they also felt confident in their peers. The administrators established a
culture of inquiry and analysis by creating the infrastructure that would allow teachers to solve
their problems together as opposed to being led to solutions.
Phase one of Continuous Improvement also asserted the importance of teachers becoming teacher-researchers capable of sharing their knowledge with other schools and teachers. Through this practice of distributed leadership, SLA teachers have become teacher-researchers despite the prevailing mindset that teachers are too close to the classroom to provide usable insight for other teachers (Edmonds, 2020; Ellis & Castle, 2010). The findings of this study show that there are lessons to be learned from Shabazz Leadership Academy's approach to Continuous Improvement that can be useful to similar schools facing challenges in effectively implementing Continuous Improvement practices.
As expected and as expressed by study participants, the challenge of serving low-income and marginalized students requires additional resources that may not be immediately available to address those students' unique needs. Schools are often conflicted between having a sustained impact and being responsive to the accountability requirements of policymakers. Shabazz Leadership Academy administrators empowered their school by developing leadership in their teachers as a strategy to bridge the resource gaps left by federal and state policymakers (Lezotte, 1991; Jackson et al., 2020). This finding suggests that to be effective, low-income schools will have to organize to make improvements beyond one-year gains while institutionalizing strategies that provide sustained support to the school's Continuous Improvement infrastructure.
As expected, this study showed that bridging the gaps in knowledge, motivation, and organization for engaging in disciplined cycles of inquiry is critical to effective Continuous Improvement strategies. Participants in this study provided evidence that SLA followed the guidance of the second phase of school improvement science, which suggests that schools set high expectations for students and staff by engaging in constant cycles of inquiry (Hopfenberg, 1990; Levin, 2017). In schools with high numbers of marginalized students, the recommendation is that schools create learning communities that share a unified vision, foster positive relationships, and have a shared culture (Biddle, 2002; Roig-Vila & Alvarez-Herrero, 2022).
As teachers stated, they engaged in multiple weekly cycle-of-inquiry meetings that standardized a practice of launching each meeting with a review of the school mission and the strategic objectives for achieving their goal. Equipped with the knowledge of their improvement goals, teachers were rewarded for their participation in Continuous Improvement practices and celebrated for the impact they had on students. Though consistent engagement in Continuous Improvement cycles was initially impeded by a lack of time to dedicate to the effort, SLA administrators reorganized the school's meeting structures to prioritize meetings dedicated to school improvement.
The third phase of school improvement, which encompasses the current practices commonly referred to as Continuous Improvement, focuses on whole-school and whole-student improvement. Previous phases isolated the focus of school improvement based on the policy interests of a particular time. While the initial school improvement policy of 1965 sounded the alarm for equitable school improvement, the reauthorizations of the policy under changing presidents created fragmentation in school improvement intent. As mentioned previously, while the No Child Left Behind policy created an accountability culture that hurt school improvement efforts around the country, low-income schools serving populations of students similar to SLA benefitted tremendously. The third phase of school improvement promoted a whole-school design that suggested schools reassess curriculum, school organization, patterns of meeting, and instructional programming (Husband & Hunt, 2015; Kidron & Darwin, 2007).
The foundation of the improvement mindset at SLA, as expressed by study participants, was an intrinsic motivation to have an impact on the lives of their students. This may have been partly a coincidence of staff hiring; however, it was also embedded in the motivational and organizational strategy of the school administration. Teachers had a unified vision of their whole-school and whole-student approach to improvement, which was reinforced in all Continuous Improvement meetings. Teachers reported discovering issues in the curriculum during improvement meetings and noted the importance of having a school improvement model that supported the reevaluation of that curriculum in the interest of students.
The study of Shabazz Leadership Academy's implementation of Continuous Improvement and its impact on school performance is a study of adaptation. Though not provided with traditional learning supports in their early adoption of Continuous Improvement, SLA improvised how they learned. This study shows the impact of using collaborative learning strategies to bridge gaps in knowledge of Continuous Improvement. The study also shows how school administrators can proactively plan to keep teachers motivated to engage in Continuous Improvement. Additionally, the study reveals the importance of building an organizational culture that supports the overwhelming demands of Continuous Improvement when only limited resources are available.
Recommendations for Practice
This section provides recommendations for improving Continuous Improvement practices at schools. The recommendations are based on the findings from the interviews conducted in this study and on the literature review, in relation to the study's research questions. The research questions seek to understand how bridging gaps in knowledge, motivation, and organization for Continuous Improvement can help drive performance improvement in schools. The recommendations provided in this section focus on the gap opportunities of knowledge, motivation, and organization.
Recommendation 1: Bridging Knowledge Gaps
For concepts that require only simple guidance or information to apply practices for improvement, it may be sufficient to share information that provides a general understanding, with the faith that learners have the necessary knowledge and experience to implement the practice (Clark & Estes, 2008). That is not the case for bridging gaps in Continuous Improvement. Due to the complexity of Continuous Improvement, learners require ongoing and directly guided training that provides opportunities for practice and feedback, with periodic retraining. In particular, for bridging knowledge gaps of Continuous Improvement in schools, it is recommended that training center on developing an understanding of why Continuous Improvement is necessary, how to engage in collaborative cycles of learning, and how to share learning experiences (Park, 2019).
Evidence from interviews with teachers revealed that many felt the training provided to them focused more on the processes of Continuous Improvement, with minimal insight into the experience of Continuous Improvement. Consequently, many teachers felt overwhelmed and sometimes burdened to the point of disengaging from Continuous Improvement at some point. While consistent participation in Continuous Improvement meetings did create clarity for some, other teachers suggested that they or their peers cycled through a willingness to seriously engage and felt that the process was overly complicated without a clear understanding of the expected benefits. One teacher reported that 90% of the teachers at SLA were consistently engaged in the improvement efforts of the school and that the other 10% became disengaged as the school year progressed. Bridging the gap in understanding the why of Continuous Improvement will help teachers form a unified vision and understand how Continuous Improvement practices are being leveraged to achieve the improvement they individually desire faster and more consistently (Park, 2019). This will also aid teachers in addressing the issue they expressed of not having enough time to address all of the needs of their students.
Teachers who were interviewed shared that the majority of their training was hands-on but primarily focused on the logic of Continuous Improvement planning and only secondarily on the implementation of cycles of improvement. One of the most critical practices of Continuous Improvement is engagement in disciplined cycles of inquiry (Park, 2019; Bryk, 2015). Teachers at SLA reported that their engagement in cycles of inquiry in grade-level teacher team meetings and Instructional Leadership Team meetings depended on trial and error, as there was no formal training on the practice, no job aid for implementing it, and no feedback on the effectiveness of their approach. Providing direct training on the implementation of cycles of inquiry to school administrators and teachers will significantly shorten the learning curve and provide additional time to dedicate to monitoring and diagnosing the efficacy of implemented strategies.
Though there are few opportunities for schools to find time to collaborate in their learning, it is suggested that networked communities of learning be developed to allow educators to move forward faster together (Bryk, 2015). As demonstrated by Shabazz Leadership Academy, even in the absence of formal and ongoing training, creating opportunities for educators to collaborate in the Continuous Improvement learning process bridges knowledge gaps by allowing teachers to take ownership of their learning. The weekly meetings at SLA gave teachers the opportunity to share their learning experiences regularly while allowing them to learn by teaching their peers. Teachers benefit from the opportunity to learn from each other even if their networked communities exist only within their own schools.
Recommendation 2: Motivate with Intention
Clark and Estes (2008) describe motivation as a product of the work environment that is created. Accordingly, willingness to engage in Continuous Improvement is directly related to an employee's level of satisfaction with how they perceive the quality of the work, the mission of the organization, and the treatment of the people around them. Just as motivation can be negatively impacted, creating an unwillingness to engage, it can be positively impacted to create a willingness to engage. While many teachers are naturally inclined to be motivated to see school improvement, that intrinsic motivation remains dependent on leadership's ability to influence the environment. Two critical factors that affect motivation to engage in Continuous Improvement are teacher self-efficacy and the social environment (Jurburg et al., 2019). The recommendation is that leadership be intentional about how they contribute to both.
In the case of Shabazz Leadership Academy, teachers were already intrinsically motivated to engage in Continuous Improvement; however, the school administration made intentional efforts to support teachers' self-efficacy and to establish the necessary social environment. The study showed evidence that teachers had the confidence to improve collectively, and interviews also showed evidence that the school administrators established processes to build confidence in teachers. The administrators created opportunities for teachers to lead learning activities, act in leadership roles, and collaborate to solve problems on their own. One teacher discussed the principal's deliberate action of transitioning meeting formats from principal-led to teacher-led facilitation. Additionally, the school administrators created a social environment that supported and celebrated the growth that the school and individual teachers were achieving, as a formal process in the regular operations of the school. Teachers described a feeling that the principal would go to great lengths to create a sense of transparency without penalty, which allowed teachers to view Continuous Improvement as a growth opportunity rather than an accountability measure.
Recommendation 3: Organize to Support Improvement Needs
Schools across the country struggle with the competing priorities of how to distribute limited resources and allocate time to improvement efforts; nevertheless, schools must organize to meet the demands of their Continuous Improvement strategy (Park, 2019). Though organizations typically have standardized patterns of response to issues and challenges, engagement in school Continuous Improvement will constantly create new challenges that require shifts in operations and culture (Clark & Estes, 2008). The recommendation is that schools organize themselves to be responsive to these demands by meeting consistently and collaboratively to discover challenges, establishing systems of shared leadership, and creatively removing barriers.
Participants in this study identified a common challenge of finding the time to implement Continuous Improvement within their regular meeting structure. In response, the school's administrators created collaborative meeting structures that refocused how that time was being spent. The principal provided daily prep time for teachers and also provided job aids and feedback on how to best use that prep time. Teachers were provided with avenues for raising awareness of issues that emerged in their Continuous Improvement efforts, and 88% of teachers expressed confidence that their challenges would be addressed by their school administration. One teacher shared an example of an issue they experienced and the principal's direction to rely on their peer leadership to solve the issue collaboratively.
Recommendation 4: Cultivate School Leadership
According to Feldhoff et al. (2016), Continuous Improvement practices require that school leaders effectively manage their schools. As this research has shown, the need for effective management and strong leadership remains a best practice of Continuous Improvement. The school leader is critical to ensuring strong indicators for knowledge, motivation, and organization. If a school is to leverage the knowledge, motivation, and organization (KMO) influences of Continuous Improvement to improve school performance, a strong leader is needed to drive initiatives and maintain consistency of practice. The recommendation is that Continuous Improvement practices be coupled with leadership development for school administrators to prepare them for implementing Continuous Improvement in alignment with strategies for bridging gaps in knowledge, motivation, and school organization.
In the case of Shabazz Leadership Academy, evidence showed that teachers felt the school administrators skillfully supported them by providing guidance in their understanding of Continuous Improvement. Though gaps were evident in the knowledge domain, the school administrators created opportunities for teachers to work together to build their understanding. Teachers also indicated that the school administrators intentionally built a culture that motivated staff by celebrating and acknowledging their achievements. Additionally, the leadership at Shabazz Leadership Academy diligently responded to the needs that emerged from Continuous Improvement findings. Leaders identified fiscal and human resource gaps and creatively bridged those gaps to create a culture of improvement. Without strong school leadership, these Continuous Improvement implementation efforts would not have been realized.
Limitations and Delimitations
The primary limitation of this study was the pool of participants. Since the COVID-19 pandemic, many teachers have opted to leave the field of education. This study sought to capture teachers who were teaching in schools before the pandemic, so some schools could not participate because their teacher turnover rates were too high. Also, since the COVID-19 pandemic, school performance data may be skewed, which has had an impact on how ongoing school and student improvement is defined and assessed. This study was therefore limited to growth before the 2020 school year due to the lack of reliability of student data beyond 2020. Additionally, there may have been some limitations in the way participants responded to questions. Due to potential fears about accountability, some teachers may have been consciously or subconsciously motivated to inflate positive responses on behalf of themselves and/or their schools.
The delimitations of this research study relate to the questions that were asked. Questions were carefully constructed to elicit open responses that the researcher could interpret in order to limit instances of inflated responses. While the researcher is skilled and experienced enough to adequately interpret responses, they must still acknowledge their own bias to ensure that they do not inflate their interpretations. Additional delimitations involved the school selection process. Few schools have evidence that they have been effectively implementing Continuous Improvement. Hence, the researcher intentionally limited the selection to school districts and schools that could show some evidence of having implemented Continuous Improvement.
Recommendations for Future Research
The initial assumption before beginning this research was that all of the schools meeting the high-growth criteria for inclusion in this study would have few gaps in knowledge, motivation, and organization for Continuous Improvement. The assumption was that, for the small number of schools showing multiple years of substantial growth while serving low-income Black and Latinx students, there could not be gaps in any of these indicators. Only 16% of schools in SLA's district met the criteria for inclusion in this study; however, SLA was in the bottom third of those schools. Future studies should seek to understand the degree to which growth is impacted by bridging gaps in knowledge, motivation, and organization. Future research should sample teachers from three schools, with schools representing the top and middle thirds of growth. This research could help improve understanding of the extent to which impact could be predicted by developing interventions that reduce gaps to nil across the three indicators provided by Clark and Estes.
Consideration was given to completing this research in a different school district; however, schools there were reluctant to participate despite meeting the inclusion criteria of this study. A conversation with a school administrator in that district revealed that Continuous Improvement was often referred to as a punitive measure and, for that reason, some schools might be less inclined to associate themselves with the practice. Nonetheless, the district was touting the benefits of Continuous Improvement, and a growing number of schools had been engaging in the practice. Further research should be undertaken to better understand these indicators as a lever for driving improvement in lower-performing schools that are mandated to engage in Continuous Improvement.
Though this research focused on indicators of knowledge, motivation, and organization for Continuous Improvement, an unexpected realization was the degree to which school leadership played a role in bridging the gaps in these indicators. Further research should be done to better understand the leadership practices and traits of administrators leading Continuous Improvement strategies in their schools. Understanding the variance in leadership practices and its correlation to performance may be valuable in providing Continuous Improvement readiness support for administrators who wish to explore Continuous Improvement as a tool for their school improvement.
Conclusion
The goal of this best practice study was to examine the Continuous Improvement practices of a low-income school serving primarily Black and Latinx students. The study explored the knowledge, motivation, and organization indicators as a framework for identifying potential gaps in Continuous Improvement implementation at schools similar to the school researched in this study. As the academic achievement gap between low-income Black and Latinx students and their more affluent counterparts continues to widen (Michelmore & Dynarski, 2017), there has been increased attention to closing this gap. Continuous Improvement has emerged as a critical lever for improvement in disadvantaged schools, though implementation efforts have been met with varying degrees of success. This study provides insight into how schools can bridge specific gaps in knowledge, motivation, and organization for Continuous Improvement in order to replicate the success that other similar schools have achieved despite policy shifts that have often hindered school improvement efforts.
The study reveals that schools with few gaps in knowledge, motivation, and organization for Continuous Improvement are likely to see improvement in school and student achievement. As evidenced by observations at Shabazz Leadership Academy, schools should make efforts to ensure that teachers are trained effectively in the implementation theory and practices of Continuous Improvement and are provided with the appropriate resources to support their learning. Schools can motivate teachers to engage in Continuous Improvement by building confidence to support teacher self-efficacy and by developing a school culture that celebrates improvement. Additionally, schools should organize to support the changing demands of Continuous Improvement and ensure that teachers have the resources and leadership to address needs as they arise. Schools that take deliberate steps to identify potential gaps in knowledge, motivation, and organization for Continuous Improvement are likely to realize the full potential of the school improvement strategy and see improvement in their school, thereby increasing the likelihood of future success for their students.
References
Aderet-German, T., & Ben-Peretz, M. (2020). Using data on school strengths and weaknesses for
school improvement. Studies in Educational Evaluation, 64, 100831.
Alvarez, M. E., & Anderson-Ketchmark, C. (2011). Danielson's framework for teaching.
Children & Schools, 33(1), 61-63.
Bai, X., Zhang, F., Li, J., Guo, T., Aziz, A., Jin, A., & Xia, F. (2021). Educational big data:
predictions, applications, and challenges. Big Data Research, 26, 100270.
Baker, B. D., Farrie, D., & Sciarra, D. G. (2016). Mind the gap: 20 years of progress and retrenchment in school funding and achievement gaps (Research Report No. RR-16-15). Educational Testing Service. https://doi.org/10.1002/ets2.12098
Bessent, A., Bessent, W., Elam, J., & Long, D. (1984). Educational productivity council employs
management science methods to improve educational quality. Interfaces, 14(6), 1-8.
Bhuiyan, N., & Baghel, A. (2005). An overview of continuous improvement: from the past to the
present. Management Decision, 43(5), 761-771.
Biddle, J. K. (2002). Accelerated schools as professional learning communities.
Bocala, C., & Boudett, K. P. (2015). Teaching educators habits of mind for using data wisely.
Teachers College Record, 117, 040304.
Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2002). Comprehensive school
reform and student achievement: A meta-analysis.
Boudett, K. P., City, E. A., & Murnane, R. J. (2006). The ‘data wise’ improvement process.
Harvard Education Letter, 11(4), 1-3.
Bozkurt, A., Karakaya, K., Turk, M., Karakaya, Ö., & Castellanos-Reyes, D. (2022). The impact
of COVID-19 on education: A meta-narrative review. TechTrends: Linking Research and
Practice to Improve Learning, 66(5), 883-896.
doi:https://doi.org/10.1007/s11528-022-00759-0
Bryk, A. S. (2009). Support a science of performance improvement. Phi Delta Kappan, 90(8),
597-600.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How
America’s schools can get better at getting better. Harvard Education Press.
Bush-Mecenas, S. (2022). "The business of teaching and learning": Institutionalizing equity in
educational organizations through continuous improvement. American Educational
Research Journal, 59(3), 461-499. https://doi.org/10.3102/00028312221074404
Christoff, P. (2018). Running PDSA cycles. Current problems in pediatric and adolescent health
care, 48(8), 198-201.
Coburn, C. E., & Penuel, W. R. (2016). Research–practice partnerships in education: Outcomes,
dynamics, and open questions. Educational researcher, 45(1), 48-54.
Cochran-Smith, M. (2020). Teacher education for justice and equity: 40 years of advocacy.
Action in teacher education, 42(1), 49-59.
Colyvas, J. A. (2012). Performance metrics as formal structures and through the lens of social
mechanisms: When do they work and how do they influence?. American Journal of
Education, 118(2), 167-197.
Darling-Hammond, L. (2015). Getting teacher evaluation right: What really matters for
effectiveness and improvement. Teachers College Press.
Darling-Hammond, L. (2018). From “separate but equal” to “No Child Left Behind”: The
collision of new standards and old inequalities. In Thinking about schools (pp. 419-437).
Routledge.
Darling-Hammond, L., Bae, S., Cook-Harvey, C. M., Lam, L., Mercer, C., Podolsky, A., &
Stosich, E. L. (2016). Pathways to new accountability through the Every Student
Succeeds Act.
"Davey, B. (1991). Evaluating teacher competence through the use of performance assessment
tasks: An overview. Journal of Personnel Evaluation in Education, 5(2), 121.
https://usc.illiad.oclc.org/illiad/pdf/979089.pdf"
Doyle, O. (2020). COVID-19: Exacerbating educational inequalities. Public Policy, 9, 1-10.
Edmonds, R. (2020). Characteristics of effective schools. In The school achievement of minority
children (pp. 93-104). Routledge.
Ellis, C., & Castle, K. (2010). Teacher research as continuous process improvement. Quality
Assurance in Education: An International Perspective, 18(4), 271-285.
doi:https://doi.org/10.1108/09684881011079134
Feldhoff, T., Radisch, F., & Bischof, L. M. (2016). Designs and methods in school improvement research: A systematic review. Journal of Educational Administration, 54(2), 209-240. https://doi.org/10.1108/JEA-07-2014-0083
Ferguson, S. (2004). How grades fail our kids. Maclean's, 117(2), 28-34.
https://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=11843931&authtype
=sso&custid=s8983984
Fischer, C., Pardos, Z. A., Baker, R. S., Williams, J. J., Smyth, P., Yu, R., ... & Warschauer, M.
(2020). Mining big data in education: Affordances and challenges. Review of Research in
Education, 44(1), 130-160.
Fullan, M. (2011). Choosing the wrong drivers for whole system reform (pp. 3-4). Victoria:
Centre for Strategic Education.
Gorey, K. M. (2009). Comprehensive school reform: Meta-analytic evidence of Black-White
achievement gap narrowing. Education Policy Analysis Archives, 17(25).
Haertel, E. (1986). Measuring school performance to improve school practice. Education and
Urban Society, 18(3), 312–325. https://doi.org/10.1177/0013124586018003004
Halle, D., Beveridge, A. A., & Beveridge, S. (2014). As newly returned New York Police
Commissioner, Bill Bratton’s first task will be to regain the trust of the city’s most
heavily policed groups. LSE American Politics and Policy.
Hanushek, E. A., Peterson, P. E., Talpey, L. M., & Woessmann, L. (2019). The achievement gap
fails to close: Half century of testing shows persistent divide between haves and
have-nots. Education Next, 19(3), 8-18.
Harks, B., Rakoczy, K., Hattie, J., Besser, M., & Klieme, E. (2014). The effects of feedback on
achievement, interest and self-evaluation: the role of feedback’s perceived usefulness.
Educational Psychology, 34(3), 269-290.
Harris, A. (2000). What works in school improvement? Lessons from the field and future
directions. Educational research, 42(1), 1-11.
Herman, J. L., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry
and continuous improvement: Final report to the Stuart Foundation. Center for the Study
of Evaluation, National Center for Research on Evaluation, Standards, and Student
Testing, Graduate School of Education & Information Studies, University of California,
Los Angeles.
Hopfenberg, W. S. (1990). Accelerated Schools.
Hopkins, D. (2001). School Improvement for Real (1st ed.). Routledge.
https://doi.org/10.4324/9780203165799
Hopkins, D. (2020). Unleashing greatness-a strategy for school improvement. Australian
Educational Leader, 42(3), 8-17.
Hopkins, D., & Reynolds, D. (2001). The past, present and future of school improvement:
Towards the third age. British Educational Research Journal, 27(4), 459-475.
Horner, R. H. (2020). The marriage of policy, practices, and data to achieve educational reform.
American Journal on Intellectual and Developmental Disabilities, 125(5), 340-344.
Huber, D. J., & Conway, J. M. (2015). The effect of school improvement
planning on student achievement. Planning and Changing, 46(1), 56-70.
Hunzicker, J. (2017). Using Danielson's framework to develop teacher leaders. Kappa Delta Pi
Record, 53(1), 12-17.
Husband, T., & Hunt, C. (2015). A review of the empirical literature on No Child Left Behind
from 2001 to 2010. Planning and Changing, 46(1/2), 212.
Iwanicki, E. F., & McEachern, L. (1983). Teacher self-improvement: A promising approach to
professional development and school improvement. Journal of Staff Development, 4(1),
62-77. Retrieved from
http://libproxy.usc.edu/login?url=https://www.proquest.com/scholarly-journals/teacher-se
lf-improvement-promising-approach/docview/63508985/se-2
Jackson, C. K., Porter, S. C., Easton, J. Q., & Kiguel, S. (2020). Who benefits from attending
effective schools? Examining heterogeneity in high school impacts (No. w28194).
National Bureau of Economic Research.
Guthrie, J. W., & Springer, M. G. (2004). A Nation at Risk revisited: Did "wrong" reasoning result in "right" results? At what cost? Peabody Journal of Education, 79(1), 7-35. https://doi.org/10.1207/s15327930pje7901_2
Johnson, W. L., Johnson, A. M., & Johnson, J. W. (2018). The Three Generations of Effective
Schools Research. Online Submission.
Jurburg, D., Viles, E., Tanco, M., Mateo, R., & Lleó, Á. (2019). Understanding the main
organizational antecedents of employee participation in continuous improvement. TQM
Journal, 31(3), 359–376. https://doi-org.libproxy2.usc.edu/10.1108/TQM-10-2018-0135
Kamenetz, A. (2018). What ‘a nation at risk’ got wrong, and right, about US schools. National
Public Radio.
Kania, J., & Kramer, M. (2015). The equity imperative in collective impact. Stanford Social
Innovation Review, 1-6.
Kazouh, A., Hollowell, A., Fox, L., & Bentley-Edwards, K. (2020). Pre-K through 12 education
and COVID-19: Landscape analysis of impact indicators. Public School Forum of North
Carolina. Retrieved from ERIC Retrieved from
http://libproxy.usc.edu/login?url=https://www.proquest.com/reports/pre-k-through-12-edu
cation-covid-19-landscape/docview/2535065749/se-2
Kershner, R., Flutter, J., & Rudduck, J. (1998). Teacher research as a basis for school improvement: But is it useful beyond the school in which it was carried out? Improving Schools, 1(2), 59-62.
Kidron, Y., & Darwin, M. J. (2007). A systematic review of whole school improvement models.
Journal of Education for Students Placed at Risk, 12(1), 9-35.
Kilbourne, A. M., Beck, K., Spaeth‐Rublee, B., Ramanuj, P., O'Brien, R. W., Tomoyasu, N., &
Pincus, H. A. (2018). Measuring and improving the quality of mental health care: a
global perspective. World psychiatry, 17(1), 30-38.
Knouse, S. B., Carson, P. P., Carson, K. D., & Heady, R. B. (2009). Improve constantly and forever: The influence of W. Edwards Deming into the twenty-first century. The TQM Journal, 21(5), 449-461. https://doi.org/10.1108/17542730910983371
Kuhfeld, M., Soland, J., Lewis, K., Ruzek, E., & Johnson, A. (2022). The COVID-19 school
year: Learning and recovery across 2020-2021. Aera Open, 8, 23328584221099306.
Kyriakides, L., Creemers, B. P., Antoniou, P., Demetriou, D., & Charalambous, C. Y. (2015). The
impact of school policy and stakeholders' actions on student learning: A longitudinal
study. Learning and Instruction, 36, 113-124.
Leithwood, K., & Steinbach, R. (1992). Improving the problem-solving expertise of school
administrators: Theory and practice. Education and Urban Society, 24(3), 317-345.
Levin, H. M. (1989). Accelerated schools: A new strategy for at-risk students. Policy Bulletin.
Levin, H. M. (2017). Accelerated Schools for At-Risk Students. CPRE.
Lezotte, L. W., & Bancroft, B. A. (1985). School improvement based on effective schools
research: A promising approach for economically disadvantaged and minority students.
The Journal of Negro Education, 54(3), 301-312.
Lezotte, L. (1991). Correlates of effective schools. The first and second generation.
Lezotte, L. (2001). Revolutionary and evolutionary: The effective schools movement. Okemos,
MI: Effective Schools Products, Ltd, 5.
Lim, C. P. (2007). Improving schools and educational systems edited by A. Harris and J. Chrispeels. British Journal of Educational Technology, 38(6), 1139-1140. https://doi.org/10.1111/j.1467-8535.2007.00772_8.x
Lohr, S. L. (2015). Red beads and profound knowledge: Deming and quality of education.
Education Policy Analysis Archives, 23(80), n80.
Lum, C., & Koper, C. S. (2015). Evidence-based policing. Critical Issues in Policing:
Contemporary Readings, 260-274.
Lynch, B. D. (2000). Policing efficacy: Socioeconomic destiny or effective information analysis.
Journal of International Information Management, 9(1), 6.
Malik, R. S. (2016). Equipping Teachers and School Leaders with Effective Pedagogical
Practices and Research-Based Teaching for Deep Learning: Global Scenarios to Improve
Teacher Education: Focus on Some Successful Education Systems. Proceeding of
ICMSE, 3(1), M-1.
Masters, G. N. (2010). Teaching and learning school improvement framework.
Mavrogordato, M., Youngs, P., Donaldson, M. L., Kang, H., & Dougherty, S. M. (2023).
Motivating leadership change and improvement: How principal evaluation addresses
intrinsic and extrinsic sources of motivation. Educational Administration Quarterly,
0013161X231188706.
McGarrell, E. F., Freilich, J. D., & Chermak, S. (2007). Intelligence-led policing as a framework
for responding to terrorism. Journal of Contemporary Criminal Justice, 23(2), 142-158.
https://doi.org/10.1177/1043986207301363
McGeehan, T. P. (2018). Countering russian disinformation. The US Army War College
Quarterly: Parameters, 48(1), 7.
McLean, R., & Antony, J. (2014). Why continuous improvement initiatives fail in manufacturing
environments? A systematic review of the evidence. International Journal of
Productivity and Performance Management, 63(3), 370-376.
https://doi.org/10.1108/IJPPM-07-2013-0124
Meyers, C. V., & VanGronigen, B. A. (2021). Planning for what? An analysis of root cause
quality and content in school improvement plans. Journal of Educational Administration,
59(4), 437–453. https://doi-org.libproxy1.usc.edu/10.1108/JEA-07-2020-0156
Molloy, E., Boud, D., & Henderson, M. (2020). Developing a learning-centred framework for
feedback literacy. Assessment & Evaluation in Higher Education, 45(4), 527-540.
Muijs, D., Kyriakides, L., Van der Werf, G., Creemers, B., Timperley, H., & Earl, L. (2014).
State of the art–teacher effectiveness and professional learning. School Effectiveness and
School Improvement, 25(2), 231-256.
Muir, T., Livy, S., Herbert, S., & Callingham, R. (2018). School leaders’ identification of school
level and teacher practices that influence school improvement in national numeracy
testing outcomes. The Australian Educational Researcher, 45, 297-313.
Myung, J., & Kimner, H. (2020). Continuous improvement in schools in the COVID-19 context:
A summary brief. Policy Analysis for California Education, PACE.
Retrieved from
http://libproxy.usc.edu/login?url=https://www.proquest.com/reports/continuous-improve
ment-schools-covid-19-context/docview/2535224762/se-2
National Commission on Excellence in Education. (1983). A nation at risk: The imperative for
educational reform. The Elementary School Journal, 84(2), 113-130.
Nelson, J., & Campbell, C. (2019). Using evidence in education. In What Works Now? (pp.
131-150). Policy Press.
Newmann, F. M., Smith, B., Allensworth, E., & Bryk, A. S. (2001). Instructional program
coherence: What it is and why it should guide school improvement policy. Educational
Evaluation and Policy Analysis, 23(4), 297-321.
Ngasi, M. A., Dawo, J. I., & Sika, J. (2020). Linking school-based tangible non-monetary incentives for teachers with students' academic performance in public secondary schools in Kisumu West Sub-county, Kenya.
Nir, A. E., & Ben Ami, T. (2005). School-parents relationship in the era of school-based
management: Harmony or conflict?. Leadership and Policy in Schools, 4(1), 55-72.
Park, S., Hironaka, S., Carver, P., & Nordstrum, L. (2013). Continuous Improvement in
Education. Advancing Teaching--Improving Learning. White Paper. Carnegie
Foundation for the Advancement of Teaching.
Park, V. (2019). Bridging the Knowing-Doing Gap for Continuous Improvement: The Case of
Long Beach Unified School District. Policy Analysis for California Education, PACE.
Peterson, M. (2005). Intelligence-led policing: The new intelligence architecture. Washington,
DC: Bureau of Justice Assistance.
Peterson, P. E. (2016). The end of the Bush-Obama regulatory approach to school reform: Choice
and competition remain the country's best hope. Education Next, 16(3), 22-33.
Pijanowski, J. (2019). Historical Policy Influences on Balancing Educational Equity, Adequacy,
and Local Control. eJEP: eJournal of Education Policy.
Prenger, R., Poortman, C. L., & Handelzalts, A. (2017). Factors influencing teachers’
professional development in networked professional learning communities. Teaching and
Teacher Education, 68, 77-90.
Proger, A. R., Bhatt, M. P., Cirks, V., & Gurke, D. (2017). Establishing and sustaining networked
improvement communities: Lessons from Michigan and Minnesota. US Department of
Education.
Rafiq, S., Afzal, A., & Kamran, F. (2022). Exploring the problems in teacher evaluation process
and its perceived impact on teacher performance. Gomal University Journal of Research,
38(4), 482-500.
Rohanna, K. (2017). Breaking the “adopt, attack, abandon” cycle: A case for improvement
science in K–12 education. New Directions for Evaluation, 2017(153), 65-77.
Roig-Vila, R., & Alvárez-Herrero, J. F. (2022). Learning Communities. In Innovation and ICT in
Education (pp. 93-102). River Publishers.
Rothstein, R. (2015). The racial achievement gap, segregated schools, and segregated
neighborhoods: A constitutional insult. Race and Social Problems, 7, 21-30.
Runesson, U. (2006). What is it possible to learn? On variation as a necessary condition for
learning. Scandinavian Journal of Educational Research, 50(4), 397-410.
Salmond, S. W., & Echevarria, M. (2017). Healthcare transformation and changing roles for
nursing. Orthopedic nursing, 36(1), 12.
Sanchez, L., & Blanco, B. (2014). Three decades of continuous improvement. Total Quality
Management & Business Excellence, 25(9-10), 986-1001.
Shaked, H. (2023). How organizational management supports instructional leadership. Journal of
Educational Administration, 61(1), 60-77.
Shaked, H., & Schechter, C. (2017). Systems thinking among school middle leaders. Educational
Management Administration & Leadership, 45(4), 699-718.
Shakman, K., Wogan, D., Rodriguez, S., Boyce, J., & Shaver, D. (2020). Continuous
Improvement in Education: A Toolkit for Schools and Districts. REL 2021-014. Regional
Educational Laboratory Northeast & Islands.
Silverstein, R. (2014). Data Inquiry for School Improvement: Root Cause Analysis. Q&A with
Roni Silverstein. REL Mid-Atlantic Webinar. Regional Educational Laboratory
Mid-Atlantic.
Singh, J., & Singh, H. (2014). Performance enhancement of a manufacturing industry by using
continuous improvement strategies–a case study. International Journal of Productivity
and Quality Management, 14(1), 36-65.
Skedsmo, G., & Huber, S. G. (2019). Forms and practices of accountability in education.
Educational Assessment, Evaluation and Accountability, 31, 251-255.
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and
research. Educational Researcher, 31(7), 15-21.
Spain, A., & McMahon, K. (2016). More than just test scores: Leading for improvement with an
alternative community-driven accountability metric. Journal of Cases in Educational
Leadership, 19(2), 21-30.
Steifel, L. (2013). Measuring school performance & efficiency. Routledge.
Stoll, L. (2014). Teacher growth in the effective school. In Teacher development and educational
change (pp. 104-122). Routledge.
Stronge, J. H. (2006). Teacher evaluation and school improvement: Improving the educational
landscape. Evaluating Teaching: A Guide to Current Thinking and Best Practice, 2, 1-23.
Sun, J., Levey, J., & Vaux, N. (2015). An evolving data wise culture (DWC): A case study.
Journal of Interdisciplinary Studies in Education, 4(1), 78-100.
Szőköl, I. (2018). Continuous improvement of the teaching process in primary education.
Journal of Language and Cultural Education, 6(1), 53-64.
Temponi, C. (2005). Continuous improvement framework: implications for academia. Quality
Assurance in Education, 13(1), 17-36.
The importance of data comparability in today’s #assessment environment #edchat. (2016).
Teach. Learn. Grow. Retrieved Feb 15, 2023, from
https://www.nwea.org/blog/2016/the-importance-of-data-comparability-in-todays-assess
ment-environment/
Thornton, B., Shepperson, T., & Canavero, S. (2007). A systems approach to school
improvement: program evaluation and organizational learning. Education, 128(1), 48-56.
Tichnor-Wagner, A., Wachen, J., Cannata, M., & Cohen-Vogel, L. (2017). Continuous
improvement in the public school context: Understanding how educators respond to
plan–do–study–act cycles. Journal of Educational Change, 18, 465-494.
Topolovčan, T., & Dubovicki, S. (2019). The heritage of the cold war in contemporary curricula
and educational reforms. Center for Educational Policy Studies Journal, 9(2), 11-32.
U.S. House of Representatives. (2022). Budget cuts and lost learning: Assessing the impact of COVID-19 on public education. Hearing before the Committee on Education and Labor, U.S. House of Representatives, One Hundred Sixteenth Congress, Second Session (June 15, 2020), Serial No. 116-58. Retrieved from http://libproxy.usc.edu/login?url=https://www.proquest.com/government-official-publications/budget-cuts-lost-learning-assessing-impact-covid/docview/2722400783/se-2
van Elk, R., & Kok, S. (2016). The impact of a comprehensive school reform policy for weak
schools on educational achievement; results of the first 4 years. De Economist, 164,
445-476.
Vinodh, S., Antony, J., Agrawal, R., & Douglas, J. A. (2021). Integration of continuous improvement strategies with industry 4.0: A systematic review and agenda for further research. TQM Journal, 33(2), 441-472. https://doi.org/10.1108/TQM-07-2020-0157
Wade, H. H. (2001). Data inquiry and analysis for educational reform.
Whalen, S. P. (2020). Transforming Central Office Practices for Equity, Coherence, and
Continuous Improvement: Chicago Public Schools under the Leadership of Dr. Janice K.
Jackson. Online Submission.
Wolcott, M. D., McLaughlin, J. E., Hubbard, D. K., Rider, T. R., & Umstead, K. (2021). Twelve
tips to stimulate creative problem-solving with design thinking. Medical Teacher, 43(5),
501-508.
Yurkofsky, M. M., Peterson, A. J., Mehta, J. D., Horwitz-Willis, R., & Frumin, K. M. (2020).
Research on continuous improvement: Exploring the complexities of managing
educational change. Review of Research in Education, 44(1), 403-433.
Zangwill, W. I., & Kantor, P. B. (1998). Toward a theory of continuous improvement and the
learning curve. Management Science, 44(7), 910-920.
https://doi.org/10.1287/mnsc.44.7.910
Zsambok, C. E., & Klein, G. (1997). Naturalistic decision making. Psychology Press. https://doi.org/10.4324/9781315806129
Appendix A: Interview Protocol
Interview Protocol
Participant ID: ____ Date of Interview: ______ Start Time: _____ End Time: _____
Location of Interview: ___ On-site ____ Video Conference
Research Questions:
● RQ1. What is teacher knowledge and motivation for Continuous Improvement practices?
● RQ2. How does a school’s culture and climate impact teachers’ ability to implement
Continuous Improvement effectively?
● RQ3. What recommendations do teachers have around knowledge, motivation, and
school organization that may help other schools increase their likelihood of successfully
implementing Continuous Improvement systems?
Interview Questions and Potential Probes (a notes column was provided for the interviewer)

1. Tell me your name and how long you have been teaching at your school.
Probes: How did you get interested in teaching? What has been the best part of your teaching experience? Has it been what you expected it to be? What has been most shocking?

2. Continuous Improvement has been defined as an intentional system of improvement that requires a cyclical process of disciplined inquiry into challenges. What do you know about Continuous Improvement or the CIWP?

3. Think back to your first introduction to Continuous Improvement at your school. Please share how you were introduced to Continuous Improvement.
Probes: Can you tell me how you were introduced to the CIWP? Was there any training made available?

4. Describe your initial feelings in response to the introduction of Continuous Improvement.
Probe: How did the other teachers receive it?

KNOWLEDGE
5. Can you walk me through the standard process, if any, that was used to train you on the use of Continuous Improvement (CIWP) at your school?
Probes: Were goals and objectives for continuous improvement made clear? Was a model or exemplar provided for "how to implement your CIWP"? Were goals for CIWP training made clear? In your training on CIWP monitoring and implementation, was any feedback provided or effort made to ensure that you understood the processes?

MOTIVATION
6. What, if anything, motivated you to implement Continuous Improvement or to engage with your CIWP?
Probes: If you have been motivated, were you self-motivated or were there external motivators? Do you think you or your colleagues felt confident that Continuous Improvement would have a positive impact? How often would you say you and your peers are acknowledged for achievement? Can you talk about a time when you were having a hard time reaching an improvement goal and your problem was heard and considered?

ORGANIZATION
7. Can you talk about any shifts in your school's systems, policies, and procedures that were made to support Continuous Improvement?
Probes: In what ways, if any, has your school aligned its structures and processes to assist you in reaching your improvement goals? How much time does your school provide for strategizing on your classroom improvement?

8. What practices were used to ensure that Continuous Improvement was sustained in your school?
Probes: Can you tell me about your school's vision for improvement, what their main goals are, and how progress toward the goals is shared? In what ways would you say leadership ensures that teachers have the knowledge, skills, and motivational support to reach their goals?

RECOMMENDATIONS
9. Based on your experience, what recommendations would you have for a school that is struggling to implement Continuous Improvement?

10. The goal of this study is to capture this school's CI experience and to get a better understanding of best practices that can be shared to help other schools improve. What else can you share that might be important considering these goals?
Appendix B: Interview Data Analysis
Participant ID: ____ Date of Interview: ______ Start Time: _____ End Time: _____
Knowledge Average: ______
Motivation Average: ______
Organization Average: ______
Knowledge
Criteria (rated: Most Evident = 4, Some Evidence = 3, Low Evidence = 2, Least Evident = 1)
“How to do it” information and aids
provided
“How to” aids provided with practice
opportunity
Learning material is shared with
clear performance objectives and
goals
Trainings are exact replicas of real
life application of skills
Performance is monitored and
feedback is provided to ensure
understanding and application
Opportunity to apply training is
presented in a timely manner after
training (w/i 72 hours)
Motivation
Criteria (rated: Most Evident = 4, Some Evidence = 3, Low Evidence = 2, Least Evident = 1)
There is personal and team
confidence
Positive feelings about organizational and environmental barriers to achieving goals
Positive feeling about personal and
team values for goals
Challenging but achievable goals
assigned
Acknowledged for achievement
Corrective feedback focused on
strategy and not person
Problems are heard and considered
Organization
Criteria (rated: Most Evident = 4, Some Evidence = 3, Low Evidence = 2, Least Evident = 1)
Clear vision, goals, and ways to
measure progress
Aligned the structures and processes
of the organization with goals
Communicated constantly and
candidly to those involved about
plans and progress
Top management continually
involved in the improvement process
Adequate knowledge, skills, and
motivational support for everyone
Process matched the expectation of
Continuous Improvement
Appendix C: Information Sheet for Exempt Research
University of Southern California
Rossier School of Education
Waite Phillips Hall, 3460 Trousdale Pkwy
Los Angeles, CA 90089
INFORMATION SHEET FOR EXEMPT RESEARCH
STUDY TITLE: The Continuous Failure of Continuous Improvement: The Challenge of
Implementing Continuous Improvement in Low-Income Schools
PRINCIPAL INVESTIGATOR: Jacare E. Thomas, MSPA
FACULTY ADVISOR: Maria Ott, PhD
You are invited to participate in a research study. Your participation is voluntary. This
document explains information about this study. You should ask questions about
anything that is unclear to you.
PURPOSE
The purpose of this study is to understand knowledge, motivation, and organizational
support of Continuous Improvement implementation in public schools. The goal of this
study is to better understand practices that can be used to support similar schools in
implementing Continuous Improvement to impact student achievement and school
success. You are invited as a possible participant because you:
● Have at least 3 years of teaching experience
● Have participated in school Continuous Improvement implementation
● Teach at a school that has had 3 years of improvement
● Were teaching prior to March 13, 2020
PARTICIPANT INVOLVEMENT
If you decide to take part, you will be asked to participate in a one-hour interview
conducted over Zoom or in person if you choose. You may be contacted after the
interview has ended if clarification is needed. Audio recording and transcription will be
used to ensure accuracy of your responses. You may decline to be recorded and
continue with your participation.
CONFIDENTIALITY
The members of the research team and the University of Southern California
Institutional Review Board (IRB) may access the data. The IRB reviews and monitors
research studies to protect the rights and welfare of research subjects.
Your responses will be kept confidential. You will be given a pseudonym and your
responses will not be identifiable. The recording and transcript will be kept in a private
Zoom account on a private computer, both with strong passwords. You have a right to
review and edit the audio recordings and transcripts. All data collected will be destroyed
promptly after analysis.
INVESTIGATOR CONTACT INFORMATION
If you have any questions about this study, please contact Jacare Thomas at
jacareth@usc.edu or Maria Ott, PhD at mariaott@rossier.usc.edu.
IRB CONTACT INFORMATION
If you have any questions about your rights as a research participant, please contact the
University of Southern California Institutional Review Board at (323) 442-0114 or email
irb@usc.edu.