A Call for Transparency and Accountability: Continuous Improvement in State Education
Agency Policy Implementation
Rebekah Renee Harris
Rossier School of Education
University of Southern California
A dissertation submitted to the faculty
in partial fulfillment of the requirements for the degree of
Doctor of Education
December 2023
© Copyright by Rebekah Renee Harris 2023
All Rights Reserved
Abstract
This study applies Clark and Estes’s (2008) gap analysis model to understand how a state
education agency’s decision making during reform policy implementation impacts the success
of the initiative. Specifically, the purpose of this study was to assess the effect of the Texas
Education Agency’s (TEA) decision making, during the policy design and implementation of the
House Bill 3 (HB3) reading academies, to identify strengths and areas of growth for continuous
improvement, as perceived by cohort leaders, third-party contractors tasked with implementing
the initiative. Clark and Estes’s gap analysis model was used to create a semistructured interview
protocol. Ten cohort leaders who implemented the HB3 reading academies during the first year
of the initiative were interviewed. Findings from this study indicate that TEA decision making related
to organizational structure supports negatively influenced cohort leader knowledge and
motivation related to their role as program implementers; however, cohort leaders could
overcome this influence by leveraging personal relationships and internal motivation to serve
their wider community. This study brings awareness to the role of state education agencies as the
designers and lead implementers of reform policy, an area where there is a current gap in
research.
Dedication
To the teachers who show up every day and give their best to their students. Thank you for your
service. It matters.
To my teachers, who have guided me, loved me, and supported me throughout my educational
career.
To my very favorite teacher—my Mom. Being your daughter was the greatest privilege, and I
am grateful to be part of your legacy. I’ll love you forever. I’ll like you for always. Forever and
ever, my Mama you’ll be: Charlotte Renee Minton, 1962–2022.
Acknowledgments
This dissertation would not have been possible without the participation of the cohort
leaders who shared their experiences implementing reading academies. Thank you for your time,
your honesty, and your reflections. It was an absolute pleasure learning from you.
A huge debt of gratitude is owed to my wonderful dissertation chair, Dr. Ekaterina
Moore. Thank you for your guidance, mentorship, and support over the last 2 years. Working
with you made all the difference.
And finally, many thanks to my committee members, Dr. Eric Canney and Dr. Maria Ott.
I am so grateful for your feedback and support throughout the dissertation process. Your
expertise was invaluable.
Table of Contents
Abstract
Dedication
Acknowledgments
List of Tables
List of Figures
Chapter One: Introduction to the Study
    Context and Background of a State Education Policy Initiative
    Stakeholder of Focus
    Purpose of the Project and Research Questions
    Importance of the Study
    Overview of Theoretical Framework and Methodology
    Definitions
    Organization of the Dissertation
Chapter Two: Literature Review
    A History of Federal Education Accountability Reform in the United States
    Continuous Improvement and Education Reform
    Continuous Improvement in State Education Agencies
    A Model for Assessment: State Education Reform Policy Implementation
    Conceptual Framework
    Summary
Chapter Three: Methodology
    Research Questions
    Overview of Design
    Research Setting
    The Researcher
    Data Sources
    Data Collection Procedures
    Data Analysis
    Credibility, Transferability, Dependability, and Confirmability
    Ethics
Chapter Four: Findings
    Participants
    Findings
    Research Question 1: What Were the Cohort Leaders’ Perceptions of Their Knowledge Related to HB3 Reading Academies Implementation?
    Research Question 2: What Were the Motivational Influences Affecting Cohort Leaders’ Ability to Implement HB3 Reading Academies?
    Research Question 3: How Did State Organizational Structures Impact Implementation of HB3 Reading Academies From the Cohort Leaders’ Perspective?
    Summary
Chapter Five: Discussion and Recommendations
    Knowledge and Motivation
    Organizational Structures
    Limitations and Delimitations
    Recommendations for Future Research
    Conclusion
References
Appendix A: Interview Protocol
Appendix B: Recruitment Documents
Appendix C: Participant Theme Review Email Instructions
Appendix D: Participant Theme Review Survey
List of Tables
Table 1: Study Themes and Theme Analysis Survey Statements
Table 2: Knowledge Theme Analysis Survey Findings
Table 3: Motivation Theme Analysis Survey Findings
Table 4: Organizational Structures Theme Analysis Survey Findings
Table A1: Questions and Probes
Table C1: Themes
List of Figures
Figure 1: Clark and Estes’ Gap Analysis Process Model
Figure 2: Conceptual Framework: State Education Agency Performance Gap Analysis
Figure D1: Participant Theme Review Survey Instructions
Figure D2: Participant Theme Review Survey Question 1
Figure D3: Participant Theme Review Survey Question 1.A
Figure D4: Participant Theme Review Survey Question 1.B
Figure D5: Participant Theme Review Survey Question 2
Figure D6: Participant Theme Review Survey Question 2.A
Figure D7: Participant Theme Review Survey Question 2.B
Figure D8: Participant Theme Review Survey Question 3
Figure D9: Participant Theme Review Survey Question 3.A
Figure D10: Participant Theme Review Survey Question 3.B
Figure D11: Participant Theme Review Survey Question 4
Figure D12: Participant Theme Review Survey Question 4.A
Figure D13: Participant Theme Review Survey Question 4.B
Figure D14: Participant Theme Review Survey Question 5
Figure D15: Participant Theme Review Survey Question 5.A
Figure D16: Participant Theme Review Survey Question 5.B
Figure D17: Participant Theme Review Survey Question 6
Figure D18: Participant Theme Review Survey Question 6.A
Figure D19: Participant Theme Review Survey Question 6.B
Figure D20: Participant Theme Review Survey Question 7
Figure D21: Participant Theme Review Survey Question 7.A
Figure D22: Participant Theme Review Survey Question 7.B
Figure D23: Participant Theme Review Survey Question 8
Figure D24: Participant Theme Review Survey Question 8.A
Figure D25: Participant Theme Review Survey Question 8.B
Chapter One: Introduction to the Study
Many complex factors impact student achievement (Manna, 2012). To quantify and
assess the impact of education policy implementation on student outcomes, researchers have
recommended education agencies collect and analyze data on the implementation of reform
policy at the organizational level to determine its impact on the success of the initiative (Cohen-Vogel et al., 2015); however, state education agencies (SEAs) regularly use student performance
scores rather than organizational performance data to evaluate education policy implementation
(VanGronigen & Meyers, 2019). Without transparent data related to the design and
implementation of a policy initiative, SEAs are limited in their ability to measure the impact of
policy design and implementation for continuous improvement use (Redding et al., 2017;
Shakman et al., 2020; VanGronigen & Meyers, 2019).
This research project analyzed the impact of an SEA’s design of an education reform
policy from the perspective of third-party contractors hired to implement the project in local
contexts. The goal in selecting this topic was to better understand how an SEA’s decision making
impacts the overall success of an initiative, in the absence of public-facing organizational
performance data, and to identify strengths and areas of growth for continuous organizational
improvement. Specifically, this case study focuses on the Texas Education Agency’s (TEA,
n.d.b) design and rollout of the House Bill 3 (HB3) reading academies, a reform policy initiative
focused on improving literacy instruction for early childhood students in Texas.
Context and Background of a State Education Policy Initiative
The TEA (n.d.b) serves as the SEA for the state of Texas and oversees the 1,247 public
education entities operating in the state. The agency defined its mission as “improv[ing]
outcomes for all public school students in the state by providing leadership, guidance, and
support to school systems” (TEA, n.d.b, para. 3). As part of the Texas state governmental system,
the TEA’s roles and responsibilities include administering state and federal assessments,
overseeing the development of statewide curriculum adoption procedures, and monitoring
compliance related to state and federal legislation (TEA, n.d.b). The TEA’s work is divided into
four strategic priorities and three organizational enablers. The strategic priorities include (a)
recruit, support, and retain teachers and principals, (b) build a foundation of reading and math,
(c) connect high school to career and college, and (d) improve low-performing schools (TEA,
2020). The three organizational enablers are (a) increase transparency, fairness, and rigor in
district and campus academic and financial performance, (b) ensure compliance, effectively
implement legislation and inform policymakers, and (c) strengthen organizational foundations
(TEA, 2020). The TEA (n.d.b) is led by the state commissioner of education, appointed by the
governor. Seven deputy commissioners report to the state commissioner of education and are
responsible for overseeing the program implementation for their respective divisions (TEA,
n.d.b). Additionally, the TEA (n.d.b) is overseen by the Texas Board of Education and the Texas
Board for Educator Certification, both of which contain a mixture of appointed and elected positions.
During the 86th Texas Legislative Session in June 2019, HB3, legislation affecting
school finance reform and instructional excellence, was passed and signed into law. As part of
the law, each teacher serving students in kindergarten through Grade 3 and all elementary
principals were required to participate in a teacher literacy achievement academy by the 2022–
2023 school year (HB3, 2018). The TEA was charged with assisting school districts in
complying with this requirement and monitoring the program’s implementation (HB3, 2018).
In response to HB3, the TEA created the HB3 reading academies, a 60-hour professional
development experience to be completed over the course of 11 months. As part of the
implementation plan, the TEA selected 30 authorized providers—education agencies, districts, or
nonprofit education organizations—to provide HB3 reading academies to the state’s districts and
public charter schools. Authorized providers were tasked with implementing two reading
academy delivery models: (a) blended and (b) comprehensive. The blended delivery model
consisted of online modules that a participant was required to complete in a given time frame.
The comprehensive delivery model provided in-person professional development and
individualized coaching sessions for participants. School districts and charter organizations were
required to pay an enrollment fee for all participants, with the blended model costing $400 per
participant and the comprehensive model costing $3,000 per participant.
In July 2020, 13 months after the initial passage of HB3, the first HB3 reading academies
cohorts were launched. These initial cohorts began amid the COVID-19 pandemic, as teachers
were preparing to return to school for the first time since leaving their classrooms in March
2020. Due to federal regulations requiring social distancing of large groups, the
comprehensive cohorts were moved online, with professional development and coaching
sessions held over the Zoom platform.
Stakeholder of Focus
To provide HB3 reading academies to required attendees, authorized providers were
charged with hiring cohort leaders, or literacy coaches, to lead participants through the HB3
reading academies content. To be eligible to serve as a cohort leader for an authorized provider,
applicants were required to pass a three-part literacy screener designed and moderated by TEA.
This literacy screener included a multiple-choice assessment of literacy acquisition knowledge, a
multiple-choice assessment of situational instructional simulations, and instructional work
products graded by a TEA-guided team. While passing the literacy screener was the only
TEA-required qualification to become a cohort leader, many authorized providers added
experience requirements to their job postings, including previous experience
as a classroom teacher and/or as a literacy coach. As such, most cohort leaders came into the role
with experience in the public education sector and previous experience with educational
coaching.
While school districts and charter organizations were required to contract directly with an
authorized provider to enroll staff in an HB3 reading academy cohort, TEA remained largely in
control of the implementation process through detailed business rules that authorized providers
were required to follow, lest they lose their authorized provider status during one of the required
biyearly reauthorizations. Additionally, the TEA explicitly directed the role requirements of
cohort leaders, even though authorized provider organizations hired cohort leaders. For example,
job-specific training for cohort leaders led by a TEA-contracted organization and mechanisms
for the TEA to monitor cohort leader grading were built into implementation procedures.
The HB3 reading academies cohort leaders served as the stakeholder of focus for this
research study. Because of their direct interaction with the TEA’s implementation rules,
regulations, and trainings, along with their regular contact with school staff enrolled in the
professional development experience, they bring a unique insight into the impact of the TEA’s
decision making on the overall implementation of the reform policy. As part of this study, cohort
leaders were interviewed to assess the perceived impact of TEA planning and decision making
on their ability to successfully implement the HB3 reading academies in their cohorts.
Purpose of the Project and Research Questions
This study aims to assess the effect of the TEA’s decision making during the policy
design and implementation of the HB3 reading academies to identify strengths and areas of
growth for continuous organizational improvement, as perceived by cohort leaders. The HB3
reading academies are an ideal case study to assess the strengths and areas of growth of state
agency decision making in continuous improvement research due to two characteristics. First, the
explicit centralized control of project implementation by TEA supports the reader in drawing
conclusions about the impact of SEA decision making on an education policy initiative.
Secondly, because the cohort leader role was designed by TEA but cohort leaders were hired by
local authorized providers, cohort leader participants are well positioned to provide insight into TEA’s role during
implementation without being constrained by an employer/employee relationship. Three research
questions were used:
1. What were the cohort leaders’ perceptions of their knowledge related to HB3 reading
academies implementation?
2. What were the motivational influences affecting cohort leaders’ ability to implement
HB3 reading academies?
3. How did state organizational structures impact implementation of HB3 reading
academies from the cohort leaders’ perspective?
Importance of the Study
The passage of No Child Left Behind (NCLB) in 2001 fundamentally changed the role of
SEAs from that of compliance monitor to one deeply embedded in the design
and implementation of education reform policy (Heise, 2017; Shaul & Ganson, 2005;
VanGronigen & Meyers, 2019). After the passage of NCLB, states were held responsible for the
academic outcomes of their students by the national government (Heise, 2017). In response to
this new accountability, SEAs created and led school reform policy initiatives to support student
academic success (VanGronigen & Meyers, 2019).
Designing and implementing policy in and of itself fails to ensure successful student
outcomes (Cohen-Vogel et al., 2015). A survey of states indicated 80% had “significant gaps in
expertise” in their capacity to support education reform initiatives (Tanenbaum et al., 2015, p. 1).
These gaps are evident in the wide variation in access to excellence and equity in public education
across states (Manna, 2012). These gaps in expertise are especially concerning given
that, since the passage of the Every Student Succeeds Act (ESSA) of 2015, states hold nearly
complete control over the mandate, design, and implementation of education reform efforts. In a
democracy, constituents’ role is to hold the government responsible for its actions; however,
to do so, they must have access to data that allow public understanding and analysis of
governmental activity (Clinton & Grissom, 2015). By collecting, using, and transparently sharing
data related to organizational education reform design and implementation, SEAs will have the
opportunity to use continuous improvement research to systematically improve public education
in their state (Shakman et al., 2020) and provide greater transparency to their constituents in the
name of governmental accountability (U.S. Department of Defense Education Agency, 2018).
Overview of Theoretical Framework and Methodology
The theoretical framework guiding this research study is Clark and Estes’s (2008) gap
analysis framework. This framework is a continuous improvement model used to
identify performance gaps in the areas of knowledge, motivation, and structural support to ensure
an organization’s ability to meet key goals and metrics (Clark & Estes, 2008). This theoretical
framework allows this study to analyze how SEA decision making in these three
critical areas supported or inhibited cohort leaders’ ability to effectively implement the HB3
reading academies.
In Clark and Estes’s (2008) gap analysis framework, a process model is provided to
support organizational success in reaching performance outcomes. This model includes
organizational and programmatic goal setting, identification of performance gaps, analysis of the
cause of performance gaps, identification of solutions, and implementation of identified solutions
(Clark & Estes, 2008). This study used a modified version of this framework, including an
additional focus on performance strengths in addition to the identification of performance gaps.
The first step of the process model is to identify organizational goals (Clark & Estes,
2008). While not explicitly stated by the TEA in relation to HB3 reading academies, two
overarching goals related to this project can be drawn from the TEA strategic plan: (a) Strategic
Priority 2: build a foundation of reading and math and (b) Enabler 3: Ensure compliance,
effectively implement legislation, and inform policymakers. Specifically, effective
implementation of legislation was the focus of this study.
During the implementation of the HB3 reading academies, the TEA did not provide
public-facing programmatic goals for the initiative. Without a measurable set of outcomes, best
practices in established research were used to assess TEA decision making and implementation
impacts. This study leveraged the three areas identified by Clark and Estes (2008) as the
recommended focus of gap analyses: the knowledge, skills, and motivation of those involved in
the implementation, and the organizational barriers that inhibited it.
The data collection process allowed cohort leaders to provide perspective on identifying
performance strengths, gaps, and potential causes for each. To collect relevant data, a qualitative
case study approach was used to analyze cohort leaders’ perceptions of the quality of support
they received from the TEA throughout the implementation process. In addition, an inductive
qualitative approach allowed for a greater understanding of the lived experience of the cohort
leaders and supported the identification of larger themes in state education policy implementation
(Creswell & Creswell, 2018). To meet these research goals, the data collection process for this
dissertation consisted of semistructured interviews with individual cohort leaders (Merriam &
Tisdell, 2016). After data collection was complete, the author examined performance strengths
and gaps through the data analysis process and provided solutions based on the analysis and
related research.
Definitions
The key topics of this dissertation are education reform, continuous improvement, and
accountability. Each term is defined in this section. While both continuous improvement and
accountability have a rich depth of meaning outside the education sector, the definitions provided
here have been taken from education organizations to provide an additional layer of contextual
meaning specific to this study. Terms related to the context of the study are also
defined.
Accountability is the obligation to take responsibility for performance considering
commitments and expected outcomes (U.S. Department of Defense Education Agency, 2018).
An authorized provider is an organization selected by the TEA to implement HB3 reading
academies locally. Authorized providers employ cohort leaders.
In this study, the blended model refers to the HB3 reading academies model that requires
participants to complete online content modules. A cohort leader can lead a total of three blended
cohorts at a time, with a maximum of 300 participants.
In this study, cohort refers to a group of teachers participating in HB3 reading academies.
In the blended model, a cohort consists of 100 teachers who work through content in the same
online course; in the comprehensive model, it consists of 60 teachers who attend the same
in-person professional development sessions.
Cohort leaders are literacy coaches responsible for leading a cohort of teachers through
the HB3 reading academies.
The comprehensive model is the HB3 reading academies model that requires participants
to attend in-person professional development sessions and complete individual coaching
sessions. A cohort leader can lead a total of one comprehensive cohort at a time, with a
maximum of 60 participants.
Continuous improvement involves using a cyclical and iterative assessment process that
supports continuous growth and systems change (Bae, 2018; Cohen-Vogel et al., 2015; Joyce &
Cartwright, 2020; Kaufman et al., 2019).
Education reforms are efforts designed to improve the educational system and achieve
other, often ideological, goals (Sunderman, 2010).
Organization of the Dissertation
In Chapter 1, readers were introduced to the purpose and importance of the research
study and were provided an overview of the TEA, the HB3 reading academies, and the cohort
leaders, who will serve as the stakeholder of focus. In Chapter 2, readers will gain a general
overview of the history of education reform in the United States, situate continuous improvement
science in that context, and then explore how current research in adult learning theory,
motivation, and change management can be used to collect data on SEA policy implementation
effectiveness. Chapter 3 will provide a blueprint for the data collection process, analysis, and
detailed methodology design. A thorough analysis of qualitative data related to the HB3 reading
academies state education policy initiative will be provided in Chapter 4. Finally, in Chapter 5,
the results of the data analysis will be connected with associated themes in research and practice,
and implications for future research will be shared.
Chapter Two: Literature Review
This literature review aims to situate state education reform policy in continuous
improvement science. First, a brief history of education reform in the United States will be
provided. This historical review will end with the researcher contextualizing continuous
improvement in the education accountability reform movement. Next, continuous improvement
science will be described in detail, including continuous improvement models and continuous
improvement as a lever for organizational change and excellence. Additionally, a review of
continuous improvement research in the public education sector will be provided, including the
identification of a gap in research on SEA use of continuous improvement science. The
literature review will then justify the need for continuous improvement data collection and use in
SEAs. The chapter concludes with a discussion of research in adult learning, motivation, and
continuous improvement models to present and explain a conceptual framework for collecting
SEA improvement data using the Clark and Estes (2008) gap analysis model.
A History of Federal Education Accountability Reform in the United States
To situate the U.S. education accountability movement in its historical context, an overview
of federal education reform legislation is provided. To keep this narrative informative yet
succinct, the following organizational decisions were made. First, this historical overview centers
on the development of standards, assessment, and instructional support levers for the general
U.S. student population. As such, the overview will focus on legislation that pertains to all
students enrolled in public U.S. schools rather than specific subsets of students. Policy related to
populations of diverse learners, such as special education students or English language learners,
or how their specific instructional needs were met through federal legislation and accountability
structures (McGuinn, 2015), is beyond the scope of this study.
Secondly, this overview aims to provide an organized description of how federal and
state interactions shaped the development of education accountability systems and structures.
Due to this focus, the impact of special interest groups on education reform was excluded;
however, the researcher recognizes the growing involvement of nongovernmental interest groups
and grassroots movements in education has had a profound impact on education reform,
especially in the areas of school choice and the competitive education market (Lubienski et al.,
2016; Piazza, 2019; Rogers, 2015).
Finally, while this study focuses on SEAs, this historical review will primarily describe
the actions of the federal government and its impact on state agencies. This federal focus is due
to the variance in state education governance structures (Manna, 2012), staffing capacity and
access to resources (VanGronigen & Meyers, 2019), and conceptual approaches to education
reform across the 50 states (Brodersen et al., 2017). Attempting to describe singular state
reactions to federal policy would overly burden the narrative. Therefore, the reader is encouraged
to infer how federal shifts in education reform posed unique challenges to individual states.
Historical Context: A Review of the Early 20th Century
The U.S. education reform movement was rooted in the United States’ emergence as a
global superpower during World War II and the Cold War (Casalaspi, 2017; Johanningmeier,
2010; McGuinn, 2015; Superfine, 2005). Until this point in history, the federal government had
minimal influence on public education because it did not fall under federal jurisdiction, as
defined by the Constitution; therefore, its responsibility was delegated to state governments
through the 10th Amendment (Egalite et al., 2017). Outside of a mandate that required states to
offer public education beginning in the early 1800s, the national government’s role in education
throughout the first 150 years of its existence was to gather and disseminate data related to public
education (McGuinn, 2015).
During the post–World War II era, many federal leaders recognized the growing need for
a well-educated U.S. workforce to support future military efforts and ensure the United States
could compete in the emerging global economy (Johanningmeier, 2010; McGuinn, 2015). To
balance a growing desire to support international competition with a widespread antifederalist
education sentiment, efforts during the 1940s and 1950s revolved around how to best fund public
education through specific and limited programs (Casalaspi, 2017). A significant example of this
type of financial support came from the National Defense Education Act of 1958, which, in
response to the USSR’s launch of Sputnik, provided federal funding to promote innovation in
science (Johanningmeier, 2010); however, the War on Poverty of the 1960s would open the door
to a more active federal role in public education oversight through the protection of and
advocacy for the educational rights of disadvantaged students (Heise, 2017).
The Elementary and Secondary Education Act of 1965
The purpose of the Elementary and Secondary Education Act (ESEA) of 1965 was to
create equal educational opportunities for students from low-socioeconomic communities
(Casalaspi, 2017; Kantor, 1991; McGuinn, 2015; Thomas & Brady, 2005). This legislation,
through five chapters, labeled Titles I–V, provided extensive funding for state and local
education agencies to support equitable learning opportunities for underprivileged school
districts; provide for instructional materials; create local education centers; develop education
research laboratories; and provide funds for SEAs to support the goals of ESEA (Casalaspi,
2017). As part of President Johnson’s War on Poverty, the legislation aimed to end generational
poverty cycles through equal access to quality education (Borman, 2005).
While the intentions of ESEA may have been based in educational equality, the passing
of this landmark legislation opened the door for federal oversight of public education through the
supervision of ESEA fund expenditures (Borman, 2005; Casalaspi, 2017; Kantor, 1991;
McGuinn, 2015). Almost immediately after its passage, there was widespread fear that by
accepting federal funds, state and local education agencies would be forced to allow federal
involvement in local decision making (Casalaspi, 2017). On the opposite end of the spectrum,
critics argued that throughout the development and implementation of ESEA legislation,
oversight to guarantee that funds were being used to benefit disadvantaged students needed to be
more robust (Borman, 2005). One way the federal government explicitly used ESEA funds to
force state and local action was through its requirements for all recipients of ESEA funds to
comply with national desegregation laws (Kantor, 1991); however, outside of student civil rights,
Congress worked to assuage fears of federal overreach by continuously clarifying the goals of
the ESEA through four amendments between its original passage in 1965 and 1980 (Thomas &
Brady, 2005).
A Nation at Risk: Establishing Education in the National Agenda
Between the passage of the ESEA and the publishing of A Nation at Risk in 1983, the
federal government’s role in education continued to expand through oversight of a growing
number of programs that aimed to support equity in education (Good, 2010; McGuinn, 2015).
This expansion included creating the U.S. Department of Education in 1980, which aimed to
grant funds to state and local education agencies and collect national education data (McGuinn,
2015); however, the election of President Ronald Reagan, who publicly expressed his desire to
dismantle the U.S. Department of Education, threatened advocates of federal public education
oversight (Good, 2010; McGuinn, 2015). Secretary of Education T. H. Bell was determined to
keep the focus on a “widespread public perception that something [was] seriously remiss in our
educational system” (Gardner et al., 1983, p. 7) and to protect the U.S. Department of Education
from the president’s political agenda (Good, 2010). To do so, Secretary Bell created the National
Commission on Excellence in 1981 to investigate and report on the quality of public education in
the United States (Good, 2010).
The result of the National Commission on Excellence’s investigation was A Nation at
Risk, an inflammatory report that concluded the U.S. education system was in crisis (Gardner et
al., 1983; Good, 2010; Johanningmeier, 2010; Ruff, 2019; Superfine, 2005). This report called
for widespread reform of U.S. education, citing the country’s need for an educated workforce to
support its participation in the global community (Gardner et al., 1983). According to National
Commission on Excellence members, powerful and alarming language was purposefully used in
the report to capture the nation’s attention and underscore the seriousness of the country’s
educational crisis (Good, 2010). As a result, persuasive and potent phrases from the report
quickly became ubiquitous in the media and instigated a nationwide call for education reform
(Johanningmeier, 2010). One famous example of this type of language was the quote, “If an
unfriendly foreign power had attempted to impose on the United States the mediocre educational
performance that exists today, we might well have viewed it as an act of war” (Gardner et al.,
1983, p. 9).
The findings of A Nation at Risk were not unprecedented; the report generally repeated
the call for federal involvement in public education that had been discussed since the post–WWII
era (Johanningmeier, 2010). Instead, the significance of A Nation at Risk came from its ability to
mobilize the U.S. public in a united effort to create change in the public education sector (Good,
2010). As a result, this report marked a turning point in education reform and accountability and
solidified the federal government’s role in the reform movement for the next two decades
(Superfine, 2005).
The Charlottesville Education Summit
Throughout the rest of the 1980s, the reverberations of A Nation at Risk were felt through
a public focus on education reform (Superfine, 2005). To address this growing public concern,
President George H. W. Bush invited all 50 state governors to a bipartisan education summit: the
Charlottesville Education Summit of 1989 (Tirozzi & Uro, 1997). This summit led to a
collaborative national focus on education reform while also clearly defining the roles that the
state and federal government would hold in the movement (Executive Office of the President,
1990; National Education Goals Panel, 1993; Tirozzi & Uro, 1997; U.S. Department of
Education, 1991).
During this summit, six national goals were identified as the focus of U.S. education
reform (Superfine, 2005): (a) preschool support for all students, (b) raising the high school
graduation rate, (c) K–12 competency in core subject areas, (d) first-class mathematics and
science achievement, (e) adult literacy for all, and (f) safe public schooling environments (U.S.
Department of Education, 1991). The national goals were based on the concepts of public
education shifting from a focus on compliance to a focus on performance; identifying rigorous
standards and student assessments; and a call for all students to have access to skilled teachers
(Executive Office of the President, 1990). In addition, the president and governors clarified that
while education was a state responsibility, participation in the global community required a
national response to education reform that the federal government could support through goal
setting, funding, and technical support (Tirozzi & Uro, 1997; U.S. Department of Education,
1991). To address this commitment, the National Education Goals Panel (1993) was created to
track progress toward the six goals identified at the summit and to keep the public informed of
education reform progress through the annual National Goals Report (Fuentes & Stratoudakis,
1992).
Goals 2000 and Improving U.S. Schools
The work of the Charlottesville Education Summit continued into the last decade of the
century and led to two federal laws that kept a continued spotlight on educational accountability:
the Goals 2000 Act of 1994 and the reauthorization of ESEA under the title of the Improving
America’s Schools Act (IASA) of 1994 (Tirozzi & Uro, 1997). The IASA and the Goals 2000
Act expanded the federal role in education accountability policy through a focus on
comprehensive reform, which called for rigorous standards, aligned assessments, and a focus on
teaching and learning (McGuinn, 2015; Riley, 1995; Tirozzi & Uro, 1997; U.S. Department of
Education, 1996).
The first of these two laws, the Goals 2000 Act of 1994 (Goals 2000), identified eight
education reform goals: the original six goals identified at the Charlottesville Education Summit
and two additional goals centered on educator professional development and parental
involvement (Tirozzi & Uro, 1997). This law functioned as a grant opportunity for states to
develop challenging standards, assessments, and accountability systems to encourage student
academic development by the year 2000 (Superfine, 2005). Goals 2000 stressed the importance
of holistic approaches to education reform through aligned standards and assessments,
accountability for student results, and a focus on flexibility in how state and local education
agencies met these goals (Riley, 1995).
Goals 2000 was then enhanced by the IASA (Riley, 1995; U.S. Department of Education,
1996). The IASA revised key metrics in Title I legislation that allowed state and local education
agencies to leverage funds for whole-school improvement when a school served a specific
percentage of economically disadvantaged students (U.S. Department of Education, 1996). Goals
2000 was tied legislatively to the IASA through the Title I premise of providing educational equity
for underprivileged students. Specifically, the IASA spoke to the need for rigorous standards for
all students rather than relying on a perceived pattern of lowering student expectations in Title I
schools (U.S. Department of Education, 1996). The IASA further called for regular assessment
of Title I students to ensure schools supported their ability to show educational gains against
rigorous academic standards (Riley, 1995). Thus, the concept of adequate yearly progress (AYP)
was born (U.S. Department of Education, 1996). In addition to ensuring high standards for all
children, the IASA also included provisions for the assurance of quality educators by expanding
Title II professional development funds (Tirozzi & Uro, 1997).
Together, Goals 2000 and the IASA created a comprehensive federal reform package that
attempted to provide more flexibility in implementation than federal policy had previously
afforded while maintaining a focus on accountability for student results (Riley, 1995; Superfine,
2005; U.S. Department of Education, 1996). As part of IASA, waivers were introduced to ensure
that federal requirements did not interfere with a state or local education agency’s ability to meet
student needs (U.S. Department of Education, 1996); however, neither waivers nor the
cooperation that began at the Charlottesville Education Summit could keep critics from
expressing serious concerns about this new, unprecedented level of federal involvement in
education reform (Superfine, 2005). To assuage fears, the U.S. Department of Education erred on
the side of flexibility rather than accountability when assessing states’ progress toward program
goals. Due to the lack of sanctions, many states did not fulfill their grant or title requirements
(McGuinn, 2015). The absence of results and growing Congressional concern about the level of
federal involvement in education ultimately led to a lack of reauthorization for both programs
(Superfine, 2005).
No Child Left Behind: A Major Shift to Federal Control
The vacuum left by Goals 2000 and IASA led to a new authorization of the ESEA in
2001, inciting one of the most contentious debates around federalism in public education (Ruff,
2019). This ESEA reauthorization, called the NCLB Act of 2001, began a period of federal
control in the education accountability movement that has significantly shaped the political
climate of U.S. education in the 21st century (Heise, 2017; Klute et al., 2016; Ruff, 2019; Shaul
& Ganson, 2005). At a basic level, NCLB did not differ significantly from the legislation’s 1990s
predecessors (Superfine, 2005). Like Goals 2000, NCLB stressed the importance of states’
adoption of rigorous standards paired with aligned assessments to track the educational progress
of students across the United States (NCLB, 2002); however, the accountability framework of
this law showed a substantial divergence from previous federal approaches to support state and
local control of public education (Dee & Jacob, 2009). Through this ESEA authorization, the
federal government instituted several accountability structures to ensure that state and local
education agencies would be answerable for their students’ academic success (Heise, 2017).
This new accountability package included a deadline for all students to reach proficiency
by the year 2014 (NCLB, 2002). To track states’ progress toward this goal, the federal
government required reports indicating schools and states were making AYP toward student
proficiency (Superfine, 2005). Consequences for failure to make AYP were included in the
legislation (Shaul & Ganson, 2005) and could require the replacement of school staff, school
closures, financial interventions, or changes in site leadership (Klute et al., 2016).
In addition to the accountability structures related to student achievement, new
requirements were also set for educator certification standards and instructional decision making
(Shaul & Ganson, 2005). Under NCLB, teachers were required to meet a federally mandated
level of qualification; the legislation additionally required reform in educator preparation
programs, professional development, and educator certification exams (NCLB, 2002).
Furthermore, educators identified as not making AYP had to report how they were intervening to
support student success and had to show that the intervention strategies being used were
grounded in scientific research (Shaul & Ganson, 2005). The underlying premise behind these
new requirements was that educators needed an incentive to change behaviors undermining
student success (Dee & Jacob, 2009). To eradicate these issues, NCLB (2002) hoped to use
stringent performance goals, coupled with incentives for success, to promote greater efficiency in
public education.
Criticism of the level of federal oversight created by this law, and of the hard-line approach
the U.S. Department of Education took in its implementation, was swift (Heise, 2017; McGuinn,
2015; Ruff, 2019; Superfine, 2005). Initial complaints from state agencies centered around the
specificity of NCLB requirements, which undermined the progress that individual states had
made over the previous decade toward implementing state-created standards and assessments
(Ruff, 2019; Wrabel et al., 2018). Additionally, NCLB was perceived to impede the state
accountability systems used to track student achievement that had already been created and
implemented (Ruff, 2019; Wrabel et al., 2018).
Multiple states questioned whether the federal government held the authority to create or
enforce education accountability structures (Superfine, 2005). Nevertheless, the federal
government held its line, and through stringent implementation practices and the use of penalties
when goals were not met, the federal government forced state compliance (McGuinn, 2015);
however, NCLB did not result in collective success (Dee & Jacob, 2009). As the first decade of
the new millennium ended, 80% of U.S. public schools were predicted to fail to meet the
proficiency requirement of 2014 (Heise, 2017). In addition, the collaborative relationship
between the national government and state and local education agencies from the previous
decade had dissipated, and a contentious relationship based in a power struggle over educational
authority remained (Finch, 2017).
Race to the Top: Using Grants to Spur Reform
Congress entered the second decade of the new millennium, unsure of how to best
proceed with federal education reform (Heise, 2017; McGuinn, 2015). No Child Left Behind had
not been reauthorized on schedule in 2007 due to bipartisan disagreements on how to best
overhaul the statute (McGuinn, 2015) and because congress was focused on the financial crisis
of 2008 and its immediate aftermath (Heise, 2017). During this lapse of legislation, President
Obama and his education secretary, Arne Duncan, used the Race to the Top (RTTT) grant
program to further their federal education reform agenda (Finch, 2017; James-Burdumy & Wei,
2015; McGuinn, 2012). As part of the American Recovery and Reinvestment Act of 2009, the
U.S. Department of Education was given $100 billion for the express purpose of creating
two federal grant programs: RTTT and School Improvement Grants (SIGs; James-Burdumy & Wei, 2015).
These two federal grant programs offered a competitive opportunity for states (RTTT)
and low-performing schools (SIG) to receive funds to broaden internal organizational capacity to
implement educational reform that aligned with the administration’s priorities (Finch, 2017).
Political objectives of these grants included developing common standards and assessments,
creating robust student performance data systems, adopting specific school turnaround
strategies, and increasing teacher accountability through training and evaluation
(James-Burdumy & Wei, 2015). While RTTT lasted for only two grant cycles, this program
indicated a shift in federal accountability policy from federal mandates to the use of federal
funds to spur state innovation and capacity building (Finch, 2017). In addition, these programs
highlighted two educational reform goals that would become foundational in Obama-era
accountability: the expansion of charter schools and data-based teacher evaluation systems
(McGuinn, 2012).
Waivers and Educational Federalism
While the U.S. Department of Education was exploring the functionality of RTTT
innovation grants to promote student success, the deadline for the NCLB requirement of 100%
student academic proficiency by 2014 was quickly approaching (Saultz et al., 2016). In 2011,
Secretary Duncan began issuing education waivers that released states from the proficiency
deadline (Wrabel et al., 2018); however, states accepting waivers were required to adhere to the
U.S. Department of Education’s reform agenda, which was not being addressed through
legislation (Egalite et al., 2017; Heise, 2017; Saultz et al., 2016; Wrabel et al., 2018).
To qualify for a federal accountability relief waiver, SEAs had to commit to reform in
four key areas: (a) college and career-ready standards and assessments, (b) identification and
improvement of the bottom 15% of schools, (c) promoting effective instruction and leadership
through the addition of student growth performance to educator evaluations, and (d) reducing
duplication and the unnecessary burden of administrative requirements (Wrabel et al., 2018). The
use of waivers as policy enactors created immediate backlash (Egalite et al., 2017; Heise, 2017;
Saultz et al., 2016). One critical concern was that, while applying for a waiver
was technically voluntary, the threat of NCLB sanctions made many states feel obligated to
participate to avoid their education system being labeled as a failure (Saultz et al., 2016).
Compounding the situation was that Secretary Duncan showed in both word and action that relief
would not be granted to the states without strict compliance to the four federal reform areas
(McGuinn, 2015).
Another primary concern was that the first reform agenda item, adoption of college and
career-ready standards and assessments, seemed to be a veiled requirement for states to adopt the
Common Core State Standards (Heise, 2017). This requirement represented to many a federal
overstep by mandating participation in a national standards and assessment system (Heise, 2017).
Congress took significant issue with what they felt was a circumvention of the legislative process
and an overstep of executive authority (Egalite et al., 2017). Central to their argument was that
the original intent of waivers was to provide relief to state agencies rather than require
compliance with the priorities of the U.S. Department of Education (Saultz et al., 2016).
Every Student Succeeds Act
Congress quickly addressed the perceived executive overstep by passing the ESSA of
2015, with strong bipartisan support (Egalite et al., 2017). The ESSA, the most recent
reauthorization of the ESEA, severely diminished the role that both the executive branch and the
U.S. Department of Education could perform in the initiation and oversight of education
accountability policy (Egalite et al., 2017; Heise, 2017; Saultz et al., 2016; VanGronigen &
Meyers, 2019). While the ESSA requires tracking student achievement and state intervention in
low-performing schools, this legislation gave control of how to design and implement
accountability measures back to states, marking a significant decline in federal influence in
education reform (Heise, 2017). Specific powers granted to SEAs included developing and
assessing standards (Heise, 2017), and the law provided complete state autonomy in developing
and enforcing intervention models for low-performing schools (VanGronigen & Meyers, 2019).
A provision of the law also changed the federal focus from the general performance of all
schools to schools performing in the bottom 5% of their state or any high school that graduated
fewer than two-thirds of its students (Mathis & Trujillo, 2016). In addition, while states were
still required to submit their education intervention plans to the U.S. Department of Education,
the education secretary was stripped of the ability to reject or require revisions to those plans
(Heise, 2017).
The ESSA also dealt with Secretary Duncan’s infringement of legislative authority
through his use of waivers (McGuinn, 2015). Not only did the law strip current and future
secretaries of education of the ability to grant accountability waivers (Saultz et al., 2016), it
also voided all waivers previously granted by the U.S. Department of Education (Heise, 2017).
In addition, it explicitly prohibited the secretary of education from defining any aspect of
educator evaluations, a specific component of Secretary Duncan’s waiver requirements (Egalite
et al., 2017). Combined, each of these provisions placed the accountability for public education
squarely back under state authority, where it remains today (VanGronigen &
Meyers, 2019).
Continuous Federal Legislative Supports: The Institute of Education Sciences
The last 30 years have seen a rise and fall in federal authority and oversight in education
accountability and reform policy (VanGronigen & Meyers, 2019); however, one section of the
U.S. Department of Education, the Institute of Education Sciences (IES), has managed to make a
continuous and reliable impact on public education through the funding and production of
education research (IES, 2019; Kuenzi & Stroll, 2017; Polanin et al., 2021). Established by the
Education Sciences Reform Act of 2002, the IES was created to support education accountability
programs in the United States through the dissemination of rigorously collected education data,
the establishment of scientifically rigorous research standards, and the production of research to
support continuous improvement in public education (Education Sciences Reform Act, 2018).
Before 2002, there had been iterations of research-focused branches of the U.S. Department of
Education (Sroufe, 2003); however, there was much congressional concern that these
predecessors of IES allowed the political agendas of the secretary of education and their
appointing president to influence the content and the quality of research provided to the general
public (Sroufe, 2003). The legislative establishment of IES contained two key levers to ensure
high-quality, nonpartisan research production. First, while IES technically falls under the
organizational structure of the U.S. Department of Education, it functions as a separate entity
whose research agenda is created with and approved by a board of directors rather than the
secretary of education (Kuenzi & Stroll, 2017). Second, accountability for research production
and quality was built into the relationship between IES and its board of directors, alongside
requirements for strict fiscal oversight (Education Sciences Reform Act, 2018; Sroufe, 2003).
The mission of IES is to increase the knowledge of educators, education leaders,
researchers, policymakers, and the community at large about the progress of education
reform; to encourage the implementation of research-based practices that support student
achievement; and to provide general oversight and evaluation of federal and other education
programs (Education Sciences Reform Act, 2018). This mandate was met through the creation of
four national education centers: (a) the National Center of Education Research, charged with
sponsoring, creating, and promoting education research that supports equitable student
achievement in public education; (b) the National Center of Education Statistics, whose
responsibility is to collect, analyze, and report nonpartisan education statistics for the benefit of
the education community; (c) the National Center for Education Evaluation and Regional
Assistance, which provides technical assistance to state and local education agencies,
conducts evaluations of federally funded education programs, and encourages the use of
scientifically validated strategies in public education; and (d) the National Center for Special
Education Research, whose role is to investigate the needs of exceptional learners (Education
Sciences Reform Act, 2018; Kuenzi & Stroll, 2017).
Since its inception, IES and its ancillary centers have dramatically impacted the
education community by funding various research programs (Polanin et al., 2021). These
programs include creating research networks to extend the national understanding of best
practices in education reform, evaluating various school improvement programs, and creating 10
regional education laboratories to meet specialized state needs (IES, 2019). In addition, IES has
provided extensive support to research in instructional leadership, the assessment of data-informed program implementation and its scale, data-based decision-making models, continuous
improvement and educator development, and the coordinated use of best instructional practices
(IES, 2019). When considered through a holistic lens, each of these individual projects
contributes to an overall program focus, collectively known as continuous improvement research
(Tichnor-Wagner et al., 2018).
The History of Federal Education Reform in the United States
Since the early 20th century, there has been a growing federal focus on the quality of
public education, grounded in U.S. participation in the global community (McGuinn, 2015).
Beginning with the ESEA of 1965, the federal government expanded its power in the public
education sector by tying requirements to federal education funds (Thomas & Brady, 2005). The
federal government’s role continued to expand throughout the next 50 years (Ruff, 2019), and
with the passage of NCLB (2002), the federal government took control of education reform by
creating a rigorous accountability system; however, due to the stringent requirements of NCLB
and the perceived overstep of the U.S. Department of Education in tying state accountability
waivers to federal reform agendas, Congress passed the ESSA of 2015 (Egalite et al., 2017). This
legislation limited the federal government’s role in state oversight, and the design and
implementation of education reform policy was returned to the states (VanGronigen & Meyers,
2019); however, the federal government still supports education reform through the IES, which
provides funding and support for research-based practices that support student achievement (IES,
2019), including a focus on continuous improvement in public education (Tichnor-Wagner et al.,
2018).
Continuous Improvement and Education Reform
While the original purpose of the IES was to expand the use of experimental research in
education (Education Sciences Reform Act, 2018), the organization has increasingly focused on
addressing the gaps between research and practice that continue to inhibit student achievement
(Joyce & Cartwright, 2020). Practitioners and researchers continue to struggle to overcome the
contextual factors that can inhibit the scaling of quasi-experimental research across communities
and contexts in the education sector (Wiseman, 2010). Contributing to this issue are research
structures that require fidelity at the expense of local, contextual knowledge (Cohen-Vogel et al.,
2015). To address this issue, there is a growing movement in education to use the scientific basis
of research in combination with the flexibility to adjust implementation plans to meet the needs
of individual communities (Cohen-Vogel et al., 2015). Continuous improvement science,
grounded in systems theory, has shown the potential to meet these needs (Wang & Fabillar,
Initially used in the manufacturing and healthcare sectors (Shakman et al., 2020; Tichnor-Wagner et al., 2018), continuous improvement science is being increasingly adopted by local
education agencies, education foundations, and education nonprofits (Yurkofsky, 2020), due to
its ability to address complex change (Wang & Fabillar, 2019).
A literature review is provided to situate continuous improvement science in the
education sector. First, the characteristics of continuous improvement science will be described,
along with how it supports short, iterative research projects and the development of
organizational structures and processes grounded in adult learning. Immediately following will
be a brief overview of how continuous improvement science is used in the education reform
movement. Finally, the review will conclude by describing how continuous improvement
research can be used as a public accountability structure to promote best practices in SEAs to
ensure continuous growth toward student success.
Continuous Improvement
Continuous improvement is “a research-based, ongoing process in which institutions
engage for the purpose of increasing its overall effectiveness and making a positive, measurable
impact on all stakeholders” (U.S. Department of Defense Education Agency, 2018, p. 4). The
Education Development Center similarly stated continuous improvement research is grounded in
“creating and testing solutions to address problems as [we] try to get better at improving [our]
daily work practices” (Wang & Fabillar, 2019, p. 11). Nestled into the broader category of
improvement science (Park et al., 2013), continuous improvement is sometimes used
interchangeably with the terms quality management and performance management; however,
there are distinct differences between these approaches (Shakman et al., 2020), which are beyond
the scope of this text. This section describes continuous improvement as a cyclical and iterative
assessment process supporting continuous growth and systems change (Bae, 2018; Cohen-Vogel
et al., 2015; Joyce & Cartwright, 2020; Kaufman et al., 2019).
Continuous improvement as a cyclical and iterative process is grounded in the routine of
identifying a problem of practice, determining root causes for the problem, creating an
intervention plan, and then collecting data to determine the effectiveness of the intervention
(Park et al., 2013; Shakman et al., 2020; Wang & Fabillar, 2019). This process allows for rapid
testing of prescribed interventions and the ability to adjust if the intervention is not producing the
desired results (Kaufman et al., 2019). By allowing education agencies to test interventions
quickly, leaders and practitioners can better assess whether an intervention fits their
organizational needs and adjust the program or innovate in a way that traditional experimental
research cannot support (Yurkofsky, 2020). This work is guided by three key questions:
1. What problem are we trying to solve?
2. What change might result in an improvement?
3. How will we know if that change is making the improvement we need it to make?
(Cohen-Vogel et al., 2015; Wang & Fabillar, 2019)
The second part of the continuous improvement definition, which supports both continuous
growth and systems change (Bae, 2018; Cohen-Vogel et al., 2015; Joyce & Cartwright, 2020;
Kaufman et al., 2019), refers to the deliberate attention paid to organizational decision making
related to the implementation of the intervention (Cohen-Vogel et al., 2015; Joyce & Cartwright,
2020; Shakman et al., 2020; Wang & Fabillar, 2019). This focus ensures an organization attends
to the problem of practice while simultaneously assessing the design of the change
implementation (Cohen-Vogel et al., 2015). This dual focus requires participants to hold both
subject matter knowledge of the change initiative and a knowledge of, and commitment to, concepts
such as systems thinking, change management, and capacity building (Wang & Fabillar, 2019).
This commitment to the effectiveness of change implementation then allows iterative and rapid
testing of change-related decision making to build organizational improvement capacity (Joyce
& Cartwright, 2020). Additionally, it allows education agencies to build robust intervention
plans that can then be scaled across and outside of the originating organization (Shakman et al.,
2020; Wang & Fabillar, 2019).
Of significant note are two key concepts of continuous improvement that are alluded to in
the definition but need to be specifically addressed. First, this definition requires a change in
focus from implementation fidelity to implementation integrity, which can also be described as a
shift from a focus on compliance to a focus on outcomes (Tichnor-Wagner et al., 2018). This
fundamental difference supports the adaptive ability of the organization to meet the needs of
local contexts in service of the overall goal (Yurkofsky, 2020). Additionally, by engaging in the
continuous improvement process, organizational staff are no longer implementers of an
intervention plan; they move into the role of practitioner, where they can take ownership of the
work and build capacity for future change initiatives (Cohen-Vogel et al., 2015; Wang &
Fabillar, 2019).
Data Analysis Through Continuous Improvement Models
In a recent Learning Policy Institute publication describing equity advancement through
the ESSA, Cook-Harvey et al. (2016) identified assessment as a critical opportunity pillar for
ensuring that all students have access to quality education opportunities; however, the authors
did not call for additional assessment measures of student success. Instead, they called for an
assessment of education agencies to track institutional progress toward equity goals and provide
opportunities to build reflective practices into the organizational culture (Cook-Harvey et al.,
2016). While data analysis has become a common activity in the education community, Spillane
(2012) noted data analysis in and of itself is not enough to drive successful change. Educators
need an explicit framework to guide their data collection and analysis and provide support to
ensure that new knowledge is acted on (Spillane, 2012). Continuous improvement models have
created a structure that educators and education organizations can use to codify continuous
improvement in the change process and build a collaborative continuous improvement culture
(Wang & Fabillar, 2019).
Multiple continuous improvement models have been created to lead organizations
through the data analysis action process. The Six Sigma model (LeMahieu et al., 2017) and the
plan-do-study-act (PDSA) cycle (Redding et al., 2017) are popular examples of independent
models that educators use to structure their continuous improvement processes (Shakman et al.,
2020). In addition, many education organizations have created their own continuous
improvement models specific to their organizations, such as the Education Development Center
(Wang & Fabillar, 2019) and the Oregon State Department of Education (n.d.). While each of
these individual models has its own idiosyncrasies, all consistently include, at minimum, four
actions: (a) identification of a problem and the creation of a plan to address the problem, (b)
implementation of that plan, (c) assessment of the implementation, and (d) analysis of the assessment
data to determine next steps (LeMahieu et al., 2017; Redding et al., 2017; Shakman et al., 2020;
Wang & Fabillar, 2019). Because the PDSA cycle aligns with the commonalities across models
(Cohen-Vogel et al., 2015) and is the model that is recommended by the IES (Shakman et al.,
2020), PDSA will be used to illustrate an example of rapid cycle testing using a continuous
improvement model.
During the initial step of the PDSA cycle, the goal is to identify the problem of focus,
evaluate the potential causes contributing to the problem using existing data, and then determine
the initial actions the organization will take to intervene (Shakman et al., 2020). This process
should include setting goals for the cycle and identifying how the organization will collect and
analyze data to evaluate the intervention (Wang & Fabillar, 2019). The next stage of PDSA is to
carry out the intervention plan identified during the plan stage (Cohen-Vogel et al., 2015). At the
end of the do stage, data on the intervention’s effectiveness and the implementation process’s
effectiveness should be gathered (Park et al., 2013).
During the study phase of the PDSA cycle, data is analyzed to determine whether the
original continuous improvement plan progressed toward the organization’s intervention goals
(Shakman et al., 2020). In addition, both the effects of the intervention itself and the
organization’s structures and processes should be analyzed for potential impact on the problem
of practice (Park et al., 2013). In the final stage of PDSA, the data analysis completed in the
previous stage contributes to adjustments or continued commitment to the decisions made in the
original plan phase (Wang & Fabillar, 2019). At this point, a decision could be made to adjust or
change the original plan based on perceived areas of growth identified during the study phase
(Cohen-Vogel et al., 2015). Alternatively, if the data suggests the intervention is working, an
organization could decide to stay the course or even begin implementing the intervention in new
contexts or settings (Park et al., 2013). Regardless of the decision made, the cycle typically
begins again to continuously strive toward the desired outcomes of the intervention (Shakman et
al., 2020).
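To make the cyclical logic of the PDSA model concrete, the following minimal sketch expresses one run of the four phases in Python. The plan, do, and study functions and the cycle limit are hypothetical placeholders standing in for an organization's own processes; the sketch illustrates the logic of the cycle rather than any specific model cited above.

```python
from typing import Callable, Dict

def pdsa(plan: Callable[[], Dict],
         do: Callable[[Dict], Dict],
         study: Callable[[Dict, Dict], bool],
         max_cycles: int = 3) -> str:
    """Illustrative PDSA loop; every callable is a hypothetical placeholder."""
    for _ in range(max_cycles):
        current_plan = plan()          # Plan: problem, root causes, goals, data plan
        data = do(current_plan)        # Do: implement; gather outcome and process data
        if study(current_plan, data):  # Study: did the cycle progress toward its goals?
            return "stay the course or extend to new contexts"  # Act
        # Act: results fell short, so the next iteration re-plans and adjusts
    return "re-evaluate the intervention after repeated cycles"
```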
Key Considerations
Throughout the process of the PDSA cycle, there are three key factors that education
organizations should consider as they implement a continuous improvement model. First,
moving through the phases does not automatically categorize an organization’s work as
continuous improvement science (Cohen-Vogel et al., 2015). To truly develop a continuous
improvement culture, an organization must use the model to improve practice, student outcomes,
and future implementation efforts (Wang & Fabillar, 2019). Best practice always includes a
reflection on whether progress is being made toward the identified goal and whether the
organization uses the data analysis process to adapt the implementation to reach the identified
goal in the most efficient manner (Tichnor-Wagner et al., 2018).
Second, rich data sources should guide the continuous improvement process (Kaufman et
al., 2019). The data should include student outcome measures, assess the implementation process
itself, and capture any unintended costs of the implementation (Park et
al., 2013). Once data is collected, the use of deep analyses to determine the root cause of issues
related to the problem of practice, rather than making surface-level observations, will ensure that
a continuous improvement model is successful (Wang & Fabillar, 2019).
Finally, the use of a continuous improvement model, similar to an organization’s
commitment to continuous improvement science, must be rooted in a collaborative culture
(Cohen-Vogel et al., 2015; Kaufman et al., 2019; Wang & Fabillar, 2019). Collaborative cultures
promote shared responsibility and engagement, essential for a change initiative’s success
(Kaufman et al., 2019). In addition, educators must approach this process with open
communication, opportunities for feedback, and a shared commitment to finding solutions to
promote student success (Wang & Fabillar, 2019).
Continuous Improvement to Scale Change
While there is inherent and practical value in continuous improvement models,
continuous improvement science has more to offer to the overall success of an organization
beyond individual PDSA cycles. In particular, an organization’s ability to build continuous
improvement routines (Cohen-Vogel et al., 2015; Redding et al., 2017; Spillane, 2012) and to
leverage continuous improvement practices to build organizational capacity (Kaufman et al.,
2019; Tichnor-Wagner et al., 2018; Yurkofsky, 2020) can provide opportunities to create a
collaborative culture that can promote widespread impact far beyond an individual project or
practice (Dixon & Palmer, 2020; Park et al., 2013; Shakman et al., 2020).
Scaling a continuous improvement initiative is a typical goal in the public education
reform culture (Redding et al., 2017). While there are benefits to classroom-level instructional
improvement, many leaders across education agencies hope that individual pockets of success
can be translated across multiple locations and contexts to benefit a wider number of students
(Park et al., 2013). There are many situations where the PDSA cycle is used to adapt and refine
an improvement to scale it across a broader system (Cohen-Vogel et al., 2015). This process
allows the organization to identify and address any issues, either with the intervention itself or
the implementation process, before rolling out the program on a broader scale (Tichnor-Wagner
et al., 2018).
Scaling is tied closely to the last phase of the PDSA cycle: act (Wang & Fabillar, 2019).
After new data from a previous cycle has been analyzed, the organization must determine
whether the intervention should be adopted, adapted, or abandoned (Cohen-Vogel et al., 2015;
Redding et al., 2017; Shakman et al., 2020). If an intervention has failed to show progress toward
identified outcomes, despite multiple adaptations, the organization may abandon the initiative
(Shakman et al., 2020). Alternatively, if the organization is seeing progress but still feels there are
more issues to address, they may choose to continue to adapt, or refine, the original plan
(Shakman et al., 2020).
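Taken together, the adopt, adapt, or abandon determination described above can be expressed as a simple decision rule. The sketch below is illustrative only; the progress and issue indicators and the adaptation limit are hypothetical inputs rather than measures defined in the literature cited here.

```python
def act_decision(progressing: bool, issues_remain: bool,
                 adaptations_tried: int, max_adaptations: int = 3) -> str:
    """Hypothetical act-phase decision; inputs stand in for real analyses."""
    if not progressing and adaptations_tried >= max_adaptations:
        return "abandon"  # no progress despite multiple adaptations
    if issues_remain:
        return "adapt"    # refine the original plan and cycle again
    return "adopt"        # ready to implement on a broader scale
```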
Once the intervention has undergone multiple iterations and the organization is ready to
implement it on a broader scale, it may adopt the program (Shakman et al., 2020). That is not to
say there is no additional program analysis once an intervention is adopted. Continuous
improvement research notes that PDSA cycles should systematically continue as the organization
continuously improves the implementation process both in preparation for scaling and throughout the scaling of the program
(Cohen-Vogel et al., 2015; Wang & Fabillar, 2019); however, once the project moves to a
scaling phase, the PDSA cycles shift to monitor new goals (Wang & Fabillar, 2019). The new
organizational focus includes helping the intervention embed in the organizational knowledge
base, supporting staff buy-in, ensuring the reach of the project, and ensuring the project’s
sustainability (Redding et al., 2017; Wang & Fabillar, 2019). Once an intervention can support
the organization through these critical areas, continuous improvement moves past a cycle
focused on an individual intervention and begins to support systems change (Dixon & Palmer,
2020; Redding et al., 2017; Tichnor-Wagner et al., 2018). Spillane (2012) gave an apt visual,
describing how the organization creates macro structures to support the micro behaviors of the
wider staff.
Across organizations, three essential supports must be in place to support broader
continuous improvement: (a) organizational structures that facilitate continuous change, (b) a
culture that supports and rewards improvement, and (c) staff capacity for implementation success
(Dixon & Palmer, 2020; Kaufman et al., 2019; Redding et al., 2017; Shakman et al., 2020; Wang
& Fabillar, 2019). The first step an organization must take to support continuous improvement is
to create structures that support change management (Kaufman et al., 2019). These should
include a well-developed plan for innovation implementations (Redding et al., 2017),
organizational routines that allow for staff growth and development in continuous change
through data analysis (Spillane, 2012), and a process to assess staff capacity and organizational
support continuously (Cohen-Vogel et al., 2015).
Additionally, building specific structures for collaboration, professional learning, and
shared decision making sets the stage for developing a culture of improvement (Dixon & Palmer,
2020; Kaufman et al., 2019; Wang & Fabillar, 2019; Yurkofsky, 2020). Leadership actions that
provide psychological safety for innovation and growth are essential for developing a culture of
improvement (Dixon & Palmer, 2013; Redding et al., 2017). These actions include moving away
from the habit of looking for blame and promoting a shared focus on solutions (Dixon & Palmer,
2020). Moreover, by using structures such as professional learning communities and scaffolding
support for continuous improvement work in these structures, staff can take ownership of
continuous collaborative improvement outside of the oversight of the leadership team (Dixon &
Palmer, 2020; Kaufman et al., 2019). By including this relational focus in continuous
improvement plans, education agencies are positioned to handle typical buy-in barriers to change
initiatives (Yurkofsky, 2020).
Once there are structures and a strong culture in place to support continuous
improvement, the final consideration is ensuring that all staff in the education agency have the
capacity to take ownership of implementation initiatives (Kaufman et al., 2019; Redding et al.,
2017; Shakman et al., 2020). Many factors are crucial to ensuring staff capacity for systemic
change. First, on a practical level, education agencies must ensure that staff have the resources to
meet organizational goals (Kaufman et al., 2019; Redding et al., 2017). Resources could be fiscal
or physical (Redding et al., 2017), but should include a strong focus on providing opportunities
to build the knowledge and skill needed to implement change initiatives (Kaufman et al., 2019;
Redding et al., 2017).
Continuous improvement should focus on both the innovation and the implementation
process (Joyce & Cartwright, 2020), as should any development opportunities designed to build
staff capacity for change (Wang & Fabillar, 2019). Knowledge resources can also be addressed
through communication pipelines that inform and engage staff in the continuous improvement
process (Shakman et al., 2020) and through access to quality professional development
experiences (Kirkpatrick & Kirkpatrick, 2016). Through the development process, staff should
also be afforded opportunities to engage with data systems to immerse themselves in the process
of implementation quality analysis (Redding et al., 2017). Finally, the organization must
constantly assess staff capacity to implement new initiatives by analyzing the organization’s
change agenda to ensure that multiple change initiatives do not interfere with each other (Dixon
& Palmer, 2020).
Continuous Improvement in Public Education Reform
With a more robust understanding of change management science established, the focus now
turns to how this work has impacted the education sector. Throughout the public education
reform movement, multiple stakeholders have been identified as contributing to the overall
success of the education system. These stakeholders include the state and federal government,
which create education policy (Canfield-Dais & Jain, 2010; Heise, 2017; Sunderman, 2010;
Superfine, 2005; Wrabel et al., 2018); the education agencies that design and carry out the
implementation of education policy (Park et al., 2013; Shakman et al., 2020; Wang & Fabillar,
2019); and the individual teachers and leaders who work with students daily (Cook-Harvey et al.,
2016; Wang & Fabillar, 2019).
As education agencies are typically charged with implementing education policy
(Brodersen et al., 2017; Klute et al., 2016; VanGronigen & Meyers, 2019), this section focuses
on their use of continuous improvement in the education reform movement; however, because of
the devolution of power to the states under the ESSA (Egalite et al., 2017), continuous
improvement in the federal government is not addressed, and the focus ends at the state level.
Therefore, this section synthesizes current research on how continuous improvement is used in
SEAs, local school districts, individual school sites, and the instructional staff working in these
systems.
Instructional Staff
Instructional staff, specifically classroom teachers, have embraced continuous
improvement research through two primary practices: (a) using data analysis to drive instruction
and (b) targeted skill growth through observations and coaching cycles (Knight & Skrtic, 2021;
Meyer-Looze, 2015; Prenger & Schildkamp, 2018; Will et al., 2019). In data-based decision
making, instructional staff have adopted the practice of using assessment cycles to drive
instructional decision making (Will et al., 2019) and the analysis of data to cater instruction to
student needs (Prenger & Schildkamp, 2018). In addition to student data analysis, many teachers
participate in collaborative classroom observation and debrief cycles with their colleagues to
refine best instructional practices (Meyer-Looze, 2015). Teachers also use instructional coaching
to set and meet individual goals when coaches are available (Knight & Skrtic, 2021).
School Sites and Leadership
The principal drives continuous improvement at individual school sites and creates a
culture of excellence for students and staff (Jones & Thessin, 2015; Kiral, 2020; Osher et al.,
2020; Schildkamp, 2019). In the current education climate, which focuses heavily on student
academic achievement, educational leadership research has noted the importance of
creating a data-driven continuous improvement school culture (Kiral, 2020; Schildkamp et al.,
2017). This culture includes providing structures to support data-driven instruction through
professional learning communities (Jones & Thessin, 2015); the use of continuous improvement
models for goal-setting purposes (Schildkamp, 2019); and providing a shared vision for student
success and opportunities to evaluate school progress toward those goals (Schildkamp et al.,
2017); however, the principal’s role is to manage the site in its entirety, not just in the area of
academic success (Kiral, 2020). Continuous improvement work of school leadership also
includes regularly assessing the physical and psychological safety of the school for both students
and staff (Osher et al., 2020) and continuously improving the relationship between the school
staff and the families of its students (Mac Iver et al., 2021).
Local School Districts and Leadership
Districts have promoted the use of continuous improvement science by approaching the
management of the district as a system and using research on how to best address the
organizational needs of that system through data analysis (Andreoli & Klar, 2021; Cohen-Vogel
et al., 2016; Hackmann et al., 2019; Peurach et al., 2016). Through this practice, districts have
built unique and sustainable innovations that meet individual community needs (Bodily et al.,
2017; Hackmann et al., 2019; Mac Iver et al., 2018). Examples range across education topics,
with recent research documenting districts’ use of continuous improvement data to make
significant, large-scale adjustments to district curriculum (Bodily et al., 2017), leveraging research to
redesign community relationships (Mac Iver et al., 2018), and promoting effective college and
career pipelines for at-risk students (Hackmann et al., 2019).
This improvement work has not been completed in silos. Districts are
reaching for support outside of their organizations to ensure they have access to continuous
improvement methods to meet the needs of their communities through school improvement
networks (Andreoli & Klar, 2021; Banghart, 2021; Cohen-Vogel et al., 2016; Peurach et al.,
2016). These partnerships with similar districts (Banghart, 2021) or research organizations
(Cohen-Vogel et al., 2016) are providing opportunities to shift focus from student data to the
effectiveness of the organization as a whole (Peurach et al., 2016). These partnerships allow
districts to problem solve collaboratively to meet individual and collective needs (Banghart,
2021) and use continuous improvement models to ensure intervention implementation success
(Cohen-Vogel et al., 2016). Additionally, they provide opportunities for leaders across the
district to develop their craft in school reform (Andreoli & Klar, 2021) and use
these cooperatives as assessment and accountability partners to ensure district growth and
development in large-scale change (Banghart, 2021; Peurach et al., 2016).
State Education Agencies: Gap in Research
While there is ample evidence of continuous improvement throughout schools and
districts, research becomes uncharacteristically silent at the level of the SEA (VanGronigen &
Meyers, 2019). Despite a review of research in the education and government sectors, the
researcher could not find research relating to SEA use of continuous improvement or
organizational data analysis to meet the needs of its constituents. The absence of this research
does not indicate there is no focus on continuous improvement in SEAs. For example, while not
technically considered a state agency, the U.S. Department of Defense Education Agency (2018),
the governing agency for public schools on U.S. military bases, has created a public blueprint for
continuous improvement to support organizational success.
The lack of research related to the use of continuous improvement science by SEAs demonstrates
that whatever continuous improvement work may be occurring is not currently being assessed
transparently. This lack of public data is concerning due to the power given to SEAs in the area
of education reform through the ESSA, specifically in the area of the design and implementation
of reform policy (Klute et al., 2016; Lam et al., 2016; VanGronigen & Meyers, 2019). With the
general call in education to move from a culture of compliance to one of continuous
improvement (Bae, 2018; Redding et al., 2017), this gap in research related to the use of
continuous improvement sciences in SEAs becomes the basis for this research project. A case is
made for the importance of SEAs consistently and transparently using continuous improvement
science to allow for organizational accountability. Then, current research in adult learning and
motivation is used alongside change management practices to develop a model for assessing a
SEA’s implementation of a policy initiative to promote continuous improvement and transparent
accountability to constituents.
Continuous Improvement in State Education Agencies
State education agencies are central to the goal of education reform (Lam et al., 2016;
Manna, 2012; VanGronigen & Meyers, 2019; Wilcox et al., 2017). Under the ESSA, SEAs are
given the power to design education accountability systems (Mathis & Trujillo, 2016); determine
site and district consequences for underperformance on student achievement measures (Egalite et
al., 2017; Mathis & Trujillo, 2016); and are responsible for the design, implementation, and
monitoring of school improvement plans (Lam et al., 2016; VanGronigen & Meyers, 2019);
however, because of the stipulation that the U.S. Department of Education cannot reject or
remove parts of a state’s improvement plan, there is little oversight on the design or
implementation of state education reform from the federal government (Egalite et al., 2017).
Additionally, because SEAs’ only reporting mandates require the public release of disaggregated
student data (ESSA, 2015) and education reform policy varies widely from state to state (Klute et al., 2016),
it is difficult to determine whether states are collecting or using data related to internal
continuous improvement methods. Considering the nested interrelationship between state
and local education agencies (Wang & Fabillar, 2019), it becomes crucial to identify whether
SEAs are participating in the practice of data-based decision making. Data-based decision
making has become integral to the education reform movement led by state and federal entities
(Wilcox et al., 2017) and promoted by educational research organizations such as the IES
(Shakman et al., 2020).
With so much power currently located at the state level (ESSA, 2015), education reform
policy is designed by the SEA, which then works with local school districts to
implement policy in their organizations (ESSA, 2015; Wang & Fabillar, 2019; Wilcox et al.,
2017). With multiple stakeholder groups involved in implementing a specific reform policy,
there is an opportunity for the various groups to impact other groups in the pathway, either
positively or negatively (Wang & Fabillar, 2019); however, without data to both understand and
evaluate how a state’s implementation of education reform policy impacts its overall success
(VanGronigen & Meyers, 2019), school districts will continue to be held responsible for reform
outcomes that may not be entirely within their control (Klute et al., 2016; VanGronigen &
Meyers, 2019; Wang & Fabillar, 2019).
While there may not be specific data available to evaluate a SEA’s ability to implement
education reform initiatives successfully, recent research has indicated a general lack of capacity
to do so (Egalite et al., 2017; Klute et al., 2016; Manna, 2012; VanGronigen & Meyers, 2019).
Many state education leaders have reached out for support from education research centers,
citing their need for a deeper understanding of education reform decision-making trends (Klute
et al., 2016). In addition, research into SEA education reform capacity has indicated a state’s
ability to implement education reform (Egalite et al., 2017; VanGronigen & Meyers, 2019) and
their success at education reform implementation (Manna, 2012) varies widely from state to
state. This variation is especially concerning when considering that allowing for district
involvement in ESSA improvement planning is encouraged but not required (Egalite et al., 2017)
and that a district can be subjected to changes in staff, financial interventions, school closings, or
complete school takeovers if improvement plans are not successful (Klute et al., 2016).
This comprehensive power held by SEAs highlights a fundamental question: What
accountability should SEAs be held to as governmental entities? Policy research has shown
that, due to their highly politicized nature, government agencies tend to support the agendas of
influential political interest groups, sometimes at the expense of the general public (Lubienski et al., 2014;
Sunderman, 2010; Wiseman, 2010). To ensure state agencies work toward the general common
interest, the burden of holding government agencies accountable for how their actions impact the
larger community falls to its constituents (Clinton & Grissom, 2015). Clinton and Grissom (2015)
noted that, for successful constituent advocacy, citizens need access to objective performance data.
Additionally, a landmark study by Tetlock (1992) found public accountability structures
inherently promoted responsiveness to the constituency because organizations understood that
they could be called to account for their decision making by the general public. These oversight
structures become increasingly important as SEAs shift from compliance managers to the
designers and implementers of substantial change initiatives (Manna, 2012).
The key to state education reform accountability is public accessibility to organizational
continuous improvement data (Galey, 2015; Kaufman et al., 2019; VanGronigen & Meyers,
2019). Since the implementation of NCLB, most data cited by SEAs as proof of reform failure or
success have been based on high-stakes student achievement scores (VanGronigen & Meyers,
2019); however, continuous improvement research tells us that organizational data must be
collected to promote the analysis and improvement of professional practice (Galey, 2015;
Kaufman et al., 2019). The U.S. Department of Defense Education Agency (2018), in its guiding
document on continuous organizational improvement, noted sharing key success indicators
promotes data accountability to encourage public involvement in analyzing organizational
growth. By providing this level of transparency in organizational data, education stakeholders
could become more knowledgeable about the continuous improvement process and act as an
additional accountability lever for SEAs (VanGronigen & Meyers, 2019).
Sharing organizational continuous improvement data would hold SEAs accountable for
student achievement and the work of continuous improvement (Peurach et al., 2016). Continuous
improvement supports a transformational approach to quality organizational improvement
through systems change (Dixon & Palmer, 2013; Park et al., 2013; Shakman et al., 2020).
Through a public pledge to continuous improvement, a SEA would be showing its commitment
to a transition from a culture of compliance to one of learning and growth (Bae, 2018; Cohen-Vogel et al.,
2015) and the use of data as a decision-making tool (Cohen-Vogel et al., 2015; Wilcox et al.,
2017). Additionally, it would indicate alignment with the larger effort in the educational
community to apply learning design principles to support systemic school improvement through
a capacity for change (Cohen-Vogel et al., 2015; Peurach et al., 2016; Redding et al., 2017).
A Model for Assessment: State Education Reform Policy Implementation
To create a framework to collect data to identify a SEA’s areas of strength and growth in
education reform policy implementation, research in best practices across adult learning,
motivation, and change management is leveraged. First, a theoretical basis for the conceptual
framework of this study is provided. The principles discussed will then be organized into a
framework and applied to the HB3 reading academies case study.
Theoretical Foundation
Enacting policy is a complex task with a history of failure across the education sector
(Elliott, 2020; Kainz et al., 2021; Stokes, 2018). Education policy must overcome structural,
financial, and sociocultural barriers to promote large-scale change (Elliott, 2020; Souto-Otero,
2011). Part of this complexity stems from the fact that public education is a multifaceted, dynamic
system, and simple cause-and-effect decision making does not engage the multiple structures in
the system, nor does it account for the needs of diverse stakeholders (Dowd & Liera, 2018;
Kainz et al., 2021). The effects of inefficiency in education policy implementation have included
lost financial resources, staff turnover, and a continuing need to find interventions that support
student outcomes (Stokes, 2018).
Adding to the complexity of evaluating education policy implementation is the habit of
using student test scores to measure the success of a policy initiative (McClellan, 2011). While
student achievement is the goal of public education, education agencies, like other organizations
engaged in continuous improvement, need data sources to evaluate the implementation itself
(Clark & Estes, 2008; Stokes, 2018). Evaluative data need to include whether the education
agency provides an environment that supports successful change (Elliott, 2020; Kainz et al.,
2021; Souto-Otero, 2011; Stokes, 2018). This environment would include effective systems for
engaging stakeholders throughout the planning and implementation process (Souto-Otero, 2011;
Stokes, 2018), a clear understanding of the collaborative outcomes the intervention is working
toward (Elliott, 2020; Kainz et al., 2021), and the appropriate structures that scaffold support to
stakeholders to ensure implementation success (Dowd & Liera, 2018; Kainz et al., 2021).
To meet these ambitious goals, research supports the use of detailed and collaborative
policy design and agile implementation (Clark & Estes, 2008; Kainz et al., 2021; McClellan,
2011; Souto-Otero, 2011). Creating explicit action steps toward identifiable outcomes has the
added benefit of giving the education agency the ability to evaluate the progress of a policy
initiative against its original plan (Kainz et al., 2021). Additionally, the education agency would
provide continuous engagement opportunities for stakeholders, as agile implementation requires
regular design adjustments aligned with policy outcomes (Kainz et al., 2021).
Clark and Estes (2008) created a gap analysis process model to evaluate goals and
support agile implementation toward a clear and measurable outcome (see Figure 1).
Figure 1
Clark and Estes’s (2008) Gap Analysis Process Model
Note. Adapted from Turning Research Into Results: A Guide to Selecting the Right Performance
Solutions by R. E. Clark and F. Estes, 2008, Information Age Publishing, Inc., p. 22.
Similar to the PDSA model described earlier in this chapter, Clark and Estes’s (2008)
gap analysis process model is a cyclical mechanism for identifying, analyzing, and adjusting
business goals and processes. The gap analysis process model asks
organizations to identify or create organizational goals, as well as team and individual goals that align
with the organization’s desired outcomes (Clark & Estes, 2008). Once goals have been designed,
gaps between goals and organizational performance are identified (Clark & Estes, 2008). To
understand the gaps between goals and performance, an in-depth analysis of the root causes of
these performance gaps is used to create solutions that meet the organization’s individual needs
(Clark & Estes, 2008). These may include revising original goals or new or changed structural
supports that enable the organization to meet its chosen metrics (Clark & Estes, 2008).
The Clark and Estes (2008) gap analysis model brings value in evaluating SEA policy
implementation through its specificity in determining performance gaps, which can be used in
monitoring the implementation of a state education policy initiative. Clark and Estes (2008)
identified three causes of performance gaps that must be addressed to ensure organizational
success: (a) employee knowledge and skill, (b) motivation, and (c) organizational structures.
These areas will be discussed in detail in the next section by drawing on foundational research in
adult learning and organizational change and management.
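As a rough illustration of the model's logic, the sketch below expresses the gap analysis steps in Python. The goal and performance values and the per-cause diagnosis are hypothetical inputs; only the three named causes are drawn from Clark and Estes (2008).

```python
from typing import Dict, List

# The three causes of performance gaps named by Clark and Estes (2008).
GAP_CAUSES = ["knowledge and skill", "motivation", "organizational structures"]

def gap_analysis(goal: float, performance: float,
                 diagnosis: Dict[str, bool]) -> List[str]:
    """Return proposed solution areas for a performance gap (illustrative)."""
    if performance >= goal:
        return ["goal met: revise goals and continue the cycle"]
    # Root-cause analysis: keep only the causes the diagnosis implicates.
    implicated = [cause for cause in GAP_CAUSES if diagnosis.get(cause, False)]
    # Solutions are tailored to each implicated cause and may include revised
    # goals or new structural supports, after which the cycle begins again.
    return [f"design a solution targeting {cause}" for cause in implicated]

# Example: a gap attributed to organizational structures alone.
print(gap_analysis(goal=0.90, performance=0.72,
                   diagnosis={"organizational structures": True}))
```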
Knowledge and Skill
The first component identified by Clark and Estes (2008) that can support or inhibit an
organization’s ability to meet its goals is employee knowledge and skill. The differentiation
between knowledge and skill is best described through the revised Bloom’s taxonomy
(Krathwohl, 2002), which is divided into two dimensions: (a) knowledge and (b) cognitive
(Anderson, 2005; Irvine, 2017; Krathwohl, 2002). The knowledge dimension describes content
acquisition, whereas the cognitive dimension relates to a person’s ability to use knowledge as
part of a cognitive process (Anderson, 2005; Krathwohl, 2002). This division aligns
closely with Kirkpatrick and Kirkpatrick (2016), who identified knowledge as understanding a
concept and skill as the ability to apply that knowledge through observable behaviors. In both
examples, knowledge is the precursor to a behavior or skill that expresses participant knowledge
(Anderson, 2005; Kirkpatrick & Kirkpatrick, 2016; Krathwohl, 2002).
Bloom’s revised taxonomy identifies four types of knowledge: (a) factual knowledge:
basic topic-related knowledge such as definitions and details, (b) conceptual knowledge: the
interrelatedness of factual knowledge in the larger content system, (c) procedural knowledge: the
knowledge of how to apply factual and conceptual knowledge, and (d) metacognitive
knowledge: the awareness of personal cognition (Anderson, 2005; Krathwohl, 2002). In the
cognitive, or skill, dimension, Bloom’s taxonomy includes six ways knowledge can be applied:
(a) remembering, (b) understanding, (c) applying, (d) analyzing, (e) evaluating, or (f) creating
(Anderson, 2005; Krathwohl, 2002).
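For reference, the two dimensions can also be represented as a simple data structure. The sketch below merely restates the taxonomy from the paragraphs above in Python; it introduces no content beyond Anderson (2005) and Krathwohl (2002) as summarized here.

```python
from enum import Enum

class KnowledgeType(Enum):
    """Knowledge dimension of Bloom's revised taxonomy."""
    FACTUAL = "basic definitions and details"
    CONCEPTUAL = "interrelated facts within the larger content system"
    PROCEDURAL = "how to apply factual and conceptual knowledge"
    METACOGNITIVE = "awareness of one's own cognition"

class CognitiveProcess(Enum):
    """Cognitive (skill) dimension: six ways knowledge can be applied."""
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6
```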
Clark and Estes (2008) noted that, to ensure employees are prepared to meet individual and
team goals in support of larger business goals, knowledge and skill can be scaffolded through
information giving, job aids, training, or education. In this classification system, information is
the lowest level of support, providing basic knowledge but no behavioral support
(Clark & Estes, 2008). Job aids give information to employees and add structures of how to use
that information to ensure behavioral success (Clark & Estes, 2008). Training and education
provide heavy knowledge and behavioral support but are differentiated by the scope of support
provided. Training focuses on providing procedural knowledge with built-in feedback cycles to
promote skill, whereas education lays a broad theoretical foundation that includes a
greater focus on the application of factual and conceptual knowledge (Clark & Estes, 2008).
The scaffolding of knowledge and skills to stakeholders in education policy
implementation is imperative for success (Elliott, 2020; Kainz et al., 2021; Kirkpatrick &
Kirkpatrick, 2016). Research has found effective learning and development can overcome
sociocultural barriers typically found in policy initiative implementation (Elliott, 2020);
however, the effectiveness of learning and development depends heavily on the skill of the trainer and
the design of the training process (Clark & Estes, 2008; Kirkpatrick & Kirkpatrick, 2016).
The decisions made during training design directly affect how participants organize new
knowledge into existing knowledge frameworks and whether participants can access and use new
knowledge in the future (Clark & Estes, 2008).
Specific to the HB3 reading academy implementation, the factual, conceptual,
procedural, and metacognitive knowledge of HB3 reading academies cohort leaders would center
on their understanding of the systems and rules required to implement HB3 reading academies in
their regions. Necessary factual knowledge for cohort leaders would include understanding the
implementation rules created by TEA. Conceptual knowledge would require cohort leaders to
understand how individual implementation rules relate to the plan and goals of HB3 reading
academies implementation. Procedural knowledge would be assessed through a cohort leader’s
ability to apply implementation rules in their role. Metacognitive knowledge would require
cohort leaders to analyze when they had sufficient knowledge to make implementation decisions
versus when they needed to clarify their understanding with TEA. Beyond assessing
cohort leaders’ various levels of knowledge, it is additionally necessary to understand how the
training provided by TEA impacted the overall knowledge level of this group.
Motivation
The second component identified by Clark and Estes (2008) that can support or
inhibit organizational performance is motivation. Motivation is a cognitive process that
incentivizes beginning and sustaining action (Bandura, 1977; Chiu, 2018; Clark & Estes, 2008).
Ryan and Deci (2000) noted motivation is the determining factor for human behavioral
regulation and is based on the psychological needs of competence, relatedness, and autonomy. It
determines whether a person is willing to actively pursue goals, persist in the face of challenges,
and develop creative solutions (Clark & Estes, 2008). Workplace motivation has been widely
studied, and the resulting research has identified two pairs of motivation constructs that are particularly
relevant to the concept of change management: intrinsic and extrinsic motivation (Broedling,
1977; Ford, 2020; Ryan & Deci, 2000; Yoon et al., 2015) and self- and collective efficacy
(Bandura, 1977, 1997; Kunnari et al., 2018; Stokes, 2018).
Intrinsic motivation and extrinsic motivation are regularly discussed as paired concepts,
but they are not opposing ends of a single scale (Chiu, 2018; Ryan & Deci, 2000;
Yoon et al., 2015). Intrinsic motivation is motivation that comes from internal desires (Chiu,
2018; Ryan & Deci, 2000; Slemp et al., 2021) and can derive from joy, interest, achievement orientation, or meaning-making processes (Chiu, 2018; Specht et al., 2018). Intrinsic motivation
has been linked to creativity, persistence, and higher levels of job performance (Chiu, 2018).
Intrinsic motivation can be encouraged through autonomy, a challenging work environment, and
supportive feedback; relatedly, it can be inhibited through threats, punitive measures, and a
perceived lack of control by the employee (Ryan & Deci, 2000).
Extrinsic motivation comes from positive and negative external pressures and produces
mixed performance results (Chiu, 2018; Ryan & Deci, 2000; Yoon et al., 2015). Some external
motivation mechanisms relate to internal reactions to external systems, such as the desire to
adapt to societal norms or the importance of being outwardly perceived as competent and
dependable (Ryan & Deci, 2000). Others are direct reactions to external stimulators, such as fear
of punishment or a desire for a reward (Ryan & Deci, 2000; Slemp et al., 2021). Some external
motivation behaviors positively affect employee performance (Ryan & Deci, 2000; Yoon et al.,
2015). Intangible rewards, such as praise and recognition, have been found to support both
employee intrinsic and extrinsic motivation toward creativity goals (Yoon et al., 2015).
Additionally, regulation behaviors motivated by external sources have been shown to positively
impact performance (Ryan & Deci, 2000); however, tangible external rewards have consistently
been linked to inhibited intrinsic motivation and a lower desire among employees to meet ambitious
goals (Yoon et al., 2015). External pressure and fear of consequences have been shown to
negatively affect basic motivation levels (Ning & Jing, 2012). Perceived threats and fear of
consequences negatively impact motivation by lowering a person’s self-efficacy (Bandura,
1977).
Self-efficacy relates to a person’s belief in whether they can complete a task or succeed in
a specific situation (Bandura, 1977; Stokes, 2018). Self-efficacy is important to assess and track
in an organization because it strongly predicts how successful an employee will be at completing
the functions of their job role (Bandura, 1977; Liou & Daly, 2020). Additionally, it can predict
whether an employee will accomplish complex assignments and persist in the face of challenges
(Bandura, 1977; Wilcox & Lawson, 2018). Self-efficacy can be built through various
experiences, including past individual success; witnessing or vicariously experiencing success
through peer interactions; verbal encouragement; or the ability to maintain a sense of calm in the
context of task completion (Ford et al., 2020). In change management literature, research has
found employees with high levels of self-efficacy related to their job role can better adapt and
succeed during change initiatives (Tuan, 2017).
Collective efficacy describes a group’s belief in whether the group can complete a
specific task or succeed in a specific role (Bandura, 1997). While similar to self-efficacy,
collective efficacy is not the sum of each group member’s individual efficacy (Bandura, 1997).
Instead, it is a separate construct that measures an individual or group’s perception of the
efficacy of the group as a whole (Bandura, 1997). Collective efficacy is affected by individual
members’ perceptions, attitudes, and skills and can be purposely built by leaders who manage
and motivate the group based on individual needs and characteristics (Bandura, 1997; Wilcox &
Lawson, 2018). During change initiatives, team beliefs in their efficacy to implement change are
directly and positively correlated to team success (Kunnari et al., 2018; Stokes, 2018).
Specific to the HB3 reading academies, the motivation of cohort leaders focuses on their
perceived capability to successfully implement reading academies and what motivated them to
do so. Self-efficacy includes an individual cohort leaders perceived ability to implement reading
academies, focusing on individual competency beliefs. Collective efficacy describes the
perception of whether cohort leaders, as a group, felt confident in their ability to implement
reading academies, highlighting the general beliefs of cohort leaders as a collective. Cohort
leaders’ intrinsic and extrinsic motivation captures the factors that contributed to a cohort leader’s
desire (or lack thereof) to successfully implement reading academies. Throughout these
questions, the overall impact of TEA decision making illustrates how a state agency’s
communication and program implementation can impact motivation, both for the individual and
the group as a whole.
Employee motivation and organizational culture are closely related (Clark & Estes, 2008;
Slager et al., 2021). Organizational cultures with changing performance targets, lack of
transparency and compassion, lack of employee autonomy, and highly critical managers
consistently undermine the motivation of their employees (Clark & Estes, 2008). Additionally,
whether teams or individuals have the motivation to succeed may be impacted by their capacity
to meet performance goals (Slager et al., 2021). The following section discusses organizational
structures as supporting or inhibiting organizational change. Many of these identified structures
directly impact employee and team self-efficacy and organizational culture (Clark & Estes,
2008).
Organizational Structures
The final component that can support or impede organizational change is the support and
climate an organization provides to its stakeholders, collectively known as organizational
structures (Clark & Estes, 2008; Kainz et al., 2021; Stokes, 2018). Continuous improvement
researchers have noted an initiative’s success or failure depends on the decision making of
organizational leadership and their commitment to agile implementation
(Dixon & Palmer, 2020; Shakman et al., 2020; Wang & Fabillar, 2019). Research on continuous
improvement and education reform consistently identifies four key organizational behaviors that
support the success of change initiatives: (a) creating robust internal structural processes, (b)
keeping a human-centered approach, (c) providing a quality improvement culture, and (d) using
high-leverage change behaviors (Clark & Estes, 2008; Kainz et al., 2021; Wang & Fabillar,
2019).
The creation of internal structural processes is relatively straightforward. This
organizational behavior requires the establishment of clearly defined roles, processes, and
communication channels (Dixon & Palmer, 2020; Park et al., 2013; Shakman et al., 2020; Wang
& Fabillar, 2019). Opportunity for the use of structural processes begins at the inception of an
implementation initiative with the creation of a detailed and adaptable implementation plan that
uses systems thinking to coordinate the interaction of multiple teams in a system (Park et al., 2013;
Shakman et al., 2020; Wang & Fabillar, 2019). It continues throughout the implementation
process in the form of clear and actionable procedures for using continuous improvement science
through a variation of a PDSA cycle to ensure processes are consistently monitored and adjusted
toward the end goal (Shakman et al., 2020; Wang & Fabillar, 2019). Part of this continuous
improvement work is developing an accountability system that ensures organizational
performance is on track to meet goals (Dixon & Palmer, 2020). Finally, clear communication
channels must be used throughout the implementation to keep all stakeholders engaged in the
process (Dixon & Palmer, 2020; Park et al., 2013). By providing these channels, stakeholders
understand the goal of a change initiative and the specific actions they need to take to succeed,
ultimately leading to buy-in and ownership of the process (Dixon & Palmer, 2020; Park et al.,
2013; Wang & Fabillar, 2019).
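As a purely illustrative aside, the monitoring logic of a PDSA cycle can be sketched in a few lines of Python. The sketch is not drawn from the studies cited above; the function parameters and the goal threshold are hypothetical placeholders for whatever measures and adjustment routines an implementation team adopts:

```python
# Illustrative sketch only: the monitoring loop of a PDSA (plan-do-study-act)
# cycle. The callables and the 0.9 goal threshold are hypothetical placeholders,
# not elements of any implementation described in the cited research.

def pdsa_cycle(plan, do, study, act, goal=0.9, max_cycles=10):
    """Repeat the plan-do-study-act loop until performance meets the goal."""
    for cycle in range(1, max_cycles + 1):
        result = do(plan)            # Do: enact the current plan
        performance = study(result)  # Study: measure progress toward the goal
        if performance >= goal:      # Goal met: stop cycling
            return plan, cycle
        plan = act(plan, result)    # Act: adjust the plan, then repeat
    return plan, max_cycles
```

The loop form makes visible the commitment the cited research describes: processes are not enacted once but are monitored and adjusted repeatedly until outcomes meet the stated goal.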
The second organizational behavior relates specifically to whether organizational
leadership engages in change management with a human-centered approach. A human-centered
approach in the education sector requires all stakeholders, from district leadership to the students
they serve, to be placed at the heart of a change initiative (Dixon & Palmer, 2020; Park et al.,
2013; Wang & Fabillar, 2019). This approach requires consistent assessment of the change
initiative to ensure that it is a practical request of the broader education community and that the
implementation design supports the initiative’s sustainability (Wang & Fabillar, 2019). It also
necessitates that leadership approach change with humility and vulnerability and be willing to
put the success of a change initiative over the individual desire to protect situational power
(Dixon & Palmer, 2020). Additionally, it requires leaders to share the design of the change
initiative with stakeholders and demonstrate compassion and understanding through the
uncertainty of change (Dixon & Palmer, 2020). By engaging with stakeholders through these
authentic practices, leaders can model the core behaviors that eventually lead to a culture of
continuous improvement through providing psychological safety in the change initiative (Park et
al., 2013).
Leaders’ modeling of learning sets the stage for the third organizational behavior needed for
change success: providing a quality improvement culture (Dixon & Palmer, 2020; Park et al.,
2013; Shakman et al., 2020; Wang & Fabillar, 2019). Quality improvement is described as using
qualitative and quantitative data analysis to guarantee successful outcomes (Park et al., 2013).
This focus on organizational advancement toward a common goal is the cultural underpinning
that allows continuous improvement science to produce results (Dixon & Palmer, 2020). To
develop a quality improvement culture, organizational leadership must model curiosity through
deep understanding and involvement in the change management process (Dixon & Palmer, 2020;
Park et al., 2013). Practically, providing a quality improvement culture relates to creating
structural supports (Shakman et al., 2020). Leaders can build an improvement culture by creating
routines for collecting data, designing space for reflection, and remaining willing to adjust
implementation planning in the name of outcomes (Shakman et al., 2020; Wang & Fabillar,
2019). Organizational leadership’s commitment to a continuous improvement culture lays the
foundation for PDSA cycles to take root and flourish in a system (Dixon & Palmer, 2020; Park et
al., 2013; Shakman et al., 2020; Wang & Fabillar, 2019).
The final organizational behavior needed for successful implementation is using high-leverage change behaviors based on theoretical research and practice (Dixon & Palmer, 2020;
Shakman et al., 2020; Wang & Fabillar, 2019). High-leverage change behaviors include a range
of potential behaviors based on the unique needs of a change initiative. Considerations include
the use of systems thinking in designing implementation procedures (Dixon & Palmer, 2020),
ensuring the organization’s readiness to engage in continuous improvement efforts (Shakman et
al., 2020), and promoting the depth, spread, and sustainability of a change initiative (Wang &
Fabillar, 2019). Another behavior that falls in this category and is specific to the education
community is breaking down structural barriers between state and district leaders to ensure
collaborative support of an initiative across education agencies (Dixon & Palmer, 2020;
Shakman et al., 2020). Overall, the key to enacting high-leverage change is a commitment by
leadership to identify barriers to successful implementation (Shakman et al., 2020) and to find
innovative ways to overcome those barriers for the sake of the initiative and the betterment of the
organization (Dixon & Palmer, 2020; Wang & Fabillar, 2019).
Specific to the HB3 reading academies implementation, the treatment of organizational supports
differs from that of the knowledge and motivation sections. Cohort leaders must reflect on their understanding
of TEA decisions and behaviors and how those perceived decisions and behaviors impacted their
overall ability to implement reading academies in their regions. Specific behaviors to be reflected
on in organizational structures and processes include a clear implementation plan, accountability
systems for all stakeholders (including the TEA), and clear and accessible communication
channels. The concept of a human-centered approach for this case study describes whether the
needs of cohort leaders, authorized providers, and district leadership and staff were solicited and
considered throughout the implementation process. Closely related, data-driven decision making
asks whether input from cohort leaders, authorized providers, and district leadership and staff
was regularly and structurally solicited and whether feedback was incorporated into the overall
implementation plan. High-leverage change behaviors focus on whether Texas educators were
adequately prepared to implement HB3 reading academies and whether the education
community perceived they were a collaborative partner during the implementation process.
Additionally, high-leverage change behaviors include whether ongoing barriers to successful
implementation were identified and fixed in a timely manner to minimize the impact on
participants.
Conceptual Framework
The previous research on knowledge, motivation, and organizational behaviors was used
to create the conceptual framework for this study, with the Clark and Estes (2008) gap analysis
framework used as the theoretical structure. By drawing on the gap analysis framework created
by Clark and Estes, the conceptual framework of this study analyzes the knowledge,
motivation, and organizational structures provided by TEA to support the implementation of the
HB3 reading academies. These three domains were assessed to determine whether the primary
implementers of the HB3 reading academies, the cohort leaders, had the knowledge, motivation,
and organizational structures needed to implement reading academies effectively. The following
research questions guided this process:
1. What were the cohort leaders’ perceptions of their knowledge related to HB3 reading
academies implementation?
2. What were the motivational influences affecting cohort leaders’ ability to implement
HB3 reading academies?
3. How did state organizational structures impact implementation of HB3 reading
academies from the cohort leaders’ perspective?
To examine Research Question 1, the four types of knowledge identified by the revised
Bloom’s taxonomy—factual knowledge, conceptual knowledge, procedural knowledge, and
metacognitive knowledge (Krathwohl, 2002)—were used to determine the strengths and gaps
in cohort leader knowledge related to the implementation of the HB3 reading academies. To
examine Research Question 2, the motivational concepts of external motivation and internal
motivation (Ryan & Deci, 2000), self-efficacy (Bandura, 1977), and collective efficacy
(Bandura, 1997) were used to determine how motivational influences impacted cohort leaders’
ability to implement HB3 reading academies effectively. Finally, to examine Research Question
3, continuous improvement science was leveraged to determine whether TEA employed
effective structural supports, a human-centered change approach, a quality improvement culture,
and high-leverage change behaviors to support cohort leader implementation success (Dixon &
Palmer, 2020; Park et al., 2013; Shakman et al., 2020; Wang & Fabillar, 2019). These 12
constructs, nestled in the gap analysis framework, are included in Figure 2.
Summary
In this chapter, a review of literature on continuous improvement in education reform was
provided. It began with a recent history of U.S. education accountability reform to situate
continuous improvement in the historical context. Next, the relationship between continuous
improvement and education reform was explored through the analysis of continuous
improvement research, specifically focusing on the use of continuous improvement models and
continuous improvement science to scale change. Then, a general overview of continuous
improvement practices across the education sector was provided, and the use of continuous
improvement science by SEAs was identified as a research gap. Finally, Clark and Estes’s (2008)
gap analysis process model, the revised Bloom’s taxonomy, and research in motivation and
continuous improvement were used to create a conceptual model to assess a state education
reform policy implementation.
Figure 2
Conceptual Framework: State Education Agency Performance Gap Analysis
[Figure 2 depicts the 12 constructs of the gap analysis organized in three domains: Knowledge (factual knowledge, conceptual knowledge, procedural knowledge, metacognitive knowledge); Motivation (intrinsic motivators, extrinsic motivators, self-efficacy, collective efficacy); and Organization (structural support processes, human-centered approach, quality improvement culture, high-leverage change behaviors).]
Chapter Three: Methodology
This chapter describes the methodology of the study, the purpose of which is to assess the
impact of the TEA’s decision making during the policy design and implementation of the HB3
reading academies to identify strengths and areas of growth for continuous organizational
improvement. The chapter begins with a brief overview of the research questions, the research
setting, and the researcher’s positionality. Then, a description of the data collection and analysis
process is provided, along with information about the credibility, transferability, dependability,
and confirmability of the study. Finally, this chapter concludes by describing the practices and
procedures used to ensure that the study meets the ethical considerations required by social
sciences research.
Research Questions
To identify strengths and performance gaps in a SEA’s implementation of an education
reform policy, the Clark and Estes (2008) gap analysis framework was used to identify cohort
leaders’ perceptions as to whether they had opportunities to gain the knowledge and motivation
supports needed to effectively implement the HB3 reading academies, and whether the correct
organizational structures were provided to ensure their success in implementation. Three
research questions were designed to guide this analysis:
1. What were the cohort leaders’ perceptions of their knowledge related to HB3 reading
academies implementation?
2. What were the motivational influences affecting cohort leaders’ ability to implement
HB3 reading academies?
3. How did state organizational structures impact implementation of HB3 reading
academies from the cohort leaders’ perspective?
Overview of Design
A qualitative research design was used in this study to support the conceptualization of
how SEA policy implementation impacts an initiative’s overall success. Specifically, a
qualitative case study approach was employed to understand cohort leaders’ lived experiences
throughout the HB3 reading academies implementation process. Merriam and Tisdell (2016)
described qualitative case studies as an “in-depth analysis of a bounded system” (p. 42). This
research study met the criterion of this definition, as it focused exclusively on data collection
related to the HB3 reading academies implementation.
The choice to use a qualitative design also provided the opportunity for the collected data
to guide the research project’s focus concurrently with the theoretical lens of the gap analysis
framework and be supported by the historical lens of education reform policy implementation
and continuous improvement science. Using an inductive approach allowed for a greater
understanding of the lived experience of the cohort leaders and supported the identification of
larger themes in state education policy implementation. To meet these research goals, the data
collection process for this dissertation consisted of semistructured, individual cohort leader
interviews.
Research Setting
The research setting for this study was a nonphysical, collaborative space between the
TEA and the HB3 reading academies authorized providers, whom the state agency selected to
facilitate HB3 reading academies implementation. At the onset of the initiative, TEA authorized
30 regional service centers, independent school districts, and nonprofit education agencies for
this purpose. Authorized providers were charged with hiring HB3 reading academies cohort
leaders to lead cohorts of up to 100 participants through the reading academies blended content
modules.
While authorized providers employed cohort leaders, management structures related to
the role were designed and overseen by TEA. To be hired for a cohort leader role, candidates had
to pass a stringent three-step certification process designed and managed by TEA. Additionally,
all cohort leaders were trained for their role by TEA and followed explicit role functions as
designed by the state agency.
The HB3 reading academies structures required cohort leaders to bridge organizational
membership between their employer, the authorized provider, and their role developer,
TEA. Cohort leaders were selected as participants for this reason. Because of
their role relationship with TEA, cohort leaders could speak to the implementation’s training,
culture, and organizational design, similar to employees of the state agency; however, because
their employment contracts were with outside organizations, cohort leaders had more freedom to
speak candidly about their experiences without fear of identification and retribution.
The Researcher
I have supported the implementation of the HB3 reading academies in multiple settings.
As a former employee of the TEA, I served as the grant manager for the READ grant, which was
the precursor to the HB3 reading academies. Additionally, I served on the HB3 reading
academies team at the project’s inception, allowing insight into the early planning stages in the
SEA.
I transitioned to a role with an HB3 reading academies authorized provider, where I
served as the manager and coordinator of the project. I implemented HB3 reading academies
regionally in this role and managed a team of cohort leaders. These experiences have supported
my deep understanding of the strengths and challenges experienced by cohort leaders during the
first year of HB3 reading academies implementation, and the overall impact that TEA decision
making had on the success of the initiative. Additionally, my experience has aided my overall
contextual understanding of the HB3 reading academies implementation process, enhancing my
ability to design this study; however, as the purpose of this study was to understand how TEA
decision making impacted the success of implementation from the perspective of the cohort
leader, protective structures were built into the study design to ensure the cohort leaders’ lived
experiences, rather than my own, guided the findings.
First, I purposefully modified the Clark and Estes (2008) gap analysis framework to
include not only performance gaps but also strengths in TEA decision making.
Allowing for both strengths and areas of growth supported a more holistic understanding of the
impact of SEA decision making on an initiative and provided opportunities for cohort leaders to
share positive experiences that differed from my lived experience. The interview protocol (see
Appendix A) was explicitly designed to reflect this modification.
Second, two protective structures were built into the interview protocol and data
analysis process to ensure that my lived experiences did not impact the data collection or
resulting themes. First, during data collection, if participants referenced common experiences, I
told participants that I would not discuss my own experiences, keeping the focus on their
perceptions of implementation. Second, member checks were used during the data analysis
process to ensure that identified themes were attributable to each individual research
participant’s experience, rather than cocreated with my lived experiences.
Data Sources
Data collection consisted of individual interviews with HB3 reading academies cohort
leaders to assess the knowledge acquisition, motivational supports, and organizational structures
experienced as part of the HB3 reading academies implementation process. Using interviews
allowed data gathering related to a participant’s authentic experience during the HB3 reading
academies implementation (Creswell & Creswell, 2018). Details of the participants,
instrumentation, and analysis of collected data are included in the next section.
Participants
Participants for this study were recruited using the network sampling approach, in which
the researcher asked current research participants to connect her with additional
potential participants interested in contributing to the study (Merriam & Tisdell,
2016). This type of sampling approach is also known as snowball sampling. Ten HB3 reading
academies cohort leaders were interviewed, each having been employed by a different authorized
provider. Additionally, each cohort leader was selected to represent one of Texas’ 20 education
regions. Because the researcher did not have access to demographic information of participating
HB3 reading academies cohort leaders, she was unable to identify characteristics of a participant
sample that would be representative of the total population of cohort leaders who served in the
first year of implementation. Instead, this study interviewed cohort leaders representing half of
all education regions in the state of Texas. This representation of education regions allowed for
the potential of a diverse geographic sample of experience while acknowledging the limitations
of the number of interviews the researcher could reasonably complete due to the time constraints
of the dissertation process. The specific regions and authorized providers represented in this
study are not disclosed to protect participant confidentiality.
To qualify for study participation, cohort leaders were required to meet three
criteria. First, any cohort leader interviewed needed to have a minimum of 1 year of
instructional coaching experience in the public education sector prior to their role as a HB3
reading academies cohort leader so they could compare the knowledge, motivational supports, and
organizational structures provided during previous instructional coaching experiences to the
support provided during HB3 reading academies. Additionally, the requirement to have
completed 1 year of instructional coaching in the public education sector ensured a basic
understanding of public education structures that helped cohort leaders evaluate the impact of
TEA decision making on the public education system. Second, selected participants served as a
cohort leader during the first year of HB3 reading academies implementation (School Year
2020–2021). By participating in the initial rollout, participants were better positioned to analyze
the initial planning and adaptive qualities of the project implementation in real-time through the
interview’s assessment of cohort leaders’ perceived knowledge and motivation related to the
implementation, and the adjustment to organizational structures during Year 1 of
implementation. Candidates were not required to have continued in the cohort leader role in Year
2 of implementation; however, many of the participants in this study had multiple years of
experience as a HB3 reading academies cohort leader. Finally, selected participants needed to
have completed an entire cycle of HB3 reading academies from the initial launch of the cohort
through the close of content 1 month later. Requiring this full experience allowed participants to
speak to the complete implementation process and reflect on the initiative as a whole. Outside of
these three criteria, no other demographic data were collected to ensure complete
confidentiality and protect participants from adverse consequences that might result from study
participation.
There is a second category of HB3 reading academies cohort leaders hired by an
independent school district, which then contracts with an authorized provider to provide
logistical support and general oversight for the district’s cohorts. In this structure, school district
leadership manages the relationship with the authorized provider, who then acts as an
intermediary between the district and TEA. Due to these additional organizational layers, the
participant pool did not include cohort leaders employed by independent school districts who are
not authorized providers of HB3 reading academies.
Instrumentation
This study used a semistructured interview protocol to collect data; the protocol included a
list of specific topics to cover but also provided flexibility to allow the research participant’s
experience to guide the data collection process (Merriam & Tisdell, 2016). As described in the
conceptual framework, each participant was asked questions designed to identify organizational
strengths and areas of growth in TEA’s HB3 reading academies implementation. These questions
were broken into three subcategories: (a) questions designed to assess whether cohort leaders’
knowledge needs were met or unmet using the revised Bloom’s taxonomy knowledge dimension
(Krathwohl, 2002), (b) questions designed to assess motivational influences on cohort leaders’
implementation ability drawing from Ryan and Deci’s (2000) research on internal and external
motivation and Bandura’s (1977, 1997) work in self-efficacy and collective efficacy, and (c)
questions designed to assess whether TEA organizational structures supported or inhibited cohort
leader implementation using continuous improvement research (Dixon & Palmer, 2020; Park et
al., 2013; Shakman et al., 2020; Wang & Fabillar, 2019).
Data Collection Procedures
All participant interviews were conducted one on one through web-based platforms.
Using individual interviews allowed for the strict protection of participant identity from outside
stakeholders and other participants (Creswell & Creswell, 2018). In addition, while interviewing
on web-based platforms could have limited the interactions between participant and
researcher, given the geographic size of the state of Texas, using a web-based platform promoted
participation by allowing interviews to occur based on interviewee scheduling needs
(Merriam & Tisdell, 2016).
Each identified cohort leader participated in a 60-minute interview to discuss their
holistic experience as a HB3 reading academies cohort leader. A semistructured interview
protocol was used (see Appendix A). With participant permission, all interviews were recorded
and transcribed so that the researcher could engage with the research participant more effectively
during the interview process (Merriam & Tisdell, 2016). The researcher then used the recorded
interview to check the transcription for errors. Once an interview transcription was deemed
accurate, the recording was permanently deleted. In addition, all identifiable participant
information was removed from the transcription to protect participant confidentiality (Merriam
& Tisdell, 2016).
Data Analysis
Data analysis began by coding transcripts using the 12 concepts identified as part of the
conceptual framework development. In the knowledge group, the four categories used were (a)
factual knowledge, (b) conceptual knowledge, (c) procedural knowledge, and (d) metacognitive
knowledge. In the motivation group, the four categories used were (a) intrinsic motivators, (b)
extrinsic motivators, (c) self-efficacy, and (d) collective efficacy. Finally, in the organization
group, the four categories were (a) structural supports, (b) human-centered approach, (c) quality
improvement culture, and (d) high-leverage change behaviors. The a priori coding of data
included two codes for each of these 12 subcategories: a code to indicate the presence of a
research-based practice being used and a code to indicate the absence of a research-based
practice. Additionally, open codes were developed simultaneously during the data analysis
process.
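To make the structure of this scheme concrete, the following sketch expresses the a priori codebook as a small data structure. The construct names are taken from the conceptual framework; the label format (e.g., factual_knowledge/present) is illustrative and is not the labeling scheme used in the actual analysis:

```python
# Minimal sketch of the a priori codebook described above: 12 constructs, each
# paired with a presence code and an absence code (24 a priori codes in total).
# The "construct/status" label format is illustrative only.

CONSTRUCTS = {
    "knowledge": ["factual_knowledge", "conceptual_knowledge",
                  "procedural_knowledge", "metacognitive_knowledge"],
    "motivation": ["intrinsic_motivators", "extrinsic_motivators",
                   "self_efficacy", "collective_efficacy"],
    "organization": ["structural_supports", "human_centered_approach",
                     "quality_improvement_culture", "high_leverage_change_behaviors"],
}

# Expand each construct into its presence/absence pair of a priori codes.
codebook = [f"{construct}/{status}"
            for group in CONSTRUCTS.values()
            for construct in group
            for status in ("present", "absent")]

assert len(codebook) == 24  # 12 constructs x 2 codes each
```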
Once coding was complete, the researcher generated themes from the coding analysis.
Participants were then asked to evaluate the identified themes and categorize their experiences
against each theme to validate researcher findings. Participants were specifically asked to
determine whether each theme was holistically reflective of their experience, whether the theme
was partially reflective of their experience, or whether participant experience entirely deviated
from the identified themes (see Appendices C & D). For example, if the identified theme was
“communication to cohort leaders was inconsistent,” participants would be asked to indicate
whether that theme was reflective of their experience as a cohort leader, partially reflective of
their experience, or not reflective of their experience. Cohort leaders were asked to provide
context around the theme for partially reflective or not reflective responses to support
understanding of how their experience deviated from the identified theme. There were no
identified themes that most participants indicated were either partially or not reflective of their
experience. In cases where a small subset of participants experienced deviations from themes,
these deviations are described as part of the data analysis in Chapter 4.
Credibility, Transferability, Dependability, and Confirmability
In the realm of qualitative research, there is a focus on ensuring the credibility,
transferability, dependability, and confirmability of the data collection and analysis process
(Merriam & Tisdell, 2016). By ensuring that each of these four concepts are addressed, the
researcher can build the trustworthiness of the study. Trustworthiness is defined by Merriam and
Tisdell (2016) as conducting qualitative research in a rigorous manner that allows readers to trust
or believe the study’s findings. In this section, the researcher addresses how this study’s
methodology was purposefully designed to ensure trustworthiness in the qualitative context.
Merriam and Tisdell (2016) described the concept of credibility as the ability of findings
to align with reality. To ensure the findings of this study align with participant perception of their
captured reality, member checks, which allow participants to analyze and comment on the
accuracy of data analysis findings, were employed (Creswell & Creswell, 2018). Additionally, as
part of the dissertation process, peer review of research findings was naturally built into the data
analysis process.
Transferability describes how generalizable the study’s findings are to new situations
(Merriam & Tisdell, 2016). To promote the transferability of the conceptual framework, the
researcher drew on research commonly used across educational contexts rather than literacy-specific
studies. The use of the revised Bloom’s taxonomy (Krathwohl, 2002), Ryan and Deci’s (2000)
description of internal and external motivation, Bandura’s (1977, 1997) work related to self-efficacy and collective efficacy, and general and widespread research relating to continuous
improvement in education (Dixon & Palmer, 2020; Park et al., 2013; Shakman et al., 2020;
Wang & Fabillar, 2019) provided an opportunity to transfer the conceptual framework across
education reform contexts. Additionally, transferability was supported by selecting participants
across authorized providers and Texas education regions. By ensuring that cohort leaders were
selected to represent a diverse geographical perspective, the researcher captured stakeholder
feedback representing the larger state rather than a specific organization or region in the system.
Dependability and confirmability were addressed through adequate engagement in data
collection and the provision of rich descriptions of data themes. Dependability is based on the concept
that readers of qualitative analyses feel that results “make sense,” while confirmability requires
that thematic analysis be based on the data collected during the research process (Merriam &
Tisdell, 2016). By providing detailed data descriptions and full discussions relating to data that are
divergent from general themes, readers could feel confident that the analysis provided is
consistent with the lived experiences of participants (Merriam & Tisdell, 2016).
Ethics
Paramount to the role of a researcher is the ethical ability to protect research participants
from harm related to the study. Because this study asked participants to contribute data that could
be considered politically inflammatory, the ethics of participant safety were heightened (Merriam
& Tisdell, 2016). Critical research safeguards were included throughout the data collection
process to protect participants from undue harm. First, as participants were being recruited for
the study, informed consent was addressed through explicit descriptions of the study purpose, the
voluntary nature of participation, and the researcher’s commitment to participant confidentiality,
including detailed descriptions of the data management process. A copy of the informed consent
document can be found in Appendix B. Additionally, verbal, rather than written, consent to
participate in the research was acquired during the interview process
so that the researcher did not store any documentation of the identity of participants
throughout the data collection and analysis process.
To ensure participant privacy, all recorded interviews were stored on the researcher’s
computer hard drive and only for the time frame it took the researcher to check the interview
transcript against the video. Once transcription checks had taken place, all recorded interview
materials were permanently deleted. Additionally, as part of the transcription process, all
identifiable information was removed from the transcript, and pseudonyms were substituted for
participant names, authorized providers, and any districts or schools mentioned in the interview.
Using this procedure ensured that no one outside the researcher and participant, including the
researcher’s dissertation committee, had access to the participant’s identity. Additionally, no
demographic data related to the participant group were provided in the study to ensure participant
privacy.
An argument could be made that initiating a performance audit of the TEA without their
direct involvement could sidestep ethical research guidelines. In a private organization, this
dilemma might be much more pronounced; however, because TEA is a governmental
organization that is ultimately a steward of public funds and the education system in the state, the
researcher believes that providing evaluative stakeholder perceptions to the general public does
not constitute an ethical breach of conduct (Merriam & Tisdell, 2016, p. 262).
Chapter Four: Findings
The purpose of this research study was to assess the effect of the TEA’s decision making
during the policy design and implementation of the HB3 reading academies to identify strengths
and areas of growth for continuous improvement, as perceived by cohort leaders. This study used
a qualitative case study approach. A modified Clark and Estes (2008) gap analysis framework
served as the theoretical basis for the study. Data collection consisted of 60-minute individual
interviews with HB3 reading academies cohort leaders to assess the knowledge acquisition,
motivational supports, and organizational structures experienced as part of the HB3 reading
academies implementation process. This chapter describes the study participants and presents a
summary of findings. Three research questions guided this study:
1. What were the cohort leaders’ perceptions of their knowledge related to HB3 reading
academies implementation?
2. What were the motivational influences affecting cohort leaders’ ability to implement
HB3 reading academies?
3. How did state organizational structures impact implementation of HB3 reading
academies from the cohort leaders’ perspective?
Participants
Ten HB3 reading academies cohort leaders participated in this research study. Each
participant was employed by a different organization that the TEA authorized to implement
reading academies, together representing approximately one third of all authorized providers during the
initial year of reading academies implementation. Each participant additionally represented a
single Texas education region, or half of all Texas education regions in total.
Each participant had instructional coaching experience prior to their role as a
HB3 reading academies cohort leader. Additionally, all participants served as a HB3 reading
academies cohort leader for the entirety of the initial year of implementation (School Year 2020–
2021). Most participants continued in this role through Years 2 and 3 of reading academies
implementation (School Year 2021–2022 and School Year 2022–2023, respectively). As
outlined in Chapter 3, no other demographic information was collected due to the highly political
nature of the project. To protect individual identities, each participant was assigned a pseudonym
based on the order of their interview.
Findings
Eight themes emerged during the data analysis process. These themes are organized
under the study’s three research questions, which focused on the cohort leaders’ knowledge
perception, motivational influences, and perspective of organizational structure impact on
implementation.
After themes were identified, participants were asked to determine whether each theme
was reflective of their experience as a cohort leader, partially reflective of their experience, or
not reflective of their experience through a theme analysis survey. If participants selected either
partially reflective or not reflective, they were prompted to provide additional context related to
their response. Participants who indicated a theme was reflective of their experience were given
the option to provide additional context but were not required to do so.
All participants completed and returned the theme analysis survey. Overall, participants
indicated the identified themes were reflective of their experience in 93.75% of their responses
and partially reflective of their experience in 6.25% of their responses. In two instances, a
participant’s partially reflective response indicated the participant did not understand the theme.
These two responses were removed from the overall analysis (the filling knowledge gaps and
accountability themes). No participants indicated that the themes were not reflective of their experience
as a cohort leader. See Table 1 for a list of identified themes and statements from the theme
analysis survey.
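The reported percentages are consistent with a simple tally. With 10 participants each rating eight themes, the survey yielded 80 responses, and 93.75% corresponds to 75 of those 80. The counts in the sketch below are inferred from the reported percentages rather than taken directly from the study data, and the sketch assumes the percentages were computed over all 80 responses, including the two later removed:

```python
# Checking the reported theme analysis survey percentages against the response
# counts they imply. The counts of 75 and 5 are inferred from the reported
# 93.75% / 6.25% split, not taken directly from the study data.

participants, themes = 10, 8
total_responses = participants * themes  # 80 survey responses in total

reflective = 75            # inferred: responses marked "reflective"
partially_reflective = 5   # inferred: responses marked "partially reflective"

print(f"{reflective / total_responses:.2%}")            # prints 93.75%
print(f"{partially_reflective / total_responses:.2%}")  # prints 6.25%
```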
The remainder of this chapter will discuss individual themes in detail. Each theme is
presented and organized under its related research question. Participant responses from the theme
analysis survey will be discussed alongside the theme discussion for each research question.
Research Question 1: What Were the Cohort Leaders’ Perceptions of Their Knowledge
Related to HB3 Reading Academies Implementation?
Cohort leaders were asked four interview questions to assess their perceptions of
knowledge related to the HB3 reading academies implementation, explicitly focusing on the first
year of implementation as they were onboarding to their new role (see Appendix A). Two themes
emerged from their responses. First, cohort leaders indicated three types of knowledge were
needed in their new role: content, technology, and procedural knowledge. Cohort leaders
indicated that their level of knowledge in each of these three areas was highly dependent on
experiences prior to their role as a cohort leader. Additionally, cohort leaders indicated
that when knowledge gaps existed, they typically relied on personal relationships and research
skills to bridge these knowledge gaps. The theme analysis survey findings for these two themes
can be found in Table 2.
Table 1
Study Themes and Theme Analysis Survey Statements
Research question: What were the cohort leaders’ perceptions of their knowledge related to HB3 reading academies implementation?
Theme: Knowledge needs and contexts. Survey statement: Cohort leaders had three types of needed knowledge for their role: content, technology, and procedural. A cohort leader’s level of knowledge was highly dependent on previous experiences and/or background knowledge.
Theme: Filling knowledge gaps. Survey statement: Cohort leaders leveraged research skills and relationships to fill identified knowledge gaps.

Research question: What were the motivational influences affecting cohort leaders’ ability to implement HB3 reading academies?
Theme: Intrinsic motivation. Survey statement: Cohort leaders were intrinsically motivated to be successful in their role by their passion for high-quality literacy instruction, desire to support students and teachers, and loyalty to their authorized provider.
Theme: Collective and self-efficacy. Survey statement: While cohort leaders did not feel that as a group they were set up for success in their role, they found role success by leveraging their individual knowledge, intrinsic motivation, and resiliency skills.

Research question: How did state organizational structures impact implementation of HB3 reading academies from the cohort leaders’ perspective?
Theme: Preplanning and responsive management. Survey statement: Cohort leaders believe that in-depth preplanning and responsive management structures could have supported a more consistent and effective implementation of HB3 reading academies.
Theme: Stakeholder communication channels. Survey statement: Cohort leaders identified that reading academies implementation could have been more successful with stronger and more structured communication channels between Texas Education Agency and all stakeholder groups.
Theme: Accountability. Survey statement: Cohort leaders believe that a public-facing accountability plan would have made HB3 reading academies implementation more impactful.
Theme: Human-centered approach. Survey statement: Cohort leaders felt that the content of the HB3 reading academies was high quality; however, using human-centered design would have made it more applicable for participants and would have garnered higher stakeholder buy-in.1

1 The term “human-centered design” was used by the researcher and participants as shorthand when discussing the use of the human-centered approach in designing policy implementation plans. This term is distinct from the human-centered design process, which utilizes iterative design phases to move from project conceptualization to implementation.
Table 2
Knowledge Theme Analysis Survey Findings
Theme: Knowledge needs and contexts. Survey statement: Cohort leaders had three types of needed knowledge for their role: content, technology, and procedural. A cohort leader’s level of knowledge was highly dependent on previous experiences and/or background knowledge. Finding: 100% of participants indicated that the theme was reflective of their experience as a cohort leader.
Theme: Filling knowledge gaps. Survey statement: Cohort leaders leveraged research skills and relationships to fill identified knowledge gaps. Finding: 100% of participants indicated that the theme was reflective of their experience as a cohort leader.
Knowledge Needs and Contexts
When cohort leaders were asked to discuss role-specific knowledge, they indicated three
types of knowledge a cohort leader needed to succeed in their role: (a) content knowledge, (b)
technology knowledge, and (c) procedural knowledge. Cohort leaders agreed their prior
experience and background knowledge in these three areas heavily predicted whether they had
the appropriate knowledge base to be successful in their role at the onset of HB3 reading
academies implementation. This focus on prior experience and background knowledge arose
because cohort leaders generally did not feel they were provided with enough role-specific
information in the professional development provided for cohort leaders.
Content knowledge encompassed the early literacy and adult learning expertise a cohort
leader needed to effectively lead their participants through the HB3 reading academies content.
Due to their previous experience in literacy-related roles and/or educational experiences, most
participants felt they had a strong base of content knowledge they could draw on as cohort
leaders. Bailey summarized the importance of content knowledge well when she shared, “The
coursework was so expansive that previous content knowledge was a must. Facilitators who
weren’t experts in all grade bands of K–3 literacy struggled.”
Cohort leaders additionally drew on previous knowledge related to adult learning to
support participants in completing the professional development requirements. Multiple cohort
leaders mentioned leveraging previous coaching knowledge to identify how to build scaffolded
support opportunities into participant assignments by creating sample videos so that they could
model how to complete the assignment. Annie shared,
I feel like a lot of times we preach “I Do, We Do, You Do.” That gradual release piece
that is best practice, if you will. But that wasn’t really at the forefront of TEA’s rollout.
… So, we worked very hard as an authorized provider to develop how we can make that
look for them by way of, okay, here’s an example using math, or here’s an example using
science so that we could show them what the I Do, We Do looked like that was the task
of the [assignment]. We felt like we had to develop that on our own because it wasn’t
clear enough to the participants of what to do.
Cohort leaders who led comprehensive cohorts additionally drew on previous adult
learning knowledge to facilitate in-person professional development sessions effectively. These
types of activities included building engagement opportunities into the session guides that TEA
provided.
Technology knowledge related to the cohort leader’s ability to navigate the technological
platforms used for the HB3 reading academies content. The two project-related platforms were
Canvas, where cohort leaders reviewed and graded participant assignments and digitally
interacted with participants, and Zoom, which was used for in-person professional development
and coaching cycles (comprehensive only) and cohort leader professional development and
support. Cohort leaders who had used these platforms previously could effectively navigate both
systems; however, not all cohort leaders had experienced these platforms prior to the HB3
reading academies rollout. Ivy noted,
We had to learn how to use Canvas, which I’d never used before. I had to learn how to
use Zoom, which, it was new to all of us. Right? And then I had to learn all of the
intricacies of—How do you grade? How do you post an announcement? All of that,
which we had to learn on our own.
Finally, background procedural knowledge for cohort leaders was generally related to
participation in a statewide grant that had served as the precursor to the HB3 reading academies,
called the READ grant. The READ grant’s purpose was to provide similar content to a subset of
selected teachers and administrators across the state of Texas before the passage of HB3, which
then required all K–3 teachers to participate in an early literacy academy. HB3 reading
academies cohort leaders who had served as READ grant coaches felt generally comfortable
with their procedural knowledge of HB3 reading academies implementation, as the two roles
were very similar; however, cohort leaders without that prior experience initially struggled with
understanding their new role. Ella, who had served as a READ grant coach prior to her role as a
cohort leader, noted,
Honestly, I think I was a lucky one in this situation because I did participate in the READ
grant previously. … I don’t think that if I didn’t participate in the [READ] grant, I mean,
I probably would have been like the rest of my team, asking questions about what this
role looked like, how much work we would really have to do… So, I think I was only
prepared because of the [READ] grant.
All participants agreed with the statement that their knowledge level was highly
dependent on previous experience and/or background knowledge on the theme analysis survey.
In her optional additional response, Heidi noted she “would have had some much more difficulty
leading cohorts without all of these being strong skills before becoming a cohort leader.” Cohort
leaders generally did not feel that the role-specific professional development provided for cohort
leaders nor the business rules in the first year of implementation were sufficient in building their
content, technological, or procedural knowledge; however, as role-specific professional
development and business rules fall under the category of organizational support, they are
discussed later in this chapter. To fill identified knowledge gaps, cohort leaders leveraged
personal research skills and relationships to access the knowledge needed to succeed.
Filling Knowledge Gaps
Throughout the HB3 reading academies implementation, cohort leaders generally used
alternative rather than official channels to fill role-specific knowledge gaps. Cohort leaders did
not immediately use TEA as a knowledge source for a variety of reasons, including a high level
of staff turnover on the TEA reading academies team throughout the initiative, and a lack of
consistency in the timeliness and content of responses, which will be discussed in greater detail
through the organizational supports themes. Instead, cohort leaders leveraged both personal
relationships and research skills to fill knowledge gaps. In describing why it was more beneficial
to use alternative information pathways, Diana noted when contacting TEA with questions,
You have to kind of jump through certain hoops. Like, you have to email this person for
this or call this person for this or fill out this form for that and it’s just very convoluted
and confusing. So, it’s easier to go to someone you know and say, “Hey, what do you
know about XYZ?” Then to try to remember where to find this or what is in what hub
and what document and go hunting.
Some cohort leaders relied heavily on their internal team at their authorized provider
organizations to fill knowledge gaps. This collaboration sometimes included formal knowledge-building activities, such as meeting regularly to review and annotate professional development
content as a team; however, more often, cohort leaders noted their relationships focused on
informal information sharing as knowledge was accrued by individuals in the team. Gemma
described how her team supported each other in building role-specific knowledge when she said,
We asked a lot of questions, not only by ourselves, but as a group. Because we wanted to
make sure that everybody was doing their best. So, if one of us asked a question either to
TEA itself or to someone else, we were just constantly sharing, “This is what I learned
today if it can help you.” So, it just took a lot of teamwork.
Other cohort leaders leveraged relationships across the state to fill knowledge gaps.
Several cohort leaders who had worked together on previous state-wide literacy initiatives
developed a working group that met monthly via Zoom. Annie described the function of this
group as, “pull[ing] each other along and work[ing] together to problem solve or identify how we
could better support each other and our cohorts.” Similar to the authorized provider–specific
teams, this state-wide team participated in more formal knowledge-building activities, such as
creating additional materials for individual cohorts, but also functioned as a general monthly
information source where participating cohort leaders could pool and compare the information
they had learned since the last time they had met.
Some cohort leaders, either due to personal preference or lack of access to relationship
networks, chose to supplement their knowledge by leveraging personal research skills to locate
needed information. Leveraging personal research skills happened most frequently
the realm of content-specific knowledge. Because of the amount of content presented in the HB3
reading academies, cohort leaders would sometimes need to supplement their current literacy
knowledge. Claire shared,
There are things that I don’t know, but I feel like I need to go and find out. I don’t usually
go and ask someone from TEA, though. I will seek [out] that other information. … If I
have a question about content, they’ve given me the researchers’ names, they’ve given
me other articles to read. And so, I would rather do that instead of going in and saying,
“Can you explain this to me?”
On the theme analysis survey, all participants felt that the statement “Cohort leaders
leveraged research skills and relationships in order to fill identified knowledge gaps” was
reflective of their experience. One participant response was removed from this analysis due to a
lack of understanding of the theme. Faith confirmed the theme by sharing that “Cohort leaders
very often relied on one another and a project manager in their authorized provider system, [and]
across authorized provider systems, to figure things out that were left unclear.” Cohort leaders’
ability to use personal relationships within and outside their authorized provider, along with
research skills, supported their overall understanding of their role and allowed them to find success as a cohort
leader.
Research Question 2: What Were the Motivational Influences Affecting Cohort Leaders’
Ability to Implement HB3 Reading Academies?
Cohort leaders were asked three interview questions to identify the motivational
influences affecting their ability to implement HB3 reading academies (see Appendix A). Their
responses discussed implementation broadly throughout multiple years of the project, unlike the
responses in the knowledge section, which focused heavily on the first year of implementation
due to the heavy acquisition of knowledge during the initial rollout. Two themes emerged
from their responses. First, cohort leaders indicated that they were intrinsically motivated to
succeed in their role by their passion for high-quality literacy instruction, desire to support
students and teachers, and loyalty to their authorized provider organization. Cohort leaders also
shared that while they did not feel that they were set up for success as a group, they could find
success in their role by leveraging their knowledge, intrinsic motivation, and personal resiliency
skills. The theme analysis survey findings for these two themes can be found in Table 3.
Intrinsic Motivation
When discussing their motivation for role success, cohort leaders overwhelmingly
indicated they were driven by intrinsic motivators. The primary motivators cohort leaders
discussed included a passion for high-quality literacy instruction, a desire to support student
literacy acquisition, and a desire to support teacher success. An additional motivator repeatedly
noted was an intrinsic desire to successfully support the larger mission and success of the
authorized provider employing the cohort leader.
Table 3
Motivation Theme Analysis Survey Findings
Theme: Intrinsic motivation. Survey statement: Cohort leaders were intrinsically motivated to be successful in their role by their passion for high-quality literacy instruction, desire to support students and teachers, and loyalty to their authorized providers. Finding: 100% of participants indicated that the theme was reflective of their experience as a cohort leader.
Theme: Collective and self-efficacy. Survey statement: While cohort leaders did not feel that as a group they were set up for success in their role, they found role success by leveraging their individual knowledge, intrinsic motivation, and resiliency skills. Finding: 80% of participants indicated that the theme was reflective of their experience as a cohort leader; 20% indicated that the theme was partially reflective of their experience as a cohort leader.
A common motivator identified by the group was the desire to support high-quality
literacy instruction. The design of the HB3 reading academies content coincided with a statewide
shift to a more comprehensive focus on building student literacy skills, called the science of
teaching reading. Many cohort leaders chose to serve in their role to support the widespread use
of this reading model. Bailey shared,
I believe that the science of teaching reading is beneficial. … And having gone through
all the literacy initiatives, I did not realize how many districts and how many teachers
have a misunderstanding of literacy and the components of it.
In addition to focusing on traditional literacy instruction, the initial launch of the HB3
reading academies was the first time that the TEA offered a biliteracy professional development
session to the entire state. A subset of participants with a background in bilingual literacy
education noted their excitement to participate in an initiative that provided much-needed
support to bilingual teachers. When asked what motivated her to succeed, Ivy said, “This is my
passion. Biliteracy and bilingual education. [Reading academies biliteracy courses] are needed.”
Closely related to cohort leader passion for high-quality literacy instruction was a general
desire to support both students and teachers in the literacy classroom. This motivator was distinct
because it focused on the individual’s success (student or teacher) rather than a literacy skill set.
Due to their previous experiences as literacy coaches and teachers, multiple participants
discussed their motivation for role success in supporting others to achieve a sense of
accomplishment and mitigating the frustration that students and teachers may feel when
unsuccessful.
June shared a personal story of how a child in her family was impacted by their teacher’s
participation in reading academies. The mother of the child had just returned from a parent-teacher conference, where the teacher had attributed the child’s success in reading to the new
instructional methods she had learned during the HB3 reading academies. June expressed a
desire to support a similar impact for the teachers she was serving by saying,
I just don’t know how we make sure every kid is getting what my 5-year-old’s getting,
you know? Her in-class scores were just off the charts. Her mother is like, look at her
scores, and I know it’s because of [reading academies]. I mean, I know she is bright, but
she has a wonderful teacher who’s teaching her a great foundation, you know?
Heidi shared that what drove her to be successful was the frustration she saw when a
teacher’s instruction did not seem to make a difference for students. She noted,
You know, it’s awful as a teacher to be doing things if your kids aren’t getting it. You’re
wasting all this time and you’re doing all these things but it’s just not ever getting them
right where they need to be, so that they can be independent [readers and writers]. … So,
I felt like it was very motivating for me, because I thought, I can really help change the
lives of kids by helping their teachers.
These examples highlighted a commonly expressed desire to make a positive impact on
the people most directly affected by the HB3 reading academies professional development
sessions: the participating teachers and the students in their classrooms. All cohort leaders noted
in their interviews that intrinsic motivation was a contributing factor, if not the main driving factor, in
their desire for role success.
The final motivator cohort leaders commonly discussed was the desire to succeed due to
their loyalty to their authorized provider organizations. At the onset of the HB3 reading
academies implementation, authorized provider organizations were notified that they would
have to regularly reapply for authorization to provide reading academies to their stakeholders.
Data related to implementation, including participant success rates, would factor into the
reauthorization process. Additionally, a subset of authorized providers were Texas Education
Service Centers (ESCs). In Texas, ESC organizational metrics are set by the commissioner of
education and include a metric related to HB3 reading academies implementation. Cohort leaders
shared they wanted to support the success of their authorized provider through the
implementation metrics. Gemma noted, “We are a very small authorized provider, and so any
metric can significantly impact the whole organization. … If you tell us the metrics, we’re
getting that metric. You know, we exhausted all options.”
Multiple participants were also motivated to support their organization’s overall success
with stakeholders because of their pride in their authorized provider. Cohort leaders noted they
wanted to contribute to a shared sense of organizational quality held by the wider community the
authorized provider served. Ella shared an example of a time when her participants were
unhappy with the HB3 reading academies’ structure, which TEA designed. She noted,
I think we did spend some time as cohort leaders trying to communicate that we aren’t
TEA. … Like, I remember saying that a lot because we always are nervous that because
we are affiliated with an authorized provider, we don’t want them to think this is how we
do professional development.
Gemma and Ella illustrated the motivation of cohort leaders to present their authorized
provider as a successful organization, both to TEA through implementation metrics and to their
communities through successful professional development experiences.
When directly asked whether TEA contributed to cohort leader motivation, the answer
was typically no. Cohort leaders felt somewhat isolated from the organization’s goals and the
direct impact of TEA staff, noting the authorized provider served as a buffer between the two.
Additionally, cohort leaders indicated a stronger interest in the regions they served rather than
focusing on TEA’s statewide outcomes. When asked why she thought TEA did not have an
impact on her motivation for role success, Faith noted,
I didn’t really ever hear a lot of TEA people ever talking about how it was actually
affecting classroom instruction, or what our goals were. … You know, the law passed.
You already have the READ grant. So, obviously they were kind of taking that in a
direction to fit a mandate. And the people that were involved in that were much more
interested in feeling important and accomplished. I was interested in helping teachers
teach kids, and not really because of anything conveyed by the TEA people that I ever
had contact with.
All participants indicated this theme was reflective of their experience as a cohort leader.
In her optional survey response, Bailey indicated one way TEA may have inadvertently impacted
the overall motivation that led to cohort leader success. Due to the rigorous statewide screening
process cohort leaders had to pass to be eligible to lead a cohort, she noted the possibility that
“the program was super selective … [in choosing] leaders with a passion for literacy.”
Regardless, participants overwhelmingly indicated that leveraging intrinsic motivation related to high-quality literacy instruction, a desire to support student literacy acquisition, a desire to support teacher success, and a desire to contribute to the success of their authorized provider organization was key to their success as a cohort leader.
Collective and Self-Efficacy
There was a general consensus among participants that cohort leaders were not
collectively set up for success in their role. This lack of collective efficacy was typically
attributed to two causes. First, TEA’s lack of in-depth preplanning was identified as a significant
cause. When asked if she felt that cohort leaders were set up for success, June said,
“No. Not at all. Because I don’t think we had any real guidance on what [reading academies] was
going to look like or the time [it would take].” The general need for in-depth preplanning will be
discussed in further detail under the theme preplanning and responsive management, in the Organizational Structures section of this chapter.
The second reason that cohort leaders felt they were not collectively set up for success in
their role was due to the onset of the COVID pandemic coinciding with the initial rollout of HB3
reading academies. In Texas, authorized provider organizations and schools went fully remote
in March 2020 due to the global pandemic. The first HB3 reading academies cohorts launched 4
months later, in July 2020. The impacts of COVID on HB3 reading academies will be discussed
in greater detail in the theme Human-Centered Approach. While cohort leaders did not feel
collectively set up for success, each participant noted their ability to find individual efficacy in
their role by drawing on their knowledge, intrinsic motivation, and resiliency skills. These three
areas helped cohort leaders persevere through uncertainty to succeed in their role.
A subset of cohort leaders shared that, to feel successful, they focused on pursuing
personal knowledge growth. Claire shared that prior to her role as a cohort leader,
I was working as an instructional specialist, but I wanted to know more. … I was getting
ready to work on another degree. So, I’m getting ready to learn a little bit more about
reading. … And so, my initial step out there was so that I could be able to use what I was
learning in a better way.
This drive for greater understanding that led Claire to the cohort leader role also
supported her perseverance throughout her time as a cohort leader. Multiple participants agreed
that focusing on building knowledge that they could use to support teachers and students
supported their ability to succeed.
Participants additionally indicated that focusing on their individual motivations for taking the role helped them find success. For some, this came through a personal belief in a
growth mindset. When discussing her role-specific accomplishments, Diana shared,
I always wanted to do better and to learn more. I had heard a presenter early in my career
that said, if you ever have a lesson plan that you think is good enough and you don’t have
a hundred percent success, then you don’t have a lesson plan that’s good enough. And so,
I have always lived by that. … We all wanted to do better, and we all wanted to change
and do things for kids.
Focusing on a growth mindset supported Diana’s perseverance. Other cohort leaders
grounded themselves in the stakeholders they were serving. When Faith was asked what
motivated her to be successful as a cohort leader, she said,
Just because of teachers first. Like, they have to do this. So, I’m going to make sure that
they feel supported and that they at least learn something from it. So, my motivation was
that this is a really big ask, and I don’t want it to be a useless item on a checklist that took
you 60 to 80 hours to complete, and you didn’t get anything out of it, and you felt
frustrated the whole time.
Multiple participants agreed with Faith and shared that they leveraged determination to be successful for their cohort participants and the broader communities they represented. To ensure their
success, cohort leaders drew on personal resiliency and problem-solving skills. When confronted
with a lack of needed materials, cohort leaders collaborated to create them. When participants
asked for support beyond what was provided in the professional development modules, cohort
leaders created new structures to meet their teachers’ needs. Heidi summarized it best when she
said, “I was going to be successful. There was no way I was going to let these teachers down. I
had a vested desire of wanting them to succeed, wanting to succeed myself, wanting to not let
my [participants’ district] down.”
Eight out of 10 participants felt that the statement “While cohort leaders did not feel that
as a group they were set up for success in their role, they found role success by leveraging their
individual knowledge, intrinsic motivation, and resiliency skills” was reflective of their
experience. Annie shared, “Given that the pandemic interrupted life as we know it, I certainly
had to stay resilient and motivated and use my knowledge to get creative with how we would get
teachers to complete necessary requirements.” Two participants indicated that this theme only partially reflected their experience as a cohort leader. During the participant theme review survey, both emphasized the belief that TEA intended to set cohort leaders up for success. Bailey believed the “resources were
intended to set up cohort leaders for success,” and Claire noted, “There were things in the
beginning that were very helpful [LETRS training and PLCs]. Just not enough”; however,
specific to Claire’s comment, the two structures referenced (LETRS training and PLCs) were
provided to READ grant coaches, not HB3 reading academies cohort leaders. Claire’s reflection
supports the theme knowledge needs and contexts.
While cohort leaders did not feel collective efficacy in their role due to lack of in-depth
preplanning and the COVID pandemic, participants could leverage intrinsic motivation and
personal resiliency to find self-efficacy as a cohort leader. This self-efficacy was heavily
grounded in the desire to serve teachers and the wider community and a desire for personal
growth and development. Barriers to collective efficacy will be discussed in greater detail
throughout the four organizational structures themes.
Research Question 3: How Did State Organizational Structures Impact Implementation of
HB3 Reading Academies From the Cohort Leaders’ Perspective?
Cohort leaders were asked six interview questions to assess their perspective of how state
organizational structures impacted the HB3 reading academies implementation (see Appendix A). Some responses were specific to the first year of implementation, while others addressed the
full scope of project implementation. During data analysis, four themes emerged. First, cohort
leaders indicated in-depth preplanning and responsive management structures could have
supported a more consistent and effective implementation of HB3 reading academies. Second,
cohort leaders identified reading academies implementation could have been more successful
with stronger and more structured communication channels between TEA and stakeholder
groups. Third, cohort leaders believed that a public-facing accountability plan would have
made HB3 reading academies implementation more impactful. Finally, cohort leaders felt that
using a human-centered approach to the implementation design would have made HB3 reading
academies content more applicable for participants and would have garnered higher stakeholder
buy-in. The theme analysis survey findings for these four themes can be found in Table 4.
Preplanning and Responsive Management
As discussed in the theme filling knowledge gaps, cohort leaders regularly experienced
gaps in knowledge related to their role. Participants identified that these gaps were partially due
to a lack of in-depth preplanning and inconsistent management structures during the HB3
reading academies implementation rollout. In multiple situations, cohort leaders felt appropriate
time and attention had not been allocated for decisions related to the HB3 reading academies
business rules, which inhibited their ability to successfully implement the program in their
regions.
Cohort leaders identified that there was a general need for clear-cut rules related to the
rollout. One example of this need was the participation requirements for the HB3 reading
academies. The original statute indicated that all kindergarten through Grade 3 teachers and
elementary principals would be required to attend; however, Bailey noted, “Within that [statute],
there was much discrepancy. … It was not until February of 2022 that it was truly clarified for
us. … And that totally changed the implementation.”
Table 4

Organizational Structures Theme Analysis Survey Findings

Theme: Preplanning and responsive management
Theme analysis survey statement (O.1): Cohort leaders believe in-depth preplanning and responsive management structures could have supported a more consistent and effective implementation of HB3 reading academies.
Theme analysis survey finding: 100% of participants indicated the theme was reflective of their experience as a cohort leader.

Theme: Stakeholder communication channels
Theme analysis survey statement (O.2): Cohort leaders identified that reading academies implementation could have been more successful with stronger and more structured communication channels between TEA and all stakeholder groups.
Theme analysis survey finding: 100% of participants indicated the theme was reflective of their experience as a cohort leader.

Theme: Accountability
Theme analysis survey statement (O.3): Cohort leaders believe that a public-facing accountability plan would have made HB3 reading academies implementation more impactful.
Theme analysis survey finding: 89% of participants indicated the theme was reflective of their experience as a cohort leader. 11% of participants indicated the theme was partially reflective of their experience as a cohort leader. One participant response was removed due to lack of understanding of the theme.

Theme: Human-centered approach
Theme analysis survey statement (O.4): Cohort leaders felt the content of the HB3 reading academies was high-quality; however, using human-centered design would have made it more applicable for participants and would have garnered higher stakeholder buy-in (see Footnote 1).
Theme analysis survey finding: 100% of participants indicated that the theme was reflective of their experience as a cohort leader.
It was communicated to some authorized providers and districts that exceptions would be
made for specific educators who did not teach literacy, such as third-grade departmentalized math
and science teachers; however, the exceptions were never fully clarified across the state, which
led to teachers completing reading academies and then being informed after the fact that they had
not been required to attend. This caused a general lack of buy-in from district leadership and
staff. Cohort leaders felt they often took the brunt of districts’ anger, even though they did not
have the authority to determine parameters for participation. Cohort leaders requested
clarification related to participation in Fall 2020, but as Bailey noted, they did not receive a
consistent, statewide response until February 2022. Participants noted this type of lag in
decision making was common throughout their experience as cohort leaders.
Similarly, participants identified that, especially in the first year of implementation, the
required materials and documents needed for the cohort leader role were not completed
promptly. Missing or incomplete materials impacted participants, as they lacked the information needed to fulfill their roles successfully. This was especially true for cohort leader expectations
related to the coaching and grading of participant artifacts. Annie described the result of
receiving grading guidance from TEA after teachers had already begun submitting assignments:
For example, scaffolds in a video. When a teacher uses scaffolds, if their fingers were
backwards in the camera, it was going to be x amount of points off. Well, that wasn’t
rolled out until later, and many teachers had already submitted their videos. … They’re
going to get that taken off and that will now be our fault; where if that had been rolled out
sooner and all of that had been prepared at an earlier time by TEA, then we would have
been able to communicate that.
Multiple cohort leaders additionally noted that even when decisions were made and
communicated, there were regular instances where decisions were changed in the middle of an
implementation cycle. Midstream changes were especially detrimental to the implementation
when they related to reading academies content, as expectations given to teachers by their cohort
leader when they began the course might not stay consistent throughout the program. Cohort
leaders noted communication related to their coaching and grading practices was especially
susceptible to change. Ella described an instance where discussion posts were rewritten and
updated in the middle of a cohort cycle:
Even within the year, there has been where some of my learners have responded to a
discussion post that then within like 2 weeks, the discussion post has been changed. …
Like how often the pieces move even within a year is so much that I could not even say
the same course I went through from year one is the same course today. But also, even
last week.
Not only did cohort leaders have to manage frustrations from stakeholders related to
changing expectations, but cohort leaders also dealt with their own frustrations, as they were
typically not consulted by TEA when midstream changes were made. In relation to cohort leader
input, Faith shared,
TEA often caused unintended consequences or unforeseen circumstances when issuing
directives on logistics or structure of content. Cohort leaders foresaw the consequences
[but] were not consulted on major decisions. There has been little to no collaboration
with or input from cohort leaders, authorized providers, district leaders, or teachers
related to major decisions.
Because they could not share how midstream changes would affect implementation in the
field, cohort leaders felt that many changes had negative impacts that could have been avoided
had cohort leaders been consulted before announcing the change. Ella’s discussion post story is
an example of this, as she had to create new systems to recheck previously graded discussion
posts, as TEA did not communicate when discussion posts were changed. Because she was
concerned that her participants would be selected for a “spot check” by TEA, she wanted to
ensure that their answers, and her grading, reflected the appropriate question stem. However, this
additional work required substantial time and effort.
Finally, even when cohort leaders identified an issue and requested support from TEA,
there were inconsistent response rates. When asked if she felt that identified issues were
addressed promptly, Diana shared,
If it was [an issue] that could be fixed in a timely manner, if it was like a technical issue
or something small that could be physically done quickly? Yes, it was. If it was a larger
issue, that took much longer to be addressed.
Cohort leaders generally felt that in certain instances and dependent on the TEA staff
member they were working with, there were times when barriers to implementation were
addressed quickly and appropriately; however, in other instances, it could take months to receive
a response, or they might not receive any response. Because participants were unsure when or
whether they would receive support, they could not build consistent structures or procedures that would have streamlined the implementation of reading academies in their cohorts.
All participating cohort leaders indicated the statement “Cohort leaders believe that in-depth preplanning and responsive management structures could have supported a more consistent and effective implementation of HB3 reading academies” reflected their experience.
Heidi summarized this sentiment well when she shared, “This was a big issue! We needed clarity
to provide the best customer service to our stakeholders. … It would have been helpful to have
better parameters.” Cohort leaders agreed in-depth preplanning and responsive management
structures would have better supported their role-specific knowledge and would have positively
impacted their ability to implement HB3 reading academies with their stakeholders.
Stakeholder Communication Channels
Cohort leaders unanimously identified communication channels as an area for improvement in HB3 reading academies implementation. In this theme, two significant
categories emerged. First, the participants noted that TEA communication related to general
implementation rules could have been more efficient and consistent. Additionally, participants
wished there had been more opportunities for feedback, both before and during implementation.
Cohort leaders’ need for information related to HB3 reading academies implementation
has been discussed in detail in previous themes. In this theme, the focus will be on the
communication delivery method rather than the content. Generally, cohort leaders wanted a clear
understanding of appropriate communication channels. Depending on the type of needed
information, cohort leaders might be expected to contact a specific staff member at TEA, fill out
an information request form, or post the question in a cohort leader discussion hub; however,
cohort leaders were regularly unsure which communication avenue was expected for each question that arose. Adding to the confusion, the TEA reading academies team experienced multiple staff turnovers during the implementation process, which contributed to both changes in information processes and a lack of clarity about whom to contact when processes changed. Claire shared,
“I don’t know who our reading academies specialists are, so I don’t know if I can go directly to
you because I don’t know who you are.”
Compounding the confusion, information was passed down to multiple stakeholder groups without a clearly defined information pathway. There were times that
changes were announced to cohort leaders before their authorized provider organizations, which
led to cohort leaders attempting to communicate new changes to their organization, who would
then reach out to TEA for further clarification. Additionally, there were times when districts
were informed of updates to HB3 reading academies before either authorized providers or cohort
leaders. Bailey shared,
There were Zoom meetings between the commissioner and superintendents, and
something would be said ahead of cohort leaders knowing and authorized providers. And
so, we were always backtracking. You know, tell us what to do, and we’ll do it. We just
need to know ahead of the districts.
Due to the lack of clear communication pathways, there were multiple times that various
stakeholder groups would hear an update about HB3 reading academies implementation that
would be taken for fact, only to find out later that the information was incorrect. Multiple cohort
leaders shared instances where district leaders would create and begin an implementation plan
for their teachers based on incorrect information, which caused high levels of frustration for both
parties.
When cohort leaders did receive information, they noted that at times it was too late to be
actionable. Faith shared, “It seemed like things were handled late. According to their timeline,
instead of according to the timeline that we needed in order to prepare.” An example of this was
discussed in the previous theme, when cohort leaders did not have access to grading guidelines
before teachers began to submit assignments.
The quality of the communication was also inconsistent. There were multiple instances
where a cohort leader asked a question and then found out later that another cohort leader had
asked the same question but received a different response. Combined with the midstream
changes discussed in the previous theme, cohort leaders felt that they worked in a constant state
of ambiguity, not knowing whether they had the correct information or whether it might change.
When asked to elaborate on how communication channels impacted her ability to implement
reading academies, Annie said,
[Questions] weren’t always accurately answered. And then down the road after we had
already started implementing, things were changed. And so, it made it difficult for us to
then hunt and try to figure out now what are we going to do to try to manage this
problem.
Alongside a desire for clearer and more consistent communication channels, cohort leaders also expressed that they wished they had been given opportunities to provide feedback prior to and during HB3 reading academies implementation. Generally, cohort leaders expressed an
understanding that few, if any, stakeholder groups had provided feedback on the original HB3
reading academies implementation plan. Ivy noted, “When I came into the picture, it was a done
deal. And this is how you roll it out. It was all laid out already. There was really no
collaboration, no discussion.”
After the HB3 reading academies rollout, participants had limited opportunities to
provide feedback based on their implementation experiences. Ella shared, “If they asked for
feedback, it was about content or technology issues. … Those were the two avenues. It wasn’t a
third avenue of business rules or the process they were using.” This limited cohort leaders’
ability to mitigate the issues that business rules and midstream changes created for teachers and their
districts. Participants additionally noted that even when feedback was requested, it was not
always used. When asked if she felt that cohort leader feedback was used, Claire stated,
Sometimes yes, oftentimes no. And I don’t know if it’s because they don’t really
understand what we are saying in the surveys and in the communication to them or if it’s
just not a squeaky enough wheel to address, or they don’t have the manpower to.
Other cohort leaders shared similar sentiments. Multiple participants who had served as
READ grant coaches prior to their cohort leader role noted they had the opportunity to provide feedback on the content modules, as the content was piloted in the READ grant before the initial HB3 reading academies rollout; however, the issues they flagged, from unclear wording to spelling errors, had still not been corrected in the Canvas modules.
All participants agreed with the statement, “Cohort leaders identified that reading
academies implementation could have been more successful with stronger and more structured
communication channels between TEA and all stakeholder groups.” In participant interviews,
cohort leaders indicated that communication issues were a significant barrier to their role-specific knowledge acquisition. They also cited communication issues as one of the main contributors to their lack of collective efficacy. Cohort leaders believed that having consistent and detailed communication for their peers, authorized providers, districts, and
teachers would have positively impacted stakeholder buy-in and the overall implementation of
HB3 reading academies.
Accountability
Due to the lack of feedback channels and unresponsive management structures,
participants indicated that they perceived a lack of accountability for the TEA
throughout the implementation process. When asked about statewide accountability, June
indicated,
I don’t know if TEA is held accountable. I mean, I don’t know if the right people
complain if things get changed, you know, like in lots of politics. … I don’t know who
holds them accountable because you know districts aren’t happy with all of this. … I
mean, they’re the boss. Whatever they roll out, we have to implement the initiative, and
put a smile on our face and act like it’s the best thing that’s ever happened.
Cohort leaders generally expressed that a public-facing accountability system could have
mitigated many of the problems that occurred during HB3 reading academies implementation.
Participants identified that clear program objectives, a detailed assessment plan, and access to
actionable data would have greatly benefited implementation.
Many cohort leaders identified a need for clear programmatic objectives. While there was general messaging across the state that reading academies would
support student reading scores, participants generally wished there had been more explicit
objectives to clarify the program’s purpose and to help contextualize the need for the
professional development experience. When asked about issues she experienced during
implementation, Gemma noted a significant barrier to success was
getting teachers to understand this is something that you have to do. This is something
that is important. Even getting administrators to understand it. It was met with a lot of
resistance, and it was hard to explain why this initiative was important and what it could
do for the teachers and what it could do for the students as well.
Cohort leaders additionally discussed the need for a consistent and purposeful assessment
system. Metrics for participant success shifted with each year of implementation. Additionally, some of the metrics that were set sent the wrong message about the program’s purpose. Initial metrics for HB3 reading academies were based on participant completion rates rather than teacher behaviors. Participants indicated these initial metrics had fostered a general belief that the professional development program was a requirement to complete rather than an opportunity to positively transform classroom instruction. June noted, “There [is] some data they’re grabbing
[related to completion]. But that doesn’t mean they’re implementing in their classrooms. We
have no way of knowing in the blended cohort whether it’s being implemented.”
Updating the metrics created additional issues, as comparing teacher growth required consistent data over time. Ivy shared that during a recent visit by the commissioner of
education, she was asked,
“Are you seeing teacher behaviors change? And how are you tracking that data?” What I
really wanted to say was, “Did you set that expectation upon us?” … When he comes
back to visit again, I am supposed to have some data.
Ivy was unsure what data she could provide because she was not asked to track teacher
behaviors until Year 4 of implementation and had not received guidance on what data she should use to track those behaviors. Cohort leaders expressed that they wished a detailed assessment
system had been designed as part of the initial implementation plan so that they could track
appropriate data to meet the program’s overall objectives.
Additionally, participants expressed the need for a plan for the TEA to share actionable
data with authorized providers and cohort leaders to support day-to-day implementation.
Currently, teachers participating in HB3 reading academies are asked to complete satisfaction
surveys related to their experience; however, those survey results are not shared with authorized
providers or cohort leaders. This inhibits cohort leaders’ ability to make implementation changes
to support the needs of their learners. Additionally, cohort leaders indicated no reporting function
is built into Canvas that allows for downloading participant grading data. Diana shared,
Even to this day, there is not a way in which I can pull real-time information and report
that out to districts on where their participants are [in the program] unless I go through
each individual participant one by one in the grade book, which is what I still have to do.
While cohort leaders have created systems to work around data access issues, they
wished the TEA had planned for data-sharing opportunities that minimized the need for the manual data pulls they are currently completing.
Eight out of nine cohort leaders agreed with the statement, “Cohort leaders believe that a
public-facing accountability plan would have made HB3 reading academies implementation
more impactful.” Heidi summarized the sentiment of this group when she shared,
Had I known about the metrics and the big picture of how data was collected, I likely
would have approached deadlines in a different way. … Beginning with the end in mind
and sharing that thinking is important. Knowing how the game will be played and what
the rules are make a big difference.
The remaining cohort leader, Bailey, indicated she partially agreed with the statement. In the comments of her participant theme review survey, she noted that, specific to the timing of the rollout overlapping with the global pandemic, “a public accountability plan might have made
things worse in the midst of COVID.” One participant response was removed from this analysis because the response reflected a lack of understanding of the theme.
Overall, participants agreed that providing clear objectives and an aligned assessment
system would have supported the overall implementation of the HB3 reading academies. While
there was a perception that a public-facing accountability system during COVID-19 could have
caused unintended consequences, cohort leaders generally believed an aligned accountability
system would have positively supported the overall impact of the initiative. It additionally would
have provided clarity on the objectives and goals of the professional development experience,
which could have supported stakeholder buy-in. Finally, cohort leaders noted having access to
data related to implementation would have enabled them to be more successful in their role.
Human-Centered Approach
Most participants noted the content created for the HB3 reading academies was high
quality and provided information that would help teachers deliver excellent literacy instruction to their students; however, there were two general concerns related to the absence of a human-centered approach in the TEA’s design of the implementation process. First, cohort leaders noted that the professional development design and general participant requirements did not consider teachers as holistic learners. Second, there was serious concern about the TEA’s
decision making for HB3 reading academies implementation amid the COVID-19 pandemic.
Generally, cohort leaders expressed concern about the design of the professional
development experience, especially for the blended model. In the blended course, teachers were
required to complete modules in the Canvas platform. In these modules, teachers read through a series of slides independently and were then assessed through
discussion posts and assignments. Teachers had little interaction with their cohort leader or other
teachers throughout the course. Additionally, in early communication, the TEA indicated the
course would take 60 hours for teachers to complete; however, cohort leaders shared that it took
closer to 80–100 hours for many teachers. The amount of content, coupled with the lack of
interactivity, made it difficult for teachers to internalize the material being presented. Faith
shared,
Many teachers and school leaders have continually and consistently expressed that the
quality of the content and the need for learning the content is not in dispute. Many have
expressed their desire to learn the content but were frustrated about the course structure
and the amount of content presented being an obstacle to true learning and
implementation.
Teachers struggled with the internalization of the content and experienced difficulty
completing the course while teaching full time and managing their lives outside of work. If a
teacher did not complete the content and pass all assignments in the 11-month window, they lost
all previously completed work and had to start over in a new course. Cohort leaders felt there
was not enough flexibility provided to teachers in this requirement and that the TEA did not
consider teachers’ holistic lives when creating the requirements or addressing teacher needs
throughout implementation. Gemma described some of the unique issues her teachers faced.
It was tough, as a cohort leader, to hear, “I’m starting chemotherapy, and I don’t know if
I’m going to be able to dedicate my time to reading academies.” Or “I’m about to take
maternity leave, and I’m trying to finish this before maternity leave. But there’s a lock,
and I can’t go on and finish this, but I really want to spend time with my baby.” What am
I to say, you know? But this reading academy over here is so important? I can’t say that.
Some cohort leaders wondered if the lack of responsiveness to teacher needs was due to
TEA’s general lack of understanding of the realities of many of the districts they serve,
especially in rural communities. Cohort leaders who served rural populations expressed
frustration that the design of the HB3 reading academies did not take into consideration the
multiple roles that teachers play in small communities. Diana reflected on attempting to advocate
for her rural teachers and shared,
And you have people who are in communities and schools that are unlike anything
anyone at TEA has ever seen. You know, I had an issue with trying to figure out what to
do with one of my teachers who was a teacher and a principal and the head girls’ athletic
director and the bus driver, and I remember speaking to someone, and they literally just
absolutely could not believe that there would be somebody that had to wear all of these
hats.
The COVID-19 pandemic put additional pressures on teachers, schools, and districts, but
cohort leaders did not feel that the TEA was responsive to this extraordinary situation. Many
participants indicated the state agency needed a better understanding of how the educational environment changed during the pandemic. While all Texas teachers worked remotely in
Spring 2020 at the onset of the pandemic, due to the political climate, many returned to the
classroom that fall. Additionally, due to many parents’ discomfort with the idea of sending
students back to school at that point in the pandemic, many teachers had to offer in-person and
online instruction, doubling their workload. Attempting to complete reading academies on top of this was too much for many teachers. Ella shared the general attitude of many of her
participants when she said, “I mean, 60 hours’ worth of material to be done in the middle of the
school year in the middle of COVID? If anything, that was probably the easiest year because
they were at home more.”
Cohort leaders also dealt with the changing climate at the school and district level during
the pandemic. While many districts had created a plan to support teachers through the process
before the pandemic, they faced a host of new challenges. Many teachers no longer had access to
the district’s technology infrastructure, limiting their access to hardware and the internet.
Additionally, while districts had originally planned to provide substitutes for classrooms to give
teachers time to complete HB3 reading academies coursework, there were no longer adults in the
community willing to come into schools and cover classrooms. When discussing the district she
served, Heidi noted,
And of course, with COVID, [the district] absolutely had no substitutes, absolutely could
not offer [teachers] any professional development days. … So, the world had just
crashed, and these teachers are trying to figure out virtual and hybrid [instruction] and
then having this course on top of it.
All participating cohort leaders agreed that the statement “Cohort leaders felt that the
content of the HB3 reading academies was high-quality; however, using human-centered
design would have made it more applicable for participants and would have garnered higher
stakeholder buy-in” (see Footnote 1) was reflective of their experience as a cohort leader.
Several indicated this was the most impactful barrier to HB3 reading academies implementation.
Specifically, cohort leaders believed the participants of the HB3 reading academies, and the teachers and districts they served, should have driven both the design of the professional development experience and the decision-making process during implementation. Multiple cohort leaders
expressed that they are now involved in creating courses for their authorized providers with titles
such as I Completed Reading Academies, Now What? to attempt to support the teachers in
building literacy instruction skills they did not acquire during their HB3 reading academies
course.
Summary
In this chapter, the findings from the study were provided. Two themes were identified
under the first research question, which focused on the cohort leaders’ knowledge related to HB3
reading academies implementation. First, cohort leaders indicated there were three types of
knowledge needed for their role: content knowledge, technological knowledge, and procedural
knowledge. Cohort leaders perceived their levels of knowledge were highly dependent on their
previous experiences and background knowledge. Additionally, cohort leaders shared that to fill
gaps in knowledge, they leveraged personal research skills and relationships.
Two additional themes emerged under the second research question, which focused on
the motivational influences affecting cohort leaders’ ability to implement the HB3 reading
academies. Cohort leaders shared they were intrinsically motivated to be successful in their role
by their passion for high-quality literacy instruction, desire to support students and teachers, and
loyalty to their authorized provider organizations. Cohort leaders also indicated that while they
did not feel they were set up for success in their role, they found success by leveraging
their individual knowledge, intrinsic motivation, and resiliency skills.
Four themes were identified under the final research question, which asked cohort leaders
to reflect on how state organizational structures impacted HB3 reading academies
implementation. First, cohort leaders shared a general belief that in-depth preplanning and
responsive management structures could have supported a more consistent and effective
implementation process. Next, cohort leaders noted implementation could have been more
successful with stronger and more structured communication channels. Cohort leaders
additionally believed a public-facing accountability plan would have made implementation more
impactful for participants. Finally, cohort leaders felt the initiative would have been more
applicable for participants and would have garnered higher stakeholder buy-in if a human-centered approach had been applied to both content and implementation design.
Chapter Five: Discussion and Recommendations
The purpose of this research study was to assess the effect of the TEA’s decision making
during the policy design and implementation of the HB3 reading academies to identify strengths
and areas of growth for continuous improvement as perceived by cohort leaders. This chapter
will begin with a discussion of the findings of this study, which will include recommendations
for state education policy design and implementation. The chapter will conclude with the
limitations and delimitations of the study and recommendations for future research.
To investigate the effect of TEA’s decision making during the policy design and
implementation of the HB3 reading academies, a modified version of Clark and Estes’s (2008) gap analysis
process model was used to create the conceptual framework for the study, initially presented in
Chapter 2. Participating cohort leaders were asked to reflect on their experience implementing
the HB3 reading academies in the three areas identified by Clark and Estes as potential causes
for organizational performance gaps: knowledge, motivation, and organizational structures. In
these three areas, education and adult learning research were referenced to create a
semistructured interview protocol to guide conversations with participants (see Appendix A).
Concepts that contributed to the knowledge, motivation, and organizational structures framework
are captured in Figure 2.
Knowledge and Motivation
The first research question of the study focused on the perceptions of cohort leaders’
knowledge related to HB3 reading academies implementation. Two themes were identified. First,
cohort leaders indicated there were three types of knowledge needed for their role: (a) content
knowledge, (b) technological knowledge, and (c) procedural knowledge. Due to a lack of
consistent information provided by the TEA, cohort leaders perceived their levels of knowledge
were highly dependent on their previous experiences and background knowledge. Additionally,
cohort leaders shared that to fill identified gaps in knowledge, they leveraged personal research
skills and relationships.
The conceptual framework for the Knowledge section of this study centered on the four
types of knowledge as identified by Bloom’s revised taxonomy: (a) factual, or basic topic-related
knowledge such as definitions and details; (b) conceptual knowledge, or the interrelatedness of
factual knowledge within the larger content system (c), procedural knowledge, which is the
knowledge of how to apply factual and conceptual knowledge; and (d) metacognitive knowledge
(Anderson, 2005; Krathwohl, 2002). The findings of this study generally align with Bloom’s
revised taxonomy, with cohort leader content knowledge corresponding to Bloom’s factual and conceptual knowledge, and cohort leader technological and procedural knowledge consistent with Bloom’s procedural knowledge. Cohort leaders additionally used metacognitive knowledge to
evaluate their own understanding and identify processes for filling knowledge gaps.
Because cohort leaders generally felt that there was an absence of needed information to
successfully fulfill their role, it is difficult to fully understand how SEA decision making
impacted the knowledge of the policy implementers; however, cohort leaders could identify
needed role-specific knowledge and create relational and research structures to access that
information. This could indicate that SEAs might consider a strong focus on metacognitive
learning strategies in future professional development provided for policy implementers.
The second research question aimed to understand how motivational influences affected
cohort leader’s ability to implement the HB3 reading academies. Two additional themes emerged
under this research question. Cohort leaders shared they were intrinsically motivated to be
successful in their role by their passion for high-quality literacy instruction, desire to support
students and teachers, and loyalty to their authorized provider organization. They generally did
not feel that the TEA contributed to either their intrinsic or extrinsic motivation. Cohort leaders
also indicated that while they did not feel collective efficacy in their role, they found individual
role efficacy by leveraging their knowledge, intrinsic motivation, and resiliency skills.
The conceptual framework for the Motivation section leveraged four key concepts: (a)
intrinsic motivation, (b) extrinsic motivation, (c) self-efficacy, and (d) collective efficacy.
Intrinsic motivation is defined as motivation that comes from internal desires, while extrinsic
motivation applies external pressure toward desired results (Chiu, 2018; Ryan & Deci, 2000;
Yoon et al., 2015). Intrinsic motivation typically derives from joy, interest, achievement
orientation, and meaning-making processes (Chiu, 2018; Specht et al., 2018). The intrinsic motivators cohort leaders identified align closely with these sources, with high-quality literacy
instruction connecting to interest and achievement orientation, and relational motivators aligning
with meaning-making processes.
Bandura (1977, 1997) described self-efficacy as a person’s belief in whether they can
complete a task or succeed in a specific situation and collective efficacy as a group’s belief in
whether they, as a group, can complete a specific task or find success in a specific role.
Collective efficacy is impacted by how leaders manage and motivate a group and can be affected
by individual members’ attitudes and skills (Bandura, 1997; Wilcox & Lawson, 2018). Self-efficacy is a key predictor of employee success in a role (Bandura, 1977; Liou & Daly, 2020),
and high levels of self-efficacy support employee adaptation during change initiatives (Tuan,
2017). This research explains how cohort leaders could overcome a lack of collective efficacy by
leveraging their individual self-efficacy in their roles.
Because cohort leaders did not feel the TEA contributed to either their intrinsic or
extrinsic motivations, there is a possibility SEAs do not have to heavily focus on building
individual motivation for policy implementers. Instead, SEAs may find a greater opportunity for
impact by supporting the collective efficacy of the group. Additionally, because cohort leaders
could overcome their lack of collective efficacy by leveraging personal efficacy and intrinsic
motivation, the highest leverage action could be to ensure that future candidates for policy
implementation roles are assessed for both personal efficacy characteristics and intrinsic
motivation related to the programmatic components of the initiative.
Organizational Structures
The last research question involved cohort leaders reflecting on how state organizational
structures provided by TEA impacted HB3 reading academies implementation. Four themes
were identified from cohort leader responses. First, cohort leaders shared a general belief that indepth preplanning and responsive management structures could have supported a more
consistent and effective implementation process. Next, cohort leaders noted implementation
could have been more successful with stronger and more structured communication channels.
Cohort leaders additionally believed that a public-facing accountability plan would have made
implementation more impactful. Finally, cohort leaders felt the initiative would have been more
applicable for participants and would have garnered higher stakeholder buy-in if a human-centered approach had been applied to both content and implementation design.
Cohort leaders generally identified that organizational structure topics were areas for
which the TEA held high responsibility, in terms of design and implementation, and thus had the
most significant impact on cohort leaders, districts, and teachers. Cohort leaders also noted
organizational structures had a significant and direct influence on their role-specific knowledge
and motivation. This is supported by Clark and Estes (2008), who noted knowledge, motivation,
and organizational structures both interact with and impact each other in an authentic
organizational context. To capture this relationship, the influence of organizational structures on
knowledge and motivation will be discussed alongside the four themes. Additionally, three
recommendations for practice are proposed that focus on high-leverage change behaviors that
TEA could consider to strengthen the organizational structures provided during the
implementation of future initiatives. Each recommendation will support an organizational
structure theme and either a knowledge or motivation theme identified from cohort leader
interviews. Additionally, each recommendation will align with the PDSA continuous
improvement framework presented in Chapter 2.
Preplanning/Responsive Management and Stakeholder Communication
Cohort leaders noted two specific areas of organizational structures that significantly
impacted their knowledge: (a) preplanning/responsive management structures and (b)
stakeholder communication channels. Participants shared that the lack of preplanning and
responsive management structures negatively impacted role clarity for cohort leaders. Because
many decisions were made after implementation began and midstream changes were common throughout the life of the initiative, cohort leaders struggled to understand the rules they should be following in their role. Inconsistent communication structures exacerbated this issue. The cohort leaders indicated there was no consistent way to contact the TEA to clarify their roles, resulting in cohort leaders relying on personal relationships and research skills to build role-specific understanding. Participants also noted their lack of role clarity negatively impacted the
resulting in cohort leaders relying on personal relationships and research skills to build rolespecific understanding. Participants also noted their lack of role clarity negatively impacted the
collective efficacy of cohort leaders.
The impact of the lack of preplanning, responsive management structures, and
stakeholder communication channels aligns closely with research recommendations for
programmatic implementation in the education community. By creating internal structural
processes that guide planning and communication, organizations can establish clearly defined
roles, processes, and communication channels that support implementer capacity to initiate and
sustain change (Dixon & Palmer, 2020; Park et al., 2013; Shakman et al., 2020; Wang &
Fabillar, 2019). Additionally, by providing these internal structural processes, change
participants can clearly understand the goals of a change initiative and how the implementation
plan will support implementers in reaching these goals, ultimately ensuring both buy-in and
ownership of the program (Dixon & Palmer, 2020; Park et al., 2013; Wang & Fabillar, 2019).
This research indicates strengthening preplanning, management structures, and communication
channels would have positively impacted cohort leaders’ knowledge and motivation during HB3
reading academies implementation.
Recommendation 1: Implementing Trials Prior to Scale
In response to participants’ desire for stronger preplanning and responsive management
structures, the first recommendation for the TEA is to implement trials prior to scaling new
policy initiatives. A trial is an opportunity for an organization to test the efficacy of an
implementation plan before scaling to a larger group (Association for Project Management,
2022). Implementing a trial allows an organization to mitigate implementation risks that occur
due to planning assumptions and use real-world data and insights to strengthen the
implementation plan prior to scale (Institute Project Management, 2023). Implementing a trial
aligns with PDSA research, which recommends the use of implementation cycles to test and
refine the implementation process (Park et al., 2013).
Using a trial during future policy implementation would support in-depth preplanning
and responsive management structures throughout the life of a project. A trial would give the
TEA additional time to create and finalize required materials and documents, such as those that
were missing during the first year of HB3 reading academies implementation. Additionally, a
trial could highlight areas where the TEA needs to clarify business rules before scaling across the
state. The TEA could pilot business rules, assess their efficacy, and make needed changes before
engaging all districts and teachers. This could minimize the midstream implementation changes
that cohort leaders identified as negatively impacting the efficacy of implementation and
stakeholder engagement during the HB3 reading academies rollout. Finally, the TEA could use a
pilot to assess their capacity for responsive communication structures, which might indicate a
need for either more staff or a more organized stakeholder engagement system. Doing so would
ensure that responsive communication structures were in place before scaled implementation.
Having these structures would allow future policy implementers to leverage TEA knowledge and
expertise throughout implementation rather than relying exclusively on relationships external to
the SEA. Having access to the TEA as an information source could also bolster future policy
implementers’ collective efficacy.
Accountability and Human-Centered Approach
The remaining two areas of organizational structures, (a) accountability and (b) human-centered approach, were identified as negatively influencing cohort leader motivation. The
perceived lack of a human-centered approach to the HB3 reading academies implementation
design caused stress for cohort leaders, who were seen as the face of the initiative for the districts
and teachers they served. When teachers and districts felt HB3 reading academies requirements
were unreasonable, cohort leaders believed they took the brunt of those frustrations, negatively
impacting their efficacy, as many indicated service to their communities as one of their highest
intrinsic motivators for their role.
Research indicates that providing structural supports that allow stakeholders to serve as
champions of a new initiative is one of the highest leverage ways to engage stakeholders while
managing change (Dixon & Palmer, 2020; Park et al., 2013; Wang & Fabillar, 2019).
Stakeholder structural supports include providing opportunities for implementers to contribute to
the design and implementation of the change initiative (Dixon & Palmer, 2020), soliciting
feedback on the success of the change initiative (Park et al., 2013), and providing purposeful
opportunities for stakeholders to develop a sense of ownership over the initiative (Wang &
Fabillar, 2019). While cohort leaders developed a sense of responsibility for HB3 reading
academies outcomes, they did not feel they could positively impact the overall day-to-day
implementation of the initiative. Without opportunities for stakeholder feedback from both
cohort leaders and the wider field, cohort leaders were left feeling unable to address the issues
they were facing.
Cohort leaders did not feel that there were accountability structures in place for TEA
leadership or the staff leading the initiative. As such, participants could not identify avenues to
resolve issues during the implementation process, leading to high frustration levels and
contributing to the lack of collective efficacy. While cohort leaders could move past this
collective frustration by leveraging their passion for service, they indicated the lack of
accountability for the TEA was one of the highest role-specific stressors they experienced
throughout the life of the project.
Research indicates a key component of a change initiative is creating measurable goals
that align with the desired long-term outcomes of an organization (Kirkpatrick & Kirkpatrick,
2016). These goals should include both programmatic goals that focus on the outcome of the
initiative and process goals that monitor the effectiveness of implementation (Park et al., 2013).
Additionally, the success of a change initiative is dependent on leaders modeling a culture of
continuous improvement by adopting behaviors asked of change participants (Dixon & Palmer,
2020; Park et al., 2013; Shakman et al., 2020; Wang & Fabillar, 2019). By sharing internal
accountability goals with cohort leaders, the TEA could have modeled a culture of continuous
improvement and simultaneously provided opportunities for cohort leaders to understand the
process goals of implementation and indicate when those goals were not being met from the
cohort leader perspective.
Recommendation 2: Building Intentional Stakeholder Engagement Channels
Based on cohort leaders’ perceptions of their inability to positively impact the design and
implementation of the HB3 reading academies, the second recommendation to support future
policy implementation is to build intentional stakeholder engagement channels. Specifically, it is
suggested the TEA collect ongoing data from stakeholders throughout the implementation of an
initiative. This includes soliciting stakeholder input for both the design and implementation
processes.
Soliciting feedback to inform the design of a program’s implementation aligns closely
with the human-centered change model (Wang & Fabillar, 2019). Engaging stakeholders during
the design process would allow the TEA to understand the realities of public education from the
perspective of educators in the field (Park et al., 2013). A significant criticism of the HB3
reading academies implementation was that the design of the professional development and the
implementation plan did not consider the needs of teacher participants or the districts that
employed them. Engaging stakeholders during the design of future policy implementation would
bring that needed perspective to the design of the implementation plan and foster buy-in with
education stakeholders.
Providing opportunities for stakeholder input during future implementation would also
support an initiative’s success. A key tenet of the PDSA cycle is that ongoing data should be
collected to inform changes that support the continuous improvement of the project (Wang &
Fabillar, 2019). By analyzing data collected throughout the life of the PDSA cycle, the
participating organization can determine whether the intervention is meeting the goals of the
initiative and adjust the implementation plan as needed (Kaufman et al., 2019). Providing
opportunities for feedback from future policy implementers, district leaders, and teachers as an
initiative is implemented would support the TEA’s understanding of the impacts of the
implementation experience on the field and how to adjust implementation to support
programmatic success. Additionally, if the TEA heard that a component of implementation was
negatively impacting participants, the state agency could create a response plan that addressed
participants’ holistic needs while ensuring that they could engage with the content of the policy
in a way that positively impacted students.
Recommendation 3: Creating an Assessment System
The final recommendation to strengthen future policy implementation is to create an
aligned assessment system to promote accountability and positively impact programmatic
results. According to Wang and Fabillar (2019), organizations must collect data during
implementation and analyze that data to determine whether the implementation plan is moving
the organization toward desired results. To assess progress, an organization needs clear
objectives and a data collection and analysis system that allows the organization to measure
progress toward those objectives (Shakman et al., 2020). Had the TEA developed an assessment
system with these characteristics during the HB3 reading academies implementation, they could
have accounted for multiple themes raised by cohort leaders, including accountability, collective
and self-efficacy, and human-centered approach.
Kirkpatrick and Kirkpatrick (2007) recommended setting objectives based on the needs
of participants at the beginning of project planning to support the success of an initiative. This
aligns with principles of a human-centered approach to education program implementation,
which hold that the participants in an initiative should drive both its design and its desired
outcomes (Wang & Fabillar, 2019). Additionally, organizations should set process-specific
objectives to ensure that internal decision making supports the initiative’s overall success (Dixon
& Palmer, 2020).
Once programmatic and process objectives have been set, an assessment system needs to
be designed to collect data that can be measured against the identified objectives. Kirkpatrick
and Kirkpatrick (2007) created an assessment design process that recommends determining
objectives by identifying participant behaviors that are needed to meet organizational outcomes.
Once objectives and an assessment system are identified, the organization can then design
training that supports participant knowledge, skill, and motivation to meet those behaviors,
which leads to the attainment of organizational outcomes (Kirkpatrick & Kirkpatrick, 2007).
Future policy implementation could benefit from this type of assessment design. If
objectives are created by identifying participant needs and desired behaviors, the implementation
plan of future policy could support both higher programmatic buy-in and a larger impact on the
education community. If those objectives are clearly communicated to policy implementers and
district leaders, the TEA could positively affect motivation in a way that was missing throughout
HB3 reading academies implementation. Additionally, future policy implementers may
experience higher levels of collective efficacy because there would be clear objectives to work
toward beyond participant completion.
An advantage of setting process objectives is that both the TEA and future policy implementers
would benefit from role-specific expectations and an understanding of what both groups could
anticipate from each other. Setting clear expectations could allow the TEA to build trust with
those implementing in the field. Future implementers would additionally benefit by clearly
understanding what materials, processes, and information they could expect from TEA staff,
minimizing levels of stress and frustration.
Finally, if feedback from the field is included in the identified data to be collected as part
of the assessment process (as mentioned in Recommendation 2), the TEA could develop a deeper
understanding of how implementation is occurring and what business rule changes need to be
made to reach the identified objectives of the initiative. Cohort leaders noted on multiple
occasions that HB3 reading academies implementation decision making felt divorced from the
realities of public education. Additionally, midstream changes sometimes occurred that did not
align with the needs of teachers or district staff. By incorporating feedback from the field into the
assessment design, the TEA could minimize the need for multiple changes in future policy
implementation by leveraging policy implementers’ knowledge of the districts and teachers
they serve.
Implications of Findings
Generally, the themes that emerged from the organizational structures research question
aligned closely with current research on programmatic implementation and change management
in the field of education. While there was not a direct alignment between the four identified
themes (preplanning and responsive management, stakeholder communication channels,
accountability, and human-centered approach) and the four subsections of the conceptual model
(structural support processes, human-centered approach, quality improvement culture, and
high-leverage change behavior), the themes and conceptual model could be described as
organizing similar topics under different categories. This consistency across the themes and conceptual
framework indicates that strong implementation structures may not need to be heavily
personalized for SEAs when implementing reform policy.
Another key consideration of the findings is that organizational structures emerged as a
key lever for SEA policy implementation. While the theoretical and conceptual frameworks imply
that knowledge, motivation, and organizational structures equally impact those implementing
change, the organizational structures provided by the TEA seemed to have a unilateral impact on
both the knowledge and motivation of cohort leaders. This relationship may indicate that SEAs
should prioritize the planning and implementation of organizational structures to ensure the
success of an initiative.
Overall, the findings of this study aligned closely with the research presented in the
theoretical and conceptual frameworks that guided its design. While there is an absence of
research related to SEA policy implementation, these findings generally indicate that SEAs can
leverage recommendations for programmatic implementation and change management that have
been provided for the wider educational community, which has a more robust research base. This
would allow SEAs to use available research while simultaneously determining nuances specific
to state policy implementation.
Limitations and Delimitations
The limitations of the study are grounded in the methodological structure and the
specificity of the study context. First, because the study relied solely on individual interviews, the
data collected are subject to the inherent bias of self-reported data. Second, the time
constraints of in-depth interviews resulted in a limited sample of HB3 reading academies cohort
leaders being interviewed. While the total number of cohort leaders serving in the initial year of
HB3 reading academies implementation is unknown, the researcher estimated it to be
approximately 225. As such, the sample of this study is not representative of the cohort leader
population as a whole.
Because the performance analysis was initiated outside the state agency rather than from within it, the
researcher does not have project planning and implementation data to compare against cohort
leader perception. Due to this lack of access, the researcher could not definitively determine
whether a performance gap is rooted in an organizational structure or if the gap is rooted in a
cohort leader perception that instead indicates a breakdown in communication. Additionally, in
an ideal situation, cohort leaders, authorized provider managers, and TEA HB3 reading academy
staff members would be interviewed to gather multiple organizational perspectives; however,
due to the highly political nature of the SEA, the likelihood of candid perspectives from TEA
staff is negligible. Additionally, due to the limited number of authorized providers selected by
the TEA, it would be difficult to protect the confidentiality of authorized provider managers. As
such, only cohort leaders were included in the participant sample.
The following delimitations are built into the methodological structure. First, the protocol
of identifying cohort leaders from a cross-section of authorized providers and education
regions allowed for greater confidence in identified themes, as participants were less likely to
have shared personal experiences due to geographic distance. Additionally, by focusing on
cohort leader perception and experience throughout the data collection process, there was a
greater likelihood of truthful responses.
While the researcher could not access internal information from the SEA, common themes
are more likely to result from TEA decision making if they impact cohort leaders across a
geographically diverse sample than if multiple participants were selected from a single
authorized provider organization or education region. Finally, because cohort leaders had no
input into or role-related responsibility for the design of the HB3 reading academies
implementation, they were, of all those responsible for the overall success of the initiative, the
most likely to reflect on the implementation in an unbiased manner.
Recommendations for Future Research
Based on the design and findings of this study, three recommendations for future research
were identified. First, it would be beneficial to replicate this study with a narrow focus on the
organizational supports section of the conceptual framework. The findings of this study indicate
the organizational structures provided by the TEA during HB3 reading academies
implementation heavily influenced cohort leader knowledge and motivation. By focusing on
organizational structures, future research could investigate individual organizational structures
that move state agencies closer to their intended goals. Alternatively, the research could more
explicitly explore the relationship between organizational structures and policy
implementers’ knowledge and motivation.
Next, because of the specificity of the case study format, it is recommended that this
study be replicated across geographic and reform policy contexts. Studying the implementation of
reform policy in multiple states would support a more comprehensive understanding of how
policy context impacts a state’s ability to implement an initiative. It would additionally build
general knowledge of the condition of state policy implementation across the United States.
Finally, future research should consider using either a quantitative or mixed-method
design when assessing state education reform policy implementation. While collecting
quantitative data would lessen the contextual information gathered by the study, it would provide
an opportunity to survey a larger sample and promote generalization of findings. Leveraging a
mixed-method study would also allow the research community to analyze quantitative data in
conjunction with qualitative data, providing a more holistic picture of state education reform
policy implementation.
Conclusion
This study aimed to identify the effect of the TEA’s decision making during the policy
design and implementation of the HB3 reading academies to identify strengths and areas of
growth for continuous organizational improvement, as perceived by cohort leaders. Study
findings indicate TEA decision making adversely impacted implementation as perceived by
cohort leaders, who were tasked with implementing the HB3 reading academies in the field.
Specifically, the TEA’s lack of strong organizational structures negatively influenced cohort
leaders’ role-specific knowledge and motivation; however, cohort leaders could overcome this
impact by leveraging personal relationships and internal motivation to serve their wider
community.
The findings indicate a growing need for accountability structures for SEAs during
education reform policy implementation. This topic is highly relevant in the field of education, as
state agencies hold sole responsibility for education reform systems in their state (Mathis &
Trujillo, 2016) and do not have accountability reporting structures for their design or
implementation of reform policy (Egalite et al., 2017). Adding to this concern are emerging
indications that SEAs lack the capacity to successfully implement reform policy (Egalite et al.,
2017; Klute et al., 2016; Manna, 2012; VanGronigen & Meyers, 2019) and a lack of public data
related to SEA policy implementation (VanGronigen & Meyers, 2019). This lack of capacity and
public data access is especially problematic in the current education climate, where districts,
schools, and teachers are held responsible for reform outcomes that are not entirely in their
control (Klute et al., 2016; VanGronigen & Meyers, 2019; Wang & Fabillar, 2019). In a
democracy, it is the responsibility of the constituents to hold government agencies accountable
(Clinton & Grissom, 2015); however, to do so, constituents must have access to performance
data that allows them to identify the impact of a state agency on the wider population (Clinton &
Grissom, 2015). It is vital to the health of our democracy and our public education system for
SEAs to collect, analyze, and publicly publish reform policy implementation data to ensure that
students in their state are provided access to the high-quality education they deserve.
References
Anderson, L. W. (2005). Objectives, evaluation, and the improvement of education. Studies
in Educational Evaluation, 31(2), 102–113. https://doi.org/10.1016/j.stueduc.2005.05.004
Andreoli, P. M., & Klar, H. W. (2021). Becoming the drivers of change: Continuous
improvement in a rural research-practice partnership. Journal of Educational
Administration, 59(2), 162–176. https://doi.org/10.1108/JEA-04-2020-0078
Association for Project Management. (2022). What is the difference between a trial and a pilot?
https://www.apm.org.uk/resources/find-a-resource/what-is-the-difference-between-a-trial-and-a-pilot/
Bae, S. (2018). Redesigning systems of school accountability: A multiple measures approach
to accountability and support. Education Policy Analysis Archives, 26(8), 1–32.
https://doi.org/10.14507/epaa.26.2920
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral
change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191
Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman.
Banghart, K. (2021). Colorado’s network for local accountability. National Association of
State Boards of Education.
https://nasbe.nyc3.digitaloceanspaces.com/2021/01/Banghart_Jan-2021-Standard.pdf
Bodily, R., Nyland, R., & Wiley, D. (2017). The RISE framework: Using learning analytics
to automatically identify open educational resources for continuous improvement.
International Review of Research in Open and Distributed Learning, 18(2).
https://doi.org/10.19173/irrodl.v18i2.2952
Borman, G. D. (2005). National efforts to bring reform to scale in high-poverty
schools: Outcomes and implications. Review of Research in Education, 29(1), 1–27.
https://eric.ed.gov/?id=EJ746360
Brodersen, R. M., Yanoski, D., Mason, K., Apthorp, H., & Piscatelli, J. (2017). Overview
of selected state policies and supports related to K–12 competency-based education.
Regional Educational Laboratory at Marzano Research.
https://ies.ed.gov/ncee/edlabs/projects/project.asp?projectID=1458
Broedling, L. A. (1977). The uses of the intrinsic-extrinsic distinction in explaining
motivation and organizational behavior. The Academy of Management Review, 2(2), 267–
276. https://doi.org/10.2307/257908
Canfield-Davis, K., & Jain, S. (2010). Legislative decision-making on education issues:
A qualitative study. The Qualitative Report, 15(3), 600–629.
https://files.eric.ed.gov/fulltext/EJ887904.pdf
Casalaspi, D. (2017). The making of a “legislative miracle”: The Elementary and
Secondary Education Act of 1965. History of Education Quarterly, 57(2), 247–277.
https://doi.org/10.1017/heq.2017.4
Chiu, H. H. (2018). Employees’ intrinsic and extrinsic motivations in innovation
implementation: The moderation role of managers’ persuasive and assertive strategies.
Journal of Change Management, 18(3), 218–239.
https://doi.org/10.1080/14697017.2017.1407353
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the
right performance solutions. Information Age Publishing, Inc.
Clinton, J. D., & Grissom, J. A. (2015). Public information, public learning, and public
opinion: Democratic accountability in education policy. Journal of Public Policy, 35(3),
355–385. https://doi.org/10.1017/S0143814X14000312
Cohen-Vogel, L., Cannata, M., Rutledge, S. A., & Socol, A. R. (2016). A model of
continuous improvement in high schools: A process for research, innovation design,
implementation, and scale. Teachers College Record, 118(3).
https://eric.ed.gov/?id=EJ1108493
Cohen-Vogel, L., Tichnor-Wagner, A., Allen, D., Harrison, C., Kainz, K., Socol, A. R., &
Wang, Q. (2015). Implementing educational innovations at scale: Transforming
researchers into continuous improvement scientists. Educational Policy, 29(1), 257–277.
https://doi.org/10.1177/0895904814560886
Collins, J. (2004). Education techniques for lifelong learning: Principles of adult learning.
Radiographics, 24(5), 1483–1489. https://dme.childrenshospital.org/wp-content/uploads/2017/11/rg.245045020.pdf
Cook-Harvey, C., Darling-Hammond, L., Lam, L., Mercer, C., & Roc, M. (2016). Equity
and ESSA: Leveraging educational opportunity through the Every Student Succeeds Act.
Learning Policy Institute. https://learningpolicyinstitute.org/product/equity-essa-report
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and
mixed methods approaches (5th ed.). Sage Publications.
Dee, T., & Jacob, B. (2009). The impact of No Child Left Behind on student achievement.
National Bureau of Economic Research. http://www.nber.org/papers/w15531
Department of Defense Education Activity. (2018). Blueprint for continuous improvement.
https://www.dodea.edu/Blueprint/upload/Blueprint2021v2.pdf
Dixon, C., & Palmer, S. (2020). Transforming educational systems towards
continuous improvement: A reflection guide for K–12 executive leaders. Carnegie
Foundation for the Advancement of Teaching.
https://www.carnegiefoundation.org/resources/publications/transforming-educational-systems-toward-continuous-improvement/
Dowd, A. C., & Liera, R. (2018). Sustaining organizational change towards racial equity
through cycles of inquiry. Education Policy Analysis Archives, 26(65).
https://dx.doi.org/10.14507/epaa.26.3274
Education Sciences Reform Act of 2002. (2015).
https://www.govinfo.gov/content/pkg/COMPS-747/pdf/COMPS-747.pdf
Egalite, A. J., Fusarelli, L. D., & Fusarelli, B. C. (2017). Will decentralization affect
educational inequity? The Every Student Succeeds Act. Educational Administration
Quarterly, 53(5), 757–781. https://doi.org/10.1177/0013161X17735869
Elliott, I. C. (2020). Organizational learning and change in a public sector context.
Teaching Public Administration, 38(3), 270–283.
https://doi.org/10.1177/0144739420903783
Every Student Succeeds Act, 20 U.S.C. § 6301 (2015). https://www.congress.gov/bill/114th-congress/senate-bill/1177
Executive Office of the President. (1990). National goals for education.
https://eric.ed.gov/?id=ED319143
Finch, M. A. (2017). Tennessee to the top: One state’s pursuit to win Race to the Top.
Educational Policy, 31(4), 481–517. https://doi.org/10.1177/0895904815604216
Ford, T. G., Lavigne, A. L., Fiegener, A. M., & Si, S. (2020). Understanding district support
for leader development and success in the accountability era: A review of the literature
using social-cognitive theories of motivation. Review of Educational Research, 90(2),
264–307. https://doi.org/10.3102/0034654319899723
Fuentes, E. J., & Stratoudakis, C. J. (1992). Public response to the 1991 goals report.
Outreach report to the National Education Goals Panel. National Education Goals
Panel. https://eric.ed.gov/?id=ED350356
Galey, S. (2015). Education politics and policy: Emerging institutions, interests, and
ideas. Policy Studies Journal, 43(S1), S12–S39. https://doi.org/10.1111/psj.12100
Gardner, D. P., Larsen, Y. W., Baker, W. O., Campbell, A., Crosby, E. A., Foster, C. A., Francis,
N. C., Giamatti, A. B., Gordon, S., Haderlein, R. V., Holton, G., Kirk, A. Y., Marston,
M., Quie, A. H., Sanchez, F. D., Seaborg, G. T., Sommer, J., & Wallace, R. (1983). A
nation at risk: The imperative for educational reform. An open letter to the American
people. A report to the nation and the secretary of education. National Commission on
Excellence in Education. https://files.eric.ed.gov/fulltext/ED226006.pdf
Good, C. J. (2010). A nation at risk: Committee members speak their mind.
American Educational History Journal, 37(2), 367–386.
https://eric.ed.gov/?id=EJ911589
Hackmann, D. G., Malin, J. R., & Ahn, J. (2019). Data use within an education-centered cross-sector collaboration. Journal of Educational Administration, 57(2), 118–133.
https://doi.org/10.1108/JEA-08-2018-0150
Heise, M. (2017). From No Child Left Behind to Every Student Succeeds: Back to a future
for education federalism. Publius: The Journal of Federalism, 46(3), 392–415.
https://scholarship.law.cornell.edu/facpub/1642/
House Bill 3 (HB3) Texas School Finance Act. (2018).
https://capitol.texas.gov/tlodocs/86R/billtext/pdf/HB00003F.pdf#navpanes=0
Hunzicker, J. (2010). Effective professional development for teachers: A checklist. Teacher
Development, 26(4), 177–179. https://doi.org/10.1080/19415257.2010.523955
Institute of Education Sciences (IES). (2019). Director’s biennial report to Congress: Fiscal
years 2017 and 2018. https://ies.ed.gov/pdf/IESBR2017_2018.pdf
Institute Project Management. (2023). Trial vs pilot: What’s the difference?
https://instituteprojectmanagement.com/blog/trial-vs-pilot-whats-the-difference/
Irvine, J. (2017). A comparison of revised Bloom and Marzano’s new taxonomy of
learning. Research in Higher Education Journal, 33. https://eric.ed.gov/?id=EJ1161486
James-Burdumy, S., & Wei, T. E. (2015). Usages of policies and practices promoted by Race
to the Top and School Improvement Grants. National Center for Education Evaluation
and Regional Assistance. https://ies.ed.gov/ncee/pubs/20154018/
Johanningmeier, J. (2010). A nation at risk and Sputnik: Compared and reconsidered.
American Educational History Journal, 37(2), 347–365.
https://eric.ed.gov/?id=EJ911587
Jones, C. M., & Thessin, R. A. (2015). A review of the literature related to the change
process schools undergo to sustain PLCs. Planning and Changing, 46(2), 193–211.
https://eric.ed.gov/?id=EJ1146267
Joyce, K. E., & Cartwright, N. (2020). Bridging the gap between research and practice:
Predicting what will work locally. American Educational Research Journal, 57(3), 1045–
1082. https://doi.org/10.3102/0002831219866687
Kainz, K., Metz. A., & Yazejian, N. (2021). Tools for evaluating the implementation of
complex education interventions. American Journal of Evaluation, 42(3), 399–414.
https://doi.org/10.1177/1098214020958490
Kantor, H. (1991). Education, social reform, and the state: ESEA and federal education policy
in the 1960s. American Journal of Education, 100(1), 47–83.
https://www.jstor.org/stable/1085652
Kaufman, E. K., Cash, C. S., Coartney, J. S., Ripley, D., Guy, T. M., Glenn, W. J., Mitra, S.,
& Anderson II, J. C. (2019). Planning to create a culture of continuous improvement with
the department of defense education activity. Educational Planning, 26(4), 5–19.
https://files.eric.ed.gov/fulltext/EJ1234914.pdf
Kiral, E. (2020). Excellent leadership theory in education. Journal of Educational
Leadership and Policy Studies, 4(1). https://go.southernct.edu/jelps/files/2020-fall-volume4-issue1/4-Final-Excellent-Leadership-of-School-Administrators-09.18.2020.pdf
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2007). Implementing the four levels: A practical guide
for effective evaluation of training programs. Berrett-Koehler Publishers, Inc.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training
evaluation. Association for Talent Development.
Klute, M. M., Welp, L. C., Yanoski, D. C., Mason, K. M., & Reale, M. L. (2016). State policies
for intervening in chronically low-performing schools: A 50-state scan. Regional
Educational Laboratory at Marzano Research.
https://ies.ed.gov/ncee/edlabs/projects/project.asp?projectID=4535
Knight, D. S., & Skrtic, T. M. (2021). Cost-effectiveness of instructional coaching:
Implementing a design-based, continuous improvement model to advance teacher
professional development. Journal of School Leadership, 31(4), 318–342.
https://doi.org/10.1177/1052684620972048
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into
Practice, 41(4), 212–218. https://www.depauw.edu/files/resources/krathwohl.pdf
Kuenzi, J. J., & Stroll, A. (2017). The Education Sciences Reform Act: CRS report R43398
(Version 7). Congressional Research Service. https://eric.ed.gov/?id=ED597863
Kunnari, I., Ilomaki, L., & Toom, A. (2018). Successful teacher teams in change: The role
of collective efficacy and resilience. International Journal of Teaching and Learning in
Higher Education, 30(1), 111–126. https://files.eric.ed.gov/fulltext/EJ1169801.pdf
Lam, L., Mercer, C., Podolsky, A., & Darling-Hammond, L. (2016). Evidence-based
interventions: A guide for states. Policy brief. Learning Policy Institute.
https://eric.ed.gov/?id=ED606350
LeMahieu, P. G., Nordstrum, L. E., & Cudney, E. A. (2017). Six sigma in education.
Quality Assurance in Education, 25(1), 91–108. https://doi.org/10.1108/QAE-12-2016-0082
Liou, Y., & Daly, A. J. (2020). Investigating leader self-efficacy through policy engagement
and social network position. Educational Policy, 34(3), 411–448.
https://doi.org/10.1177/0895904818773904
Longenecker, C., & Aberbathy, R. (2013). The eight imperatives of effective adult learning:
Designing, implementing, and assessing experiences in the modern workplace. Human
Resource Management International Digest, 21(7), 30–33.
https://doi.org/10.1108/HRMID-10-2013-0090
Lubienski, C., Brewer, T. J., & La Londe, P. G. (2016). Orchestrating policy ideas:
Philanthropies and think tanks in US education policy advocacy networks. The Australian
Educational Researcher, 43(1), 55–73. https://doi.org/10.1007/s13384-015-0187-y
Lubienski, C., Scott, J., & DeBray, E. (2014). The politics of research production,
promotion, and utilization in educational policy. Educational Policy, 28(2), 131–144.
https://doi.org/10.1177/0895904813515329
Mac Iver, M. A., Sheldon, S., & Clark, E. (2021). Widening the portal: How schools can
help more families access and use the parent portal to support student success. Middle
School Journal, 52(1), 14–22. https://doi.org/10.1080/00940771.2020.1840269
Mac Iver, M. A., Sheldon, S., Epstein, J., Rice, E., Mac Iver, D., & Simmons, A.
(2018). Engaging families in the high school transition: Initial findings from a continuous
improvement initiative. School Community Journal, 28(1), 37–66.
https://www.researchgate.net/publication/342835336_Engaging_Families_in_the_High_
School_Transition_Initial_Findings_From_a_Continuous_Improvement_Initiative
Manna, P. (2012). State education governance and policy: Dynamic challenges,
diverse approaches, and new frontiers. Peabody Journal of Education, 87(5), 627–643.
https://doi.org/10.1080/0161956X.2012.723508
Mathis, W. J., & Trujillo, T. M. (2016). Lessons from NCLB for the Every Student Succeeds
Act. National Education Policy Center.
https://nepc.colorado.edu/sites/default/files/publications/PB%20MathisTrujillo%20ESSA_0.pdf
McClellan, J. L. (2011). Beyond student learning outcomes: Developing comprehensive,
strategic assessment plans for advising programmes. Journal of Higher Education Policy
and Management, 33(6), 641–652. https://doi.org/10.1080/1360080X.2011.621190
McDonough, D. (2014). Providing deep learning through active engagement of adult learners in
blended courses. Journal of Learning in Higher Education, 10(1), 9–16.
https://files.eric.ed.gov/fulltext/EJ1143328.pdf
McGuinn, P. (2012). Stimulating reform: Race to the Top, competitive grants and the
Obama education agenda. Education Policy, 26(1), 136–159.
https://doi.org/10.1177/0895904811425911
McGuinn, P. (2015). Schooling the state: ESEA and the evolution of the U.S. Department
of Education. RSF: The Russell Sage Foundation Journal of the Social Sciences, 1(3), 77–
94. https://doi.org/10.7758/rsf.2015.1.3.04
Merriam, S. B., Caffarella, R. S., & Baumgartner, L. M. (2007). Learning in adulthood. Jossey-Bass.
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design
and implementation (4th ed.). Jossey-Bass.
Meyer-Looze, C. L. (2015). Creating a cycle of continuous improvement through
instructional rounds. International Journal of Educational Leadership Preparation,
10(21). https://files.eric.ed.gov/fulltext/EJ1060972.pdf
National Education Goals Panel. (1993). Background on the national education goals
panel. https://eric.ed.gov/?id=ED361343
Ning, J., & Jing, R. (2012). Commitment to change: Its role in the relationship
between expectation of change outcome and emotional exhaustion. Human Resource
Development Quarterly, 23(4), 461–485. https://doi.org/10.1002/hrdq.21149
No Child Left Behind (NCLB) Act of 2001, Pub. L. No. 107-110, § 101, 115 Stat. 1425
(2002). https://www.congress.gov/107/plaws/publ110/PLAW-107publ110.pdf
Oregon State Department of Education. (n.d.). Continuous Improvement Process and
Planning. https://www.oregon.gov/ode/schools-and-districts/Pages/CIP.aspx
Osher, D., Neiman, S., & Williamson, S. (2020). School climate and measurement.
National Association of State Boards of Education. https://www.nasbe.org/school-climate-and-measurement/
Park, S., Hironaka, S., Carver, P., & Nordstrum, L. (2013). Continuous improvement
in education. Carnegie Foundation for the Advancement of Teaching.
https://www.carnegiefoundation.org/resources/publications/continuous-improvement-education/
Peurach, D. J., Glazier, J. L., & Winchell Lenhoff, S. (2016). The developmental evaluation
of school improvement networks. Educational Policy, 30(4), 606–648.
https://doi.org/10.1177/0895904814557592
Piazza, P. (2019). Antidote or antagonist? The role of education reform advocacy
organizations in educational policymaking. Critical Studies in Education, 60(3), 302–
320. https://doi.org/10.1080/17508487.2016.1252782
Polanin, J. R., Caverly, S., & Pollard, E. (2021). A reintroduction to the What
Works Clearinghouse. What Works Clearinghouse. https://eric.ed.gov/?id=ED610864
Prenger, R., & Schildkamp, K. (2018). Data-based decision making for teacher and
student learning: A psychological perspective on the role of the teacher. Educational
Psychology, 38(6), 734–752. https://doi.org/10.1080/01443410.2018.1426834
Redding, C., Cannata, M., & Taylor Hanes, K. (2017). With scale in mind: A
continuous improvement model for implementation. Peabody Journal of Education,
92(5), 589–608. https://eric.ed.gov/?id=ED578637
Riley, R. W. (1995). The improving America’s schools act and elementary and
secondary education reform. Journal of Law and Education, 24(4), 513–566.
https://eric.ed.gov/?id=EJ524324
Rogers, R. (2015). Making public policy: The new philanthropists and American education.
The American Journal of Economics and Sociology, 74(4), 743–774.
https://www.jstor.org/stable/43817538
Ruff, R. R. (2019). State-level autonomy in the era of accountability: A comparative analysis
of Virginia and Alabama education policy through No Child Left Behind. Education
Policy Analysis Archives, 27(6). https://doi.org/10.14507/epaa.27.4013
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of
intrinsic motivation, social development, and well-being. The American Psychologist,
55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Saultz, A., McEachin, A., & Fusarelli, L. D. (2016). Waivering as governance: Federalism
during the Obama administration. Educational Researcher, 45(6), 358–366.
https://doi.org/10.3102/0013189X16663495
Schildkamp, K., Poortman, C., Luyten, H., & Ebbeler, J. (2017). Factors promoting
and hindering data-based decision making in schools. School Effectiveness and School
Improvement, 28(2), 242–258. https://doi.org/10.1080/09243453.2016.1256901
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research
insights and gaps. Educational Research, 61(3), 257–273.
https://doi.org/10.1080/00131881.2019.1625716
Shakman, K., Wogan, D., Rodriguez, S., Boyce, J. & Shaver, D. (2020).
Continuous improvement in education: A toolkit for schools and districts. Regional
Educational Laboratory Northeast & Islands.
https://ies.ed.gov/ncee/edlabs/regions/northeast/pdf/REL_2021014.pdf
Shaul, M. S., & Ganson, H. C. (2005). The No Child Left Behind Act of 2001: The
federal government’s role in strengthening accountability for student performance.
Review of Research in Education, 29(1), 151–165. https://eric.ed.gov/?id=EJ746365
Slager, R., Gond, J. P., & Crilly, D. (2021). Reactivity to sustainability metrics: A
configurational study of motivation and capacity. Business Ethics Quarterly, 31(2), 275–
307. https://doi.org/10.1017/beq.2020.20
Slemp, G. R., Lee, M. A., & Mossman, L. H. (2021). Interventions to support
autonomy, competence, and relatedness needs in organizations: A systematic review with
recommendations for research and practice. Journal of Occupational and Organizational
Psychology, 94, 427–457. https://doi.org/10.1111/joop.12338
Sogunro, O. A. (2015). Motivating factors for adult learners in higher education. International
Journal of Higher Education, 4(1), 22–37. https://eric.ed.gov/?id=EJ1060548
Souto-Otero, M. (2011). Breaking the consensus in education policy reform? Critical Studies
in Education, 52(1), 77–91. https://doi.org/10.1080/17508487.2011.536514
Specht, J., Kuonath, A., Pachler, D., Weisweiler, S., & Frey, D. (2018). How change
agents’ motivation facilitates organizational change: Pathways through meaning and
organizational identification. Journal of Change Management, 18(3), 198–217.
https://doi.org/10.1080/14697017.2017.1378696
Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118(2), 113–141.
https://doi.org/10.1086/663283
Sroufe, G. (2003). Legislative reform of federal education research programs: A
political annotation of the Education Sciences Reform Act of 2002. Peabody Journal of
Education, 78(4), 220–229. https://www.jstor.org/stable/1492963
Stokes, E. W. (2018). The development of the school reform model and the reform
readiness survey. Research Issues in Contemporary Education, 3(1), 4–18.
https://eric.ed.gov/?id=EJ1244634
Sunderman, G. L. (2010). Evidence of the impact of school reform on systems governance
and educational bureaucracies in the United States. Review of Research in Education,
34(1), 226–253. https://doi.org/10.3102/0091732X09349796
Superfine, B. M. (2005). The politics of accountability: The rise and fall of Goals 2000.
American Journal of Education, 112(1), 10–43. https://eric.ed.gov/?id=EJ725605
Tannenbaum, C., Boyle, A., Graczewski, C., James-Burdumy, S., Dragoset, L., & Hallgren,
K. (2015). State capacity to support school turnaround. National Center for Education
Evaluation and Regional Assistance. https://eric.ed.gov/?id=ED556118
Tetlock, P. E. (1992). The impact of accountability on judgement and choice: Toward a
social contingency model. Advances in Experimental Social Psychology, 25(3), 331–376.
https://psycnet.apa.org/record/2003-00370-007
Texas Education Agency (TEA). (2020). Agency strategic plan: Fiscal years 2021 to 2025.
https://tea.texas.gov/sites/default/files/2021-2025-TEA-Strategic-Plan.pdf
Texas Education Agency (TEA). (n.d.a). HB3 reading academies: Overview.
https://tea.texas.gov/academics/early-childhood-education/reading/hb-3-reading-academies
Texas Education Agency (TEA). (n.d.b). Welcome and overview. https://tea.texas.gov/about-tea/welcome-and-overview
Thomas, J. Y., & Brady, K. P. (2005). The Elementary and Secondary Education Act at 40:
Equity, accountability, and the evolving federal role in public education. Review of
Research in Education, 29(1), 51–67. https://www.jstor.org/stable/3568119
Tichnor-Wagner, A., Allen, D., Socol, A. R., Cohen-Vogel, L., Rutledge, S., & Xing, Q. W.
(2018). Studying implementation within a continuous-improvement process: What
happens when we design with adaptations in mind? Teachers College Record, 120(5).
https://journals.sagepub.com/doi/pdf/10.1177/016146811812000506
Tirozzi, G. N., & Uro, G. (1997). Education reform in the United States: National policy in
support of local efforts for school improvement. American Psychologist, 52(3), 241–249.
https://psycnet.apa.org/doiLanding?doi=10.1037%2F0003-066X.52.3.241
Tuan, L. T. (2017). Reform in public organizations: The roles of ambidextrous leadership
and moderating mechanisms. Public Management Review, 19(4), 518–541.
http://dx.doi.org/10.1080/14719037.2016.1195438
U.S. Department of Education. (1991). America 2000: An education
strategy. https://eric.ed.gov/?id=ED327985
U.S. Department of Education. (1996). Companion document: Cross-cutting guidance for
the Elementary and Secondary Education Act. https://eric.ed.gov/?id=ED408663
VanGronigen, B. A., & Meyers, C. V. (2019). How state education agencies are
administering school turnaround efforts: 15 years after No Child Left Behind.
Educational Policy, 33(3), 423–452. https://doi.org/10.1177/0895904817691846
Wang, A., & Fabillar, E. (2019). Building a culture of continuous improvement: Guidebook
and toolkit. Education Development Center.
https://www.edc.org/sites/default/files/uploads/EDC-Building-Culture-Continuous-Improvement.pdf
Wilcox, K. C., Lawson, H. A., & Angelis, J. I. (2017). COMPASS-AIM: A university/P–
12 partnership innovation for continuous improvement. Peabody Journal of Education,
92(5), 649–674. https://doi.org/10.1080/0161956X.2017.1368654
Wilcox, K. C., & Lawson, H. A. (2018). Teachers’ agency, efficacy, engagement, and
emotional resilience during policy innovation implementation. Journal of Educational
Change, 19, 181–204. https://doi.org/10.1007/s10833-017-9313-0
Will, K. K., McConnell, S. R., Elmquist, M., Lease, E. M., & Wackerle-Hollman, A.
(2019). Meeting in the middle: Future directions for researchers to support educators’
assessment literacy and data-based decision making. Frontiers in Education, 4, 1–8.
https://doi.org/10.3389/feduc.2019.00106
Wiseman, A. W. (2010). The uses of evidence for educational policymaking: Global contexts
and international trends. Review of Research in Education, 34(1), 1–24.
https://doi.org/10.3102/0091732X09350472
Wrabel, S. L., Saultz, A., Polikoff, M. S., McEachin, A., & Duque, M. (2018). The politics
of elementary and secondary education act waivers. Education Policy, 32(1), 117–140.
https://doi.org/10.1177/0895904816633048
Yoon, H. J., Sung, S. Y., Choi, J. N., Lee, K., & Kim, S. (2015). Tangible and intangible
rewards and employee creativity: The mediating role of situational extrinsic
motivation. Creativity Research Journal, 27(4), 383–393.
https://doi.org/10.1080/10400419.2015.1088283
Yurkofsky, M. M., Peterson, A. J., Mehta, J. D., Horwitz-Willis, R., & Frumin, K. M.
(2020). Research on continuous improvement: Exploring the complexities of managing
educational change. Review of Research in Education, 44(1), 403–433.
https://doi.org/10.3102/0091732X20907363
Appendix A: Interview Protocol
I want to start by thanking you for taking the time to meet with me today. My name is
Bekah Harris, and I am a doctoral student at the University of Southern California. This
interview is part of the data collection process for my dissertation. My dissertation aims to
identify performance strengths and gaps within a state education agency’s implementation of an
education reform policy for continuous organizational improvement. I am doing this by
evaluating TEA’s implementation of the HB3 reading academies from the perspective of
cohort leaders. During this interview, I will ask you about your experience implementing the
HB3 reading academies with your cohorts. Participating in this study is voluntary, and all
information shared with me will be kept entirely confidential. Only I will know which
participants were interviewed for this study, and I will not share your identity with my dissertation
team or other participants.
Before we begin, I want to state that I understand that protecting your confidentiality is
incredibly important. So, I want to share the steps I will take to ensure your confidentiality.
During this interview, I will ask for your permission to record. The recording will be used only
to create a transcript of our conversation for future reference. Once I have
created a transcript, the video of this interview will be permanently deleted. My next step will be
to go through the transcript and remove any information that could be used for identification
purposes. You and the authorized provider you were employed by will be given a pseudonym. I
will be the only person who sees the transcript of our conversation before I remove identifiable
information. Do you have any questions about the steps I will take to protect your confidentiality? Do
I have your permission to record this interview to create a transcript of our conversation?
Last week, I sent you a document that listed your rights as a research participant and
asked you to read through it thoroughly and bring any questions you might have to the interview
today. Did you have a chance to read through the document? Are there any questions I can
answer for you?
During this interview, I am going to ask you questions about your experience as a cohort
leader for the HB3 reading academies. When I ask questions about supports provided to you in
this role, I am asking specifically about TEA, rather than the authorized provider you were
employed by. Finally, when you reflect on your experience, I would ask that you think back to
the Summer of 2020 and the 2020–21 school year, and answer based on the first year of
implementation, rather than later years in which you may have served as a cohort leader.
Table A1
Questions and Probes

Interview question: How well did you understand your role as a cohort leader?
Potential probes: How did TEA-provided trainings contribute? How did the business rules contribute? How did TEA communication contribute?
Research question addressed: 1
Key concept addressed: Conceptual knowledge

Interview question: As a cohort leader, what were your goals for implementing HB3 reading academies?
Potential probes: What were TEA’s goals for implementation? How did your goals relate to TEA’s goals?
Research question addressed: 1
Key concept addressed: Metacognitive knowledge

Interview question: Could you describe how well you understood your role in launching your initial cohorts in Summer/Fall 2020?
Potential probes: How did information/support from TEA help/hinder your ability to initially launch your first cohorts?
Research question addressed: 1
Key concept addressed: Factual and/or procedural knowledge

Interview question: Could you describe how well you understood your role in supporting participants in completing content and artifacts throughout the first year of implementation?
Potential probes: Could you give me an example of a time that communication related to the rules or procedures of implementation either positively or negatively impacted your ability to lead your cohort? How did TEA’s communication contribute to the outcome of this situation?
Research question addressed: 1
Key concept addressed: Factual and/or procedural knowledge

Interview question: How would you describe your motivation to successfully implement reading academies?
Potential probes: What personal, internal influences motivated you? What external factors motivated you? How did TEA contribute to this motivation?
Research question addressed: 2
Key concept addressed: Intrinsic/extrinsic motivation

Interview question: Did you believe that you could be successful in your role as cohort leader?
Potential probes: Why/why not? How did TEA contribute to this feeling? What else could TEA have done to contribute to your success?
Research question addressed: 2
Key concept addressed: Self-efficacy

Interview question: Were cohort leaders collectively set up for success in their role?
Potential probes: Why/why not? How did TEA contribute to this feeling? What else could TEA have done to set the group up for success?
Research question addressed: 2
Key concept addressed: Collective efficacy

Interview question: How did the organizational structures and processes designed by TEA support or inhibit your ability to implement reading academies?
Potential probes: Could you tell me how the implementation plan supported or inhibited your role? Could you tell me about the accountability system for cohort leaders? For TEA? Could you describe the communication channels for when you needed information or support? What feedback structures were built into the implementation process, if any?
Research question addressed: 3
Key concept addressed: Structural supports

Interview question: From your perspective, how did the implementation design by TEA either support or undermine the success of the participants in your cohorts?
Potential probes: In your experience, how were participants’ opinions and experiences validated or utilized? Did teachers and students seem to be at the heart of implementation design and decision making? Can you tell me why or why not?
Research question addressed: 3
Key concept addressed: Human-centered approach

Interview question: How did TEA utilize ongoing data collected during implementation to make decisions for HB3 reading academies?
Potential probes: How were the cohort leaders’ opinions and implementation knowledge validated and/or utilized? How was internal data related to continuous improvement collected and utilized? When issues arose, how were they addressed and fixed? Did TEA seem to be willing to acknowledge areas of growth and course correct when needed? Can you give me an example?
Research question addressed: 3
Key concept addressed: Quality improvement culture

Interview question: Do you feel your districts, leaders, and teachers were adequately prepared to participate in and successfully complete reading academies?
Potential probes: Why or why not?
Research question addressed: 3
Key concept addressed: High-leverage change behaviors

Interview question: Do you feel that districts, authorized providers, and TEA were collaborative partners in the implementation process?
Potential probes: Why or why not?
Research question addressed: 3
Key concept addressed: High-leverage change behaviors

Interview question: Do you feel like barriers to successful implementation were identified and fixed in a timely manner to minimize the impact on participants?
Potential probes: Why or why not?
Research question addressed: 3
Key concept addressed: High-leverage change behaviors

Interview question: Is there anything we haven’t discussed that is important to include about your experience?
Research question addressed: Closing question
Thank you again for your willingness to participate in my dissertation and share your
experiences from the reading academies implementation. Here are my next steps following this
conversation. First, I will use the video to create a transcript of our conversation, as we
previously discussed. Once the transcript is complete, I will delete the video and remove all
identifiable information from the transcript. I will have this completed within the week.
My next steps are to complete the rest of the interviews I have scheduled with other
participants. During this time, if I have your permission, I may follow up with you by email with
one or two clarifying questions. Is that ok with you?
After my interviews are complete, I will analyze them all for general themes and then
create a short document that describes each of the themes I found. At that point, I will reach out
to you and ask you to read through the list of themes and tell me whether you feel like they
reflect your experience. I will include instructions in the email on how to analyze the themes and
a deadline by which I will ask you to respond. Once you have done so, your participation in this
study will end. Do you have any questions or concerns about the next steps?
Have a fabulous day, and I will be in touch soon.
Appendix B: Recruitment Documents
INFORMATION SHEET FOR EXEMPT RESEARCH
STUDY TITLE: A Call for Transparency and Accountability: Continuous Improvement in
State Education Agency Policy Implementation
PRINCIPAL INVESTIGATOR: Rebekah Harris, Ed.D. Candidate, Organizational
Leadership and Change
FACULTY ADVISOR: Ekaterina Moore, Ph.D., Associate Professor of Clinical
Education
You are invited to participate in a research study. Your participation is voluntary. This
document explains information about this study. You should ask questions about
anything that is unclear to you.
PURPOSE
The purpose of this study is to assess the impact of the Texas Education Agency’s
decision-making during the policy design and implementation of the House Bill 3 (HB3)
Reading Academies. We hope to identify strengths and areas of growth in policy design
and implementation for continuous organizational improvement in education reform
policy design and implementation. You are invited as a possible participant because you
served as an HB3 Reading Academies Cohort Leader during the initial year of
implementation (2020–2021).
PARTICIPANT INVOLVEMENT
Participants will be asked to participate in a single 60–90-minute interview with the
primary investigator. During this interview, participants will be asked questions guiding
them to reflect on their experience as an HB3 Reading Academies Cohort Leader during
the initial year of HB3 Reading Academies implementation. Interviews will be held on
the Zoom platform and recorded for transcription purposes. Please see the
Confidentiality section of this document for details related to the storing and destruction
of audio-video recordings. Participants can decline to be recorded if they wish.
Once all participant interviews are complete, the primary investigator will follow up with
participants by sharing a list of themes identified through the data analysis process.
Participants will be asked to indicate whether a theme is reflective of their experience as
a Cohort Leader, partially reflective of their experience, or not reflective of their
experience.
If you decide to take part, you will be asked to participate in a 60–90-minute interview
with the primary investigator and review a list of themes generated during the data
analysis process to indicate how well the themes reflect your experience as an HB3
Reading Academies Cohort Leader during the initial year of implementation.
CONFIDENTIALITY
The members of the research team and the University of Southern California
Institutional Review Board (IRB) may access the data. The IRB reviews and monitors
research studies to protect the rights and welfare of research subjects.
When the results of the research are published or discussed in conferences, no
identifiable information will be used.
Your participation in this study will be kept confidential by removing all identifiable information relating to your identity, employer, and reading academies cohorts during the interview process. Additionally, the only identifiable information requested from participants will be limited to their name and the authorized provider they were employed by during the initial year of HB3 Reading Academies implementation. Only the primary investigator will have access to participants’ identifiable information. No identifiable digital information will be stored for more than 7 days.
The primary investigator will record your interview for transcription purposes. Within 7 days of the interview, the primary investigator will create a written transcript of your interview, removing all of your personally identifiable information. In addition, pseudonyms will be assigned for participants, participant employers, and HB3 Reading Academies cohorts. Once the transcription process is complete, the recorded interview file will be permanently deleted. Prior to deletion, you have the right to review the audio-video recordings obtained by the researcher.
Additionally, all communication between the primary investigator and participant, including emails, phone calls, text messages, and Zoom meeting logs, will be deleted from the primary investigator’s phone, email, and Zoom accounts.
Deidentified transcripts will be stored in digital format on the primary investigator’s
personal computer and in print format in the primary investigator’s home through the
end of the study, which is anticipated to conclude in July 2023. At the study’s
conclusion, all digital and print copies of transcripts will be permanently destroyed.
However, at any point prior to deletion/destruction, the participant has the right to review
and edit their interview transcript.
INVESTIGATOR CONTACT INFORMATION
If you have any questions about this study, please contact Rebekah Harris at (857) 208-
8252 or email rrharris@usc.edu. To contact the faculty advisor, Ekaterina Moore, please
email ekaterim@usc.edu.
IRB CONTACT INFORMATION
If you have any questions about your rights as a research participant, please contact the
University of Southern California Institutional Review Board at (323) 442-0114 or email
irb@usc.edu.
Appendix C: Participant Theme Review Email Instructions
Hello Cohort Leaders!
Thank you for participating in the data collection process for my dissertation. After reviewing our interview transcripts, eight “themes,” or patterns, emerged from your responses. To ensure that I have drawn the correct themes from your interviews, I am asking you to complete a short survey assessing whether each theme is reflective of your experience as a cohort leader, partially reflective of your experience, or not reflective of your experience. This evening, you will receive an individual link to complete this survey.
The themes you will be asked to assess, which take the form of declarative statements, are not direct quotes from any one interview. Rather, they are concepts that emerged across interviews. Prior to completing the survey, please review the eight themes, included below. Within the table, I have also included additional details or examples for each theme to support your overall understanding. If you have any clarifying questions related to the themes, please reach out to me prior to completing the survey. (Feel free to email, call, or text!)
Table C1
Themes
Theme 1: Cohort leaders had three types of needed knowledge for their role: content, technology, and procedural. A cohort leader’s level of knowledge was highly dependent on previous experiences and/or background knowledge.
Details and/or examples: Content: literacy-specific knowledge. Technology: knowledge related to the use of Canvas, Zoom, etc. Procedural: role-specific expectations, grading procedures, etc.
Theme 2: Cohort leaders leveraged research skills and relationships to fill identified knowledge gaps.
Details and/or examples: Research skills: researching unfamiliar literacy terms or concepts. Personal relationships: contacting a cohort leader at another authorized provider to compare notes.
Theme 3: Cohort leaders were intrinsically motivated to be successful in their role by their passion for high-quality literacy instruction, desire to support students and teachers, and loyalty to their authorized provider.
Details and/or examples: Loyalty to authorized provider: wanting to support the overall success of the authorized provider organization through attainment of high reading academies metrics.
Theme 4: While cohort leaders did not feel that, as a group, they were set up for success in their role, they found role success by leveraging their individual knowledge, intrinsic motivation, and resiliency skills.
Details and/or examples: TEA did not provide cohort leaders with a way to track individual participant completion. In response, cohort leaders, either individually or as an organization, developed a system to track participant completion, support participants in meeting their deadlines, and communicate participant progress to districts.
Theme 5: Cohort leaders believe that in-depth preplanning and responsive management structures could have supported a more consistent and effective implementation of HB3 reading academies.
Details and/or examples: Preplanning: determining the participants who were required to participate in reading academies prior to the initial launch. Responsive management structures: creating a system for collecting participant extension requests that ensured approval or denial within 72 hours.
Theme 6: Cohort leaders identified that reading academies implementation could have been more successful with stronger and more structured communication channels between TEA and all stakeholder groups.
Details and/or examples: Communicating to school and district leadership the teacher behavior goals of reading academies and how to support those goals within their school or district. Systematically soliciting feedback from cohort leaders related to business rules and clearly communicating how the feedback was being used.
Theme 7: Cohort leaders believe that a public-facing accountability plan would have made HB3 reading academies implementation more impactful.
Details and/or examples: Identifying key success metrics and collecting baseline data prior to the initial launch, then sharing the metrics and goals with authorized providers, cohort leaders, and districts for reference.
Theme 8: Cohort leaders felt that the content of the HB3 reading academies was high quality; however, utilizing human-centered design approaches would have made it more applicable for participants and would have garnered higher stakeholder buy-in.
Details and/or examples: Postponing the reading academies’ initial rollout and extending the completion deadline in response to the COVID-19 pandemic. Creating a blended model with fewer seat hours and more opportunities for engagement and application.
Please complete this survey by Wednesday, May 24, at 5:00 p.m. CST. If you have any questions or concerns while completing this survey, please feel free to email me at XXXXXXXX@usc.edu or call me at XXX-XXX-XXXX. Thank you in advance for your participation.
Appendix D: Participant Theme Review Survey
Figure D1
Participant Theme Review Survey Instructions
Note. For each of the eight identified themes, cohort leaders were asked to indicate whether the theme was reflective, partially reflective, or not reflective of their experience as a cohort leader. If participants selected “reflective of my experience as a cohort leader,” they were directed to an optional question that provided the opportunity to give context to their answer. If they selected either “partially reflective of my experience as a cohort leader” or “not reflective of my experience as a cohort leader,” they were directed to a required question that asked them to describe how their experience differed from the theme.
Figures D2–D25
Participant Theme Review Survey Questions 1 through 8, with their follow-up Questions 1.A/1.B through 8.A/8.B. (Survey screenshots are not reproduced in this transcript.)