Interrupting Inequitable Systems: Evaluating a Teacher Leadership Development Program By Nicky M. Fritz Rossier School of Education University of Southern California A dissertation submitted to the faculty in partial fulfillment of the requirements for the degree of Doctor of Education December 2022 © Copyright by Nicky M. Fritz 2022 All Rights Reserved The Committee for Nicky M. Fritz certifies the approval of this Dissertation Maria Ott Kathy Stowe Courtney Malloy, Committee Chair Rossier School of Education University of Southern California 2022 iv Abstract Educational inequity results from systems that perpetuate inadequate resource distribution and produce disparate results for historically marginalized students. These inequities lead to pointedly different academic and life outcomes for students of color and low-income students. Given that an equitable education is determined by successful outcomes of efforts and resources, it is important that teachers—as the most proximate input for students—continue to develop their leadership toward these outcomes. The purpose of the study was to evaluate a teacher leadership development program against its theory of change. The theory of change for the program was that if the organization of study recruits ambitious potential leaders; grounds them in issues of diversity, equity, inclusion, and injustice; develops their leadership within and beyond the classroom; and focuses their leadership on the pursuit of systems change, then the organization will mobilize a growing group of leaders to collectively enact systems change toward educational equity. Eighty-three teachers entering their 2nd through 5th year in the classroom participated in the summer program. Quantitative, qualitative, and secondary data sources were collected and evaluated using a program theory approach. The program partially met its logic model goals in its pilot run, and the study produced findings and recommendations for the program to evolve and advance toward achieving its theory of change. By furthering research about the role of teachers in systems change, the field may better understand how to disrupt a perpetuating system of inequity. Keywords: program theory, logic model, theory of change, systems change, educational equity, teacher development v Dedication To my parents: Your support has unlocked boundless opportunities. To my children: You will forever inspire every aspiration. To my love: You have awakened my heart, stirred my being, and revived my soul. vi Acknowledgements My professors and peers at the University of Southern California were instrumental to my doctoral journey. I am beholden to your support and grateful for your guidance throughout the years. Also, I am indebted to Dr. Kathy Stowe and Dr. Maria Ott for joining my dissertation committee. Your counsel, feedback, and encouragement were invaluable to my research and growth. Finally, I have immense gratitude for Dr. Courtney Malloy, serving as my dissertation chair, professor, and mentor. You have inspired countless leaders to enact change through unleashing the power of data and storytelling. Thank you. vii Table of Contents Abstract .......................................................................................................................................... iv Dedication ....................................................................................................................................... 
Acknowledgements vi
List of Tables xii
List of Figures xiii
Chapter One: Introduction to the Study 1
Overview of Teacher Systems-Change Leadership Intensive 2
Theory of Change 3
Program Overview 4
Participants 5
Purpose of the Study 5
Importance of the Evaluation 6
Evaluation Approach 7
Definitions 12
Organization of the Dissertation 13
Chapter Two: Review of the Literature 15
Current and Historical Contexts of Educational Inequity 15
Current State of U.S. Education 15
Historical Context of U.S. Educational Inequity 20
Education as a System 23
Systems and Systemic Change 24
Defining a System 24
Defining Systemic Change 27
Teachers as Change Agents 30
Role in Education System 30
Role in Systemic Change 32
Teacher Leadership Development 33
Programs 33
Knowledge 34
Motivation 34
Program Theory 35
Defining Program Theory 35
Evaluation 37
Definition 38
Evaluation and Program Theory 38
Leading Indicators 39
Importance of Evaluation in Pursuit of Systemic Change 40
Summary 40
Chapter Three: Methodology 42
Evaluation Questions 42
Overview of Design 42
The Researcher 44
Data Sources 46
End-of-Program Survey 47
Participants 47
Instrumentation 48
Data Collection Procedures 48
Data Analysis 49
Interviews 49
Participants 49
Instrumentation 50
Data Collection Procedures 50
Data Analysis 51
Validity and Reliability 51
Ethics 54
Chapter Four: Findings 56
Program Participants 57
Surveyed Program Participants 61
Interviewed Program Participants 62
Findings 63
Satisfaction 65
Evaluation Question 1: To what extent, if at all, are participants satisfied with the summer training program? 65
Finding 1: Accountability in Tension with Flexibility 68
Finding 2: Accessibility Issues in a Virtual Setting 71
Finding 3: Limited Usefulness with Diverse Populations 72
Broader Student Outcomes 76
Evaluation Question 2: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom? 76
Finding 4: Various Levels of Previous Knowledge 77
Evaluation Question 3: To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom? 79
Finding 5: Barriers to Shifting Teaching Practice 80
Role of Teacher in Systemic Change 84
Evaluation Question 4: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher? 84
Finding 6: Inconsistent Demonstrated Learning 85
Finding 7: Confusion About How to Lead 87
Evaluation Question 5: To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation toward enacting systemic change within education as a teacher? 89
Finding 8: Stifled Motivation Toward Systems Change 91
Summary 93
Chapter Five: Recommendations 95
Discussion of Evaluation Findings 96
Recommendations 100
Recommendation 1: Implement Accountability Strategies to Increase Participation 100
Recommendation 2: Ensure All Components of Program Are Accessible to All Learners 101
Recommendation 3: Expand Program’s Applicability to Diverse Populations 102
Recommendation 4: Increase Motivation of Program Learners 103
Recommendation 5: Equip Teachers with Change Strategies
106 Recommendation 6: Design Program with Clearer Learning Objectives and Assessments ........................................................................................................ 110 Limitations and Delimitations......................................................................................... 111 Recommendations for Future Research and Evaluation ................................................. 112 Conclusion ...................................................................................................................... 113 References ................................................................................................................................... 115 xi Appendix A: Summer Training Program Participant Survey Questions .................................... 129 Appendix B: Summer Training Program Participant Interview Questions ................................ 132 Appendix C: Summer Training Program Participant Interview Protocol ................................... 136 Appendix D: Interview Data Codebook ..................................................................................... 142 xii List of Tables Table 1: Data Sources 44 Table 2: Overall Program Participant Demographics 59 Table 3: Reasons Teachers Selected for Attending Summer Training Program 60 Table 4: Surveyed Program Participants Entering Year of Classroom Teaching for Fall 61 Table 5: Summary of Findings by Evaluation Question 63 Table 6: Medians, Modes, and Standard Deviations for Satisfaction Scores 66 Table 7: Medians, Modes, and Standard Deviations for Usefulness Scores 73 Table 8: Broader Student Outcomes Scores By Number of Years in the Classroom 80 Table 9: Systems Change Scores By Number of Years in the Classroom 90 Table 10: Summary of Findings and Recommendations 96 Appendix A: Summer Training Program Participant Survey Questions 129 Appendix B: Summer Training Program Participant Interview Questions 132 Appendix D: Interview Data Codebook 142 xiii List of Figures Figure 1: Logic Model for Teacher Systems-Change Leadership Intensive 11 Figure 2: Grades Taught by Surveyed Program Participants 62 1 Chapter One: Introduction to the Study Educational inequity results from systems that perpetuate inadequate resource distribution, particularly for students of color and low-income students (Bai et al., 2021; Darling- Hammond, 1998; de Brey et al., 2019; Irwin et al., 2021; National Assessment of Educational Progress [NAEP], 2019a, 2019b; Nonoyama-Tarumi, 2008; Sirin, 2005). The Education Trust proclaimed that students of color and students from low-income families receive significantly fewer educational resources and, therefore, opportunities than their White peers, (Morgan & Amerikaner, 2018). The National School Boards Association (NSBA) similarly acknowledged that “based on factors including but not limited to disability, race, ethnicity, and socio-economic status, students are deprived of equitable educational opportunities” (NSBA, n.d., Equity section). Furthermore, these inequities lead to significant academic and life differences in outcomes for students of color and low-income students (Bai et al., 2021; Darling-Hammond, 1998; de Brey et al., 2019; Irwin et al., 2021; NAEP, 2019a, 2019b; Nonoyama-Tarumi, 2008; Sirin, 2005). According to Amadeo (2021), educational inequity is defined as a student or groups of students not receiving what they need from the education system, leading to disparate outcomes. 
Likewise, educational equity is “the intentional allocation of resources, instruction, and opportunities according to need, requiring that discriminatory practices, prejudices, and beliefs be identified and eradicated” so that students may “succeed in school and life” (NSBA, n.d., Equity section). Therefore, educational equity or inequity is ultimately determined by the outcomes of efforts and resources, not solely by the number of inputs themselves (Amadeo, 2021). 2 Given that an equitable education is determined by successful outcomes of efforts and resources, it is important that teachers—as the most proximate input to students—continue to develop their leadership toward these outcomes (Amadeo, 2021; National Education Association [NEA], n.d.-a). According to the NEA (n.d.-b), change is required to “dismantle systems of oppression that prevent children from accessing a great public education because of their race, gender, sexual orientation, culture, or nationality” (Racial & Social Justice section). Ongoing professional development on how to change the system of education may lead to more equitable student outcomes (U.S. Department of Education, n.d.). The purpose of this study was to evaluate the implementation and early outcomes of a program seeking to enact the leadership of novice teachers in pursuit of reimagining a more equitable educational system for low-income students and students of color. The overall goal of the program was to support the leadership development of teachers through strengthening their knowledge and motivations about (a) broader student outcomes that lead to a life of a student’s own choosing and (b) the role a teacher can play within a complex education system in pursuit of systems change. The program of study took place over four weeks during the summer, and the novice teachers were entering years two through five of teaching. By evaluating the implementation and early outcomes of the program against its stated goals, the organization will have the insights to refine and improve the program in future iterations in service of teachers’ ability to pursue systems change in their communities. In addition, the field may learn more about how teachers perceive their role in educational equity for their students. Overview of Teacher Systems-Change Leadership Intensive The Educational Equity and Justice Initiative (EEJI), a pseudonym, is a nonprofit organization with the mission of addressing systemic oppression leading to educational inequity 3 by developing the leadership of new teachers. The purpose of the organization is to build a critical mass of leaders that can, together, enact systems-level change within the U.S. education system. EEJI added a new program specifically for novice teachers entering years two through five in the classroom to further pursue its mission by deepening the personal understanding and conviction of rising teacher leaders so that they work to create equitable outcomes for students. The organization sought an evaluation to determine if this program had the potential to expand EEJI’s impact beyond its initial teacher training program. To achieve this potential impact, the new program was rooted in a theory of change to help eliminate educational inequity. 
Theory of Change To help correct for educational inequity, EEJI created a new summer training program to develop the leadership required to interrupt the system of educational inequity by grounding its programming in diversity, equity, and inclusion (DEI) and an orientation to systems change. The program was grounded in a theory of change, or the underlying standpoint and tenets of influence that identify an alternative outcome to a current state and advances concepts to support the transformation (Frumkin, 2006; Tuck & Yang, 2014). The theory of change for the summer training program was that if the organization recruits ambitious potential leaders, grounds them in issues of DEI and injustice, develops their leadership, and focuses their leadership on the pursuit of systems change, then the organization will mobilize a growing group of leaders to collectively enact systems change toward educational equity. There are assumptions made when defining a theory of change. At EEJI, there are assumptions that the current education system is not working for all students (particularly low- income students and students of color) and that inequity is rooted in systems of oppression. Another assumption includes that—to help combat the inequitable education system—leadership 4 is the critical lever to enacting change. Finally, there is an assumption that systems change can only happen through a critical mass of like-minded leaders, known as a coalition, and not through individuals working alone (Bolman & Deal, 2017). Based on this theory of change, EEJI created a new summer training program. Program Overview The program sought to accelerate teacher development toward the knowledge, skills, orientations, and agency that the leaders of EEJI believe need to be true for students in 21st century schooling. During the program, teachers gathered in a virtual setting for a 4-week professional development training intensive to develop their knowledge and motivations toward two transformative concepts: broader student outcomes and the role of teacher in systems change. Additionally, EEJI partnered with two external organizations to offer a literacy track and a general track as part of the 4-week program. EEJI ensured a cohesive experience for program participants by collaborating with the organizations to integrate broader student outcomes and systems change throughout the program’s content. The first week of programming exposed participants to EEJI’s research on broader student outcomes, systems change, and the role of a teacher in pursuit of systems change. After the first week of learning the foundational frameworks, participants proceeded into their 2-week chosen track with an external organization to (a) deepen their understanding about the frameworks and (b) practice these concepts within their track content. After teachers completed the first three weeks of training, the program concluded with facilitated sessions to help apply their learnings to their practice in preparation for their own classrooms in the fall. The program was designed to be an intensive training during the summer while also offering some flexibility in attending sessions live or watching the recordings later. The total 5 hours of the summer training intensive ranged from 60–80 hours, depending on which sessions participants elected to attend. A program evaluation began on the final day of programming with an end-of-program survey followed by a subset of participant interviews. 
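The quantitative strand of that evaluation rests on descriptive statistics of Likert-type survey items; the findings chapter reports medians, modes, and standard deviations for satisfaction and usefulness scores. The snippet below is a minimal illustrative sketch only, assuming a 1–5 Likert scale and hypothetical example responses rather than the study’s actual instrument or data, to show how such item-level summaries can be computed:

```python
# Illustrative only: summarizing hypothetical 1-5 Likert responses with the
# descriptive statistics the study reports (median, mode, standard deviation).
# The item names and response values are placeholders, not evaluation data.
import statistics

satisfaction_responses = {
    "Overall satisfaction": [5, 4, 4, 5, 3, 4, 5, 4],
    "Usefulness of track content": [4, 3, 4, 5, 4, 3, 4, 4],
}

for item, scores in satisfaction_responses.items():
    print(
        f"{item}: median={statistics.median(scores)}, "
        f"mode={statistics.mode(scores)}, "
        f"SD={statistics.stdev(scores):.2f}"  # sample standard deviation
    )
```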
Participants EEJI is an organization that trains teachers to work in predominantly Title 1-eligible schools as teacher leaders. These leaders teach all grades including Pre-K and nearly all traditional subjects. Most of the organization’s training focuses on entering the classroom as a first-year teacher, and the program of study was EEJI’s pilot to expand beyond initial training. The participants of the program were recruited to participate in the program because they are alumni of the organization’s initial teacher training program and have taught various subjects and grades across the United States. Over the last few years, EEJI’s program participants have been predominantly people of color, and the goal of the new training program was to meet or exceed the percentage of teachers of color in the new program with a goal of at least 55% identifying as a person of color. EEJI recruited 83 novice teachers entering their 2nd through 5th year in the classroom to join the organization’s new summer programming in order to continue their learning and development. Purpose of the Study The purpose of the study was to examine the implementation of the summer training program and to measure the early outcomes through program participant data. The goal of the summer training program was to enact the leadership of novice teachers in pursuit of systemic change in the U.S. educational system for low-income students and students of color. Therefore, this study examined the implementation outputs and early outcomes within a predetermined 6 programmatic logic model. Specifically, the evaluation focused on the following five evaluation questions: 1. To what extent, if at all, are participants satisfied with the summer training program? 2. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom? 3. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom? 4. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher? 5. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation toward enacting systemic change within education as a teacher? The research design for this evaluation was mixed methods in pursuit of a more holistic understanding of whether the programming was effective in enacting the organization’s theory of change. According to Creswell and Creswell (2018), mixed methods is a methodology that seeks to strengthen the insights from research by leveraging both quantitative and qualitative data collection. More specifically, the study leveraged the strategy of inquiry known as convergent mixed methods where the researcher collects the quantitative and qualitative data simultaneously (or roughly at the same time) and interlaces the data into the results (Creswell & Creswell, 2018). For this training, an end-of-program survey was distributed to the program’s teachers, and interviews were conducted with a subset of teachers. Importance of the Evaluation Given EEJI’s mission to eliminate educational inequity and the organization’s limited resources, it was important to evaluate the effectiveness of the program’s efforts. 
Without a 7 program evaluation, an organization might continue to expel its pursuits toward an ineffective program with little to no impact (Rossi et al., 2004). Conversely, an evaluation can illuminate areas of a program that are meeting intended outcomes while highlighting areas that can be strengthened. If the program were effective at meeting its objectives, EEJI could continue to expand the program and increase the scale of its systems-change efforts toward educational equity. If the program were ineffective, the organization could either seek to redesign the program in pursuit of effectiveness or redistribute its resources toward other organizational endeavors. This study measured the program’s effectiveness and yielded recommendations to enable the organization to make informed decisions based on the program’s theory of change. Furthermore, the results of the evaluation could be helpful for other programs seeking to accomplish similar aims (Rossi et al., 2004). By disseminating the study’s results, additional programs could be developed to support teachers and students in pursuit of educational equity. Evaluation Approach To examine the implementation and effectiveness of the EEJI summer programming, the study took a program theory approach. Program theory (also known as logic model or theory- based evaluation) explores the how, why, and extent a program reaches its intended goals (Birckmayer & Weiss, 2000; Frumkin, 2006; W. K. Kellogg Foundation [WKKF], 2004). This theory was an appropriate lens to examine the program’s effectiveness toward its theory of change in that it helped define the specific outcomes and impact the program was seeking, what inputs were needed to achieve those outcomes, and the underlying conditions required. A theory of change can be represented by a logic model. For this program’s theory of change, there were assumptions, or underlying beliefs, that undergirded the need for the program 8 (Merriam-Webster, n.d.-a). The assumptions included that (a) educational equity is a desire for and benefit to a society, (b) educational inequity is unjust and is caused by systems of oppression (racism, classism, etc.), (c) the education system can be transformed to serve all students equitably, (d) teachers have power and proximity to produce immediate and lasting change, (e) developing systems-change leadership knowledge and motivations in teachers will lead to change, and (f) a program is needed to develop the systems-change leadership in teachers. In addition to the assumptions informing the need for the program, there were specific resources and activities required to execute the program. The logic model defined specific resources, or the organizational inputs and supports dedicated toward a goal, needed to execute the activities of the program (WKKF, 2004). There was a need for new staff capacity to lead both the logistics and the facilitation of the program in addition to current staff support. Additionally, EEJI’s senior leadership support was a necessity that unlocked resources such as program approval, budget, and organizational participation. Once the resources defined in the logic model were in place, EEJI staff could execute the program activities. According to WKKF (2004), activities are the implementation actions of the program that lead to change. 
The activities for the summer training program included extensive planning that involved conducting a Request for Proposals (RFP) process to obtain one or more external organizations to co-design aspects of the program. In addition to the RFP, there was a need to recruit and retain the participants for the program through various strategies. Program logistics were confirmed and communicated with program participants in the spring, and all program designs were finalized and executed in the summer. The activities described in the logic model led to the program outputs. 9 Once the activities were accomplished, the following outputs, or immediate products of program implementation, provided evidence of program execution (WKKF, 2004). To measure whether the activities were accomplished as intended, several outputs were measured including attendance, satisfaction, and external organization metrics. First, the number of novice teachers who participated in the program was measured. A second measure was the level of satisfaction teachers had about the program. Third, the number of external organizations that facilitated the designed program was measured. The final output measure included the number of organizations that facilitated the program that would maintain their partnership with EEJI, should the program continue based on the findings and recommendations from the study. The external organizations maintaining their relationships with EEJI and their willingness to facilitate future programming would be evidence that the program was successfully executed as described in the logic model. The outcomes, or shifts in program participants’ ways of being (WKKF, 2004), for the program of study included growth in a participating teacher’s leadership knowledge and motivations about broader student outcomes as well as the role of teacher in the pursuit of systemic change. Another outcome from the program, if successful, would be a shift in teachers’ practice within and beyond the classroom aligned with the shift in knowledge and motivations. Teachers would also begin working to change policies and practices that negatively impact students. Ultimately, if the program were successful, several impacts would be observed. An impact is a “fundamental intended or unintended change occurring in organizations, communities or systems as a result of program activities within 7 to 10 years” (WKKF, 2004, p. 2). Intended future impacts of the EEJI programming include the following: (a) teachers are working in broad and diverse coalitions to enact systemic change within their communities; (b) concrete policies, 10 practices, and ways of being have shifted to enable a more equitable education for students within the community; and (c) students are thriving in schools and can access the futures they choose. Figure 1 describes the foundational assumptions and program theory for the organization’s summer training program in the form of a logic model. The logic model defines the assumptions, resources, activities, outputs, outcomes, and impact for the program. The components of the logic model grounded the program evaluation. 
11 Figure 1 Logic Model for Teacher Systems-Change Leadership Intensive ASSUMPTIONS RESOURCES ACTIVITIES OUTPUTS OUTCOMES IMPACT In order to enact the theory of change, we need to articulate the underlying assumptions about the change: In order to accomplish the set of activities, we will need the following resources: In order to address the problem, we will accomplish the following activities: We expect that the activities will produce the following evidence of program execution in outputs: We expect that the outputs will lead to the following outcomes within 6 years: We expect that the outputs will lead to the following impact within 7–10 years: ● Educational equity is a desire for and benefit to a society ● Educational inequity is unjust and is caused by systems of oppression (racism, classism, etc.) ● The education system can be transformed to serve all students equitably ● Teachers have power and proximity to produce immediate and lasting change ● Developing systems- change leadership knowledge and motivations in teachers will lead to change ● A program is needed to develop the systems- change leadership in teachers ● Two–three full-time staff who work on the planning and execution of the program throughout the year; senior leadership helps protect dedicated capacity ● Regional staff support in actively recruiting leaders to participate in programming ● Senior leader support of initiative as demonstrated by signoff, budget, elevating the work publicly, investing fellow senior leaders, and participating in program welcome or facilitation ● Additional operations support leading up to and during the execution of the summer programming; additional facilitation support needed during the leadership and broader student outcomes “in house” training component ● Two external organizations are recruited to facilitate the middle 2- weeks of training during specified summer dates; training outcomes are negotiated; contracts are signed; relationships maintained throughout school year ● Fifty to 100 2nd–5th year teachers are recruited to participate in summer development programming during spring ● Program participant outreach and referrals are driven in partnership with regional leadership teams ● Program opportunity awareness campaign and outreach strategies determined and executed ● Additional organizational full-time staff capacity is secured to execute operations leading up to and during summer programming; additional full-time staff facilitation capacity secured ● Summer program logistics are solidified and shared with participants including prework and materials ● Summer program designs are finalized, and materials are prepared for execution ● Evaluation plan is developed ● Number of teachers who participated in program ● Level of teacher satisfaction ● Number of external organizations that have executed their portion of the overall summer programming ● Number of external organizations that would return for next year’s programming ● Teachers can identify the tenets of systems- change leadership and broader student outcomes by the end of the program ● Teachers can identify goals for their own practice by the end of the program ● Teachers apply their learnings to their immediate classroom practice in the fall post- programming ● Teachers work to begin changing the policies, practices, and ways of being that enable educational inequity within their school sites within the first year post-programming ● Teachers work in groups and coalitions to enact systemic change within and beyond their school 
sites within six years of programming ● Teachers are working in broad and diverse coalitions to enact systemic change within their communities ● Concrete policies, practices, and ways of being have shifted to enable a more equitable education for students within the community ● Students are thriving in schools and able to access the futures they choose 12 Definitions Defining terms allows readers to leverage a shared, working understanding of how the terms will be used throughout the study (Statistics Solutions, n.d.). Below is an overview of key terms. Additional terms are introduced throughout the dissertation, as needed. ● Change is an observable shift in semblance, condition, values, or status over time (Kezar, 2001; Van de Ven & Poole, 1995). ● Diversity is the make-up of individuals across lines of race, ethnicity, socio-economic status, ability, etc. that form a group of people, such as employees within an organization, and recognizes the heterogeneity of the group as an asset in lieu of homogeneity (Bolger, n.d.). ● Educational equity is when all students receive what they need from the education system to be successful within and beyond school, and educational inequity is when a student or groups of students do not receive what they need, leading to disparate outcomes (Amadeo, 2021; Center for Public Education, 2016). ● Equity is the intentional strategy to support people in getting what they need to produce similar outcomes (Bolger, n.d.). ● Inclusion is when individuals feel valued and welcomed in a specific setting regardless of their identities (Bolger, n.d.). ● Knowledge is information obtained, reorganized, and put into use (Boulding, 1956; O’Dell & Hubert, 2011). ● A logic model is an explicit description of how a program’s theory of change, the assumptions underlying the program, and program components connect to the outcomes and impact of the program (WKKF, 2004). 13 ● Motivation is a force that influences behavior (Dweck, 2017). ● A program is a set of resources, activities, or services directed toward one or more common goals for a defined population (Kettner et al., 2017; Newcomer et al., 2015). ● Program evaluation is the systematic investigation to ascertain the effectiveness and impact of the program of study so that improvements can be made toward progress (Kettner et al., 2017; Rossi et al., 2004). ● Program theory explores the how, why, and extent a program reaches its intended goals (Birckmayer & Weiss, 2000; Frumkin, 2006; WKKF, 2004). The theory determines the objectives of the program, articulates the elements composing the program, and lays the foundation for a program evaluation (Sharpe, 2011). ● A system is any whole composed of various, interrelated parts that interplay and affect one another such as K–12 education (Ferris & Williams, 2010). ● Teacher leadership is the process by which teachers influence and create change in and beyond the classroom (Nguyen et al., 2019; Wenner & Campbell, 2017; York-Barr & Duke, 2004). ● A theory of change is the underlying assumption of interventions that will lead to a specific, alternative outcome (Frumkin, 2006; Tuck & Yang, 2014). Organization of the Dissertation This dissertation is organized into five chapters. This first chapter introduces the problem of educational inequity, an organization’s program to pursue systemic change, the purpose of the study, the importance of evaluating a program, and the evaluation approach using a logic model. 
Chapter One also includes the evaluation questions of the study and the definitions of key terms used throughout the study. Chapter Two provides a literature review of topics related to the study 14 to deepen a shared understanding of concepts. These topics include current and historical contexts of educational inequity; systems and systemic change; teachers as change agents; and program theory. Chapter Three describes the researcher’s positionality, ethics of the study, and mixed methodology research design. Chapter Four presents the quantitative and qualitative data, analysis of data, and findings for each evaluation question. Finally, Chapter Five consists of a discussion of evaluation findings, recommendations for the program, limitations and delimitations, recommendations for future research and evaluation, and a conclusion about the importance of the study. 15 Chapter Two: Review of the Literature The objective of the summer training program was to enact the leadership of novice teachers in pursuit of systemic change in the United States educational system for low-income students and students of color. The purpose of the study was to examine the implementation of the summer training program and to measure the early outcomes through program participant data. The literature review analyzes relevant research to ground the study. The review begins with the educational landscape in the United States and an overview of the historical context that has led to the current, systemic reality. This is followed by a discussion of systems and systemic change. The next section explores teachers as change agents within the educational system and their need to continuously develop. Finally, the literature review concludes with an examination of program theory, its relation to evaluation, and the importance of evaluating programs in pursuit of systemic change. Current and Historical Contexts of Educational Inequity Current State of U.S. Education According to the Condition of Education—a U.S. congressionally mandated annual report—there were 50.7 million children enrolled in public pre-kindergarten through twelfth grade in the United States (Irwin et al., 2021). As of 2017, White students make up 51% of the school-aged population, Black students make up 14%, Hispanic students 25%, Asian students 5%, children who identify as two or more races 4%, American Indian and Alaska Natives 1%, and Pacific Islander students less than 1% (de Brey et al., 2019). While 16% of students live in poverty, a disproportionate number of students of color live in poverty including 31% of Black students and 26% of Hispanic students (de Brey et al., 2019; Irwin et al., 2021). Sixty-one percent of 3- to 5-year-olds were enrolled in school, and 86% of students graduated high school 16 within four years (de Brey et al., 2019; Irwin et al., 2021). Of the 16.3 million students who began an undergraduate degree in the fall of 2016, approximately 56% were White; 20% were Hispanic; 13% were Black; 7% were Asian; 4% were two or more races; and less than 1% were American Indian, Alaska Native, or Pacific Islander (National Center for Education Statistics [NCES], 2019). The U.S. education system is influenced predominantly by individual state and local government decisions in governance, funding, teacher workforce, and learning standards, and these decisions lead to variations in student outcomes by state (Bai et al., 2021). 
A study of international education systems concluded that high variation or differentiation in a nation’s school system tends to lead to higher levels of inequality of opportunity, especially when broken down by socioeconomic status (Van de Werfhorst & Mijs, 2010). According to Darling- Hammond (1998), “the U.S. educational system is one of the most unequal in the industrialized world, and students routinely receive dramatically different learning opportunities based on their social status” (p. 28). While the tenets of socioeconomic status vary across researchers, common indicators include family income, parental education level, parental occupation, and home resources such as access to books, technology, and internet (Sirin, 2005). Multiple studies have demonstrated a strong correlation between low socioeconomic status and inequitable resources and student outcomes (Nonoyama-Tarumi, 2008; Sirin, 2005; Zimmer et al., 2003). These inequitable student outcomes begin in early childhood (McCoy et al., 2017). In their analysis of 22 studies on early childhood education in the United States, McCoy et al. (2017) found that classroom programs targeting children under five can lead to significantly stronger educational outcomes including high school graduation. Only 20% of poor children under the age of 6 and not enrolled in kindergarten receive center-based care in the 17 United States compared to 37% of non-poor children, with poor defined by the national Census Bureau as family income below the established thresholds based on family size and composition (de Brey et al., 2019; U.S. Census Bureau, 2021). These emerging gaps continue during elementary education through both reading and mathematics achievement. According to the NAEP (2019b), the average reading scores across the United States have decreased over recent years by 1–3 points. Only 9% of the nation’s fourth grade students scored the NAEP advanced ranking on its reading assessment, 35% scored at or above the proficient ranking, and only 66% performed at or above the basic rating (NAEP, 2019b). Additionally, there was a 27-point difference between White and Black fourth grade reading performance, a 28-point difference in eighth grade, and a 32-point difference in twelfth grade. Of the students who reached the twelfth grade, only 37% performed at or above proficient and only 70% met the basic reading level or above (NAEP, 2019b). In addition to significant reading gaps by race, there are also gaps in U.S. mathematics performance. In 2019, fourth grade White students scored 25 points higher on the NAEP mathematics assessment than their Black peers, and this gap widened to 31 points for twelfth grade students (NAEP, 2019a). The report also demonstrated a 28-point difference between students whose parents graduated from college and whose parents did not finish high school. Furthermore, analysis from the NAEP 8th grade mathematics assessment from 2003 to 2017 demonstrated that the scores between students of higher socioeconomic status and students of lower socioeconomic status had remained unchanged for 34 states, had narrowed for 2 states, and had widened for 14 states (Bai et al., 2021). These gaps in reading and mathematics impact graduation rates. 18 A measure cited for educational attainment is the adjusted cohort graduation rate (ACGR), which is the percentage of 9th grade students who graduate with a high school diploma within four years (Irwin et al., 2021). 
In the 2018–2019 school year, the ACGR was the highest in the United States at 86% since the measure began in 2011 (de Brey et al., 2019; Irwin et al., 2021). While the average ACGR in the United States has risen to 86%, only 82% of Hispanic students, 80% of Black students, and 74% of American Indian and Alaskan Native students graduate from high school within four years (Irwin et al., 2021). While overall high school dropout rates are declining, so are college enrollment rates (de Brey et al., 2019; Irwin et al., 2021). Undergraduate enrollment has decreased from 17.5 million in the fall of 2009 to 16.6 million in the fall of 2019 (Irwin et al., 2021). Additionally, only 63% of those who seek a bachelor’s degree complete it within six years, and 31% of the adult population over the age of 25 have a bachelor’s degree or higher (de Brey et al., 2019; Irwin et al., 2021). According to de Brey et al. (2019), the percentage of adults over the age of 25 who have earned a bachelor’s degree or higher differs significantly by race. The researchers stated that approximately 54% of Asian adults, 35% of White adults, 21% of Black adults, 18% of Pacific Islander adults, 15% of Hispanic adults, and 15% of American Indian and Alaska Native adults have received a bachelor’s degree or higher. This demonstrates that the gap between those who have attained higher education with its corresponding economic benefits and those who have not attained higher education has widened. These education statistics are also correlated with yearly income and other economic outcomes (Irwin et al., 2021; U.S. Bureau of Labor Statistics, 2021). According to the Condition of Education report (2021), 25- to 34-year-olds employed full-time earned twice the median income with a master’s degree or higher at $70,000 compared to $35,000 for those who only 19 completed high school. When disaggregated by race, the median income for full-time employees who have obtained a bachelor’s degree or higher was $69,100 for Asian workers, $54,700 for White workers, $49,400 for Black workers, and $49,300 for Hispanic workers (de Brey et al., 2019). The data demonstrated income variation not only by educational attainment but also by race despite the same educational attainment. Not only is income correlated with education attainment, but employment rates are higher for those with higher levels of educational attainment (de Brey et al., 2019; Irwin et al., 2021; U.S. Bureau of Labor Statistics, 2021). For example, the employment rate for 25- to 34-year-olds that have attained at least a bachelor’s degree was 86% compared to those who have not completed high school at 57% (Irwin et al., 2021). The research demonstrated that there is a pattern of higher unemployment rates for lower levels of education across all races and ethnicities compared to higher education levels (de Brey et al., 2019). Specifically, de Brey et al. (2019) determined the unemployment rate was 9% for those who had not completed high school compared to 3% for those who have obtained a bachelor’s degree or higher. In addition to education level impacting one’s own earning potential, it also correlates with generational access and accumulation of resources (Irwin et al., 2021; Nonoyama-Tarumi, 2008). For example, 99% of children in families with adults who have earned a bachelor’s degree or higher have access to the internet versus only 83% of children whose families have not earned a high school diploma or equivalent (Irwin et al., 2021). 
Furthermore, the data demonstrated that parental education levels, income, and other home and family factors can affect student achievement in schools (Darling-Hammond, 1998). Educational attainment strongly influences opportunities and economic benefits for individuals and their families (Darling-Hammond, 1998; de Brey et al., 2019; Irwin et al., 2021; 20 Nonoyama-Tarumi, 2008). According to Terrell et al. (2018), “Data that describe disparities in achievement patterns, dropout rates, and enrollment in higher-order courses are powerful when used as indicators of access barriers that exist within school” (p. 19). The data demonstrated the current, inequitable outcomes for students in the United States as well as the disparities in opportunities and economic benefits are widest for low-income students and students of color. Historical Context of U.S. Educational Inequity The U.S. education system has never produced equitable outcomes across race, ethnicity, and socioeconomic status since its inception (Darling-Hammond, 1998; Williams, 2005). In fact, education policies and practices that contribute to today’s inequity began centuries ago. For example, to educate those who were enslaved was considered a threat to the institution of slavery, and therefore, great efforts—including state laws that articulated fines, imprisonment, or physical harm—were taken to prevent literacy acquisition (Executive Committee of the American Anti-Slavery Committee, n.d.; Williams, 2005). According to Williams (2005), a small percentage of those who were enslaved found innovative ways to gain literacy skills despite the risk of severe punishments for doing so, as literacy was considered an instrument to help dismantle slavery. The right to access education for those who were emancipated and their descendants greatly influenced future policies and practices in the United States (Williams, 2005). While there were policies and practices to withhold education from people who were enslaved, there were also policies to force a certain education upon Native children (Bowes, 2016). In 1819, the Civilization Fund Act was passed allowing for the government, religious organizations, and other institutions to begin efforts to “civilize” Native children (Bowes, 2016). The act laid the groundwork for the expansion into federally-funded boarding schools with the 21 explicit objective to remove one’s Native culture by prohibiting the use of their native language, names, and customs (Bowes, 2016; Smoak, 2006). Moreover, children were removed from their community to attend schools to further erase their Native culture and instill the values of White people (Bowes, 2016; Smoak, 2006). The practice of removing Native children from communities to send to boarding schools continued into the 20th century (Bowes, 2016). After the U.S. Civil War ended in 1865, the Thirteenth, Fourteenth, and Fifteenth Amendments to the U.S. Constitution, known as the Reconstruction Amendments, were passed to abolish slavery, to establish citizenship and equal protection under the law, and to guarantee the right for all eligible men to vote—regardless of race (Library of Congress [LOC], n.d.). The Civil Rights Act of 1875 further prohibited discrimination, yet in the landmark case Plessy v. Ferguson (1896), the Supreme Court ruled that it was constitutional to segregate public accommodations if they were equal in quality (LOC, 2020; U.S. Senate, n.d.). The ruling allowed for legally segregated schools by race in the United States. 
However, segregated schools proved to create different outcomes for students (Ladson- Billings, 2017). Legal school segregation, known as de jure, resulted in schools receiving different levels of funding, resources, and qualified teachers (Ladson-Billings, 2017). The era of legal school segregation continued until the Supreme Court ruling in Brown v. Board of Education (1954), which superseded Plessy v. Ferguson (1896) by claiming that separate was inherently unequal (LOC, 2020). The ruling called for schools across the United States to desegregate (Brown v. Board of Education, 1954). Nearly a century after the Reconstruction Amendments were adopted in the U.S. Constitution, the Civil Rights Act of 1964 further prohibited discrimination in public accommodations and federally funded programs including public schools (Civil Rights Act, 22 1964; U.S. Department of Justice, 2021; U.S. Department of Labor, n.d.). The act prohibited discrimination on the basis of race, color, religion, sex, or national origin and explicitly called for the desegregation of public education (Civil Rights Act, 1964; U.S. Department of Labor, n.d.). While de jure school segregation was no longer legal, de facto, or deliberate segregation in practice, continued in the United States leading to further legislation (Ladson-Billings, 2017). In 1965, the Elementary and Secondary Education Act (ESEA) was signed into law and “was one of the most significant legislative accomplishments in twentieth-century American politics,” as it was the first comprehensive federal legislation involving schools (Casalaspi, 2017, p. 247). There were five titles in ESEA that included provisions for program resources, instructional materials, professional development, and research (Casalaspi, 2017; ESEA, 1965; Paul, 2016). Title I used a formula to distribute funding in pursuit of equalizing spending between wealthier and poorer school districts (Casalaspi, 2017; ESEA, 1965). Title II supported the creation of school libraries by providing funds for library materials and classroom textbooks (ESEA, 1965). Title III provided funds for supplementary education centers to promote innovation, and Title IV funded research laboratories dedicated to regional education and training (Casalaspi, 2017; ESEA, 1965; Paul, 2016). Finally, Title V provided federal funding to state education departments in support of executing the broader legislation (Casalaspi, 2017; ESEA, 1965). While ESEA (1965) was considered an accomplishment, the act not only failed to end de facto school segregation but, instead, further entrenched segregation through funding policies (Casalaspi, 2017). In efforts to correct for the ongoing inequities for students, the ESEA (1965) has been modified and reauthorized every five years since its inception (Paul, 2016). Reauthorizations of the act included names such as Improving America’s Schools Act (1994), No Child Left Behind 23 Act (2002), and Every Student Succeeds Act (2015). Since 1965, new titles have been added and amended to the act and currently include: ● Title I: Improving the Academic Achievement of the Disadvantaged ● Title II: Preparing, Training, and Recruiting High-Quality Teachers, Principals, and Other School Leaders ● Title III: Language Instruction for English Learners and Immigrant Students ● Title IV: 21st Century Schools ● Title V: Flexibility and Accountability ● Title VI: Indian, Native Hawaiian, and Alaska Native Education ● Title VII: Impact Aid ● Title VIII: General Provisions (Skinner, 2020) The U.S. 
education system has been created through policies, such as ESEA (1965), as well as practices, such as de facto segregation, often leading to disparate outcomes (Casalaspi, 2017; Darling-Hammond, 1998; Ladson-Billings, 2017). These policies and practices have created dynamic, interdependent components that interact and function as a complex system (Ferris & Williams, 2010). Given the historical context and current state of educational inequity in the United States, there is a need to change the education system to produce more equitable outcomes for all students (Terrell et al., 2018). Education as a System According to Ferris and Williams (2010), a system—such as the U.S. education system— is developed with specific outcomes in mind, though its construction might not be adequate in addressing all interrelated factors. Irwin et al. (2021) claimed that “many factors contribute to the condition of an education system: who is served by the system, the contexts in which those 24 students are served, what resources are available, and what outcomes are achieved” (p. 10). Ultimately, inequitable factors and systems can determine life outcomes in “highly predictable ways” (Ladson-Billings, 2017, p. 82). Furthermore, the factors of the education system have become intertwined with parallel systems such as racism and poverty, demonstrating the tenets of a wicked problem (Kolko, 2011). A wicked problem is a large-scale, systemic social issue that is difficult to solve given its dynamism and complexity (Caulfield & Brenner, 2020; Kolko, 2011). The solution to a wicked problem is often indeterminate, or unbound by limits, which means that there are technically multiple solutions to a problem (Buchanan, 1992). Given the scale of wicked problems, it is also difficult to test a solution before implementing, and the chosen solution will have lasting impact regardless of if it is considered beneficial or harmful (Caulfield & Brenner, 2020; Kolko, 2011; Kotter, 2012). To demonstrate the complexity further, the indeterminacy of systemic, wicked problems such as the education system often leads to treating the symptoms of the problem and not the causes, ultimately sustaining the problem (Kolko, 2011). To differentiate between symptoms and causes of problems to effect systemic change, there is a need to understand the fundamentals of a system. Systems and Systemic Change Defining a System To change problems within the education system, a general understanding of systems is beneficial. A system is defined as a whole composed of elements that interplay with one another and the surrounding environment such as education in the United States (Ferris & Williams, 2010; von Bertalanffy, 1972). According to von Bertalanffy (1972), systems have predictive 25 value. This means that by understanding the traits of a particular system, one might obtain additional insights into the patterns the system is likely to perpetuate (Bertalanffy, 1972). Across physics, biology, medicine, psychology, economics, social collectivism, and philosophy, each science independently examines individual parts that lead to a whole (Boulding, 1956; von Bertalanffy, 1950). Von Bertalanffy (1972) claimed that “in order to understand an organized whole we must know both the parts and the relations between them” (p. 411). These patterns make up general systems laws applicable to any science and help define general systems theory. 
General systems theory (GST) defines the universality of wholeness, or the relationship between interacting components, the environment, and overall structure (Boulding, 1956; von Bertalanffy, 1950, 1972). GST is a “system of systems” that defines similar language across disciplines in pursuit of patterns and interconnections that might otherwise have been missed studying each field on its own (Boulding, 1956, p. 198). According to von Bertalanffy (1950), this phenomenon appears across scientific foci from non-living things to complex social structures. To this end, the Aristotelian dictum still holds true that the whole is greater than the sum of its parts as demonstrated across atoms, living organisms, and society (von Bertalanffy, 1972). In addition to defining a system as a unique entity of study, there are variations of systems that can aid in understanding systemic problems. Systems can be open or closed. A closed system is one where the composition remains static and reaches an internal state of equilibrium (von Bertalanffy, 1950). An open system demonstrates dynamism by adjusting to the inputs or environment around it, and the system constantly seeks to renew and re-stabilize itself given this dynamism (Boulding, 1956; Senge, 2006; Van Assche et al., 2019; von Bertalanffy, 1950, 1971). Furthermore, open systems are “autonomous operative chains that extend themselves by continuously redrawing the distinction between internal operations and external events” (Van Assche et al., 2019, p. 312). In other words, open systems are constantly adapting to shifts in the environment to preserve their existence and a steady state through self-maintenance and self-reproduction (Boulding, 1956; Ferris & Williams, 2010; von Bertalanffy, 1950). Another way to think about self-perpetuation is to consider systems as resilient. Resilient systems are built by policies and practices that allow them to adapt to changes and reestablish themselves over time (Van Assche et al., 2019). Specifically, Van Assche et al. (2019) claimed that resilient systems adapt their governance and resource management successfully against change—especially the ability to readjust to disruptions—and resilient systems cannot stay static given constant changes in the environment. Another tenet of systems is the feedback loop. A feedback loop is a correlative flow of impact within a system (Ferris & Williams, 2010; Senge, 2006). According to Senge (2006), a feedback loop could be reinforcing, meaning it promotes growth by a continuation or amplification of the current state, or a feedback loop could be balancing, meaning it works to stabilize around a specific goal. When a system responds to new inputs, it enters either a reinforcing loop to promote growth or a balancing loop to stabilize after a shock (Senge, 2006; Van Assche et al., 2019). Without changing the goal of the self-perpetuating system itself, the system will continue its current trajectory through reinforcing loops and ward off contrary influences through balancing loops (Ferris & Williams, 2010; Senge, 2006). Senge (2006) explained that there are additional considerations to keep in mind when understanding feedback loops within a system. First, feedback loops are prone to delays in observation and impact, and these delays can influence decisions before initial inputs are fully realized.
Second, if there is resistance to change within a system, it is often the work of hidden or implicit balancing loops that have not been addressed within the pursuit of change. And third, misidentifying the root cause of a problem in the system increases the reliance on short-term feedback loops to see progress to goal. By recognizing systems, their tenets, and their patterns, one can begin to pursue systemic change (Senge, 2012). Defining Systemic Change Change is the process of shifting behaviors and outcomes based on different inputs and interactions over time (Hord & Roussin, 2013; Kotter, 2007, 2012). Kotter (2007) also defined stages of the change process as (a) establishing urgency, (b) rallying a powerful coalition, (c) articulating and reinforcing a clear vision, (d) removing obstacles, (e) building momentum, and (f) continuing efforts to prevent balancing loops from reestablishing the previous system. Change can happen at the individual, organizational, and systemic levels (Caffarella & Daffron, 2013; Ferris & Williams, 2010; Hord & Roussin, 2013; Kotter, 2007, 2012; Senge, 2006). A system change is one that “focuses on ambitions for transforming systems and producing substantial improvements in outcomes,” and these ambitions stem from personal beliefs about what needs to change (Ferris & Williams, 2010, p. 11; WKKF, 2007). The research demonstrated that systemic change requires multiple, enduring inputs over time for a system to evolve and restabilize in its changed form (Ferris & Williams, 2010; Senge, 2006). Therefore, systemic change is an enduring, multifaceted set of shifts to policies, standards of practice, formal and informal rules, values, and culture that lead to different outcomes (Ferris & Williams, 2010). To change a system, there is a need to recognize the patterns of the system at large to prevent futile pursuits of isolated improvements (Senge, 2006). According to Love (2018), 28 members of a society are socialized to perpetuate a role within a system regardless of the advantages or disadvantages, which can lead to difficulty in recognizing systemic patterns. When leaders do not see the system as the entity leading to outcomes, they instead direct resources to symptoms versus underlying causes (Senge, 1990). In fact, Senge (2006) claimed that the pattern of ignoring the system leads to “the gradual atrophy of the ability to focus on fundamental solutions, and the increasing reliance on symptomatic solutions,” demonstrating the need to understand the nature of systems to change them (p. 109). There is also a need to ensure that introducing a “solution” into a system does not simply shift the problem to another part of the system, for this pattern often goes unrecognized as new people inherit a problem (Senge, 2006). Ferris and Williams (2010) identified three dimensions of systemic change: localized versus system-wide change, incremental versus fundamental change, and time horizon of change throughout the system. The researchers described localized change as one where a section of the system is pursuing innovation, but the impact of that innovation does not permeate throughout the rest of the system. For example, a school may seek to change the coursework requirements for graduation as a localized change. This change would not impact other students throughout the broader school system. If a localized change is successful, it runs the risk of only impacting a section of the system if one is pursuing a broader change. 
The researchers considered a system-wide change as a “top down” approach that commands the change throughout the whole system concomitantly. A state adopting a set of new curriculum standards that all schools would follow is an example of a system-wide change. If a system-wide change is successful, it impacts the entire system. The second dimension of systemic change defined by Ferris and Williams (2010) was incremental versus fundamental change. Incremental change is when efforts to change the system are rolled out gradually and continuously until they are seen in full effect throughout the system. Incremental change can take time, which can be viewed as a disadvantage, though its ultimate impact can yield a transformed system (Ferris & Williams, 2010; Frumkin, 2006). For example, a state might shift the teacher certification requirements over the course of a 5-year rollout, allowing time for teachers to obtain any new training while simultaneously increasing the overall strength of teachers within the state. Alternatively, a fundamental change is one that is considered sweeping or abrupt, and it creates noticeable change in the system more quickly (Ferris & Williams, 2010). A fundamental change can have instant results across the system, and it can also have unintended consequences as the change integrates into the system (Ferris & Williams, 2010; Kotter, 2007; Van Assche et al., 2019). For example, a state might decide to change the teacher licensing law to only allow those who have a four-year teaching degree or higher to be a certified teacher. This fundamental change to the education system could increase perceived teacher strength while also decreasing the number of teachers available to students. The third dimension of systemic change is the time horizon of the change (Ferris & Williams, 2010). Challenges with incremental change within a system include both underestimation of the change’s impact and impatience given the considerable amount of time, often against the backdrop of an urgent need (Ferris & Williams, 2010; Senge, 2012). Furthermore, scaling incremental change requires building networks across sectors to increase impact and influence (Ferris & Williams, 2010; Moore et al., 2015). The benefits of incremental systemic change are that the values, norms, and ways of being also change along the way (Ferris & Williams, 2010; Senge, 1994, 2006). The challenge with a rapid, fundamental change to the system is that it presents a shock to the system and is subject to feedback loops to counteract the new input (Ferris & Williams, 2010; Senge, 2006). When seeking systemic change, there is a need to note that people are one of the key inputs in an open system such as education (Ferris & Williams, 2010; Senge, 2006). Change also requires a “critical mass of the players that sustain the present system” to begin “seeing” the system (Senge, 2006, p. 349). If enough people who help make up a system begin to see the system and its behaviors, then these agents can disrupt the current system to create alternative outcomes (Senge, 2006). Teachers as Change Agents Role in Education System During the 2017–2018 school year in the United States, there were 3.5 million public school teachers divided equally across primary and secondary schools (Irwin et al., 2021). Nearly 80% of the teaching force identified as White, and 76% identified as female (Ingersoll et al., 2021; Irwin et al., 2021). According to Ingersoll et al.
(2021), over 44% of teachers leave the classroom within five years of entering it, causing instability within the profession. While the number of teachers of color more than doubled between 1987 and 2018 (Ingersoll et al., 2019, 2021), the attrition of teachers of color is 25% higher at 18.9% than their White peers at 15.1% (Ingersoll et al., 2019, 2021). According to Grooms et al. (2021), the “recruitment and retention of educators of color remains a challenging, complex, and critical issue of local, state, regional, and national significance” (p. 181). Since teachers of color are more likely to work in schools that serve students of color (Grooms et al., 2021; Ingersoll et al., 2021), the increased attrition rate of teachers of color disproportionately impacts students of color. Given that low-income students and students of color already receive fewer educational 31 resources than their White peers (Morgan & Amerikaner, 2018), there is a need to further support and equip those who teach low-income students and students of color in an effort to retain leaders in the classroom (Johnson et al., 2012). Teachers are charged to promote student learning, or “the difference between what students know and are able to do when they arrive in a classroom in the fall and what they know and are able to do when they leave the following spring” (Ladson-Billings, 2021, p. 71). There is also a need for teachers and their students who seek to create change to be equipped with sociopolitical or critical consciousness, or the ability to translate learnings into solving problems that impact themselves and others (Ladson-Billings, 2017; 2021). Teachers who develop their critical consciousness “carefully give consideration to the problems related to diversity and equity within schools and society more broadly by considering the roles historical and social activity had in creating these problems” (Wenner & Campbell, 2017, p. 158). For example, teachers can facilitate student critical consciousness by centering relevant issues in society such as the differentiated impact of the COVID-19 pandemic on communities in the United States and around the world as well as police brutality, protests, and community safety funding structures (Ladson-Billings, 2021). There is a need for teachers to also develop cultural competence and proficiency (Ladson-Billings, 2017; Terrell et al., 2018). Cultural competence “seeks to help students to appreciate their own history, culture, and traditions while also becoming fluent in at least one other culture” with the goal of multiculturalism fluency (Ladson-Billings, 2017, p. 88; 2021). According to Ladson-Billings (2021), the teacher practice of cultural competence must be more integrated than simply posting images of Black, Indigenous, People of Color (BIPOC) throughout the classroom without examination. Researchers Terrell et al. (2018) defined the 32 teacher practice of cultural proficiency as an internal, ongoing commitment to embracing students’ cultural backgrounds as assets. Teachers who practice cultural competence and proficiency practice a “paradigmatic shift” in pursuit of equity (Ladson-Billings, 2017; 2021; Terrell et al., 2018, p. 28). Given teachers’ roles and proximity to students, they are uniquely positioned to enact systemic change (Mangin & Stoelinga, 2008). Role in Systemic Change To envision and achieve a different future, there is a need to first recognize the current systemic patterns (Senge, 1990). 
Despite the landmark education policy ESEA (1965) and its reauthorization every five years, the U.S. education system is still producing inequitable outcomes by race, ethnicity, and socioeconomic status (de Brey et al., 2019; Irwin et al., 2021; NAEP, 2019a, 2019b; NCES, 2019). The research demonstrated that systems are self- perpetuating and will not change without reacting to new inputs (Boulding, 1956; Van Assche et al., 2019; von Bertalanffy, 1950, 1971). Therefore, the only way to shift a self-sustaining system in the direction of change is for people to recognize their role in the system, determine the systemic patterns, identify the root causes of undesirable outcomes, and introduce new inputs into the system (Senge, 2006). Given the role teachers have in the education system, particularly their direct responsibilities and proximity to students, teachers have the potential to introduce new inputs such as increased student learning, sociopolitical consciousness, and cultural competence into the system leading to new outcomes from the system (Ladson-Billings, 2017; 2021). For example, there is an established, systemic pattern of low-income schools that serve Black students to disproportionately “suspend, expel, retain, assign to special education, and deny entrance into gifted/talented and AP courses,” and many of these decisions are at a teacher’s discretion 33 (Ladson-Billings, 2021, p. 69). Teachers can serve as change agents by recognizing inequitable patterns and interrogating the role they play in the systemic inputs (Nguyen et al., 2019). Teacher Leadership Development Across decades of research, there is yet to be a common definition of teacher leadership (Nguyen et al., 2019; Taylor et al., 2011; Wenner & Campbell, 2017; York-Barr & Duke, 2004). Wenner and Campbell (2017) defined teacher leadership as maintaining classroom responsibilities while also extending teacher efforts beyond the classroom, going “above and beyond their typical duties” (p. 140). More specifically, Nguyen et al. (2019) have synthesized the research on teacher leadership into four key characteristics. These characteristics include teacher leadership as (a) a process of influence rather than a specific role within a school, (b) formally and informally collaborating with peers to improve instructional practice, (c) initiating and facilitating change, and (d) the implementation of improved “instructional quality, school effectiveness and student learning” (Nguyen et al., 2019, p. 67). Ergo, teacher leadership is the pursuit of student outcomes by way of ongoing learning, collaboration, and change efforts within and beyond the classroom (Nguyen et al., 2019; Wenner & Campbell, 2017; York-Barr & Duke, 2004). Programs To achieve change on behalf of students, teachers need support to enact their leadership in the form of ongoing professional development such as training programs (Wenner & Campbell, 2017). According to Wenner and Campbell (2017), teacher leadership development trainings predominantly focus on three components: content knowledge, pedagogical knowledge, and leadership strategies. The depth of teachers’ knowledge and motivations “positively affect their leadership ability to lead effectively” (Nguyen et al., 2019, p. 70). Furthermore, the research 34 demonstrated that teacher leadership programs can enact the knowledge and motivations of teachers to serve as change agents within and beyond the classroom (Nguyen et al., 2019; Taylor et al., 2011; Wenner & Campbell, 2017; York-Barr & Duke, 2004). 
Knowledge Knowledge is information obtained, reorganized, and put into use (Boulding, 1956; O’Dell & Hubert, 2011). Leveraging a revision of Bloom’s taxonomy for learning objectives by Anderson et al. (2001), there are four types of knowledge a training program can seek to embed in participants. These four types of knowledge fall on a continuum from concrete to abstract and include factual, conceptual, procedural, and metacognitive. Factual knowledge includes recalling definitions and other concrete details whereas conceptual knowledge encompasses the ability to categorize, generalize, and theorize. Procedural knowledge involves demonstrating skills and discerning methodology. Finally, metacognitive knowledge comprises the capability to strategically apply learning in various contexts (Anderson et al., 2001). Motivation In addition to expanding knowledge as a key component of teacher leadership development, cultivating motivation is also needed to put knowledge into use (Kirkpatrick & Kirkpatrick, 2016). Motivation is a force that influences behavior (Dweck, 2017). Developed by Eccles and colleagues, one motivation theory that can help explain a learner’s behavior is Eccles’ expectancy-value theory (EEVT; Wigfield et al., 2018). EEVT describes the relationship between a person’s expectancy of a task outcome and the value of completing the task (Wigfield et al., 2018). According to Westaby (2002), expectancy describes the extent one believes an outcome will ensue based on completing a task, and value considers the assessment or importance of the believed outcome. Distinguishing further, outcome expectancies are a person’s beliefs that a task will lead to a specific result, and efficacy expectancies are a person’s beliefs that their efforts and abilities would lead to a specific result (Wigfield et al., 2018). EEVT posits that the overall assessment of a task’s value relates to three specific functions: (a) attainment value, or how important one perceives a task to be; (b) intrinsic value, or the personal fulfillment one receives from completing a task; and (c) utility value, or how helpful completing this task is to one’s goals (Wigfield et al., 2018). When evaluating a program, it is helpful to understand the participants’ expectancies and task values as contributors to or inhibitors of the program’s success. In addition to EEVT, social cognitive theory defines an important motivational factor in pursuing one’s goals through learning as self-efficacy, or one’s confidence in their ability to learn and execute tasks at meaningful levels (Bandura, 2000). In other words, people are more motivated to learn and apply new knowledge if they believe they can be successful on their own (Wigfield et al., 2018). Therefore, teacher leadership development programs that focus on both knowledge and motivations may lead to stronger outcomes (Nguyen et al., 2019). Program Theory Defining Program Theory A program is a predetermined set of resources and activities that aim to achieve specific goals or objectives and is the unit of analysis for a program evaluation (Kettner et al., 2017; Newcomer et al., 2015; O’Connor & Netting, 2007). Programs are created to address a problem or need and can include direct services to a specific population as well as training and leadership development (Kettner et al., 2017; O’Connor & Netting, 2007).
Recent federal and local government funding legislation has led to an increase in accountability for social program effectiveness, and, therefore, program planning has shifted from solely focusing on the process of executing a program to also considering the program’s outcomes (Caffarella & Daffron, 2013; Kettner et al., 2017; Kirkpatrick & Kirkpatrick, 2016). Focusing on a program’s outcomes in addition to its inputs creates a hypothesis and grounds a program in a theory (Kettner et al., 2017). Program theory is the relationship between a problem, an underlying need to solve the problem, and a hypothesis about how to achieve the desired change through inputs and interventions (Kettner et al., 2017). According to Kettner et al. (2017), there are five general tenets to planning a program according to this theory: (a) analyze the problem and assess the need, (b) determine the outcomes of the program, (c) design the program inputs and activities, (d) determine data collection methods, and (e) create an evaluation plan. There is also a need to understand the desired impact and define the interventions to increase scale and impact over time (Frumkin, 2006). If a program is a predetermined set of activities in pursuit of a goal, then a logic model is a visual, systematic declaration of how a set of programmatic interventions aims to achieve specific results (Frumkin, 2006; WKKF, 2004). Kettner et al. (2017) claimed that a logic model “borrows concepts from systems theory to create and build upon a foundation in a way that helps to see the relationships between and among the resources invested, the services provided, and the results achieved” (p. 7). Therefore, a logic model is a visual representation of interrelated program aspects. To develop a logic model, there is a need to be clear about the ultimate impact the intervention is seeking (Frumkin, 2006; WKKF, 2004). Once the goal is clear, a logic model then determines the “causal linkages, which together articulate the steps that must be completed for an intervention to succeed” (Frumkin, 2006, p. 175). In addition to articulating the components of logic models, it is important to note that logic models are inherently open systems, for numerous inputs and the surrounding environment influence their ultimate impact (Frumkin, 2006). A logic model visualizes the relationships that influence a program’s outcomes. Furthermore, a logic model is a strategic manifestation of a theory of change (Frumkin, 2006; WKKF, 2004). A theory of change is the underlying assumption of interventions that will lead to a specific, alternative outcome (Frumkin, 2006; Tuck & Yang, 2014). Frumkin (2006) claimed that theories of change can range in scale from professional development programs for leaders to shifts in public policy, with each theory of change conditional on people as change agents. Frumkin (2006) also stated that “theories of change are the heart of logic models and strategy development” (p. 176). An example of a theory of change given by the researcher was dedicating resources into a summer institute or distance learning program with the expectation that those leaders would take “greater responsibility for shaping a field” and that properly equipped individuals within a system are able to change the system (p. 180).
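To make this causal chain concrete, the following sketch is offered only as an illustration, not as the organization’s actual logic model. It shows how the conventional logic model components of inputs, activities, outputs, outcomes, and impact described by WKKF (2004) might be captured as structured data; the entries are hypothetical placeholders that loosely paraphrase the program in this study. The sketch is written in Python.

from dataclasses import dataclass
from typing import List

@dataclass
class LogicModel:
    inputs: List[str]      # resources invested in the program
    activities: List[str]  # interventions delivered to participants
    outputs: List[str]     # direct, countable products of the activities
    outcomes: List[str]    # near- and mid-term changes in participants
    impact: str            # the long-term systemic change being pursued

# Hypothetical, simplified entries for a summer teacher leadership program
summer_program = LogicModel(
    inputs=["program staff", "curriculum", "virtual platform"],
    activities=["summer training sessions", "peer collaboration"],
    outputs=["teachers complete the summer training"],
    outcomes=["shifts in leadership knowledge and motivation"],
    impact="teachers collectively enact systemic change toward educational equity",
)

# Reading the model left to right surfaces the causal linkages an evaluation can test.
for stage in ("inputs", "activities", "outputs", "outcomes", "impact"):
    print(stage, "->", getattr(summer_program, stage))

Representing the model this way also makes explicit which links between activities, outcomes, and impact an evaluation’s questions are designed to probe.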
Evaluation An important component of program theory is the relationship between a problem, an underlying need to solve the problem, and a hypothesis about how to achieve the desired change through inputs and interventions (Kettner et al., 2017). To understand if a program achieved the desired change, there is a need to consider which components of the program will be measured and how the components will be measured when planning the program (Caffarella & Daffron, 2013; Kettner et al., 2017). When designing and executing a program, there is often more consideration about a program evaluation after the program ends than during the strategic planning or logic model planning phase, leading to a lack of clarity or misunderstanding of what change did or did not occur (Frumkin, 2006). Without planning for the program measurement in 38 tandem with implementation, there is a risk that the components required for measurement would not be collected (Caffarella & Daffron, 2013; Kettner et al., 2017; Rossi et al., 2004). Definition According to Kettner et al. (2017), the primary purposes of a program evaluation are to determine the results or outcomes of the program and to inform stakeholders if the program is effective as an intervention toward its theory of change. Therefore, a program evaluation is an episodic, systematic investigation to ascertain the effectiveness and impact of the program of study so that improvements can be made toward progress (Caffarella & Daffron, 2013; Kettner et al., 2017; Rossi et al., 2004). According to Rossi et al. (2004), there are five programmatic domains that an evaluation can investigate: (a) the underlying need, (b) the overall design, (c) the implementation or outputs, (d) the outcomes and impact, and (e) efficiency. An evaluation can focus on one or more of the five programmatic domains (Rossi et al., 2004). Evaluation plans also include specific questions to answer, methods to obtain the answers, and findings from the execution of the plan (Rossi et al., 2004). Additionally, an evaluation can include recommendations based on the findings (Kettner et al., 2017; Rossi et al., 2004). For any evaluation, there is a need to clearly define the domains, questions, methods, and findings for program stakeholders (Rossi et al., 2004). Evaluation and Program Theory If a program is a theory in action, then an evaluation is the test (Weiss, 1998). In other words, only an effective program can lead to its theory of change (Kettner et al., 2017). According to Frumkin (2006), “When done well, leadership development, training, and professional education programs can help build the human capital in a field, cultivate new skills, and motivate the people to continue working toward the missions that matter to them” (p. 180). 39 To know if a program is meeting its desired outcomes, there is a need to evaluate its effectiveness (Caffarella & Daffron, 2013; Kettner et al., 2017; Rossi et al., 2004; WKKF, 2004). Rossi et al. (2004) described a program’s theory as a key factor in evaluation design. The researchers further illustrated that “the more explicit and cogent the program conceptualization, the easier it will be for the evaluator to identify the program functions and effects on which the evaluation should focus” and, relatedly, planning for an evaluation in tandem can also “help sharpen and shape the program design to make it both more explicit and more likely to effectively achieve its objectives” (p. 44). 
When the program goals are clear, then data can be collected to evaluate the program (Caffarella & Daffron, 2013). Evaluation data can take many forms depending on the program of study. Common data collection methods include observations, interviews, and surveys (Caffarella & Daffron, 2013; Creswell & Creswell, 2018; Merriam & Tisdell, 2016; Patton, 2002; Robinson & Leonard, 2019). Once data is collected, it is then analyzed into findings and recommendations for program stakeholders (Kettner et al., 2017; Rossi et al., 2004). Leading Indicators Given that program outcomes and impact can take years to manifest, organizations can also consider early data known as leading indicators, or the near-term benchmarks that provide early evidence if the training and supporting efforts are on track to having a favorable impact (Robinson & Leonard, 2019). For example, an indicator of a program’s implementation success could be the satisfaction level program participants report upon completion (Rossi et al., 2004). Program participants’ perceptions on how the program has produced benefits in their lives could be another indicator (Rossi et al., 2004). Moreover, Rossi et al. (2004) described leading 40 indicators that are specific to the program objectives and theory as “an especially informative outcome monitoring system” when pursuing a program evaluation (p. 225). Importance of Evaluation in Pursuit of Systemic Change Given the immensity of any system, there is a need to be strategic when seeking systemic change (Ferris & Williams, 2010). As part of this strategy, Ferris and Williams (2010) recommend developing a theory of change, a logic model, and an evaluation plan. According to the researchers, by understanding the tenets of a system, creating a theory of change, developing a logic model, and evaluating the program’s effectiveness, those seeking systemic change can find greater leverage with their efforts to have the most impactful, sustainable change possible. Summary Education in the United States is a complex system that has historically and is currently producing inequitable outcomes for students of color and low-income students (Bai et al., 2021; Brown v. Board of Education, 1954; Casalaspi, 2017; Darling-Hammond, 1998; Irwin et al., 2021; Ladson-Billings, 2017, 2021; NAEP, 2019a, 2019b). Given the nature of open systems, the education system will continue to self-perpetuate in its current form until it is disrupted enough to produce different outcomes (Boulding, 1956; Ferris & Williams, 2010; Irwin et al., 2021; Ladson-Billings, 2017; Senge, 2006; Van Assche et al., 2019; von Bertalanffy, 1950, 1971). While solving the system’s inequitable outcomes remains indeterminate, the research demonstrated that those with the most proximity to student outcomes have the greatest potential for impact: teachers (Buchanan, 1992; Darling-Hammond, 1998). To increase a teacher’s ability to effect change within the system, there is a need to continue to develop their leadership (Darling-Hammond, 1998; Ferris & Williams, 2010; Senge, 2006). Programs are vehicles for change and are a way to develop people along the lines of the 41 program’s theory of change (Kettner et al., 2017; Newcomer et al., 2015; O’Connor & Netting, 2007). Teachers have immediate proximity to students and, therefore, are important enablers of systemic change. Given the problem a program is designed to solve, there is a need to ensure that the program is effective (Frumkin, 2006; Kettner et al., 2017; Rossi et al., 2004; WKKF, 2004). 
A program evaluation helps assess the effectiveness of a given program against its theory of change (Caffarella & Daffron, 2013; Frumkin, 2006; Kettner et al., 2017; Tuck & Yang, 2014; Weiss, 1998; WKKF, 2004). The following chapter details the methodology used to evaluate a program in pursuit of systemic change.
Chapter Three: Methodology
The purpose of this study was to examine the implementation of EEJI’s new summer training program and to measure the early outcomes through program participant data. The chapter provides an overview of the evaluation questions, study design, and researcher. Data sources including collection methods, instrumentation, participants, and analysis techniques are also described. The chapter concludes with a discussion of issues related to validity, reliability, and ethics. Evaluation Questions The evaluation focused on the following five evaluation questions:
1. To what extent, if at all, are participants satisfied with the summer training program?
2. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom?
3. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom?
4. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher?
5. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation toward enacting systemic change within education as a teacher?
Overview of Design The evaluation incorporated a mixed methods approach to develop a more holistic, nuanced understanding by leveraging both quantitative and qualitative data collection. The evaluation utilized a convergent mixed methods design, in which quantitative and qualitative data are collected concurrently or at roughly the same time, analyzed separately, and then compared to inform the results (Creswell & Creswell, 2018). The two data collection methods included surveying and interviewing the program participants. A survey was chosen as a data collection method given the ease of collecting cross-sectional data on constructs for program participants in a short amount of time (Creswell & Creswell, 2018; Robinson & Leonard, 2019). Robinson and Leonard (2019) described surveys as viable instruments to measure a respondent’s “attributes (e.g., demographic characteristics), behaviors, abilities (e.g., knowledge and skills), and thoughts (e.g., attitudes, beliefs, feelings, awareness, opinions, or preferences)” (p. 2). While surveys can also include open-ended questions to solicit qualitative responses, surveys are primarily quantitative, allowing for a timely, consistent, and structured data collection method (Creswell & Creswell, 2018; Robinson & Leonard, 2019). In addition to the survey, the program evaluation included interviews. Interviews were chosen as a data collection method because they allow a researcher to obtain information that cannot otherwise be acquired such as the participants’ feelings, interpretations, and thoughts of personal impact in their own words (Merriam & Tisdell, 2016). According to Patton (2002), interviews “allow us to enter into the other person’s perspective” (p. 341). The data collected through interviews contributed deeper insights into the quantitative data collected through the survey.
These two collection methods yielded the data needed for the analysis in Chapter Four. Table 1 describes the data sources for each evaluation question.
Table 1
Data Sources
EQ1: To what extent, if at all, are participants satisfied with the summer training program? (Data source: Participant surveys & interviews)
EQ2: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom? (Data source: Participant surveys & interviews)
EQ3: To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom? (Data source: Participant surveys & interviews)
EQ4: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher? (Data source: Participant surveys & interviews)
EQ5: To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation toward enacting systemic change within education as a teacher? (Data source: Participant surveys & interviews)
The Researcher
I was interested in understanding the effectiveness of a teacher leadership summer training program’s implementation and outcomes given its theory of systemic change. This program focused on developing aspiring anti-racist educators to lead rigorous, inclusive, and supportive classrooms. It is important to acknowledge that I was exploring this inquiry through specific lenses based on my salient and intersecting identities. As a White woman who has experienced economic mobility from low-income to middle-income class, I experience this world most frequently from a place of societal privilege. Privilege is defined as unique advantages granted to those considered superior (Merriam-Webster, n.d.-b). According to Morgan (1996), the combination of a person’s identities can include those that are considered dominant as well as those that are oppressed, creating a unique lived experience at those intersections. These intersections are critical to name, as they influence how I see both problems and solutions within and beyond work. Despite lacking acute awareness of its impact throughout my life, the identity that most prominently affects my experiences and worldview is my whiteness. My entire worldview based on my race has been relentlessly affirmed and has allowed me to feel “confident, comfortable, and oblivious” at the expense of those identifying as a different race who experience “hostility, distress, and violence” (McIntosh, 1988, p. 5). When this whiteness—a privileged identity—and my female gender—an oppressed identity—combine, they create an intersecting identity that White men and women of color do not experience (Morgan, 1996). This means that the experiences of whiteness are not universal to both White men and White women, and the experiences of being a woman are also not universal across races. The intersection itself creates a unique identity of which to experience and view the world. Furthermore, additional identities can continue to layer onto these intersections such as economic class. Despite growing up in a low-income class, I benefitted from my privileges—primarily my race. This has afforded me economic mobility from low-income to middle-income class status.
I share the low-income background with the students the organization of study seeks to support, but I do not share the experience of systemic racism of these same students, further demonstrating how the intersection of race and class yields different lived experiences. 46 As a researcher, it is also important to state that I work for the organization and led the program I studied. While the participants of the study itself were teachers and not employees of the organization, there was a need to communicate to the participants that my data collection methodology served dual purposes: for the organization and my own research. The leadership of my organization would ultimately decide whether to implement any of the recommendations based on my findings. My role as a researcher was to provide the recommendations, and my role as an employee would be to execute the final decisions of my organization’s leadership. When surveying and interviewing participants, there was also a need to consider how my positional power within the organization might impact how openly and honestly the participants were willing to share their thoughts about the program. This could be especially true for critical feedback. To mitigate this influence, it was important to remind the participants that their perspectives could lead to a stronger, more impactful iteration of the program for future teachers and therefore students. As a researcher, my understanding of problems, solutions, and the world itself are both my own evolving truth and incomplete. According to WKKF (2007), values, beliefs, and assumptions about a system and the need for change undergird the development of programs and their evaluations. In this research study, there was an inherent value that the education system should be equitable for all students and, therefore, requires systemic change. It is with these prominent identities and values that I reviewed the literature and will propose potential solutions. Data Sources The study’s mixed methods approach used two data sources for this evaluation: an end- of-program survey and participant interviews. All participants received the survey and time to complete it as part of their last day of programming. Interviews were conducted within three 47 weeks of the last day of the program. The data were collected from late July through early August 2022. This allowed for participants to complete the summer training program before sharing their perspectives about their program satisfaction and any shifts in knowledge or motivations related to student outcomes and systemic change in the role of teacher. End-of-Program Survey The first data source for the study was an end-of-program survey. Surveys are a data collection instrument containing a series of questions that define the distribution of variables across the population (Merriam & Tisdell, 2016; Robinson & Leonard, 2019). Surveys are a helpful data collection method when seeking information about a construct, or a concept a researcher is seeking to measure that cannot be directly observed such as satisfaction or intent to change (Robinson & Leonard, 2019). When conducting a program evaluation, surveys are also helpful in identifying indicators, or obtainable clues toward a goal’s progress (Robinson & Leonard, 2019). Given the study’s evaluation at the end of the program, the survey provided data that helped to understand both the constructs in the evaluation questions and the indicators toward program objectives. 
These data helped to determine if the program had contributed to lasting change for the participants. Participants The participants for this study were 83 teachers entering their 2nd through 5th year in the classroom who had completed the summer training program. These participants were recruited to the program because they were alumni of the organization’s initial teacher training program and had taught various subjects and grades across the United States. All recruited program participants were invited to participate in the study through an end-of-program survey, which is 48 known as a census, or when one seeks to survey the entire population in a setting (Robinson & Leonard, 2019). Of the 83 participants, 38 completed the end-of-program survey. Instrumentation The survey began with a brief explanation about the research study and that participation in the survey was optional. There were initial participant questions about the number of years in the classroom, which grades and subjects they taught, and which program track they participated in. The survey also included an additional 18 close-ended ordinal questions and seven open- ended questions for a total of 29 questions. These questions asked for the participants’ perspectives about their knowledge and motivations related to the program’s objectives as well as questions about their satisfaction with the program. The survey questions can be found in Appendix A. Data Collection Procedures The survey was administered to all program participants on the last day of the virtual programming, and there were 20 minutes of time scheduled into the last day for participants to take the survey online before closing out the program for the summer. Scheduling time within the program was an effort to support a higher completion rate and to help ensure reliable data collection. The survey was kept open for an additional 1.5 weeks after the end of the program. The program used Qualtrics (https://www.qualtrics.com) to administer the survey. Given that the survey was anonymous (outside of participants who volunteered to complete an interview), there was not a way to determine who of the 83 participants had completed the survey. Therefore, a general reminder was sent to all program participants to complete the survey during the window if they had not already. 49 Data Analysis The closed-ended survey questions were analyzed in Qualtrics using descriptive statistics. According to Salkind and Frey (2019), descriptive statistics include mean, mode, median, and standard deviation to help describe a data set. For this study, the median, mean, and standard deviation were used to describe the closed-ended data. The open-ended survey responses were analyzed alongside the qualitative interview data using a priori and axial coding strategies. Further detail on the qualitative data collection and analysis follows. Interviews The second data source for the study was interviews. Interviews are a data collection method that involves a purposeful conversation between the researcher and participant (Merriam & Tisdell, 2016). Interviews allow the researcher to collect information that cannot otherwise be observed such as a participant’s feelings, interpretations, reflections, and future intentions (Merriam & Tisdell, 2016; Patton, 2002). Given the study’s evaluation at the end of the program, the interviews provided more holistic data to understand if the program had led to change as outlined in the program’s logic model. 
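As a minimal illustration of the descriptive analysis described above for the closed-ended survey items, the following sketch is written in Python and uses entirely hypothetical responses on a 5-point scale rather than study data; the actual analysis was conducted in Qualtrics. It shows how a median, mean, and standard deviation can be computed for a single item.

import statistics

# Hypothetical responses to one closed-ended, 5-point ordinal item
# (1 = strongly disagree ... 5 = strongly agree); not actual study data.
item_responses = [4, 5, 3, 4, 5, 4, 2, 5, 4, 4]

print("n:", len(item_responses))
print("median:", statistics.median(item_responses))
print("mean:", round(statistics.mean(item_responses), 2))
print("sd:", round(statistics.stdev(item_responses), 2))  # sample standard deviation

The same three statistics, computed item by item for the 18 closed-ended questions, are the kind of descriptive results summarized in Chapter Four.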
Participants The interview participants for this study were a sample of the program participants who completed the end-of-program survey. Of the 38 teachers who took the survey, 11 completed a virtual interview. By leveraging a sample of end-of-program survey completers, the interview population can further support the triangulation of quantitative and qualitative data. Triangulation increases the validity of qualitative research by establishing themes that emerge across more than one data source (Creswell & Creswell, 2018). 50 Instrumentation The instrument used for the interviews was a semi-structured protocol. The interview protocol consisted of an opening statement shared with each participant that described the purpose of the study and asked permission to record the conversation. From there, the interview protocol consisted of 16 standardized questions with optional probes, as needed, and the interview questions can be found in Appendix B. The set of questions followed a standardized open-ended interview, allowing for consistency across participants to identify themes and to compare responses more easily (Patton, 2002). This approach allowed for uniformity across participants while also allowing the opportunity for participants to share other salient feedback not captured in the original questions. The protocol also included a closing script to thank participants for their time. Data Collection Procedures Each interview was conducted virtually using the technology Zoom. Calendar invites were sent to each participant for 60 minutes to ensure enough time to collect data on each question, and the average interview time across all 11 interviews was 31 minutes and 26 seconds. Each interview had the same interview protocol, or list of questions and space to take notes, to ensure consistency across participants (Creswell & Creswell, 2018). The interview protocol can be found in Appendix C. Each teacher granted permission to record the interview. Notes were typed in addition to the audio recording. After each interview was completed, Zoom transcribed the audio into a text file, which required manual updates to the text by the researcher to ensure accuracy. 51 Data Analysis Comparative analysis was used across the transcripts using codes to find similarities and differences in experiences (Gibbs, 2018). Coding is the process of analyzing text to identify places that describe similar ideas and then listing the ideas in a codebook for reference (DeCuir- Gunby et al., 2011; Gibbs, 2018). The codebook can be found in Appendix C. The coding process began with a priori codes, or codes that are predetermined based on the research questions and logic model, and then the transcripts were re-examined for axial codes, or codes that are refined and interconnected (Gibbs, 2018). From these codes, themes, or recurring concepts that emerge from analyzing data, were determined (Gibbs, 2018). The themes were used to generate findings for the evaluation questions. Validity and Reliability Given the mixed methods approach of the evaluation study, there is a need to address similar concepts using different language across quantitative and qualitative research. It is important to confirm that a research study’s instrumentation and processes are both valid and reliable, otherwise the findings would ultimately be inconclusive (Salkind & Frey, 2019). The first concept to address is validity. 
Validity (also referred to as credibility) is whether the research findings reflect the known reality (Merriam & Tisdell, 2016; Salkind & Frey, 2019). In other words, validity refers to the degree of accuracy of what the research is intending to understand (Robinson & Leonard, 2019). According to Creswell and Creswell (2018), validity is important to establish for one to “draw meaningful and useful inferences” that advance a field of research (p. 251). For the purposes of this mixed methods study, the term validity will be used. There were three strategies that were used to ensure the validity of the research findings. First, the researcher described her positionality to clarify potential bias, or how the researcher’s 52 background and experience might influence their findings (Creswell & Creswell, 2018). Bias was important to acknowledge given it could influence how one interprets reality and its relationship to the data (Merriam & Tisdell, 2016). The second strategy was that multiple data sources contributed to the study’s findings. Determining themes that appear in multiple data collection methods is known as triangulation, and this method helps ensure validity of the findings (Creswell & Creswell, 2018; Merriam & Tisdell, 2016). The study triangulated program participant reflections through closed-ended survey questions, open-ended survey questions, and interviews. Finally, it was important for the researcher to also include any data that did not fit within the emerging themes to support a more authentic and realistic data landscape (Merriam & Tisdell, 2016). According to Creswell and Creswell (2018), this is known as presenting negative or discrepant information. Chapter Four includes the perspectives of participant outliers within the findings. Another consideration for the evaluation’s validity is known as construct validity. Creswell and Creswell (2018) defined construct validity as when items within an instrument accurately measure the concept it seeks to measure, such as satisfaction. The content of the survey and interview instruments were drawn from research-based constructs. For example, Anderson et al. (2001) categorized knowledge, a construct, into four types: factual, conceptual, procedural, and metacognitive. These major types of knowledge range from concrete to abstract, and the research instruments sought to ascertain which, if any, levels of knowledge shifted due to program participation. Additionally, the survey and interviews leveraged questions based on expectancy theory and self-efficacy theory to determine any shift in a participant’s motivations. 53 In addition to the importance of validity and credibility to a research study, there is a need to also address the study’s reliability. The second concept to define that supports the study’s findings is reliability. Reliability (also referred to as dependability) is the replicability of the findings in a similar research study, or—given the nature of social science and the difficulty replicating exact human conditions—the likelihood that another researcher would replicate the same findings given the present data (Merriam & Tisdell, 2016; Salkind & Frey, 2019). In other words, reliability relates to the consistency of results given the same conditions (Robinson & Leonard, 2019). Given the importance of these tenets in confirming research findings (Salkind & Frey, 2019), the researcher sought to achieve both validity and reliability of the evaluation study through additional strategies. 
To support the overall validity and reliability of the evaluation study, there was a need to ensure alignment across the conceptual framework of study, data collection methodology, and evaluation questions. According to Sharpe (2011), program theory—the study’s conceptual framework—determines the objectives of the program, articulates the elements composing the program, and lays the foundation for a program evaluation. The program’s objectives were to shift novice teachers’ leadership knowledge and motivations toward both broader student outcomes and enacting systemic change, which also informed the study’s evaluation questions. From there, each evaluation question was probed through both surveys and interviews, and the items asked during the data collection methods were in pursuit of understanding if the program met its goals. This alignment supported the evaluation’s validity in that each element of the research design was aimed at measuring the same components. The alignment also supported the study’s reliability in that the overall design and execution were consistent across survey 54 instrumentation and data collection so that another researcher could replicate the same results in the same conditions. Ethics According to Glesne (2011), ethical considerations are inextricable from a researcher’s approach to data collection and interaction with participants. Before the research study could begin, approval from the university’s Internal Review Board (IRB) was required. The purpose of the IRB is “to assure, both in advance and by periodic review, that appropriate steps are taken to protect the rights and welfare of humans participating as subjects in the research” (Food and Drug Administration, 1998). The researcher took these ethical steps by submitting the study for and gaining IRB approval. Glesne (2011) defined five principles that IRBs look for to protect the rights and welfare of the study’s participants. The first basic principle of human subjects research is informed consent. Informed consent is ensuring that participants have the needed information to decide if they want to participate in a study and that participation in the research study is voluntary. This includes obtaining permission to record an interview and clarifying confidentiality of data collection. A second ethical consideration included advising participants that they may exit the study, without penalty, at any point. A third principle was that human subjects should be free from any unnecessary risks from participating in the study which includes securely storing data and keeping participant identities confidential. Fourth, the benefits of the research must outweigh the risks, and the final principle is that the researcher should be qualified to conduct the study. These research principles helped guide the IRB decision. In addition to obtaining IRB approval, there were additional ethical issues to consider such as who the research would benefit. Based on the researcher’s transformative orientation, the 55 research study intended to serve the interests of those who are affected by educational inequity. The pursuit of solving educational inequity would benefit society at large, and this research study explored the effectiveness of one potential programmatic solution. If a program is not effective according to its theory, those who spent their development time participating would have potentially missed the opportunity to participate in programming that would lead to more impactful change. 
Therefore, the results of the research study should be widely disseminated so that additional programs can either replicate positive impact or avoid the failures of a program's theory (Rossi et al., 2004). By sharing the study's results, the field of education is one step closer to achieving educational equity.

Chapter Four: Findings

The purpose of this study was to examine the implementation and early outcomes of a summer training program through program participant data. The goal of the summer training program was to enact the leadership of novice teachers in pursuit of systemic change in the U.S. educational system for low-income students and students of color. The study took a program theory approach and focused on evaluating the implementation outputs and early outcomes within a preset logic model. Program theory (also known as logic model or theory-based evaluation) examines how, why, and to what extent a program reaches its predetermined goals (Birckmayer & Weiss, 2000; Frumkin, 2006; WKKF, 2004). The study's evaluation questions guided the analysis of the program's implementation output and early outcome goals. In this chapter, the findings for each of the following five evaluation questions are discussed:

1. To what extent, if at all, are participants satisfied with the summer training program?
2. To what extent, if at all, does the summer training program shift novice teachers' leadership knowledge about broader student outcomes in and beyond the classroom?
3. To what extent, if at all, does the summer training program shift novice teachers' leadership motivation to pursue broader student outcomes in and beyond the classroom?
4. To what extent, if at all, does the summer training program shift novice teachers' leadership knowledge about enacting systemic change within education as a teacher?
5. To what extent, if at all, does the summer training program shift novice teachers' leadership motivation toward enacting systemic change within education as a teacher?

The findings are substantiated by interpreting data collected through an end-of-program survey, program participant interviews, and secondary data related to participant registration. The survey questions can be found in Appendix A, and the interview questions can be found in Appendix B. The chapter begins with a report on the overall program participants, the surveyed subset, and the interviewed subset. Next, the chapter examines the study findings organized by the following subsections: satisfaction, broader student outcomes, and the role of teacher in systemic change. Each subsection is organized by the corresponding evaluation questions. The chapter concludes with a summary of the findings.

Program Participants

The staff at EEJI recruited alumni of their initial teacher training program to participate in a new, virtual summer training program. The teachers volunteered to participate in order to develop the leadership required to interrupt the system of educational inequity. Eighty-three teachers participated in the summer training program. An analysis of secondary registration data provided insights about the program participants. Through examining the registration data, the 83 program participants were confirmed to be entering their 2nd through 5th year in the classroom. Overall, 63.9% of teachers identified as a person of color, 33.7% identified as not a person of color, and 2.4% preferred not to answer.
Specifically, participants identified racially as 41.0% Black, 27.7% White, 14.5% Latinx, 7.2% multi-racial, 4.8% Asian, 2.4% American Indian or Alaska Native, 1.2% other race, and 1.2% preferred not to answer. One in five participants identified as Hispanic. Related to gender, 89.2% identified as female, 7.2% as male, and 3.6% as non-binary. Finally, 60.2% of program participants identified as coming from a low-income background, 31.3% did not identify as coming from a low-income background, and 8.4% preferred not to answer. A description of the overall program participant demographics can be found in Table 2.

In addition to participant demographics, the secondary data provided further insights into which grade levels and subjects the participants taught as well as where the participants were located within the United States. The participants taught all grades ranging from Pre-K through 12th grade, and subjects taught included general elementary education; secondary math, English, science, and social studies; and various other subjects such as foreign language and art. Special education and English language development teaching placements were also represented. The participants were located in 25 states in addition to the District of Columbia.

Finally, the secondary data also confirmed the reasons teachers selected for attending the summer training program. The choices and frequencies can be found in Table 3. Participants were able to select as many choices as applicable and were then asked to select their most important reason for attending the summer training program from those they had originally selected. The most frequently selected reason was "I want to develop and/or grow in my teaching practice" (97.6% of participants), and 73.5% of participants identified it as their most important reason for attending.

Table 2

Overall Program Participant Demographics

Category                                  Number    Percent
Entering Teaching Year
  2                                           17      20.5
  3                                           20      24.1
  4                                           23      27.7
  5                                           23      27.7
Race/Ethnicity
  Black                                       34      41.0
  White                                       23      27.7
  Latinx a                                    12      14.5
  Multi-Racial                                 6       7.2
  Asian                                        4       4.8
  American Indian or Alaska Native             2       2.4
  Other                                        1       1.2
  I prefer not to answer                       1       1.2
Hispanic
  No                                          65      78.3
  Yes                                         17      20.5
  I prefer not to answer                       1       1.2
Gender
  Female                                      74      89.2
  Male                                         6       7.2
  Non-Binary                                   3       3.6
Low-Income Background
  Yes                                         50      60.2
  No                                          26      31.3
  I prefer not to answer                       7       8.4

Note. N = 83.
a Latinx is an evolving term proposed for the ethnicity representing those from Latin American countries (Salinas, 2020; Torres, 2018) and was used by the organization in the study.

Table 3

Reasons Teachers Selected for Attending Summer Training Program

Reason for attending                                               Teachers    Percent
I want to develop and/or grow in my teaching practice.                 81        97.6
The program content is applicable to my work.                          43        51.8
I want to learn with and from other EEJI teachers.                     42        50.6
The program is free.                                                   40        48.2
I want to grow my professional network.                                36        43.4
The virtual platform supports my professional development.             27        32.5
The timing of the program supports my professional development.        21        25.3
The rigor of the program supports my professional development.         20        24.1
I am interested in Digital Promise/Rollins Center.                     17        20.5
EEJI staff encouraged me to participate.                               13        15.7
I need ongoing professional development credits (i.e., CEUs).          10        12.0
Other                                                                   1         1.2

Note. N = 83. Some program participants selected multiple reasons for attending.

Surveyed Program Participants

Of the 83 program participants, 38 completed the end-of-program survey. This represents a survey response rate of 46%.
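As a point of reference for how these figures were derived, the short sketch below shows the underlying arithmetic: each percentage in Table 2 is a category count divided by the 83 total participants, and the response rate is the 38 completed surveys divided by the same total. The code is illustrative only and is not the study's analysis script.

```python
# Counts reported in Table 2 and the survey completion figure above (illustrative arithmetic only).
TOTAL_PARTICIPANTS = 83

race_counts = {
    "Black": 34,
    "White": 23,
    "Latinx": 12,
    "Multi-Racial": 6,
    "Asian": 4,
    "American Indian or Alaska Native": 2,
    "Other": 1,
    "I prefer not to answer": 1,
}

# Each percentage in Table 2 is the category count divided by 83, rounded to one decimal place.
for label, count in race_counts.items():
    print(f"{label}: {100 * count / TOTAL_PARTICIPANTS:.1f}%")

# Survey response rate: 38 of the 83 program participants completed the end-of-program survey.
surveys_completed = 38
print(f"Survey response rate: {100 * surveys_completed / TOTAL_PARTICIPANTS:.0f}%")  # prints 46%
```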
Table 4 describes which year of classroom teaching the surveyed program participants will begin in the fall, and Figure 2 describes the grades taught by surveyed program participants. There was a slight over-representation of teachers entering their 3rd and 4th year of teaching compared to the overall program participants, and no Pre-K teachers were part of the survey sample. Subjects taught were similar to the overall program population. In addition, teachers who provide specialty supports, such as special education and support for linguistically diverse students, were also represented.

Table 4

Surveyed Program Participants Entering Year of Classroom Teaching for Fall

Entering teaching year    Number    Percent
2                              7      18.4
3                             12      31.6
4                             13      34.2
5                              6      15.8

Note. N = 38.

Figure 2

Grades Taught by Surveyed Program Participants

Note. N = 38. Some program participants taught multiple grade levels.

Interviewed Program Participants

In addition to survey responses, 11 teachers participated in interviews, representing 13% of the program participants. The interviewed participants taught a variety of grade levels and subjects, with five teaching secondary students and six teaching elementary students. The teaching placements included two in special education, two in lower elementary, two in upper elementary, two in secondary humanities, two in secondary math, and one in secondary science. To maintain anonymity, no additional identifiable information will be shared about interviewed participants. The data collected through the end-of-program survey and participant interviews informed the findings of the study.

Findings

This section describes the findings for the study, organized by the following subsections: (a) satisfaction, (b) broader student outcomes, and (c) the role of teacher in systemic change. Each subsection is then organized by corresponding evaluation questions. A summary of the findings can be found in Table 5. Each finding is then discussed in further detail.

Table 5

Summary of Findings by Evaluation Question (EQ)

EQ1: To what extent, if at all, are participants satisfied with the summer training program?

Finding 1: Accountability in Tension with Flexibility
The logistics, content, and facilitation design of the program provided flexibility in training for the participants. While the quality of the program was strong overall, the increased program flexibility also had negative consequences on participant accountability.

Finding 2: Accessibility Issues in a Virtual Setting
The program provided synchronous and asynchronous content with a flexible, accessible approach. The flexibility of how and when to engage with course materials benefitted individual program participants; however, the program was ill-prepared to support Americans with Disabilities Act (ADA) accessibility in the virtual setting.

Finding 3: Limited Usefulness with Diverse Populations
To enact the leadership of novice teachers, there was a need for participants to find the program useful to their practice. The program was mostly useful to participants with opportunities to expand its applicability to additional marginalized populations.

EQ2: To what extent, if at all, does the summer training program shift novice teachers' leadership knowledge about broader student outcomes in and beyond the classroom?
Finding 4: Various Levels of Previous Knowledge
While the evidence suggested the program successfully aligned its design to support the expansion of knowledge about broader student outcomes, the program's ability to increase understanding with participants during the training varied according to participants' previous knowledge.

EQ3: To what extent, if at all, does the summer training program shift novice teachers' leadership motivation to pursue broader student outcomes in and beyond the classroom?

Finding 5: Barriers to Shifting Teaching Practice
Due to participation in the summer program, most teachers stated that they will seek to make shifts to various components of their teaching practice in pursuit of broader student outcomes. However, some program participants articulated that they either did not experience an increase in motivation or they did not know how to overcome certain barriers to implement shifts to their practice.

EQ4: To what extent, if at all, does the summer training program shift novice teachers' leadership knowledge about enacting systemic change within education as a teacher?

Finding 6: Inconsistent Demonstrated Learning
The program did not integrate the learning objectives related to understanding systems change and the role of a teacher throughout the design of the program as successfully as the broader student outcomes objectives. Participants varied in their ability to define and articulate systemic change, especially the role a teacher might play in this pursuit.

Finding 7: Confusion About How to Lead
There was demonstrated confusion by program participants related to which leadership actions would contribute to systemic change and which actions would support student learning outcomes within the classroom alone.

EQ5: To what extent, if at all, does the summer training program shift novice teachers' leadership motivation toward enacting systemic change within education as a teacher?

Finding 8: Stifled Motivation Toward Systems Change
Due to the lack of knowledge developed about systemic change throughout the program, teachers were unable to clearly define what they hoped to do differently at their schools in the fall related to systemic change and their role as a teacher.

Satisfaction

Evaluation Question 1: To what extent, if at all, are participants satisfied with the summer training program?

The purpose of this study was to examine select implementation outputs of the summer training program and to measure the early outcomes through program participant data. The study focused on participant satisfaction as the key implementation output, and the first evaluation question focused on the program participants' satisfaction with EEJI's new summer training program. The analysis assessed the satisfaction scores and Net Promoter Score (NPS) from the survey data. The analysis also included participant interview and registration data to inform the findings of the study. In the end-of-program survey, each satisfaction question had Likert responses ranging from 1 (strongly disagree) to 7 (strongly agree). A Likert scale is a common measure of respondent attitudes (Boone & Boone, 2012). According to the surveyed participants, 86.8% agreed or strongly agreed that they were satisfied with the program overall. The standard deviation, a measure of the average distance of scores from the mean in a data set (Salkind & Frey, 2019), was 1.6 for overall satisfaction. The overall satisfaction median score was 7.0, and the overall mean score was 6.1.
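Because the median (Mdn), mean (M), standard deviation (SD), and NPS recur throughout the tables in this chapter, the brief sketch below illustrates how such summaries can be computed. The response values are hypothetical placeholders rather than the study's data, and the sketch is not the analysis code used in the evaluation.

```python
import statistics

# Hypothetical Likert responses (1 = strongly disagree ... 7 = strongly agree); placeholder values only.
overall_satisfaction = [7, 7, 6, 5, 7, 3, 6, 7, 4, 7]

# Descriptive statistics of the kind reported in Tables 6 through 9.
mdn = statistics.median(overall_satisfaction)
m = statistics.mean(overall_satisfaction)
sd = statistics.stdev(overall_satisfaction)  # sample standard deviation
print(f"Mdn = {mdn:.1f}, M = {m:.1f}, SD = {sd:.1f}")

# Hypothetical 0-10 likelihood-to-recommend responses; placeholder values only.
recommend = [10, 9, 9, 8, 7, 10, 6, 9, 10, 5]

# Net Promoter Score: percentage of promoters (scores of 9-10) minus percentage of detractors (0-6).
promoters = sum(score >= 9 for score in recommend)
detractors = sum(score <= 6 for score in recommend)
nps = round(100 * (promoters - detractors) / len(recommend))
print(f"NPS = {nps}")
```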
The satisfaction scores can be found in Table 6.

Table 6

Medians, Means, and Standard Deviations for Satisfaction Scores

Survey item                                                                   n     Mdn     M      SD
I am satisfied with:
  The usefulness of the asynchronous modules offered before the program.       36    6.0     5.9    1.4
  The communications about the program logistics.                              38    7.0     6.3    1.4
  The schedule of the program.                                                 38    6.0     5.8    1.5
  The technology used for the program.                                         38    7.0     6.2    1.4
  The content delivered by external organizations.                             38    7.0     6.1    1.5
  The facilitation from external organizations.                                38    7.0     6.0    1.6
  The content delivered by EEJI.                                               35    6.0     5.7    1.5
  The facilitation from EEJI.                                                  35    6.0     6.0    1.2
  The program overall.                                                         38    7.0     6.1    1.6

In 2003, Frederick F. Reichheld of Bain and Company developed an indicator known as the NPS, and the indicator is considered by some as the "gold standard of customer experience metrics" (Qualtrics, n.d.-b). The NPS is a single question about the likelihood that one would recommend the organization or experience to a friend or colleague and has a scale of 0, or not at all likely, to 10, or extremely likely (Reichheld, 2003; Qualtrics, n.d.-a, n.d.-b). The indicator produces three clusters of customers: (a) those who logged a likelihood score of nine or ten are known as promoters, (b) those who gave a score of seven or eight are passively satisfied, and (c) those who gave a score of zero through six are known as detractors (Reichheld, 2003). According to Reichheld (2003), the promoter threshold is purposefully set high to avoid "grade inflation" (p. 51). The NPS is calculated by subtracting the percentage of detractors from the percentage of promoters to produce a number between –100 and 100, with a higher number indicating more promoters than detractors (Reichheld, 2003; Qualtrics, n.d.-a, n.d.-b). Concretely, an NPS score "above 0 is good, above 20 is favorable, above 50 is excellent, and above 80 is world class" (Qualtrics, n.d.-a). On the end-of-program survey, participants were asked, "How likely are you to recommend the program to a friend or colleague?" Of the 34 responses, 62% were promoters, 26% were passive, and 12% were detractors. Therefore, the NPS for the summer training program was 50, a highly favorable satisfaction score (Qualtrics, n.d.-a).

In addition to the survey data, 11 participants were interviewed and asked about their satisfaction with the program. One participant shared that "we're actually doing specific, actionable, research-based things. And so that was really what I wanted and was very happy that [the program] included that." Another stated that "it's probably some of the most robust training that I've received." The interview transcripts were triangulated with survey data throughout the data analysis. The secondary registration data included an open-ended question about what participants hoped to gain from the program, and the responses were compared to responses shared through the open-ended survey and interview questions. This comparison was also included in the satisfaction analysis. Given the data, the analysis concluded that program participants were satisfied with the summer training program to a great extent. In addition to this conclusion about overall program satisfaction, the analysis produced three specific findings related to improving participant satisfaction for the first evaluation question.

Finding 1: Accountability in Tension with Flexibility

The logistics, content, and facilitation design of the program provided flexibility in training for the participants.
While the quality of the program was strong overall, the increased program flexibility also had negative consequences on participant accountability. This section examines the finding related to participant accountability being in tension with program flexibility. The program’s flexible logistics were designed with potential summer teaching responsibilities and other circumstances in mind. Some program participants said that they had a summer teaching commitment, they were traveling during a portion of the program, or the scheduled synchronous meeting times did not work for their location. Given the various circumstances, EEJI attempted to take a logistically flexible approach to the program. The tenets of the flexible approach included recording the live, synchronous meetings so that participants could watch that evening or revisit at a later time. Another tenet was that program participation during the live sessions was optional—regardless of circumstances. And finally, there were no formal requirements in place to hold participants accountable for any asynchronous or synchronous material they did not complete. EEJI’s orientation to the flexible programming was that participants would yield learnings based on what they chose to complete. The flexible approach was beneficial to many program participants. Nine of 11 interviewed participants explicitly discussed the flexibility as beneficial, and 16 open-ended survey responses also positively mentioned the flexible approach. “I really liked that I did everything asynchronously. And so, I liked that it was super flexible that I could do that,” one teacher shared. In fact, the flexibility of the program allowed for some teachers to participate at all. “Even though I didn’t get to participate live,” one teacher described, “being able to listen to 69 other people practice and then kind of play along on my own and practice” proved to work asynchronously for the participant. Furthermore, another teacher shared: I got to do most of the sessions over zoom. Some of them I had to go back and watch the videos … because I am also going to school. … I did like that you were able to go back and watch the recordings and still be able to participate in that way. Because, like, I guess I was kind of expecting that you would have to attend each one all the way through. While the flexible approach was helpful to many program participants, the teachers also mentioned a negative aspect of the flexible logistics 15 times in the survey data. The flexible approach had unintended consequences on the depth of content covered, the extent of facilitator preparation and choices during live sessions, and the felt experience of participants who attended live sessions. The flexible design of the program limited the depth of content that could be covered by facilitators. In an attempt to schedule the synchronous sessions when various time zones across the United States could participate, the training days were reduced to four hours of live meetings and spread over the course of four weeks in the summer. Some participants shared that the content covered felt rushed at times, while others wished for fewer days: I’m not sure that it needs to be four weeks, but I do think that I understand why it was four weeks. They don’t want to take everybody’s day, but I would have been fine with like two weeks, full days. You know, let’s do an intensive and, you know, we have a product at the end of that. 
Additionally, the flexible attendance policy hindered the facilitators’ abilities to plan the synchronous sessions. While there were officially 83 program participants, there were days when only 30 or fewer teachers joined the live sessions making it challenging for facilitators to plan 70 the sessions. One participant wrote in their survey a reflection about the random group assignments each day given the fluctuating attendance: I wish more of an effort could have been made to group us by age group/content area. As nice as it was to meet so many different people, it also would have been helpful to get feedback from people who know what I’m teaching without me also having to explain the content I’m teaching. Given the evolving attendance, the facilitators could not plan and consistently group teachers together in meaningful ways. Finally, program participants were negatively impacted by the inconsistent attendance of program peers. While some teachers appreciated the flexibility and held themselves accountable to making up missed sessions, some participants did not stay current with the material. One teacher found no trouble in missing sessions and said, “People who weren’t able to watch a live recording or attended live zoom recording, you were able to kind of pick up on the conversation in the discussions,” while other participants shared a contrasting perspective on the same attendance flexibility. One participant described the impact on their experience: I think for some people, you know, they were busy, maybe they were working while they were doing it. So, there were times when I was ready to engage and talk about an idea. And during a small group, that person, they were like, “Hey, I’m not actually available to help or I’m at work or…” So, the only thing I would say is, it would be good if the program had a little bit more of a value attached to it. So, people aren’t just jumping into it. Another teacher stated a similar perspective, “The least helpful [aspect of the program] was the cohorts. I feel like they could have been helpful, but my group was not very talkative or 71 participatory.” The teachers who were unable to attend consistently appreciated the flexible attendance while those who attended consistently felt negatively impacted by their inconsistent peers. The survey and interview data indicated that participants were satisfied overall with the logistics, content, and facilitation of the program. However, while the quality of these program aspects was strong, the flexible logistics also had negative consequences. These consequences included unfavorable impacts on the content covered, facilitator preparation, and experience of the program participants during the live sessions due to fluctuations in attendance. Not only did the flexible logistics impact the quality of the program, but they also affected the accessibility of the materials due to the virtual nature of the program. Finding 2: Accessibility Issues in a Virtual Setting The program provided synchronous and asynchronous content with a flexible, accessible approach. The flexibility of how and when to engage with course materials benefitted individual program participants; however, the program was ill-prepared to support ADA accessibility in the virtual setting. This section delineates the finding related to accessibility issues in a virtual setting. In an attempt to provide maximum flexibility in programming, EEJI determined the new summer training program would be exclusively virtual. 
A virtual program would allow for eligible participants to attend from any location. Program staff would utilize websites and email to communicate updates about the training, and facilitators would utilize the video conferencing platform Zoom as their primary meeting space. Recordings of each meeting would be available on the program website each day, along with the corresponding slides and resources. These methods were selected to allow the greatest access to materials needed throughout the program.

Program participants varied in their satisfaction with accessing the required program materials. Six out of 11 interviewed participants explicitly discussed the favorable accessibility of the program, while three teachers described barriers in their experiences related to accessibility. One participant stated, "All of the accessibility, you know, the ADA issues should be there. Right? We shouldn't have to beg for them," regarding videos missing closed captions, text embedded in pictures preventing the use of screen readers, and other accessibility problems. Another participant felt that more effort could have been put into making the material accessible by clearly communicating directions in written form, sharing the reflection questions visually in the Zoom chat feature, and giving participants equal access to the slide decks.

The program was not fully designed to support ADA accessibility in a virtual setting. While many program participants found the materials and resources accessible, some teachers described the lack of accessibility as a barrier to their success in the summer program. Not only did the program design hinder participants with disabilities, but the evidence suggested that the program also struggled to address how to apply program concepts for students with disabilities and other marginalized populations.

Finding 3: Limited Usefulness with Diverse Populations

To enact the leadership of novice teachers, there was a need for participants to find the program useful to their practice. The program was mostly useful to participants, with opportunities to expand its applicability to additional marginalized populations. The following section discusses the limited usefulness of the program with diverse populations.

The mission of EEJI is to address systemic oppression leading to educational inequity by developing the leadership of new teachers. EEJI teachers predominantly and, at times, exclusively teach low-income students and students of color. Therefore, there is a need to design EEJI's ongoing training with marginalized populations in mind to maximize its usefulness in the classroom. Participants were asked the following survey questions related to their perceptions of program usefulness: (a) I found the content of the program useful and (b) I am confident in my ability to apply learnings from this program to my classroom practice. Of the surveyed participants, 85.7% agreed or strongly agreed that they found the content of the program useful. The median score for the survey item was 7.0, the mean was 6.1, and the standard deviation was 1.6. When asked if they were confident in their ability to apply learnings from this program to their classroom practice, 85.7% of surveyed teachers also agreed or strongly agreed. The item's median score was 6.0, the mean was 6.1, and the standard deviation was 1.4. The individual usefulness survey item scores can be found in Table 7.
Table 7

Medians, Means, and Standard Deviations for Usefulness Scores

Survey item                                                                                      n     Mdn     M      SD
I found the content of the program useful.                                                        35    7.0     6.1    1.6
I am confident in my ability to apply learnings from this program to my classroom practice.       35    6.0     6.1    1.4

Additionally, nine of the 11 interviewed participants discussed the program's usefulness in depth. One participant described the connection to their practice:

A lot of the times when we have those types of sessions, it's throwing the information at you and never an opportunity to use the information in real time. But this summer was different. Because we were able to use it in real time, we're able to get feedback from peers—people who are in the classroom the same time we are, who are teaching the same things that we are, the same subjects—that are going through some of the some of the things that we're going through socially, emotionally. That's the heavy usage of technology that we've come in to appealing to those students who are adjusting from the pandemic, who may still be virtual, but also who still may be in the classroom.

While many participants believed the program was useful overall, others believed there were specific, marginalized populations that were not supported by the program design. There were two populations of students that some of the program participants felt were not adequately addressed throughout the summer training: students in special education and secondary literacy students. One participant who supports students in special education said that the program "didn't have any programming whatsoever for marginalized populations, for disabled populations, and so on. It assumed a standard learner and went from there." Similar statements were echoed in the survey responses, such as the following:

They weren't at all prepared to handle questions from a SPED [special education] perspective such as how to teach reading to non-verbal populations or which programs work best for students with/at risk for emotional behavioral disorders… I was hoping that a SPED lens would be found somewhere… sadly, I found none. Thus, the work will just contribute to general knowledge, little of which is relevant to my context.

The evidence indicated that the summer program did not cover content related to students in special education to the extent that some program participants would have found useful. In addition to students in special education, the program did not sufficiently address, as advertised, secondary literacy students. According to a report from NAEP (2019b), only 37% of students who reached twelfth grade performed at or above reading proficiency and only 70% met the basic reading level or above. Program participants stated in both survey responses and interviews that they had hoped for more specific instruction to support reading proficiency in secondary grades. One teacher stated:

There were so many instances, that even with them saying that there were 6–12 teachers in the room, that they were making the assumption that we were general education teachers and trying to talk about, like how we could use this in a class of 25. … Every single 6–12 teacher I connected with, though, was an interventionist or a special education teacher and is not running a room of 25 where they can do small group time and circle time and all these things that are much more elementary focused.
… And now I have to create a whole new strategy to do this thing that’s really important that you taught me how to do, but the strategies you taught me to use for it are not applicable. Another teacher stated in the survey that the program was “defiantly geared towards younger grades.” While many program participants were satisfied with the content covered and usefulness of the material related to their classroom application, some teachers admitted that they were looking for more program content to support specific populations to further refine their classroom practice. 76 Overall, the evidence demonstrated that program participants were satisfied with the summer training program, and there were three specific findings that could further increase participant satisfaction. These findings were related to participant accountability being in tension with program flexibility, accessibility issues in a virtual setting, and limited usefulness of the program with diverse populations. The next section examines to what extent, if at all, the summer training program shifted novice teachers’ leadership knowledge and motivations about broader student outcomes in and beyond the classroom. Broader Student Outcomes In addition to evaluating the summer training program’s implementation output of satisfaction, the study focused on specific early outcomes within the logic model. The early outcomes included the following: (a) teachers could identify the tenets of broader student outcomes (i.e., knowledge about broader student outcomes) and (b) teachers could identify goals for their own practice by the end of the program (i.e., motivation to pursue broader student outcomes). This section examines the data and findings related to these two early outcome goals. The broader student outcomes evaluation questions, survey and interview analysis, and corresponding findings follow. Evaluation Question 2: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom? As part of the program’s theory of change, the summer training was designed to introduce EEJI’s research on which outcomes the staff believed were most important for teachers to cultivate to produce equitable life options for students. These outcomes included mastering traditional and contemporary subjects, social and emotional learning, leadership and life skills, 77 and social awareness and agency. Program participants were asked within the survey and during the interviews about the changes, if any, to their own understanding of broader student outcomes. Finding 4: Various Levels of Previous Knowledge While the evidence suggested the program successfully aligned its design to support the expansion of knowledge about broader student outcomes, the program’s ability to increase understanding with participants during the training varied according to participants’ previous knowledge. This section presents the finding related to participants’ various levels of previous knowledge about broader student outcomes. Program participants spanned four years of EEJI’s evolving initial training design, resulting in dissimilar baseline levels of broader student outcomes knowledge during the new summer training program. The differing knowledge levels led EEJI to facilitate optional, introductory sessions on broader student outcomes early in the program. 
Attendance for these sessions varied; some teachers who were already familiar with these student outcomes attended the sessions while other teachers who were unfamiliar did not attend. The disparate levels of previous exposure to broader student outcomes and the inconsistent attendance of programming led to varied levels of shifted broader student outcomes knowledge by the end of the program. In the program survey, participants were asked the following: Given the program, how has your understanding about broader student outcomes changed, if at all? Of the teachers who answered the survey item, there were 25 out of 30 open-ended responses describing how their understanding of the broader student outcomes had increased due to the program. For example, one participant shared: My understanding of broader student outcomes has expanded. I now understand that there are factors involved that affect students’ and teachers’ behaviors. These factors, if 78 addressed appropriately, can offer insight into students’ lives and mindsets. The good news is that undesirable outcomes can improve with the right approach. However, five of the 30 teachers who responded said that their understanding did not change as a result of the training program. One participant said, “It hasn’t changed or developed at all. I feel as if I’m receiving the same content in different words.” Another participant simply stated, “It hasn’t changed too much.” The survey demonstrated various levels of changed understanding by program participants. The same question was also asked during the participant interviews. In this case, all 11 teachers were able to articulate the tenets of broader student outcomes as defined by the program. One participant summarized their knowledge about broader student outcomes in the following way: I would say that a broader student outcome is an outcome that benefits the student and their lifetime success. So, it’s focusing on outcomes that benefit the lifespan of the student’s success. So, it’s not a short-term outcome. It’s a long-term outcome. So, you’re thinking about this child as an adult, you’re thinking about this child in their career field as a parent. So, you’re aligning yourself with the outcomes that will better suit them as a whole person. Another teacher synthesized their broader student outcomes knowledge as the “skills and knowledge we want students to have, regardless of what content we teach. … These are how we’re going to develop our students so that they are equipped to survive beyond school.” Each interviewed teacher gave specific examples of how their knowledge about broader student outcomes had increased as a result of the program. 79 Overall, the program was mostly successful in increasing program participants’ knowledge of broader student outcomes. The extent to which that knowledge increased, if at all, varied based on the differences in teachers’ baseline knowledge coming into the program. To achieve the program’s theory of change, not only do participants need to deepen their knowledge about broader student outcomes, but they also need to identify goals for putting the knowledge into practice. Evaluation Question 3: To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom? 
In addition to identifying the tenets of broader student outcomes, the logic model articulated another outcome required to achieve the program's theory of change: Teachers can identify goals for shifting their practice related to broader student outcomes by the end of the program. The purpose of measuring this outcome was to see if the training program had a direct impact on teachers' motivation to shift practice when they arrived back in their classrooms. Through the survey and interviews, participants were asked how the program's focus on broader student outcomes influenced, if at all, what they hoped to do in the classroom beginning in the fall.

Within the program survey, teachers were asked about their level of agreement with the following statement: I am confident that I can lead my students toward broader student outcomes. Of the surveyed teachers, 82.9% agreed or strongly agreed with the survey item. The median score for all teachers was 6.0, the mean score was also 6.0, and the standard deviation was 1.5. Teachers entering their second year in the classroom had both the lowest mean score at 5.3 and the largest standard deviation at 1.8. The survey scores can be found in Table 8.

Table 8

Broader Student Outcomes Scores by Number of Years in the Classroom

Survey item: I am confident that I can lead my students toward broader student outcomes.

Entering teaching year                   Mdn     M      SD
2                                         6.0    5.3    1.8
3                                         6.0    5.7    1.7
4                                         7.0    6.6    0.5
5                                         6.5    6.3    0.8
All program participant scores (2–5)      6.0    6.0    1.5

Note. n = 35.

In addition to the program survey, 11 teachers were asked during interviews about their confidence level in using what they had learned from the program by putting it into practice in the fall. Eight out of 11 participants specifically stated that they were confident they could apply what they learned about broader student outcomes to their practice. One teacher stated, "The broader student outcomes reminds me to think about your students. And so, think about them first, the ones that are in front of you—not the ones you experienced before, and not the ones you miss." Based on the survey and interview data, the program participants agreed overall that they felt confident about their leadership toward broader student outcomes as a result of the program.

Finding 5: Barriers to Shifting Teaching Practice

Due to participation in the summer program, most teachers stated that they will seek to make shifts to various components of their teaching practice in pursuit of broader student outcomes. However, some program participants articulated that they either did not experience an increase in motivation or they did not know how to overcome certain barriers to implement shifts to their practice. The following section elucidates some of the barriers participants anticipated in shifting their teaching practice in the fall.

Most of the surveyed and interviewed participants were able to clearly articulate how they planned to shift their teaching practice related to achieving broader student outcomes based on program participation. These shifts included collaborating with fellow teachers around curriculum, integrating more technology usage during lessons, and supporting students in self-directed learning. Within the survey, 26 out of 28 participants planned to make shifts to their teaching practice, and 10 out of 11 interviewed teachers also planned to shift as a result of the program.
One participant said, “I will start off the year with more restorative practices while also setting expectations at the same time,” as an example of a shift in practice based on learnings from the program. While the evidence suggested that most program participants were motivated to shift their teaching practices based on the program, there were others that did not describe an increase in their motivation. There were two outliers in the program survey that mentioned that they did not plan on shifting their practice related to broader student outcomes based on program participation, with one participant stating, “I care about these things because I think they’re important to student development as members of society, not because EEJI told me to.” This was echoed by one interviewed participant who described how they believed they had previously aligned their teaching practice to the broader student outcomes, so they did not experience a shift in motivation based on the program. The teacher stated: Honestly, I don’t know that the program’s focus on it [broader student outcomes] really shifted anything. Some of what was helpful was that those focuses were kept in mind while we were doing things, but they aligned enough with my beliefs as a teacher, that 82 it’s including those beliefs meant that I may be tuned into content more than I might have otherwise. If I felt like I was being presented content that violated those beliefs, I might have been more likely to check out. But I don’t know that the explicit inclusion of them affected what I wanted to do in my classroom based on what I learned. Additionally, within the survey and interviews, participants identified barriers to implementing their learnings that impacted their motivation to shift their practice. Some teachers shared that they did not have the capacity, network, or autonomy to make changes to curriculum within their schools and, therefore, did not have plans to make shifts based on the training. For example, one teacher disclosed the following related to aligning their teaching practice to broader student outcomes: What I wonder about is how practical that becomes during the school year, when I’m overwhelmed with everything else, especially where I am in a new district and in a new school. As well as the fact that because I’m new there, I don’t already really know the community. And I don’t already have like friends and coworkers that I know I can go to with that. So, it becomes finding people that I can bring on board with it as opposed to a bunch of people who also agreed to go through the process because they’re in the same training that I am. And so, it just becomes a bit more difficult to actually implement in that sense. Moreover, another participant noted that the program created a barrier to change by not focusing on what broader student outcomes could look like in practice for more subjects. They said: I feel like some of them [broader student outcomes] sometimes still feel harder to do in STEM than they do in humanities. And so, we were shown some ways where it’s like, 83 here’s a quick example of how you do that. But then, if this is my bio class, where it’s accountability, where my school expects us all to be doing more of the same thing? And so, I have less freedom to go ahead and just try something different because it’s what I think will work. 
Navigating the shifts in teaching practice upon returning to the school environment was not discussed during the program, nor was there a direct application of the broader student outcomes to a variety of subjects during the training. Overall, the evidence indicated that the program partially met its early outcome goal of teachers identifying plans to shift their practices in pursuit of broader student outcomes due to program participation. Most teachers were able to articulate specific steps they were planning to take as a result of the training, but some teachers felt they were already motivated before the program and did not experience a shift in motivation. A few outliers felt that there were barriers that they did not know how to navigate, impacting their motivation to make shifts to their practice. The data suggested that the overall confidence level of pursuing broader student outcomes in their classroom was strong. However, when participants were asked more specifically about their knowledge about broader student outcomes and their motivation to pursue them after the program, the analysis concluded that program participants increased their levels of knowledge and motivation to various extents depending on previous knowledge and unaddressed barriers. The study concluded that the program partially met its two goals related to broader student outcomes. Not only did the program have early outcome goals around knowledge and motivation related to broader student outcomes, but it also had goals related to a teacher’s impact beyond the 84 classroom. The program’s theory of change also included underlying assumptions that teachers can play a role in systemic change. The next section examines to what extent, if at all, the summer training program shifted novice teachers’ leadership knowledge and motivation about enacting systemic change within education as a teacher. Role of Teacher in Systemic Change First, the study evaluated the summer training program’s early implementation output of program satisfaction. Next, the program’s early outcomes related to broader student outcomes were assessed. Finally, the study examined two additional early outcomes within the logic model: (a) teachers could identify the tenets of systems-change leadership and (b) teachers could identify systems-change leadership goals for their own practice by the end of the program. This section examines the early outcomes related to systems-change knowledge and motivation as well as the three final findings the study produced. The two evaluation questions related to systemic change within education as a teacher, the survey and interview analysis, and the corresponding findings follow. Evaluation Question 4: To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher? In addition to introducing EEJI’s research on which outcomes the staff believed were most important for teachers to cultivate to produce equitable life options for students, the staff presented their research on the leadership they believed was required for teachers to pursue systems change with and for students. The staff shared their working definition of systems change and introduced their framework of leadership practices. The framework included concepts such as setting a clear vision, working individually and in coalitions, thinking and 85 acting strategically, learning continuously, and reimagining the world anew. 
The program also clarified the role a teacher could play in systemic change beyond the classroom. At the end of the program, participants were asked within the survey and during the interviews about the changes, if any, to their own understanding of enacting systemic change within education as a teacher. Finding 6: Inconsistent Demonstrated Learning The program did not integrate the learning objectives related to understanding systems change and the role of a teacher throughout the design of the program as successfully as the broader student outcomes objectives. Participants varied in their ability to define and articulate systemic change, especially the role a teacher might play in this pursuit. This section details the finding related to the inconsistent learning demonstrated by program participants related to systemic change. The staff introduced their research on both broader student outcomes and systems change in the first week of the summer training program. However, only the broader student outcomes content was integrated into the designs and discussions for weeks two and three of the program. Both frameworks were discussed again during the final week of programming. Of the interviewed participants, nine out of 11 could identify at least one of EEJI’s tenets of systems change or the role that a teacher could play within enacting systems change. One program participant reflected on their new knowledge about the potential role of a teacher and said, “I think that systems change could start with us, if we sort of are strategic about thinking about how much power we have.” Another teacher demonstrated their new systems change knowledge by stating: Something that I’ve been thinking a lot about is where I fit within my own system. So, the system of the school, the system of the district, and then broader within the 86 community, and, you know, farther out, and then thinking about what influences are working with me and against me. And what resources exist within the system… that I can work together to make systems change? While some participants clearly demonstrated their systems change knowledge, eight out of 11 interviewed teachers could not correctly identify all tenets of systems change or provided incorrect information alongside correct information. One participant confused the concept of systems change with their classroom-level systems. They incorrectly defined systems change as: Systems change is a change in process, like a change in the way that you tackle a content area or the way that you tackle a specific area in your classroom, whether that be management, classroom management, whether that be the way you deliver your lessons, I believe that a systems change is… a step-by-step system that you… trust it and you implement it, until I guess there’s some kind of mastery. And if there is no mastery, tweak a part of the system, but don’t completely destroy the system. Not only did the interview data illuminate confusion, but the survey data also exhibited disparate understanding about systems change. The survey data also demonstrated that participants varied in their ability to define the tenets of systems change or the role they may play as a teacher in pursuit of systemic change. Within the survey, 14 out of 28 participants who answered the survey item were able to articulate tenets of systems change or the role of teacher within, and the other 14 teachers could not identify the tenets or gave incorrect information. 
The responses ranged from “the teacher’s role is paramount to systems change,” which is a tenet of EEJI’s framework for systems change, to “I need to focus on what I can do or control within my classroom, and then I can expand beyond 87 it,” which is counter to the tenets of the program. Both the survey and interview data confirmed a range of understanding about systems change and the role a teacher could have in change efforts. The evidence suggested that the lack of systems change content delivery compared to the broader student outcomes content delivery led to varied participant ability to define and articulate systemic change by the end of the program, especially the role a teacher might play in this pursuit. Not only did the data demonstrate confusion about systems change, but it also illustrated additional confusion between the concepts of leadership toward broader student outcomes in the classroom and leadership toward systemic change. The next section describes an additional finding about the participants’ confusion between the program’s conceptual frameworks. Finding 7: Confusion About How to Lead There was demonstrated confusion by program participants related to which leadership actions would contribute to systemic change and which actions would support student learning outcomes within the classroom alone. The frameworks presented by EEJI during the summer training program distinguished between teacher actions that directly impact student outcomes and teacher actions that impact the education system more holistically. Not only did participants struggle to articulate new systems change knowledge in general, but there also seemed to be confusion in some participants’ understanding about which leadership choices would affect students and which would affect surrounding systems. This section describes the finding related to the participants’ confusion about how to lead within and beyond the classroom. When asked to explain how they would define systems change during an interview, one participant admitted, “I think that was the thing I was thinking about earlier,” referring to when 88 they were asked to define broader student outcomes. “Sorry, I mashed the two things together,” she added. Another teacher confused the two frameworks by stating: If I’m keeping in mind, the mindset that for me to be a teacher who is effecting systems change means that I need to focus on my classroom and not everything else about the system. I’d say that I’m at like, an eight out of 10. Again, you know, being able to go into it with that focus, instead of trying to feel stressed about everything under the sun, gives me more confidence in being able to then effect that change and create it within my classroom. The evidence across the interview and survey data demonstrated that some participants were conflating the frameworks. Within the survey, there were 28 examples of mentioning student-level outcomes (i.e., tenets of the broader student outcomes) when asked about systems change. For example, when asked about systems change, one participant stated, “The social emotional learning aspect is going to be huge for me this year,” an example of one of the broader student outcomes defined during the program, not the systems-change leadership framework. 
Another teacher shared that their understanding about the role of a teacher in systems change evolved in that "lessons need to incorporate different materials to teach children," which is an additional example of confusing broader student outcomes and systems-change leadership. Not only did the program participants demonstrate a lack of growth in systems change knowledge, but the data also suggested that there was now confusion about which leadership actions lead to specific outcomes. The evidence indicated that (a) the program did not reach its goal of expanding novice teachers' leadership knowledge about enacting systemic change within education as a teacher and, in fact, (b) confused some participants about their roles within education as a teacher.

Without the program meeting the early outcome goal of teachers identifying the tenets of systems change and what role a teacher could play in this pursuit, the final logic model goal, which was to build on new systems change knowledge, would be difficult to achieve. The final evaluation question related to potential shifts in teachers' motivation toward systemic change as a result of the program follows.

Evaluation Question 5: To what extent, if at all, does the summer training program shift novice teachers' leadership motivation toward enacting systemic change within education as a teacher?

The final early outcome under study from the program's logic model was that teachers could build upon their knowledge of enacting systemic change within education as a teacher by identifying specific goals for their own practice by the end of the program. The purpose of measuring this outcome was to see if the training program had a direct impact on teachers' motivation to shift their practices when they arrived back in their classrooms. At the end of the program, participants were asked how the program's focus on systemic change within education as a teacher influenced, if at all, what they hoped to do in their classrooms beginning in the fall.

Within the survey, program participants were asked about their level of agreement with the following statement: I am confident that I can lead toward systems change. Of the surveyed teachers, 77.1% agreed or strongly agreed with the survey item. The median across all teaching years was 6.0, and the mean was 5.8. The confidence level of teachers entering their second year in the classroom was the lowest, with a mean score of 5.1 and a standard deviation of 2.1. The standard deviation was 1.5 across all teaching experience levels. The survey scores can be found in Table 9.

Table 9

Systems Change Scores by Number of Years in the Classroom

Survey item: I am confident that I can lead toward systems change.

Entering teaching year                   Mdn     M      SD
2                                         6.0    5.1    2.1
3                                         6.0    5.7    1.6
4                                         6.0    6.1    0.7
5                                         6.5    6.3    0.8
All program participant scores (2–5)      6.0    5.8    1.5

Note. n = 35.

Like the broader student outcomes questions, the interviewed teachers were asked about their confidence level in using what they had learned from the program by putting it into practice in the fall. This time, only six out of 11 participants specifically stated that they were confident that they could apply what they learned about systems change and their role as a teacher to their practice.
One teacher admitted that, “I’m still somewhat stuck in the sense because—while I’m optimistic about wanting to make systems change—I do acknowledge that is way easier said than done.” Compared to the broader student outcomes findings, the data determined that program participants had less confidence and more confusion about their leadership as a teacher in pursuit of systems change as a result of the program. When teachers were asked more specifically about their knowledge related to systems change and the role of a teacher, the analysis determined that program participants did not consistently demonstrate understanding about systems change. Furthermore, when asked about their motivation to pursue systems change after the program, the teachers revealed confusion 91 about what they could do next. The next section describes the final finding about the participants’ stifled motivation toward systems change. Finding 8: Stifled Motivation Toward Systems Change Due to the lack of knowledge developed about systemic change throughout the program, teachers were unable to clearly define what they hoped to do differently at their schools in the fall related to systemic change and their role as a teacher. The evidence demonstrated that, unlike motivation gained about pursuing broader student outcomes through the program, participants did not gain the same extent of motivation about enacting systemic change in their role as a teacher. The following section expounds the final finding related to stifled motivation toward systems change. When asked about how the program’s focus on systems change influenced, if at all, what they hoped to do in their classrooms beginning in the fall, only four out of 28 survey responses had clear next steps as opposed to 24 survey responses that were unclear, confused, or did not articulate steps related to systems change. One participant demonstrated confusion by stating, “I feel more prepared to change the way I teach and create better results,” a shift in practice toward broader student outcomes instead of systems change. Another participant revealed their lack of motivation toward systems change: Honestly, a lot of it has shown me just how much is out of my control and yet how much is expected of me and other educators. I’ll apply what I can this school year but truthfully this is above us. There aren’t enough programs, stipends, or certificates that will change this. These survey results were in contrast with the 26 out of 28 teachers who had clear next steps to implement their learnings about broader student outcomes. 92 When the same question was asked about systems change during the 11 interviews, only four of the teachers were able to clearly articulate what they hoped to do differently in their schools in the fall. One participant shared: And there’s still things that I’m trying to figure out because I know that even though I know I’m doing what’s right, I might be viewed as someone who’s overzealous and needs to be kept in my own place, in a sense, and what that might mean for my own opportunities as a teacher. So, I think that’s something worth noting, when talking about system change where it’s like, yes, it’s possible. However, it’s a strategic way to go about it as well that I think I would be interested to even learn more about because, again, being honest, and being realistic. It’s not as straightforward as just saying, here’s the data showing your boss, showing your principal, and then they’re just like, wow, this is amazing. 
And they implement it—like it doesn’t work like that. Because there’s going to be extreme pushback, there’s always going to be well, this is the vision that we’ve seen, what you’re not seeing from this bird’s eye view that we have, and kind of, again, it’s always going to be a conflict of interest. Another teacher said, “There needs to be more explicit support in navigating that [systems change]. I guess navigating the systems that you’re a part of, but trying to change,” hoping for more support from EEJI. Based on the evidence, the summer training program produced little, if any, motivation toward enacting systemic change within education as a teacher. The program had varied success in shifting participants’ understanding about systemic change in education and the role of a teacher. By not achieving the knowledge early outcome goal for systemic change within the logic model, the program struggled to meet its motivation early outcome goal. The study concluded that the program did not meet its two goals related to 93 systems change and the role of teacher. Therefore, the analysis of the holistic program data determined that the program would not achieve its theory of change in its current iteration. Summary Based on the program participant data, the summer training program only partially met its intended goals defined within its logic model. The implementation goal related to participant satisfaction was partially met according to the findings. Finding 1 described how the program’s flexibility benefited some, but it also had negative consequences on participant accountability. Finding 2 illuminated how the program was ill-prepared to support ADA accessibility in the virtual setting, and Finding 3 concluded the program was mostly useful to participants with opportunities to expand its applicability to additional marginalized populations. Overall, the program partially met its logic model satisfaction goal. The organization’s early outcome goals related to broader student outcomes also varied in success level. The goal related to program participants deepening their knowledge about broader student outcomes varied based on participants’ previous knowledge according to Finding 4. Finding 5 stated that most teachers’ motivation to change their teaching practice based on their learnings increased, though some teachers did not know how to overcome certain barriers to implement shifts to their practice. The findings uncovered that the program only partially met its early outcome goals related to broader student outcomes. Finally, the evidence demonstrated that EEJI’s early outcome goals related to systems change were unsuccessful. Finding 6 highlighted that the program participants’ ability to define systems change, especially the role a teacher might play in this pursuit, was not developed as a result of the program. Not only were program participants unable to describe systemic change and their role as a teacher, but Finding 7 concluded that some program participants were now 94 confused about the concepts of systems change and their role as a teacher. Finally, Finding 8 confirmed that program participants were unable to clearly define what they hoped to do differently at their schools in the fall related to systemic change and their role as a teacher. According to these findings, the program did not meet its logic model early outcome goals of increasing knowledge and motivation about systems change and their role as a teacher as a result of the program. 
This chapter summarized the study’s findings about a new summer training program, organized by evaluation question. The study produced eight findings based on the organization’s pursuit of its implementation output and early outcome goals using a program theory-approach. The final chapter includes a discussion of the evaluation findings, the recommendations based on the findings, the limitations and delimitations of the study, the recommendations for future research and evaluation, and the conclusion of the dissertation. 95 Chapter Five: Recommendations The purpose of this study was to evaluate a summer training program that aimed to enact the leadership of novice teachers in pursuit of systemic change in the U.S. educational system for low-income students and students of color. Chapter Four discussed the findings for the following five evaluation questions: 1. To what extent, if at all, are participants satisfied with the summer training program? 2. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about broader student outcomes in and beyond the classroom? 3. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation to pursue broader student outcomes in and beyond the classroom? 4. To what extent, if at all, does the summer training program shift novice teachers’ leadership knowledge about enacting systemic change within education as a teacher? 5. To what extent, if at all, does the summer training program shift novice teachers’ leadership motivation toward enacting systemic change within education as a teacher? The findings were grounded in a program theory approach. Program theory evaluates the how, why, and extent a program reaches its intended goals (Birckmayer & Weiss, 2000; Frumkin, 2006; WKKF, 2004). The study focused on evaluating select implementation output and early outcome goals within the summer training program’s logic model. Chapter Five provides a discussion of the findings outlined in Chapter Four. A summary of the recommendations based on the findings can be found in Table 10. Next, the chapter includes recommendations for practice, limitations and delimitations of the study, and recommendations for future research and evaluation. The chapter ends with a conclusion of the dissertation. 96 Table 10 Summary of Recommendations Recommendation Description 1 Implement Accountability Strategies to Increase Participation 2 Ensure All Components of Program Are Accessible to All Learners 3 Expand Program’s Applicability to Diverse Populations 4 Increase Motivation of Program Learners 5 Equip Teachers with Change Strategies 6 Design Program with Clearer Learning Objectives and Assessments Discussion of Evaluation Findings The purpose of the study was to evaluate EEJI’s new summer training program rooted in a theory of change to help eliminate educational inequity. A theory of change is the underlying point-of-view and tenets of influence that identify an alternative outcome to a current state and articulate interventions to support the transformation (Frumkin, 2006; Tuck & Yang, 2014). The program’s theory of change was represented in a logic model, and the evaluation findings examined aspects of that logic model. The program was designed to realize EEJI’s theory of change by continuing the teacher leadership development of its early alumni teachers as change agents toward educational equity. 
Teacher leadership is the pursuit of student outcomes through continuous improvement, diverse coalitions, and change endeavors within and beyond the classroom (Nguyen et al., 2019; Wenner & Campbell, 2017; York-Barr & Duke, 2004). The training centered on two of EEJI’s frameworks: broader student outcomes and systems change in the role of teacher. Over the 97 course of four weeks, teachers entering their 2nd through 5th year in the classroom gathered virtually to continue their development in both knowledge and motivations toward student-level and systemic change. According to Wenner and Campbell (2017), teachers need ongoing support in the form of professional development to unleash their leadership to the end of student outcomes. By developing a logic model and evaluating the professional development against its goals, EEJI was able to understand the effectiveness of its first iteration of the program. The findings concluded that there were clear, identified successes and failures in the program execution. For example, the findings illuminated that most program participants were satisfied overall with the program. Specifically, the teachers were mostly satisfied with the logistics, content, and facilitation, though the flexibility of the program caused some dissatisfaction with peer accountability. The findings also revealed that there were areas of opportunity for accessibility within the virtual setting as well as a limited usefulness of the content with diverse populations. The findings also demonstrated that there was some increase in knowledge gained as a result of the program. Knowledge is information that is acquired, restructured, and utilized (Boulding, 1956; O’Dell & Hubert, 2011). Through the survey and interview data, the teachers described their learnings about broader student outcomes and—to some extent—systems change. They described shifts in their own understanding of social and emotional learning, contemporary subjects, and social awareness and agency. While some teachers were able to articulate education as a system and the role of a teacher within systemic change, most could not. This knowledge was critical to EEJI’s theory of change because to reimagine what is possible, there is a need to first understand the current system (Senge, 1990). Not only did the program participants gain some new knowledge, but they also demonstrated some extent of increased motivation. 98 By the end of the program, most teachers could articulate their motivation to make changes to their practice upon returning to their classrooms in the fall. According to Dweck (2017), motivation is a force that affects behavior. The participants shared specific examples of changes they planned to enact through the survey and interview data. These changes to practice included setting a clear vision for the classroom, altering their approach to lesson planning, and leveraging student voice during and beyond class. However, teachers were less likely to share specific examples related to their role in systemic change. According to NCES (2022), 44% of U.S. public schools reported that they are understaffed by at least one teaching vacancy, and for the schools reporting either teaching or broader staff vacancies, more than half are due to non-retirement resignations. With an already strained teaching force, there is a need to further support teachers as change agents by equipping them with the skillset to navigate barriers to change. 
Given that teachers were left unclear about how to lead toward systemic change back in their schools, the study concluded that the program was only partially effective in its objectives. According to Kettner et al. (2017), only an effective program can achieve its theory of change. The findings determined which aspects of the program were effective and should remain in future iterations of the program and highlighted aspects that were ineffective and should be redesigned based on recommendations. While there was some evidence that the program met or partially met some of its goals, the findings demonstrated that the program did not meet all its logic model goals and, therefore, would not achieve its theory of change in its current iteration. Without a program evaluation, the staff at EEJI might have assumed the program was on track to achieving its theory of change. According to Rossi et al. (2004), an organization could continue to exhaust limited resources without ever achieving impact without a program 99 evaluation. Additionally, evaluations clarify which aspects of a program are and are not achieving results, and a full program evaluation could even support other organizations seeking to attempt similar programming (Rossi et al., 2004). EEJI’s program evaluation was consequential because the staff believed in the program’s theory of change, and the organization was allocating its limited resources toward the program’s success. The program evaluation, findings, and recommendations were developed for EEJI staff to consider either executing a new iteration of the program or reallocating resources toward another effort in pursuit of its mission. Regardless of EEJI’s ultimate decision, there is a benefit to distributing the study’s results. By disseminating the findings from the study, other organizations could also attempt to support educators in pursuit of broader student outcomes and systems change as teachers. EEJI’s program logic model articulated the assumptions, resources, activities, outputs, outcomes, and impact that the organization believed would lead to its theory of change if all components were met. The evaluation findings determined that the program only partially met its output and early outcome goals examined within the scope of the study. Based on program theory, the training was ineffective and, therefore, would not achieve its theory of change in its current iteration. The research demonstrated that systems—such as the education system—are self-perpetuating and will not change without reacting to effective inputs (Boulding, 1956; Van Assche et al., 2019; von Bertalanffy, 1950, 1971). However, through the evaluation findings, there are clear recommendations for improvement that could lead to a more successful program design and execution if iterated upon. The following section defines the research-based recommendations and the implementation strategies that could support future iterations of the program in pursuit of its theory of change. 100 Recommendations The following section describes the six recommendations to emerge from the research study. The first recommendation is to implement accountability strategies to increase participation in the program, and the second recommendation is for program staff to ensure all components of the program are accessible to all learners. The third is to expand the program’s usefulness to diverse populations. 
The next two recommendations address the needs to increase motivation of program learners and equip them with change strategies. The final recommendation is for EEJI program staff to design the program with clearer learning objectives and assessments. Recommendation 1: Implement Accountability Strategies to Increase Participation The first recommendation is for EEJI to implement accountability strategies to increase program participation. The study found that the logistics, content, and facilitation design of the program provided flexibility in training for the participants. But while the quality of the program was strong overall, the increased program flexibility also had negative consequences on participant accountability. Accountability is a formal or informal contractual agreement between two parties to deliver on a commitment (Bregman, 2016; Hentschke & Wohlstetter, 2004; Olsen, 2014). Clear expectations and goals from the party responsible for supplying the goods or services support the success of the receiving party (Hentschke & Wohlstetter, 2004; Olsen, 2014). Accountability strategies include (a) clear and consistent communications, (b) follow-through on committed actions, (c) realistic assessments of capacity, (d) feedback between parties, (e) transparency about progress toward goals, and (f) defined consequences for breaking the contractual agreement (Bregman, 2016; Hentschke & Wohlstetter, 2004; Olsen, 2014). 101 The recommendation is for EEJI program staff to develop a program handbook that clearly defines expectations for participation during the program. To increase satisfaction of program participants in a flexible program, there is a need for the handbook to describe attendance policies, communication expectations, components of program completion, and consequences for not delivering on the program commitment. As part of the prework and orientation for the program, participants should review the handbook and sign a commitment to abide by the policies within as part of the requirements of the program. By enacting this recommendation, teachers will be clearer on expectations of successful program participation. Recommendation 2: Ensure All Components of Program Are Accessible to All Learners The second recommendation is to ensure all components of the summer training program are accessible to all learners. The program provided synchronous and asynchronous content with a flexible, accessible approach. The benefits of a virtual, flexible learning experience include (a) the removal of certain time constraints for asynchronous portions, (b) reduced costs to program and participants, and (c) the opportunity to progress through content at one’s own pace (Xie et al., 2020). The flexibility of how and when to engage with course materials benefitted individual program participants; however, the program was ill-prepared to support ADA accessibility in the virtual setting. Under Section 508 of the Rehabilitation Act of 1973, government organizations and organizations receiving federal funding must make electronic information accessible to those with disabilities (U.S. General Services Administration, 2022; Ableser & Moore, 2018). In addition to the legal requirements to support people with disabilities, there is also a universal design approach that encompasses the needs of multiple abilities and diverse learners known as Universal Design for Learning (Ableser & Moore, 2018; CAST, n.d.). 
Universal Design for 102 Learning has three guidelines to support successful implementation of universal design: (a) provide multiple means of engagement, (b) provide multiple means of representation, and (c) provide multiple means of action and expression (CAST, n.d.). By abiding by these guidelines, there is not only a greater likelihood of meeting the needs of people with disabilities but also a greater likelihood of meeting the needs of more diverse learners (Ableser & Moore, 2018; CAST, n.d.). The recommendation is to apply the Universal Design for Learning guidelines to future design iterations of the summer training program in an effort to better serve all learners. Specifically, it is recommended that the program offer multiple and customizable displays of information as well as alternatives for audio and visual program components. EEJI should also increase the number of methods participants can share their reflections and learning. These methods could include allowing participants to share through audio or video recordings, writing, or other visual means to demonstrate their understanding. By making these Universal Design for Learning adjustments, the program can meet more needs of learners through how they present content and receive communication from participants. Recommendation 3: Expand Program’s Applicability to Diverse Populations The third recommendation is to expand the program’s applicability to diverse populations. To enact the leadership of novice teachers, there was a need for participants to find the program useful to their practice. While the program was useful to participants generally, there were opportunities to expand its applicability to additional diverse populations. The lack of consideration for diverse populations within the program demonstrated misalignment with the organization’s stated mission that is rooted in diversity. 103 Diversity is the make-up of individuals across lines of race, ethnicity, socio-economic status, ability, etc. that form a group of people—such as employees within an organization or students within a school—and recognizes the heterogeneity of the group as an asset in lieu of homogeneity (Bolger, n.d.). According to the participants, the program did not include content that was applicable to the diverse students they teach, making the professional development less relevant and helpful. Without addressing the lack of considerations for diverse populations, an organization can perpetuate the very inequities its mission is seeking to combat (Hawkins, 2014). The third recommendation to increase participant satisfaction is to ensure applicability of content to diverse student populations. There is a need for program staff to compare the content of the program they design against the specific language of its diversity-centered mission. EEJI should ensure that the program is relevant for various races, ethnicities, and—its largest opportunity according to program participants—abilities. If the content of the program is designed to be relevant to the unique strengths and challenges of diverse populations, then program satisfaction will increase according to the study’s findings. Recommendation 4: Increase Motivation of Program Learners The fourth recommendation is to increase the motivation of program learners. 
While the evidence suggested that the program successfully aligned its design to support the expansion of knowledge—especially about broader student outcomes—the program’s design alone did not guarantee an increased understanding during the training due to participants’ varied previous knowledge. Given the various levels of previous knowledge, there is a need for the program to better support teachers’ motivation to gain understanding from the program regardless of their entry-level knowledge. The staff can support increased motivation by first grounding their programmatic choices in social cognitive theory. 104 Developed by Albert Bandura, social cognitive theory describes the reciprocal causation between one’s social environment, behavior, and mental processes and its effect on human learning (Schunk & Usher, 2019). According to Schunk & Usher (2019), this triadic reciprocity and its inclusion of the mental processes is what distinguishes this learning theory from its behaviorism predecessor, which emphasized that behavior was dependent on environment and its reinforcements. The researchers also described that learning could happen enactively (by doing the tasks themselves) or vicariously (though observing through models such as peers, television, and the world around them). Another key tenet of social cognitive theory is a learner’s ability to self-regulate, or to manage themselves purposefully through sustained choices toward their desired goals (Schunk & Usher, 2019; Medina et al., 2017). Finally, Bandura named that an important motivational factor in pursuing one’s goals through learning is self-efficacy, or one’s believed competence in their ability to learn and function at meaningful levels (Bandura, 2000). Due to the lack of knowledge developed about systemic change throughout the program, teachers were also unable to clearly define what they hoped to do differently at their schools in the fall related to systemic change and their role as a teacher. In addition to social cognitive theory, EEJI can ensure clarity about how teachers can enact systems change concretely through expectancy-value theory. Expectancy-value theory (EEVT), developed by Eccles and colleagues, describes the relationship between a person’s expectations of a task outcome and the value they ascribe to the completion of the task (Wigfield et al., 2018). An expectation, or expectancy, describes the outcome one believes will ensue due to completing a task, and the value assigns the level of importance to the outcome (Westaby, 2002). To delineate expectancies and values further, outcomes expectancies are a person’s beliefs that tasks lead directly to results whereas efficacy expectancies are a person’s beliefs that their efforts and abilities lead to specific results 105 (Wigfield et al., 2018). EEVT postulates that the overall assessment of a task’s value relates to three components: (a) attainment value, or how consequential one regards the task to be; (b) intrinsic value, or how fulfilling one perceives a task to be; and (c) utility value, or how helpful completing this task is to one’s objectives (Wigfield et al., 2018). Social cognitive theory and expectancy-value theory are suitable lenses to address this finding. Program facilitators can prompt reflections from participants related to their expectancies associated with their own competence and self-efficacy for new learnings. 
Additionally, systems change requires leaders to reflect on their values regarding the (a) perceived importance about completing new tasks in pursuit of change, (b) personal enjoyment or fulfillment in new task opportunities during change, and (c) alleged helpfulness of any new tasks required of them due to their expanded understanding of their role as a teacher. By using social cognitive theory and expectancy-value theory in the program design, EEJI can better understand how teachers are or are not motivated by the outcomes intended from the summer training program. Furthermore, this understanding can inform strategies throughout the program to support the success of the logic model’s short-term outcomes. Practically, EEJI can increase the motivation of its program participants through a few strategies. Teachers can first assess their own knowledge entering the summer training through a pre-knowledge survey, consultation with staff, or a review of the program’s material on their own. Once program participants understand their grasp of the content before entering the program, then they can set learning goals for where they wish to be by the end of the program. Once participants are clear about how much they want to learn based on where they are starting, then they can define key learning milestones and determine learning strategies that will work for 106 them. The program can integrate these motivation strategies into the prework for the program as well as including time during the program for teachers to reflect on their own progress. Recommendation 5: Equip Teachers with Change Strategies The fifth recommendation is to equip teachers with change strategies. There was demonstrated confusion by program participants related to which leadership strategies would contribute to systemic change and which would support student learning outcomes within the classroom alone. Helping teachers understand the unique type of leadership to pursue and achieve systemic change was a missed opportunity during the program implementation. One leadership model that can support change is transformational leadership. Transformational leadership describes “the process of how certain leaders are able to inspire followers to accomplish great things” (Northouse, 2019, p. 192). Kouzes and Posner (2017) described five behaviors that transformational leaders demonstrate. First, leaders define their own values, hold themselves accountable, and lead by example. Second, they compel others to follow a collective vision for their work and inspire staff to pursue their own goals. Next, leaders think expansively and question the current status quo as needed, as they are unafraid to innovate and continuously learn and grow from failure. Fourth, leaders lay the conditions for others to accomplish goals such as establishing trust, teamwork, and a sense of comradery in collective ownership. And the fifth practice of a transformational leader is inspiring others through recognition, appreciation, and praise while encouraging followers to continue to produce strong results. For EEJI’s summer training program, there is a need to further focus on what constitutes leadership and what it looks like for each teacher. Building in reflective time throughout the summer training program will allow for teachers to determine their own values around leadership 107 and what they want to be true for their classroom, school, and community impact. 
Through this leadership exploration, the program participants will leave clearer about how to transform the systems that hinder educational equity for students. In addition to program participants reflecting on their own leadership, there is a need to support them with change management strategies. While most teachers stated they will seek to make shifts to various components of their teaching practice in pursuit of broader student outcomes, some program participants articulated that they either did not experience an increase in motivation or they did not know how to overcome certain barriers to implementing the shifts back in their school systems. To increase motivation to enact change, it is recommended that the program also includes content about change management to build on a leader’s transformational orientations. While there are various change models to consider, John Kotter’s Eight-Step Process for Leading Change can be transferred to school environments. In this change model, Kotter (2017) stated that the first step to enacting change is building a sense of urgency around the change. To build momentum that the change is necessary, there is a need to articulate the case for change in a meaningful way. The next step in Kotter’s change model is to create a guiding coalition (Kotter, 2007, 2017). This group includes staff that hold influential power, though not necessarily due to positional power. The coalition supports the change by leveraging their influence with their teams and peers. Since the group operates outside of the normal hierarchy, the coalition facilitates change given “reform generally demands activity outside of formal boundaries, expectations, and protocol” (Kotter, 2007, p. 98). 108 The third step in Kotter’s change model is to develop a vision and strategy (Kotter, 2007, 2017). In this step, the leaders within an organization clarify the new direction and the plan to achieve the new vision. This ensures that staff are working toward a common goal once they are motivated to do so. In the fourth step, Kotter (2017) emphasized the need to communicate the change vision to staff and to do so frequently. The researcher asserted that, “Employees will not make sacrifices, even if they are unhappy with the status quo, unless they believe that useful change is possible. Without credible communication, and a lot of it, the hearts and minds of the troops are never captured” (Kotter, 2007, p. 100). This step suggests that an organization uses every opportunity to communicate the vision and the corresponding strategies while also leveraging the guiding coalition to model the new desired behaviors (Kotter, 2007). The change model continues with step five, the need to empower employees for broad- based action (Kotter, 2017). To encourage employees to enact change, the model recommends the following strategies: (a) remove the obstacles preventing the change from succeeding, (b) align the organization’s systems and structures that hinder the vision, and (c) encourage employees for promoting new ideas and taking risks in line with the new vision (Kotter, 2007). Once employees understand the vision and begin to act, there is a need for employees to observe progress through short-term wins. Planning for short-term wins is the sixth step in the change model (Kotter, 2007, 2017). During this step, organizations should purposefully plan for these wins and not leave them to chance (Kotter, 2007). 
To do this, there is a need for an organization to plan for what a demonstrated win would look like, execute that plan, and reward all employees involved with 109 that win (Kotter, 2007, 2017). Without short-term wins, change efforts can lose momentum and eventually reverse course (Kotter, 2007). Another step in the model supports forward momentum through consolidating gains and producing more change (Kotter, 2017). In this step, organizations must be cautious against declaring victory too soon and, instead, build on the short-term wins to address even larger issues (Kotter, 2007). There is a need to continue building upon change until it is anchored into the organization’s culture. The final step in Kotter’s change model is anchoring new approaches in organizational culture (Kotter, 2007, 2017). According to Kotter (2007), “Until new behaviors are rooted in social norms and shared values, they are subject to degradation as soon as the pressure for change is removed” (p. 103). Strategies to anchor new approaches include making connections between new staff behaviors and organizational success as well as promoting leaders who demonstrate the desired behaviors. Through Kotter’s Eight-Step Process for Leading Change, leaders in organizations can pursue and achieve required change. EEJI’s summer training program calls on its participants to make changes back in their schools and communities upon completion of the program. Teachers, however, demonstrated a lack of motivation and intimidation by enacting change in their own contexts. By exposing teachers to Kotter’s change model, they will be better equipped with concrete strategies to lead change back at their schools. The recommendation is for EEJI to integrate Kotter’s Eight-Step Process for Leading Change into the content for the summer training program. Specifically, the integration would be supplementary reading material and an additional session in the program design, extending one of the program days. Not only do teachers need exposure to the steps of change, but they also 110 need support in creating their own plans. It is recommended that teachers have the option to sign up with staff to review their proposed plans. By supporting teachers with both knowledge about and plans for implementing change alongside programming about broader student outcomes and systems change, program participants will be better equipped to take their program learnings and enact change. Recommendation 6: Design Program with Clearer Learning Objectives and Assessments The sixth recommendation is to design the program with clearer learning objectives and assessments. The program did not integrate clear learning objectives related to understanding systems change and the role of a teacher throughout the design of the program as successfully as the broader student outcomes objectives. Participants varied in their ability to define and articulate systemic change, especially the role a teacher might play in this pursuit. Learning is the new knowledge, skill, attitude, confidence, and commitment required to enact a change, and an objective is the specific expectation the participant will know or do after a learning experience occurred (Kirkpatrick & Kirkpatrick, 2016). Utilizing a revision of Bloom’s taxonomy for learning objectives by Anderson et al. (2001), there are four types of knowledge a training program can seek to embed in participants. 
These four types of knowledge fall on a continuum from concrete to abstract and include factual, conceptual, procedural, and metacognitive (Anderson et al., 2001). The recommendation is for EEJI to clearly define and assess learning that includes factual, conceptual, procedural, and metacognitive objectives for its systems change content to better align with the objectives for broader student outcomes. The learning objectives would be achieved through a series of three asynchronous modules followed by two facilitated, synchronous sessions attended by program participants. At the beginning of the first 111 asynchronous module, a pre-test of current knowledge, skills, and motivation would be assessed as a baseline to measure learning growth during the program. The pre-survey would ask the same questions as the mid-survey and post-survey. Each module would take 45–60 minutes to complete. After successful completion of the asynchronous modules, the mid-survey would be administered to participants. The mid-survey results would help inform the final design of the two 90-minute synchronous sessions facilitated by EEJI. The formative assessment data would determine which of the factual and conceptual objectives need further discussion in addition to informing the designs for procedural and metacognitive meaning-making. These sessions should include 45- minutes of small group workshop time to solidify the learning. After the synchronous sessions are completed, the post-survey summative assessment should be administered to training participants. This post-survey would help determine if the teachers met the learning objectives that would lead to the outcomes and impact described in the logic model. Designing the program with clearer learning objectives and assessments would better support EEJI in pursuit of the program’s theory of change. Limitations and Delimitations Each decision a researcher makes when conducting a research study ultimately leads to both limitations and delimitations. The study had inherent limitations including the diversity of those who sign up to participate in the program, the attrition of participants throughout the study, and the truthfulness of the respondents during the data collection process. Another limitation was the virtual nature of both the programming and data collection. In addition to the limitations of the study, there were also delimitations that impacted the data collection and analysis. First, the researcher decided to eliminate pre- and mid-surveys from 112 the methodology and only survey participants at the end of the summer training program. The decision was made given survey fatigue, or when respondents become tired or uninvested in completing a survey, that can lower the quality of the results (Robinson & Leonard, 2019). Second, only a subset of the original participants was able to participate in interviews during the limited timeframe, and the decision to accept a subset was made given the limitations of the researcher’s capacity and timing of the study. Finally, the items within each data collection instrument were a natural delimitation determined by the researcher, for there was a need to be strategic in developing and selecting the most aligned question items to obtain valid findings (Salkind & Frey, 2019). Recommendations for Future Research and Evaluation There is a need for future research about the role of teachers in dismantling systems that produce an inequitable education for students. 
The scope of this study was limited to the outputs and early outcomes of teachers’ knowledge and motivations toward broader student outcomes and systems change, and there is a need to study both the longer-term outcomes and eventual impact teachers might have in their classrooms and school systems. Not only is there a recommendation for longitudinal evaluation on the impact of these frameworks, but there are additional recommendations for future research. Future studies could include an examination of the knowledge teachers across tenures have toward education as a system and their expectancies for creating change within the system. Researchers could also study which knowledge and motivations accelerate the rate of change catalyzed by teacher leadership. Furthermore, continued research could investigate the relationship between a teacher’s orientations toward systemic change and retention in the 113 classroom. By furthering research about the role of teachers in systems change, the field may better understand how to disrupt a perpetuating system. In addition to research about the role of teachers in dismantling inequitable systems, there is a need for further evaluation of programs designed to develop systems-change leadership in teachers. The study was limited to early career teachers, and there are outstanding questions about when systems-change leadership programming should be introduced and how teachers might continue to develop beyond introduction. Further research could also examine the outcomes of programming aimed at various tenures of teaching beyond novice teachers. A program designed using a logic model has a theory of change guiding it, and there is a need to better understand a logic model’s long-term impact of a program in pursuit of systems change. The evaluation of a new summer training program using a logic model with an educational equity theory of change only began to examine the role a teacher may have in producing change inside and outside of the classroom. The program partially met the logic model goals in its pilot run, and the study produced the findings and recommendations for the program to evolve and to advance toward achieving its theory of change. Without continuing research, programs designed to support teachers in pursuit of systemic change may continue to fail and leave students no closer to an equitable education. Conclusion Education in the United States is a complex system that continues to disproportionately impact historically marginalized students in deleterious ways with great magnitude. Systems continue to perpetuate an equilibrium—even if adversely affecting specific populations—until they are dismantled. Educational inequity will continue to produce unequal access to opportunities, and students will be limited in options that lead to a life full of possibilities. 114 An alternative future is not only possible but is critical. An increasingly global community depends on students fulfilling their potential. Teachers have the closest proximity to students and are, therefore, the key to unlocking a more equitable society. When given the resources and opportunity to lead—both inside and outside of the classroom—teachers can disrupt the status quo and provide proof points of what is possible. It will take a broad and diverse coalition of leaders to dismantle an inequitable education system, and teachers continue to be an undervalued and underutilized partner in this coalition. 
By including and accelerating the leadership of teachers, it is possible to interrupt inequitable systems. It is possible to redefine, reimagine, and rebuild a more equitable society. 115 References Ableser, J., & Moore, C. (2018, September 10). Universal Design for Learning and digital accessibility: Compatible partners or a conflicted marriage? EDUCAUSE Review. https://er.educause.edu/articles/2018/9/universal-design-for-learning-and-digital- accessibility-compatible-partners-or-a-conflicted-marriage Amadeo, K. (2021, May 24). What is educational equity and why does it matter? Retrieved July 31, 2021, from https://www.thebalance.com/equity-in-education-4164737 Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (Complete ed.). Longman. Bai, Y., Straus, S., & Broer, M. (2021). U.S. national and state trends in educational inequality due to socioeconomic status: Evidence from the 2003–17 NAEP [AIR-NAEP Working Paper #2021-01]. American Institutes for Research. Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in Psychological Science: A Journal of the American Psychological Society, 9(3), 75–78. https://doi.org/10.1111/1467-8721.00064 Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in practice: What do we learn? Evaluation Review, 24(4), 407–431. https://doi.org/10.1177/0193841X0002400404 Bogler, M. (n.d.). What’s the difference between diversity, inclusion, and equity? General Assembly Blog. https://generalassemb.ly/blog/diversity-inclusion-equity-differences-in- meaning/ 116 Bolman, L., & Deal, T. (2017). Reframing organizations artistry, choice and leadership (6th ed.). Jossey-Bass. Boone, H. N., & Boone, D. A. (2012). Analyzing Likert data. Journal of Extension, 50(2). Boulding, K. E. (1956). General systems theory—The skeleton of science. Management Science, 2(3), 197–208. https://doi.org/10.1287/mnsc.2.3.197 Bowes, J. P. (2016). US expansion and its consequences, 1815–1890. In F. E. Hoxie (Ed.), The Oxford handbook of American Indian history (1st ed., pp. 93–110). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199858897.001.0001 Bregman, P. (2016, January 11). The right way to hold people accountable. Harvard Business Review. https://hbr.org/2016/01/the-right-way-to-hold-people-accountable Brown v. Board of Education, 347 U.S. 483 (1954). https://www.loc.gov/item/usrep347483/ Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5–21. https://doi.org/10.2307/1511637 Caffarella, R. S., & Daffron, S. R. (2013). Planning programs for adult learners: A practical guide (3rd ed.). Jossey-Bass. Casalaspi, D. (2017). The making of a “legislative miracle”: The Elementary and Secondary Education Act of 1965. History of Education Quarterly, 57(2), 247–277. https://doi.org/10.1017/heq.2017.4 CAST. (n.d.). About Universal Design for Learning (UDL). https://www.cast.org/impact/universal-design-for-learning-udl Caulfield, J. L., & Brenner, E. F. (2020). Resolving complex community problems: Applying collective leadership and Kotter’s change model to wicked problems within social system 117 networks. Nonprofit Management & Leadership, 30(3), 509–524. https://doi.org/10.1002/nml.21399 Center for Public Education. (2016). Educational equity: What does it mean? How do we know when we reach it? 
https://www.nsba.org/-/media/NSBA/File/cpe-educational-equity- research-brief-january-2016.pdf Civil Rights Act of 1964, Pub. L. No. 88-352, 78 Stat. 241 (1964). https://www.congress.gov/88/statute/STATUTE-78/STATUTE-78-Pg241.pdf Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches. SAGE Publications. Darling-Hammond, L. (1998). Unequal opportunity: Race and education. The Brookings Review, 16(2), 28–32. https://doi.org/10.2307/20080779 de Brey, C., Musu, L., McFarland, J., Wilkinson-Flicker, S., Diliberti, M., Zhang, A., Branstetter, C., & Wang, X. (2019). Status and trends in the education of racial and ethnic groups 2018 (NCES 2019-038). National Center for Education Statistics. Retrieved December 22, 2021, from https://nces.ed.gov/pubsearch/ DeCuir-Gunby, J. T., Marshall, P. L., & McCulloch, A. W. (2011). Developing and using a codebook for the analysis of interview data: An example from a professional development research project. Field Methods, 23(2), 136–155. https://doi.org/10.1177/1525822X10388468 Dweck, C. S. (2017). From needs to goals and representations: Foundations for a unified theory of motivation, personality, and development. Psychological Review, 124(6), 689–719. https://doi.org/10.1037/rev0000082 118 Elementary and Secondary Education Act of 1965, Pub. L. No. 89-10, 79 Stat. 27 (1965). https://www.congress.gov/89/statute/STATUTE-79/STATUTE-79-Pg27.pdf Executive Committee of the American Anti-Slavery Committee. (n.d.). Slavery and the international slave trade in the United States of America. Gilder Lehrman Center for the Study of Slavery, Resistance, and Abolition at the Whitney and Betty MacMillan Center for International and Area Studies at Yale. Retrieved February 23, 2022, from https://glc.yale.edu/american-anti-slavery-committee Ferris, J., & Williams, N. (2010). Foundation strategy for social impact: A system change perspective. Nonprofit Policy Forum, 1(1). https://doi.org/10.2202/2154-3348.1008 Food and Drug Administration. (1998, January). Institutional review boards frequently asked questions. Retrieved July 24, 2021, from https://www.fda.gov/regulatory- information/search-fda-guidance-documents/institutional-review-boards-frequently- asked-questions Frumkin, P. (2006). Strategic giving the art and science of philanthropy. University of Chicago Press. https://doi.org/10.7208/9780226266282 Gibbs, G. R. (2018). Analyzing qualitative data (2nd ed.). SAGE Publications Ltd. Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Pearson. Grooms, A. A., Mahatmya, D., & Johnson, E. T. (2021). The retention of educators of color amidst institutionalized racism. Educational Policy, 35(2), 180–212. https://doi.org/10.1177/0895904820986765 Hawkins, P. (2014). Diversity for nonprofits: Mission drift or mission fulfillment? Journal of Diversity Management, 9(1), 41–50. https://doi.org/10.19030/jdm.v9i1.8621 119 Hord, S. M., & Roussin, J. L. (2013). Implementing change through learning: Concerns-based concepts, tools, and strategies for guiding change. Corwin. Hentschke, G. C., & Wohlstetter, P. (2004). Cracking the code of accountability. University of Southern California Urban Education, Spring/Summer, 17–19. Ingersoll, R., May, H., & Collins, G. (2019). Recruitment, employment, retention and the minority teacher shortage. Education Policy Analysis Archives, 27(37), 1–42. https://doi.org/10.14507/epaa.27.3714 Ingersoll, R., Merrill, E., Stuckey, D., Collins, G., & Harrison, B. (2021). 
The demographic transformation of the teaching force in the United States. Education Sciences, 11(5), 234. https://doi.org/10.3390/educsci11050234
Irwin, V., Zhang, J., Wang, X., Hein, S., Wang, K., Roberts, A., York, C., Barmer, A., Bullock Mann, F., Dilig, R., & Parker, S. (2021). Report on the condition of education 2021 (NCES 2021-144). National Center for Education Statistics. Retrieved December 20, 2021, from https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2021144
Johnson, S. M., Kraft, M. A., & Papay, J. P. (2012). How context matters in high-need schools: The effects of teachers’ working conditions on their professional satisfaction and their students’ achievement. Teachers College Record, 114(10), 1–39. https://doi.org/10.1177/016146811211401004
Kettner, P. M., Moroney, R. M., & Martin, L. L. (2017). Designing and managing programs: An effectiveness-based approach (5th ed.). SAGE.
Kezar, A. J. (2001). Understanding and facilitating organizational change in the 21st century: Recent research and conceptualizations. Jossey-Bass.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation. ATD Press.
Kolko, J. (2011). Thoughts on interaction design (2nd ed.). Morgan Kaufmann.
Kotter, J. P. (2007). Leading change: Why transformation efforts fail. Harvard Business Review, 85(1), 96–103.
Kotter, J. P. (2017). Leading change. Harvard Business Review Press.
Kouzes, J., & Posner, B. (2012). The leadership challenge (6th ed.). Jossey-Bass.
Ladson-Billings, G. (2017). Makes me wanna holler: Refuting the “culture of poverty” discourse in urban schooling. The Annals of the American Academy of Political and Social Science, 673(1), 80–90. https://doi.org/10.1177/0002716217718793
Ladson-Billings, G. (2021). I’m here for the hard re-set: Post pandemic pedagogy to preserve our culture. Equity & Excellence in Education, 54(1), 68–78. https://doi.org/10.1080/10665684.2020.1863883
Library of Congress. (n.d.). Brown v. Board at fifty: “With an even hand.” https://www.loc.gov/exhibits/brown/brown-segregation.html
Library of Congress. (2020, November 16). Plessy v. Ferguson: Primary documents in American history. https://guides.loc.gov/plessy-ferguson
Love, B. J. (2018). Developing a liberatory consciousness. In M. Adams, W. J. Blumenfeld, D. C. J. Catalano, K. DeJong, H. W. Hackman, L. E. Hopkins, B. J. Love, M. L. Peters, D. Shlasko, & X. Zuniga (Eds.), Readings for diversity and social justice (4th ed., pp. 610–614). Routledge.
Mangin, M. M., & Stoelinga, S. R. (2008). Teacher leadership: What it is and why it matters. In M. M. Mangin & S. R. Stoelinga (Eds.), Effective teacher leadership: Using research to inform and reform (pp. 1–9). Teachers College Press.
McCoy, D. C., Yoshikawa, H., Ziol-Guest, K. M., Duncan, G. J., Schindler, H. S., Magnuson, K., Yang, R., Koepp, A., & Shonkoff, J. P. (2017). Impacts of early childhood education on medium- and long-term educational outcomes. Educational Researcher, 46(8), 474–487. https://doi.org/10.3102/0013189X17737739
McIntosh, P. (1988). White privilege: Unpacking the invisible knapsack. https://www.racialequitytools.org/resourcefiles/mcintosh.pdf
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass, a Wiley Brand.
Merriam-Webster. (n.d.-a). Assumption. In Merriam-Webster.com dictionary. Retrieved January 6, 2022, from https://www.merriam-webster.com/dictionary/assumption
Merriam-Webster. (n.d.-b). Privilege. In Merriam-Webster.com dictionary. Retrieved August 31, 2020, from https://www.merriam-webster.com/dictionary/privilege
Moore, M., Riddell, D., & Vocisano, D. (2015). Scaling out, scaling up, scaling deep: Strategies of non-profits in advancing systemic social innovation. The Journal of Corporate Citizenship, 2015(58), 67–84. https://doi.org/10.9774/GLEAF.4700.2015.ju.00009
Morgan, I., & Amerikaner, A. (2018, February 27). Funding gaps 2018. The Education Trust. https://edtrust.org/resource/funding-gaps-2018/
Morgan, K. P. (1996). Describing the emperor’s new clothes: Three myths of educational (in-)equity. In A. Diller, B. Houston, K. P. Morgan, & M. Ayim (Eds.), The gender question in education: Theory, pedagogy, and politics (1st ed., pp. 105–122). Routledge. https://doi-org.libproxy1.usc.edu/10.4324/9780429496530
National Assessment of Educational Progress. (2019a). NAEP report card: Mathematics. The Nation’s Report Card. https://www.nationsreportcard.gov/mathematics/nation/groups/?grade=4
National Assessment of Educational Progress. (2019b). NAEP report card: Reading. The Nation’s Report Card. https://www.nationsreportcard.gov/reading/nation/groups/?grade=4
National Center for Education Statistics. (2019). Indicator 20: Undergraduate enrollment. Retrieved December 29, 2021, from https://nces.ed.gov/programs/raceindicators/indicator_reb.asp
National Center for Education Statistics. (2022, March 3). U.S. schools report increased teacher vacancies due to COVID-19 pandemic, new NCES data show. Retrieved October 13, 2022, from https://nces.ed.gov/whatsnew/press_releases/3_3_2022.asp
National Education Association. (n.d.-a). Our mission, vision, & values. Retrieved July 31, 2021, from https://www.nea.org/about-nea/mission-vision-values
National Education Association. (n.d.-b). Racial & social justice are education justice. Retrieved July 31, 2021, from https://www.nea.org/advocating-for-change/racial-social-justice
National School Boards Association. (n.d.). Equity. Retrieved January 5, 2022, from https://www.nsba.org/Advocacy/Equity
Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Planning and designing useful evaluations. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 7–35). Jossey-Bass.
Nguyen, D., Harris, A., & Ng, D. (2019). A review of the empirical research on teacher leadership (2003–2017): Evidence, patterns and implications. Journal of Educational Administration, 58(1), 60–80. https://doi.org/10.1108/JEA-02-2018-0023
Nonoyama-Tarumi, Y. (2008). Cross-national estimates of the effects of family background on student achievement: A sensitivity analysis. International Review of Education, 54(1), 57–82. https://doi.org/10.1007/s11159-007-9069-5
Northouse, P. G. (2019). Leadership: Theory and practice (8th ed.). Sage.
O’Connor, M. K., & Netting, F. E. (2007). Emergent program planning as competent practice: The importance of considering context. Journal of Progressive Human Services, 18(2), 57–75. https://doi.org/10.1300/J059v18n02_05
O’Dell, C., & Hubert, C. (2011). The new edge in knowledge: How knowledge management is changing the way we do business. Wiley.
Olsen, J. P. (2014). Accountability and ambiguity. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 106–126). Oxford University Press.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Sage Publications.
Paul, C. A. (2016). Elementary and Secondary Education Act of 1965. Social Welfare History Project. https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/
Plessy v. Ferguson, 163 U.S. 537 (1896). https://www.loc.gov/item/usrep163537/
Qualtrics. (n.d.-a). What is a good Net Promoter Score? Retrieved August 19, 2022, from https://www.qualtrics.com/experience-management/customer/good-net-promoter-score/
Qualtrics. (n.d.-b). What is NPS? Your ultimate guide to Net Promoter Score. Retrieved August 19, 2022, from https://www.qualtrics.com/experience-management/customer/net-promoter-score/
Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81(12), 46–54.
Robinson, S. B., & Leonard, K. F. (2019). Designing quality survey questions. SAGE Publications, Inc.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Sage.
Salinas, C., Jr. (2020). The complexity of the “x” in Latinx: How Latinx/a/o students relate to, identify with, and understand the term Latinx. Journal of Hispanic Higher Education, 19(2), 149–168. https://doi.org/10.1177/1538192719900382
Salkind, N. J., & Frey, B. B. (2019). Statistics for people who (think they) hate statistics. Sage.
Senge, P. M. (1990). The leader’s new work: Building learning organizations. Sloan Management Review, 32(1), 7–23.
Senge, P. M. (1994). The fifth discipline fieldbook: Strategies and tools for building a learning organization (1st ed.). Currency.
Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization (2nd ed.). Crown Business.
Senge, P. M. (2012). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education (2nd ed.). Crown Business.
Sharpe, G. (2011). A review of program theory and theory-based evaluations. American International Journal of Contemporary Research, 1(3), 72–75.
Sirin, S. R. (2005). Socioeconomic status and academic achievement: A meta-analytic review of research. Review of Educational Research, 75(3), 417–453. https://doi.org/10.3102/00346543075003417
Skinner, R. R. (2020, August 18). The Elementary and Secondary Education Act (ESEA), as amended by the Every Student Succeeds Act (ESSA): A primer. Congressional Research Service. https://crsreports.congress.gov/product/pdf/R/R45977
Smoak, G. (2006). Ghost dances and identity: Prophetic religion and American Indian ethnogenesis in the nineteenth century (1st ed.). University of California Press.
Statistics Solutions. (n.d.). In need of definition: How to select terms to define in your dissertation. Retrieved January 4, 2022, from https://www.statisticssolutions.com/in-need-of-definition-how-to-select-terms-to-define-in-your-dissertation/
Taylor, M., Goeke, J., Klein, E., Onore, C., & Geist, K. (2011). Changing leadership: Teachers lead the way for schools that learn. Teaching and Teacher Education, 27(5), 920–929. https://doi.org/10.1016/j.tate.2011.03.003
Terrell, R. D., Terrell, E. K., Lindsey, R. B., & Lindsey, D. B. (2018). Culturally proficient leadership: The personal journey begins within. SAGE Publications.
Torres, L. (2018). Latinx? Latino Studies, 16(3), 283–285. https://doi.org/10.1057/s41276-018-0142-y
Tuck, E., & Yang, K. W. (2014). Youth resistance research and theories of change. Routledge.
U.S. Bureau of Labor Statistics. (2021, September 8). Education pays. https://www.bls.gov/emp/tables/unemployment-earnings-education.htm
U.S. Census Bureau. (2021, November 22). How the Census Bureau measures poverty. https://www.census.gov/topics/income-poverty/poverty/guidance/poverty-measures.html
U.S. Department of Education. (n.d.). Equity of opportunity. Retrieved July 31, 2021, from https://www.ed.gov/equity
U.S. Department of Justice. (2021, March 25). Types of educational opportunities discrimination. https://www.justice.gov/crt/types-educational-opportunities-discrimination
U.S. Department of Labor. (n.d.). Legal highlight: The Civil Rights Act of 1964. https://www.dol.gov/agencies/oasam/civil-rights-center/statutes/civil-rights-act-of-1964
U.S. General Services Administration. (2022, March). Section 508 of the Rehabilitation Act of 1973. https://www.section508.gov/manage/laws-and-policies/
U.S. Senate. (n.d.). Landmark legislation: Civil Rights Act of 1875. https://www.senate.gov/artandhistory/history/common/generic/CivilRightsAct1875.htm
Van Assche, K., Verschraegen, G., Valentinov, V., & Gruezmacher, M. (2019). The social, the ecological, and the adaptive. Von Bertalanffy’s general systems theory and the adaptive governance of social-ecological systems. Systems Research and Behavioral Science, 36(3), 308–321. https://doi.org/10.1002/sres.2587
Van de Ven, A. H., & Poole, M. S. (1995). Explaining development and change in organizations. The Academy of Management Review, 20(3), 510–540. https://doi.org/10.2307/258786
Van de Werfhorst, H. G., & Mijs, J. J. B. (2010). Achievement inequality and the institutional structure of educational systems: A comparative perspective. Annual Review of Sociology, 36(1), 407–428. https://doi.org/10.1146/annurev.soc.012809.102538
von Bertalanffy, L. (1950). An outline of general system theory. The British Journal for the Philosophy of Science, 1(2), 134–165. https://doi.org/10.1093/bjps/i.2.134
von Bertalanffy, L. (1972). The history and status of general systems theory. Academy of Management Journal, 15(4), 407–426. https://doi.org/10.2307/255139
W. K. Kellogg Foundation. (2004, January). W. K. Kellogg Foundation logic model development guide. https://www.wkkf.org/resource-directory/resources/2004/01/logic-model-development-guide
W. K. Kellogg Foundation. (2007, September). Designing initiative evaluation: A systems-oriented framework for evaluating social change efforts. https://www.wkkf.org/resource-directory/resources/2007/09/designing-initiative-evaluation--a-systems-oriented-framework-for-evaluating-social-change-efforts
Weiss, C. H. (1998). Evaluation: Methods for studying programs and policies (2nd ed.). Prentice Hall.
Wenner, J. A., & Campbell, T. (2017). The theoretical and empirical basis of teacher leadership: A review of the literature. Review of Educational Research, 87(1), 134–171. https://doi.org/10.3102/0034654316653478
Westaby, J. (2002). Identifying specific factors underlying attitudes toward change: Using multiple methods to compare expectancy-value theory to reasons theory. Journal of Applied Social Psychology, 32(5), 1083–1104. https://doi.org/10.1111/j.1559-1816.2002.tb00257.x
Wigfield, A., Rosenzweig, E. Q., & Eccles, J. S. (2018). Achievement values. In A. J. Elliot, C. S. Dweck, & D. S. Yeager (Eds.), Handbook of competence and motivation (2nd ed., pp. 116–134). Guilford Press.
Williams, H. A. (2005). Self-taught: African American education in slavery and freedom. University of North Carolina Press.
Xie, X., Siau, K., & Nah, F. F.-H. (2020). COVID-19 pandemic – online education in the new normal and the next normal. Journal of Information Technology Cases and Applications, 22(3), 175–187. https://doi.org/10.1080/15228053.2020.1824884
York-Barr, A. J., & Duke, K. (2004). What do we know about teacher leadership? Findings from two decades of scholarship. Review of Educational Research, 74(3), 255–316. https://doi.org/10.3102/00346543074003255
Zimmer, R. W., Krop, C., & Brewer, D. J. (2003). Private resources in public schools: Evidence from a pilot study. Journal of Education Finance, 28(4), 485–521.

Appendix A: Summer Training Program Participant Survey Questions

Items 1–18 used a 10-point response scale from 1 (Strongly Disagree) to 10 (Strongly Agree); Items 19–25 were open-ended. The EQ addressed and the concept measured follow each item in parentheses.

1. I am satisfied with the usefulness of the asynchronous modules offered before the program. (EQ 1; Satisfaction)
2. I am satisfied with the communications about the program logistics. (EQ 1; Satisfaction)
3. I am satisfied with the schedule of the program. (EQ 1; Satisfaction)
4. I am satisfied with the technology used for the program. (EQ 1; Satisfaction)
5. I am satisfied with the content delivered by [partner organization]. (EQ 1; Satisfaction)
6. I am satisfied with the facilitation from [partner organization]. (EQ 1; Satisfaction)
7. I am satisfied with the content delivered by EEJI. (EQ 1; Satisfaction)
8. I am satisfied with the facilitation from EEJI. (EQ 1; Satisfaction)
9. I am satisfied with the program overall. (EQ 1; Satisfaction)
10. I found the content of the program interesting. (EQ 3, 5; Motivation – Task Value – Intrinsic/Interest)
11. I found the content of the program useful. (EQ 3, 5; Motivation – Task Value – Extrinsic/Utility)
12. I found the content of the program important. (EQ 3, 5; Motivation – Task Value – Attainment/Importance)
13. I found that participating in the program was worth my time. (EQ 3, 5; Motivation – Task Value – Cost Value/Benefit)
14. I am confident in my ability to apply learnings from this program to my classroom practice. (EQ 3, 5; Motivation – Expectancy Outcome)
15. I am confident that this program will make me a better teacher. (EQ 3, 5; Motivation – Expectancy Outcome)
16. I am confident that I can lead my students toward broader student outcomes. (EQ 3; Motivation – Expectancy Outcome)
17. I am confident that I can lead toward systems change. (EQ 5; Motivation – Expectancy Outcome)
18. How likely are you to recommend this program to a friend or colleague? (EQ 1; Satisfaction – Net Promoter Score)
19. What was most helpful about this program, if anything? (open; EQ 1; Satisfaction)
20. What was least helpful about this program, if anything? (open; EQ 1; Satisfaction)
21. What other feedback do you have about the program, if any? (open; EQ 1; Satisfaction)
22. Given the program, how has your understanding about broader student outcomes changed, if at all? (open; EQ 2; Knowledge – Metacognitive)
23. How has the program’s focus on broader student outcomes influenced what you hope to do in your classroom beginning this fall, if at all? (open; EQ 2, 3; Motivation – Expectancy Outcome; Knowledge – Metacognitive)
24. Given the program, how has your thinking changed about the role of a teacher in systems change, if at all? (open; EQ 4; Knowledge – Metacognitive)
25. How has the program’s focus on systems change influenced what you hope to do in your classroom beginning this fall, if at all? (open; EQ 4, 5; Motivation – Expectancy Outcome; Knowledge – Metacognitive)
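Item 18 maps to the Net Promoter Score (NPS) concept listed above. The sketch below is a minimal illustration of how such scoring is commonly computed, not the analysis code used in this study; the promoter (9–10) and detractor (1–6) cutoffs are assumptions adapted to the 1–10 scale shown here, and the sample responses and the net_promoter_score helper are hypothetical.

from statistics import mean

# Hypothetical 1-10 responses to Item 18 ("How likely are you to recommend..."); not study data.
recommend_scores = [10, 9, 7, 8, 10, 6, 9, 3, 10, 8]

# Hypothetical 1-10 agreement ratings for a scaled item such as Item 9; not study data.
overall_satisfaction = [9, 8, 10, 7, 9, 9, 6, 10]

def net_promoter_score(scores):
    # Percent promoters (9-10) minus percent detractors (assumed 1-6), on a -100 to 100 scale.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(f"Illustrative NPS for Item 18: {net_promoter_score(recommend_scores):+.0f}")
print(f"Illustrative mean for Item 9: {mean(overall_satisfaction):.1f} out of 10")

Conventional NPS is defined on a 0–10 likelihood scale, so any adaptation to a 1–10 agreement-style item should report the cutoffs used alongside the score.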
Appendix B: Summer Training Program Participant Interview Questions

Each question is listed with the EQ addressed and the concept measured in parentheses, followed by its probes.

1. What grades and subjects do you teach? (EQ N/A; Demographics)
Probes: [Ask, if needed] And how long have you been a teacher? [If needed] Anything else about your teaching placement you want to share?

2. What did you think about your overall experience during the program? (EQ 1; Satisfaction)
Probes: [Rephrase, if needed] How would you describe the program to a fellow teacher?

3. How was the program similar or different than you thought it would be? (EQ 1; Satisfaction)
Probes: [Rephrase, if needed] What was similar? What was different? [If needed] Tell me more about X.

4. How satisfied were you with the summer training program? (EQ 1; Satisfaction)
Probes: [Ask] Tell me more.

5. How might you explain broader student outcomes to a teacher who has not heard about this idea before? (EQ 2; Knowledge – Factual/Conceptual)
Probes: [Rephrase, if needed] How would you define broader student outcomes? [Ask] What else might you tell them if they were still unclear?

6. How did your understanding about broader student outcomes change during the program, if at all? (EQ 2; Knowledge – Metacognitive)
Probes: [Rephrase, if needed] What do you understand about broader student outcomes now that you did not before the program, if anything?

7. How has the program’s focus on broader student outcomes influenced, if at all, what you hope to do in your classroom as a teacher beginning this fall? (EQ 2, 3; Motivation – Expectancy; Knowledge – Procedural)
Probes: [Ask, if applicable] What, specifically, would you change? Why? [Ask, if applicable] What, specifically, would you keep the same? Why?

8. Is there anything else you have been thinking about with broader student outcomes that you have not already shared? (EQ 2, 3; Knowledge; Motivation)
Probes: [If needed] Tell me more.

9. How might you explain systems change to a teacher who has not heard about this idea before? (EQ 4; Knowledge – Factual/Conceptual)
Probes: [Rephrase, if needed] How would you define systems change? [Ask] What else might you tell them if they were still unclear?

10. How has your thinking changed, if at all, about the role of a teacher in systems change? (EQ 4; Knowledge – Metacognitive)
Probes: [Rephrase, if needed] What do you understand about systems change and the role of a teacher now that you did not before the program, if anything?

11. How has the program’s focus on systems change influenced, if at all, what you hope to do in your classroom as a teacher beginning this fall? (EQ 4, 5; Motivation – Expectancy; Knowledge – Procedural)
Probes: [Ask, if applicable] What, specifically, would you change? Why? [Ask, if applicable] What, specifically, would you keep the same? Why?

12. Is there anything else you have been thinking about when it comes to being a teacher and systems change that you have not already shared? (EQ 4, 5; Knowledge; Motivation)
Probes: [If needed] Tell me more.

13. How confident are you, if at all, that you could put what you learned from this program into your practice this fall? Why is that? (EQ 3, 5; Motivation – Self-Efficacy)
Probes: [Ask] What challenges might you encounter, if any? [If needed] What is your confidence level of putting what you learned about broader student outcomes into practice this fall? Why is that? [If needed] What is your confidence level of putting what you learned about systems change and your role as a teacher into practice this fall? Why is that? [If needed] What else gives you confidence, if anything?

14. Overall, how useful do you think this training was for you as a teacher? (EQ 1; Satisfaction)
Probes: [If needed] Tell me more about that.

15. Given your experience with the program, would you recommend this program to a fellow teacher? Why or why not? (EQ 1; Satisfaction)
Probes: [If needed] Could you tell me more about that?

16. Do you have any other feedback that we should consider to make the program stronger? (EQ 1–5; Satisfaction; Knowledge; Motivation)
Probes: [If needed] What should we change? What should we keep? [If needed] What else should I know? Anything else?
Appendix C: Summer Training Program Participant Interview Protocol

Participant Name:
Interviewer:
Date & Time:
Teaching Placement Info:

INTRO
Hi <NAME>, thanks for joining the Zoom to share your thoughts about this summer’s training program. My name is <NAME>, and I . Now I work on staff. As I mentioned, I am also conducting research as part of my dissertation study at the University of Southern California, which is the hat I am wearing for our conversation. The purpose of this study is to evaluate the effectiveness of the <summer training program> and learn more about your perspectives as a participant. Your perspectives will remain completely confidential. I am excited to learn about your experience this summer, but before I do that—is it okay that I record?
[If no] No problem! I will type my notes during our conversation. Before we get started, do you have any questions or concerns about our interview? Do I have your consent to use your perspectives in my research? Let’s get started.
[If yes] Great, I will hit record now. Okay, I am recording now. Before we get started, do you have any questions or concerns about our interview? Do I have your consent to use your perspectives in my research? Let’s get started.

QUESTION 1
First, tell me a bit about your current teaching position. What grades and subjects do you teach?
[Ask, if needed] And how long have you been a teacher?
[If needed] Anything else about your teaching you want to share?
Notes: ●

QUESTION 2
Thank you. Now let’s move on to this summer. What did you think about your overall experience during the program?
[Rephrase, if needed] How would you describe the program to a fellow teacher?
Notes: ●

QUESTION 3
How was the program similar or different than you thought it would be?
[Rephrase, if needed] What was similar? What was different?
[If needed] Tell me more about X.
Notes: ●

QUESTION 4
How satisfied were you with the summer training program?
[If needed] Tell me more about that.
Notes: ●

QUESTION 5
Moving into some of the content of the program, how might you explain “broader student outcomes” to a teacher who has not heard this idea before?
[Rephrase, if needed] How would you define broader student outcomes?
[Ask, if needed] What else might you tell them if they were still unclear?
Notes: ●

QUESTION 6
How did your understanding about broader student outcomes change during the program, if at all?
[Rephrase, if needed] What do you understand about student outcomes now that you did not before the program, if anything?
Notes: ●

QUESTION 7
How has the program’s focus on broader student outcomes influenced, if at all, what you hope to do in your classroom as a teacher beginning this fall?
[Ask, if applicable] What, specifically, would you change? Why?
[Ask, if applicable] What, specifically, would you keep the same? Why?
Notes: ●

QUESTION 8
Is there anything else you have been thinking about with broader student outcomes that you have not already shared?
[If needed] Tell me more.
Notes: ●

QUESTION 9
Thank you! Now we are going to shift toward another focus of the program—systems change. How might you explain “systems change” to a teacher who has not heard about this idea before?
[Rephrase, if needed] How would you define systems change?
[Ask, if needed] What else might you tell them if they were still unclear?
Notes: ●

QUESTION 10
That was helpful. How has your thinking changed, if at all, about the role of a teacher in systems change?
[Rephrase, if needed] What do you understand about systems change and the role of a teacher now that you did not before the program, if anything?
Notes: ●

QUESTION 11
How has the program’s focus on systems change influenced, if at all, what you hope to do in your classroom as a teacher beginning this fall?
[Ask, if applicable] What, specifically, would you change? Why?
[Ask, if applicable] What, specifically, would you keep the same? Why?
Notes: ●

QUESTION 12
Is there anything else you have been thinking about when it comes to being a teacher and systems change that you have not already shared?
[If needed] Tell me more.
Notes: ●

QUESTION 13
Thank you. And just a few more short questions. How confident are you, if at all, that you could put what you learned from this program into your practice this fall? Why is that?
[Ask, if needed] What challenges might you encounter, if any?
[If needed] What is your confidence level of putting what you learned about broader student outcomes into practice this fall? Why is that?
[If needed] What is your confidence level of putting what you learned about systems change and your role as a teacher into practice this fall? Why is that?
[If needed] What else gives you confidence, if anything?
Notes: ●

QUESTION 14
Overall, how useful do you think this training was for you as a teacher?
[If needed] Tell me more about that.
Notes: ●

QUESTION 15
Given your experience with the program, would you recommend this program to a fellow teacher? Why or why not?
[If needed] Could you tell me more about that?
Notes: ●

QUESTION 16
Finally, do you have any other feedback that we should consider to make the program stronger?
[If needed] What should we change? What should we keep?
[If needed] What else should I know? Anything else?
Notes: ●

QUESTIONS
Thank you so much! We have a few minutes remaining. Are there any questions I can answer for you?
Notes: ●

CLOSING
[Any other questions?] Great! Well thank you so much <NAME> for sharing your perspectives about this summer’s training program. You have contributed to the ongoing research about how to strengthen our offerings to teachers in pursuit of educational equity, and I really enjoyed our conversation. Thanks again and have a great day!
Appendix D: Interview Data Codebook

Codes are grouped by parent category; each code is listed with its description and a representative participant excerpt.

Satisfaction

General satisfaction: Mentions being satisfied with an aspect of the summer training program.
Example: “I think it was an opportunity to learn different things that I haven’t thought about.”

Accessibility: Mentions the ease of accessibility of materials or specifically related to disability accommodations.
Example: “The ADA issues should be there. Right, we shouldn’t have to beg for them.”

Quality: Mentions the quality or the rigor of the program.
Example: “The design of this seemed rather haphazard.”

Timing: Mentions the time of year, the length of the program, or the pacing of the program.
Example: “The live sessions, I think were super helpful because like with the built-in breaks, I felt like it was a good pace as well, where I didn’t feel like my brain was being oversaturated. And I got a break.”

Usefulness: Mentions transferability of learnings into practice. Can include mention of resources.
Example: “It didn’t have any programming whatsoever for marginalized populations, for disabled populations, and so on, it assumed a standard learner and went from there.”

Expectations: Mentions expectations of the program or alignment to expectations of the program.
Example: “It should have said, you know, we’re going to talk about literacy for the most standard population, you know, elementary school standpoint. And that would have informed my decision, I think, a lot better.”

Broader Student Outcomes Knowledge

General Broader Student Outcomes Knowledge: Mentions a student output based on deliberate work from the teacher. Could also mention general definitions of broader student outcomes.
Example: “These are how we’re going to develop our students so that they are equipped to survive beyond school.”

Teaching Strategies: Mentions various teaching strategies learned during the program.
Example: “You have to differentiate the experience for every student. You have to customize it for them and their unique context, their unique needs, addressing their identity, their family position and class and so on.”

Broader Student Outcomes Motivations

General Broader Student Outcomes Motivations: Mentions intended actions from participating in the summer training program related to achieving student outcomes.
Example: “It’s encouraging me to kind of just change the ways that I’ve been teaching, I guess, since like, a lot of it’s been just trial and error. Now, I kind of have some things that have been scientifically proven to work.”

Teacher Mindsets: Mentions teacher mindsets in supporting student outcomes.
Example: “And it’s just strictly my job to make sure they get the tools and the resources they need to make sure they achieve those outcomes.”

Student Mindsets: Mentions student mindsets in supporting student outcomes.
Example: “My students tell me every day they might never have to use a protractor again. But there’s still golden things in the classroom that they may use it for the rest of their lives.”

Systems Change Knowledge

General Systems Change Knowledge: Mentions a change to established policy, practice, or way of being within or across institutions. Could also mention definitions related to systems change.
Example: “Moving beyond, you know, the tactical moves that we make into the strategic, you know, enabling the principal, to leverage funds for the short term, and then moving into policies so that we can cement those gains, and make sure that they stay there as legacy.”

Transcends Classroom-Only Change: Mentions that systems change is not limited to the classroom (e.g., a teacher changing classroom rules that affect only that classroom) but happens within and across institutions.
Example: “A system change would mean that this is something that goes across the board. And that transcends, like, whatever I’m doing in my class. And it’s, it’s supposed to be like a bar set for all of us to kind of be meeting.”

Systems Change Motivations

General Systems Change Motivations: Mentions intended actions from participating in the summer training program related to pursuing systems change (a change to established policy, practice, or way of being within and across institutions).
Example: “There were things within the school that I didn’t like. So now that I understand the system change—it helps me understand that I can effect that change not even in my classroom, I can effect that change in my school, I can bring things to my principal.”

Teachers’ and Others’ Mindsets: Mentions the mindsets of teachers, administrators, students, student families, community members, and others in pursuing systems change.
Example: “I actually made a diagram where I kind of placed where I am in relation to all of the different other stakeholders and people within my own system. So, me in comparison to parents, students, the district, or their teachers, admin, all of that, and then branching out, and then kind of seeing how all of that communication fits together.”

Proof Points: Mentions an example of systems change or one in progress.
Example: “It’s more effective when it happens from within, when teachers and families get together to effect change within a particular school, and then use that use case, if you will, that case study, I guess, to say, Hey, look at what we’ve done here, it’s brilliant, and everybody should be doing.”
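As a purely illustrative sketch (not the coding or analysis procedure used in this study), the snippet below shows one way coded excerpts could be tallied against a codebook like the one above once qualitative coding is complete. The category and code names mirror Appendix D; the CODEBOOK structure and the coded_excerpts pairs are hypothetical and contain no study data.

from collections import Counter

# Parent categories and codes mirroring the Appendix D codebook.
CODEBOOK = {
    "Satisfaction": [
        "General satisfaction", "Accessibility", "Quality", "Timing", "Usefulness", "Expectations",
    ],
    "Broader Student Outcomes Knowledge": [
        "General Broader Student Outcomes Knowledge", "Teaching Strategies",
    ],
    "Broader Student Outcomes Motivations": [
        "General Broader Student Outcomes Motivations", "Teacher Mindsets", "Student Mindsets",
    ],
    "Systems Change Knowledge": [
        "General Systems Change Knowledge", "Transcends Classroom-Only Change",
    ],
    "Systems Change Motivations": [
        "General Systems Change Motivations", "Teachers' and Others' Mindsets", "Proof Points",
    ],
}

# Hypothetical (participant, code) pairs produced during coding; real interview data is not shown.
coded_excerpts = [
    ("P01", "Quality"),
    ("P01", "Teaching Strategies"),
    ("P02", "General Systems Change Knowledge"),
    ("P02", "Quality"),
    ("P03", "Proof Points"),
]

# Count excerpts per code, then roll the counts up to each parent category.
code_counts = Counter(code for _, code in coded_excerpts)
for category, codes in CODEBOOK.items():
    total = sum(code_counts[c] for c in codes)
    print(f"{category}: {total} coded excerpt(s)")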