Understand How Leaders Use Data to Measure the Effectiveness of
Leadership Development Programs
by
Gina K. Minniti
Rossier School of Education
University of Southern California
A dissertation submitted to the faculty
in partial fulfillment of the requirements for the degree of
Doctor of Education
December 2021
© Copyright by Gina K. Minniti 2021
All Rights Reserved
The Committee for Gina K. Minniti certifies the approval of this Dissertation
Joshua Baker
Anthony Maddox
Emmy Min, Committee Chair
Rossier School of Education
University of Southern California
2021
Abstract
This study sought to understand how data were used to evaluate the effectiveness of leadership
development programs (LDPs) at Century Manufacturing, Inc. Using the Clark and Estes (2008)
gap analytic conceptual framework, this qualitative study used interviews to examine knowledge,
motivation, and organizational influences on program stakeholders' use of data. Interviews also
sought to determine what systemic barriers existed that prevented key stakeholders from using
data to make decisions. Interview participants were organizational actors whose roles hold
ownership of or responsibility for LDPs or who evaluate program participants' performance:
LDP coordinators, participants, organizational sponsors, and managers of program participants.
Study findings indicated that LDP coordinators demonstrated solid conceptual and procedural
knowledge of leadership skills and principles and of tools and approaches for learning
measurement, as well as motivation to deliver value to the organization. Organizationally, the
company has a strong data-driven decision-making culture, and interview participants felt they
received adequate funding and executive sponsor support to meet expectations. Conversely,
findings indicated motivational gaps caused by disconnects between LDP coordinators'
expectancy and utility value and that perceived by their organization. Further, organizational
gaps existed as a result of ambiguous accountability relationships and weak alignment between
stakeholder performance and organizational goals. To address these gaps, recommendations
focused on creating stronger alignment of definitions for leadership and program effectiveness
across the company, as well as connecting program outcomes to organizational business goals.
Recommendations also include an implementation and evaluation framework based on
Kirkpatrick and Kirkpatrick's (2016) New World Model.
Dedication
A mi familia
Acknowledgements
Leaders are only as good as the network of support surrounding them. Gratitude is not
enough to convey how fortunate I was to have the best people supporting me. I express my
heartfelt thanks to my chair, Dr. Emmy Min, for making a complex process manageable. I
appreciate your ability to put at ease any confusion, doubt, or anxiety that I felt, and always
being there to answer the “one more question” that I had.
To my committee members, Dr. Anthony Maddox and Dr. Josh Baker, thank you for
your valuable feedback, thoughtful insights, and enthusiasm in pushing me to keep thinking
bigger. I am humbled by the passion you shared for my topic, as well as your desire to help me
grow as a student, researcher, and thought leader.
To my fellow Cohort 14 members, it was a pleasure sharing this journey together. I
enjoyed seeing your faces Saturday mornings, the breakout group discussions, and the
relentlessly high bar you all set as amazing leaders and human beings. To my study group, Team
JFMQ, I am forever indebted for your intelligence, humor, wittiness, and saltiness. We were all
busy but found a way to create the safe space needed to mentally weather the challenges the
pandemic brought with it. This experience would not have been as fulfilling or rewarding
without any of you.
To my husband and son, your unending support and belief in me were constant sources of
motivation and inspiration. Thank you for your acceptance and understanding for every missed
baseball and soccer game, every long night I spent on the computer, and every neurotic moment
that I obsessed about an assignment or the latest chapter I was writing. One day, far in the future,
when you read this dissertation, know that it would not have been possible without my boys.
Table of Contents
Abstract
Dedication
Acknowledgements
List of Tables
List of Figures
Chapter One: Overview of the Study
Background of the Problem
Purpose of the Study
Significance of the Study
Overview of Theoretical Framework
Methodology
Definition of Terms
Organization of the Study
Chapter Two: Review of the Literature
Literature Review
Conceptual Framework
Summary
Chapter Three: Methodology
Research Questions
Overview of Design
Research Setting
The Researcher
Data Sources
Participants
Instrumentation
Data Collection
Data Analysis
Validity and Reliability
Ethics
Limitations and Delimitations
Chapter Four: Results or Findings
Participants
Results for Research Question 1
Results for Research Question 2
Results for Research Question 3
Summary
Chapter Five: Recommendations
Discussion of Findings
Recommendations for Practice
Integrated Implementation and Evaluation Plan
Limitations and Delimitations
Recommendations for Future Research
Conclusion
References
Appendix A: Interview Protocol
Appendix B: LDP Post Event Survey
Appendix C: LDP Follow-up Survey
Appendix D: LDP Shared Accountability Model Balanced Scorecard
List of Tables
Table 1: Knowledge Influences
Table 2: Motivational Influences
Table 3: Organizational Influences
Table 4: Data Sources
Table 5: Research Questions Alignment
Table 6: Composition of Sample and Data Use-Related Responsibilities
Table 7: Participant by Experience and Role
Table 8: Use of Measurement Method by Program Element
Table 9: Knowledge Influences – Participant Interview Results
Table 10: Knowledge Assets and Gaps
Table 11: Outcomes, Metrics, and Methods for External and Internal Outcomes
Table 12: Critical Behaviors, Metrics, Methods, and Timing for Evaluation
Table 13: Required Drivers to Support Critical Behaviors
Table 14: Evaluation of the Components of Learning for the Program
Table 15: Components to Measure Reactions to the Program
List of Figures
Figure 1: Clark and Estes (2008) Gap Analysis Process
Figure 2: Conceptual Framework Model
Chapter One: Overview of the Study
This study addresses the problem of practice concerning the lack of data use to measure
the effectiveness of leadership development programs. While 44% of organizational leaders
surveyed identified increasing the effectiveness of training as one of their top two priorities, only
10% to 20% of organizations evaluate their leadership development programs (Avolio et al.,
2010). Researchers found that many training initiatives begin without clear alignment with the
strategic or business goals of their organization (Barnett & Mattox, 2010) or omit levels of
measurement, resulting in the absence of key data for decision making (Kirkpatrick, 2006). This
problem is important to address because human resources (HR) leaders identified developing
leadership talent for organizational growth as their top problem (Avolio et al., 2010), yet
organizations do not measure the effectiveness of their training programs in a way that defines
and demonstrates success (McKinsey & Company, 2014). As such, leaders lack sufficient data to
provide feedback on performance, identify interventions, and enable other important decision-
making (Marsh, 2015). Therefore, talent development leaders must have a holistic, theory-based
approach to assessing and improving their leadership training programs.
Background of the Problem
Recent estimates suggest that although a majority of funds in organizational training
budgets are allocated to leadership training, few organizations believe their programs are highly
effective, and most lack the information needed to make appropriate improvements (Lacerenza
et al., 2017). Research shows that while organizations continue to identify developing leaders as
a top priority, few consistently measure training programs for effectiveness (Barnett & Mattox,
2010) or create action plans for improvement (Association for Talent Development, 2020), and
many struggle to support employees' use of data (Marsh, 2015). Executives surveyed by
LinkedIn Learning stated talent was a top priority (81%) and learning and development was a
necessary benefit to employees (90%), yet barely half assessed skills gaps in critical areas such
as leadership (Association for Talent Development, 2018).
perceived value of training combined with high levels of monetary investment, leadership
development coordinators face an increased need to produce a data-driven measurement strategy
to inform decision-making (Avolio et al., 2018).
Research reveals that although leaders have access to measurement instruments
(Kirkpatrick & Kirkpatrick, 2016), technology and resources to gather feedback, and data for
analysis (Marsh, 2015), organizations are not consistently building comprehensive measurement
strategies using a holistic, theory-based definition for effective leadership or demonstrating value
and impact to the business. Companies frequently emphasize individual learners' demonstrated
knowledge, skills, and abilities when evaluating program performance (Barnett & Mattox, 2010),
even though effective leadership development programs educate leaders to coordinate with
teams, build commitments, and develop social networks to integrate their organizations, not
simply to demonstrate leadership skills (Hall et al., 2014). Assessment that neither accounts for
the context in which competencies are applied nor incorporates environmental factors cannot be
considered a measure of effectiveness (Hall et al., 2014), yet HR leaders often forego
administration of competency-based evaluations (Thomas et al., 2019). Further, many
organizations today use training evaluations that are inadequate to determine whether training is
achieving its purpose (Kirkpatrick, 2016) and that may not determine whether behavior change
is the result of training or another source (Kirkpatrick, 2006). The lack of a comprehensive
measurement strategy can create barriers to holding leaders accountable for meeting program
objectives and can set up social contingency models (Hall et al., 2017) in which what
participants and program leaders are held accountable for rests on subjective interpretation
rather than objective mechanisms. Therefore, this study seeks to understand how a
manufacturing company that embraces and effectively utilizes data for decision-making across
the vast majority of its operations employs such behavior in this context, where it is less
transparent within the organization.
Purpose of the Study
The purpose of this study is to understand whether key leadership development program
leaders at Century Manufacturing, Inc. use data to inform decisions; if so, how and to what
extent data are used; and if not, what barriers exist and how those barriers impede the
organization from effectively achieving its goals. The study will also necessitate an
understanding of the accountability relationships required to ensure program goals are met.
Three research questions guide the study:
1. How are LDP coordinators collecting and utilizing data within each element of the
leadership development program?
2. Do systemic barriers exist that prevent LDP coordinators, managers, and executives
from using data to make programmatic decisions? If so, how?
3. If data are available, how are they used to identify, define, and manage the
accountability relationships required for the LDP and its participants to be successful?
Significance of the Study
According to Kirkpatrick (2016), companies should invest in evaluating their most
organizationally important or costly training programs, which include leadership development
programs. Further, without a clear measurement strategy aligned to organizational goals,
companies risk not only a poor return on investment but also insufficient resources to execute
their priorities (Barnett & Mattox, 2010). Therefore, talent development leaders must have a
results-driven approach to assessing and improving their leadership training to hold stakeholders
accountable for performance. Through an examination of the use of data at Century
Manufacturing, Inc., the study can identify what organizational enablers of, or barriers to, the
on-the-job application of data analysis exist (Clark & Estes, 2008). Further, given the data-driven
culture of Century Manufacturing, Inc., the study could provide a unique ontological lens
through which to view results.
Overview of Theoretical Framework
This dissertation will use Bandura's (2012) social cognitive theory, which asserts that
humans function as part of social systems, and that personal agency and social structures are
therefore inherently linked (Bandura, 2005). An assumption underlying the problem
of practice is that talent development organizations do not establish a culture that fosters the use
of data for decision making, which can establish structural barriers that result in insufficient
resources to execute their learning strategy priorities (Barnett & Mattox, 2010). This research
seeks to explore what social conditions and institutional practices serve as enablers to individual
agency using the lens of Bandura’s (2005) social cognitive theory.
Social cognitive theory is appropriate to examine the problem of practice because
effective learning is not just demonstrated by measuring an individual’s knowledge, abilities, and
behavior alone, but also by factoring in the dynamic context of social interactions and
experiences within which they occur (Bandura, 2012). Effective leadership development
programs educate leaders to coordinate with teams, build commitments, and develop social
networks to integrate their organizations, not simply to learn leadership skills (Hall et al., 2014).
As such, talent development leaders require an assessment approach, grounded in sound theory
and practice, that accounts for the dynamic nature of effective leadership development. Social
cognitive theory can help examine the elements within the person, his or her behavior, and
environment that will optimize learning and application of leadership development knowledge
and skills. Often, the emphasis is placed on an individual learner’s demonstrated knowledge,
skills, and abilities when measuring the effectiveness of a leadership development training
program (Barnett & Mattox, 2010). However, when training lacks the context in which
competencies are applied, truly effective leadership skills cannot be developed, nor can
environmental elements be factored into the assessment of effectiveness (Hall et al., 2014).
Further, individual factors, such as talent, self-efficacy, motivation, and developmental readiness,
need to also be considered when evaluating the effectiveness of training (Avolio et al., 2010).
Social cognitive theory provides a lens not only to assess for these gaps but also to better inform
talent development program leaders of the right individual and programmatic interventions with
which to proceed.
Methodology
This study will use a qualitative methodology to seek meaning and understanding of the
processes that exist for the use of data-driven decision making within the context of the unique,
complex social system of the LDP (Creswell & Creswell, 2018). The researcher will conduct
open-ended, semi-structured interviews of participants, as well as analyses of documents related
to targeted high-potential LDPs. This methodology supports a phenomenological approach,
which provides a detailed description of human experiences and meaning and uses specialized
methods of participant selection, data collection with an interview component, and systematic
data analysis in assembling a final report (Creswell & Creswell, 2018). The interviews will allow
for a better understanding of how LDP coordinators collect and utilize data for decision-making,
as well as gather insight into the unique perspectives of those overseeing the multifaceted nature
of the programs regarding any barriers and enablers.
The research design will incorporate multiple forms of data collection (interviews,
surveys, and document reviews) and keep a detailed audit trail of notes to strengthen validity and
credibility (Lincoln & Guba, 1985). Lincoln and Guba (1985) state that researchers can
strengthen confirmability by keeping detailed fieldnotes and other data to trace back to original
sources. In addition to multiple forms of data, the researcher can also utilize peer researchers
during interviews and data analysis to increase credibility.
Definition of Terms
For the purposes of the study, key concepts integral to understanding the problem are
data-driven decision-making and effective leadership development programs.
• 360 assessment is a measurement tool widely used to assess individual leader
performance. The tool collects perceptions of a leader’s performance from self, peers,
managers, and direct reports across specific skills, behaviors, or practices (Heuston et al.,
2021). The data collected across observer groups is valuable in guiding leadership
development activities for LDP participants.
• Data-Driven Decision Making occurs when leaders collect, organize, and analyze data to
become information, and then combine it with stakeholder understanding and expertise to
become actionable knowledge (Marsh, 2015). Data-driven organizations would possess a
clear measurement strategy, exercise efficient decision-making capability, set
performance goals, and use data to create accountability relationships amongst all
stakeholders (Barnett & Mattox, 2010; Marsh, 2012).
• Effective Leadership Development Programs (LDPs) educate leaders to coordinate with
teams, build commitments, and develop social networks to integrate their organizations,
not simply to learn leadership skills (Hall et al., 2014). Effective LDPs would treat
learning as a social cognitive process where a network of activities, people, and
experiences support the participant’s development (Bandura, 2005; Ling et al., 2018).
Program leaders would not only measure each component of the network but also the
relationships between them to form insights on drivers of individual and program
performance (Ling et al., 2018).
Organization of the Study
This dissertation is organized into five chapters. Chapter One introduces the study’s
background and methodology, including the statement and context of the problem, the purpose
and importance of the study, and the research questions and theoretical framework. Chapter Two
is a literature review of the problem that presents the case for measurement, examines existing
measurement strategies for leadership development training programs, and presents a conceptual
framework for the study. Chapter Three details the methodology for research design, including
the data collection approach, participant selection, research settings, instruments, and data
analysis. Considerations associated with validity, reliability, ethics, and overall constraints are
also discussed. Chapter Four contains the study’s results and a discussion of key findings.
Chapter Five presents recommendations for closing gaps in knowledge, motivation, or
organizational elements identified in the study’s results, limitations of research, and possible
areas for future research.
Chapter Two: Review of the Literature
The purpose of this literature review is to uncover current strategies and best practices for
measuring the effectiveness and business impact of leadership training. According to research
(Woltring et al., 2003), only 20% of training programs evaluate how new skills are applied on
the job following training, and those that are successful in demonstrating training effectiveness
use an approach that evaluates employee skill development, behavior change, and alignment to
business needs (Bourda, 2014). Leadership development typically involves the identification of
the core competencies needed for high-level performance in a specific position, assessment of
the extent to which a particular leader possesses those core competencies, and creation of
specific developmental opportunities to match the requirements of the competency (Achlison et
al., 2019). This review focuses on demonstrating criteria for program effectiveness, existing
measurement frameworks, and how accountability relationships contribute to sustainability.
Further, it investigates what elements need to exist to drive the effective use of data for decision
making as well as barriers and challenges organizations experience. This chapter presents
literature and research related to the key elements of knowledge, motivation, and organization
within the research’s conceptual framework and narrows the focus on a set of research-based
conditions for success.
Literature Review
This literature review section focuses on how and why training programs should be
constructed upon leadership skills and behaviors aligned with specific organizational needs,
including mission, vision, value, and strategy, as well as how operating with a view of long-term
interests of stakeholders makes for more effective decision making and provides motivation for
change. The review begins by exploring challenges to creating adequate definitions and criteria
for leadership and program efficacy, both considered critical foundational elements of
measurement strategies. Next, it examines the importance of evaluation and the sparsity of
research studying evaluation usage across organizations. Finally, the review studies
the role of accountability in establishing relationships needed to produce and sustain effective
program operations.
Leadership and Program Effectiveness
A primary challenge to measuring leadership and program effectiveness is the lack of a
single, consistent definition for either construct (Avolio & Hannah, 2008; Yukl et al., 2002).
Leadership development coordinators and content developers are often left on their
own to navigate multiple taxonomies, inconsistent term definitions, and multiple definitions for
the same word (Yukl et al., 2002). As many as 65 different classification systems have been
created to define leadership (Northouse, 2016). As such, many organizations take the approach
of placing individuals into positions of leadership based on technical skills (Fuqua & Newman,
2005) or selecting them based on personality traits or characteristics (Northouse, 2016), which
does not mean they are prepared for leadership. When organizations do seek structure and
definition in leadership development, training coordinators can become overwhelmed with
content produced by over half a century of studies in leadership. In 2017 there were 31,339
papers published about leadership styles, compared to just 499 in 1960 (Wilkinson, 2018).
For training to be successful, organizations need to construct a definition and approach to
leadership that clearly defines what skills leaders are expected to learn, requires leaders to
demonstrate behavior change through the application of knowledge, and connects behavior
change to business performance improvement. This section will discuss why a clear definition
around leadership is important for program effectiveness, how and why a shift has occurred from
traditional, bureaucratic models to holistic, systems-based approaches to defining leadership, and
how the reviewed literature narrows the volume of disparate and often conflicting content into a
central approach to conceptualizing leadership.
Importance of a Clear Definition
A consistent set of leadership skills and behaviors aligned with the needs of the business
creates clear expectations for the behaviors needed across the organization. A clear definition of
leadership, one that engages all stakeholders, including customers, establishes a strong
connection between organizational mission, vision and goals, and program elements. By doing
so, talent development coordinators cultivate an organizational definition of leadership and a
development approach aligned to business needs (Senge, 2005). Training can be successful when
it teaches core competencies (Austin et al., 2011) and addresses common behavioral components
such as goal setting, resilience, and integrity (Alina, 2013; Silva, 2016). Further, leaders work to
ensure management and mission are united with a commitment to ongoing stakeholder inclusion
as business landscapes inevitably evolve (Wheeler & Sillanpää, 1998).
A clear definition of effective leadership also enables the measurement of defined
outcomes and demonstrates the value of training back to the business. While organizations
continue to identify critical skills needed by leaders and invest millions of dollars in leadership
development programs (Training Industry, 2013), prioritization of measuring program and skill
effectiveness remains low (Reinelt et al., 2002). With a defined set of outcomes aligned with a
clear organizational definition of leadership, program coordinators are better positioned to
incorporate evaluation tools such as competency-based elements that measure knowledge of
compliance regulations or determine areas of expertise required for leaders within the
organization (Barnett & Mattox, 2010). Coordinators can also use surveys and
interviews to gather data (Yukl et al., 2002).
Ogidan (2014) conducted a phenomenological study demonstrating a linkage between
leadership development programs and employee retention. Twenty business leaders who
participated in leadership development programs were interviewed to determine how
understanding expected behaviors, improving critical thinking, and promoting
organizational values were all elements of an effective leadership program. Leaders stated that
leadership skills addressed within the training strengthened their ability to communicate vision,
developed people management skills, improved employee engagement, and reduced turnover.
The researcher drew upon many of the leadership skills identified as part of the training to
construct interview questions, which facilitated the connection between specific, targeted
behaviors and outputs impacting the business. In identifying outcomes to measure, Stiehl et al.
(2015) also recommend that training coordinators consider environmental factors unique to the
organization that serve as situational variables that could influence training effectiveness, such as
supervisory, peer, and organizational support.
Shift in Approaches to Defining Leadership
An analysis of leadership materials written between 1900 and 1990 found more than 200
different definitions for leadership and how the definition evolved over nearly a century
(Northouse, 2016). The research, conducted by Rost (1991), reveals a shift from an emphasis on
control and centralization of power in the first three decades of the 20th century to a gradual
introduction of leading through influence that culminates in viewing leadership as a process
whereby an individual influences a group of individuals to achieve a common goal (Northouse,
2016). This group theory, with its use of relationships that develop shared goals and its influence
on group effectiveness, emerged in the 1950s and infused organizational behavior theory starting
in the 1970s. Leadership research exploded in the 1980s, driving greater academic and public
awareness but also adding volumes of disparate views that intensified debates (Northouse, 2016).
While research does not produce a single view on leadership, scholars are unified in a belief that
leadership is not a fixed set of traits or driven by one universal theory but rather a process that
requires adaptability to respond to changing global landscapes and generational differences.
Traditional, bureaucratic models of leadership place limitations on leaders and
organizations, causing both to be less agile and more reliant on a central authority for direction
and confirmation (Fuqua & Newman, 2005). The bureaucratic approach fixes power and
authority within a hierarchy created at one point in time that remains regardless of context
(Fuqua & Newman, 2005). Shifting towards a definition of leadership based upon
systems theory will enable organizations to develop a higher quantity and quality of leaders. A
systems-theory-based model emphasizes collaborative processes, shared authority, and local
leadership. It redefines organizational leadership more broadly to engage and embed leaders at
all levels and across teams. Authority evolves within a collaborative context that focuses leaders
on leveraging the collective expertise and decision-making capability of the team. Fuqua and
Newman (2005) contend that the systems-theory approach better suits the dynamic nature of
business environments and allows organizations to develop leaders more effectively rather than
rely on an approach to leadership that selects leaders based on existing technical expertise.
By developing a holistic definition of leadership effectiveness, organizations can increase
the impact of training and glean insights that better inform decision-making (Kragt & Guenter,
2018). According to Bandura (2005), effective leadership is conceptualized as a complex process
involving multiple dimensions that motivate a diverse group of individuals in the completion of
tasks and achievement of a common goal. Leadership is not confined to an innate set of
characteristics, or defined by completion of a set of activities, but is rather reflective of a
perpetual learning and application experience that is multilateral, distributed, and contextual
(Bolman & Deal, 2017). Effective leaders inspire people through clear goals and vision (Alina,
2013; Bennis, 2009) and influence employee behavior to deliver results (Moores, 2013; Silva,
2016).
Scholars support a definition of effective leadership that includes the capability to better
enable the organization to respond to change. In a study conducted by Austin et al. (2011),
participants reported that training improved their ability to coach employees and implement
organizational vision, which facilitated change at individual and organizational levels. Yukl et al.
(2002) found in two studies surveying over 1,000 subordinates and direct reports of managers that
task-oriented, relations-oriented, and change-oriented behaviors were all relevant for
demonstrating effective leadership in different situations. Further, change-oriented behavior,
defined as identifying external threats and opportunities, envisioning new possibilities, proposing
innovative strategies, and encouraging innovative thinking by followers, correlated strongly with
a subordinate rating of manager competence. Training programs developed with a purposeful
and meaningful conceptual framework that includes task, relations, and change behaviors are
better positioned to reinforce an organizational definition for leadership that results in higher
employee satisfaction and organizational performance.
Conceptualizing Leadership
Although research does not show a single list of skills (Austin et al., 2011), it does
provide program coordinators several theories and models to incorporate into their design that
can inform a framework that allows leaders to evolve in response to varying environmental
conditions. Increased scholarly and popular works examining leadership emerged in the 1980s,
producing a high volume of classification approaches to leadership (Northouse, 2016). This
opened the door for scholars in the 1990s and into the early 21st century to produce research
focused on understanding the process of leadership, leading the way to practical, yet
academically robust leadership theories and models that training programs coordinators can use.
In his examination of the evolution and application of leadership theories, Northouse
(2016) creates a central definition of leadership by examining the major themes within a selected
group of approaches to leadership and how they can be used to develop effective leadership.
Northouse (2016) defines leadership as a process whereby an individual influences a group of
individuals to achieve a common goal. This definition aligns with Bandura’s (2005) social-
cognitive view of leadership as a complex process involving multiple dimensions that motivate a
diverse group of individuals in the completion of tasks and achievement of a common goal.
Northouse (2016) asserts that three components emerge as central to the definition of leadership
across the various ways it is conceptualized: leadership is a process, leadership involves
influence, and leadership involves common goals.
In his comprehensive literature review, Northouse (2016) emphasizes how theory
informs the practice of leadership. He also includes descriptions of each
approach, including contributions and failures to understanding the leadership process, how each
approach can be applied, and measurement instruments that identify and assess critical
knowledge and behaviors within each leadership style. Of the robust set of 14 leadership theories
and concepts, the four approaches identified as emerging within the 21st century to contribute to
the process of leadership are authentic, spiritual, servant, and adaptive. Authentic leadership
focuses on leader use of compassion, consistency, and connectedness; spiritual leadership on
utilization of values, sense of calling, and membership; servant leadership focuses the leader on
followers' needs through the use of "caring principles"; and in adaptive leadership, leaders
encourage followers to adapt while confronting problems, challenges, and change (Northouse,
2016).
Authentic leadership approaches help to achieve positive organizational outcomes, which
include increased follower and organizational self-esteem and performance (Avolio et al., 2018).
A qualitative case study conducted by Dematthews and Izquierdo (2017) demonstrated how
authentic and social justice leadership practices drive effective practices and mitigate obstacles.
In the case study, a bilingual female principal used authentic leadership styles and actions to
create a more inclusive and socially just school along the U.S.–Mexico border, especially for
Mexican and Mexican American students classified as English language learners (ELLs). The
principal challenged school contexts and district politics that promoted distrust between teachers
and administrators. She effectively navigated a myriad of unjust policies and student
marginalization that had stymied the district. The principal strengthened relationships through
building trust, and nurtured, inspired, and motivated teachers and families (Dematthews &
Izquierdo, 2017). Through interviews conducted by Dematthews and Izquierdo (2017), teachers
and families described the principal as "active," "passionate," "engaging," "caring,"
“inspirational,” “so real,” and “down-to-earth” (p. 343). The district now has innovative
programs that meet the needs of all students, especially Mexican American English language
learners (Dematthews & Izquierdo, 2017).
Spiritual leadership plays a critical role in establishing the right organizational culture
that drives the ability to motivate individuals and teams. Milliman and Ferguson's (2008) study
grounded the leadership approach in a theoretical framework to understand how it impacts the
organization, to produce examples of spiritual leadership, and to provide an assessment to
measure leader effectiveness. Through interviews, Milliman and Ferguson (2008) identify key
elements of an entrepreneurial executive's leadership style and his impact on employees and
organizational efficiency. Case study findings show that individuals perform and advance in
their careers better as a result of the executive's capability to establish broad appeal of vision and
mission; to demonstrate and cultivate elements of altruistic love that include trust, forgiveness,
and compassion; and to create hope that produces perseverance, the setting of stretch goals, and
a high bar for excellence. Program evaluations included organizational data revealing over a 50%
reduction in employee turnover over three years (Milliman & Ferguson, 2008).
The servant leader is governed by creating opportunities within the organization that help
followers grow, is genuinely concerned with serving followers, ensures that followers achieve
their personal well-being, and is motivated not by power but by serving others
(Rachmawati, 2014). Rachmawati (2014) summarizes a breadth of research and analysis into a
set of six key characteristics for servant leadership, an increasingly popular approach to
leadership amongst companies and organizations (Northouse, 2016). The research dives deeper
into the leadership style by reviewing the literature to determine key servant leadership concepts,
as well as identify the various measures and assessment instruments available that leaders can
use to study their organizations. Through an extensive literature review, exploratory factor
analysis, and diverse sample sizes of over 4,700 people, Rachmawati’s (2014) analysis
determined that servant leadership is measured by (a) empowering and developing people; (b)
humility; (c) authenticity; (d) interpersonal acceptance; (e) providing direction; and (f)
stewardship. For leadership development coordinators, particularly those who incorporate
servant leadership into their organizational definition for leadership, Rachmawati’s research
produces a guide to determine what skills program participants need to learn and demonstrate, as
well as what and how to include in program evaluation methods. For example, according to the
study, stewardship can be defined and measured as demonstrating behaviors that build
community, behaving ethically, and showing accountability. Additionally, leaders can be
considered effective at interpersonal acceptance through valuing people, facilitating emotional
healing, and practicing and encouraging forgiveness.
Adaptive leadership strengthens individual and team capability to respond quickly and
effectively to changing environments (Northouse, 2016). In this model, a leader is
conceptualized as one who assists people in confronting tough problems rather than as a savior
(Heifetz & Linsky, 2002). According to Northouse (2016), adaptive leaders step out of the fray
and find perspective, identify and analyze adaptive challenges, regulate distress, maintain
disciplined attention, give work back to the people, and listen and remain open to the voices of
people on the fringes. Achlison et al. (2019) present a case study demonstrating that training in
adaptive leadership skills increases university lecturers' ability to improve unit content in response
to changes impacting current engineering and business management courses. In an increasingly
complex, interdependent, and dynamic global environment, lecturers are being challenged to
operate with agility to not only adapt lecture content to ensure accuracy and quality but to also
continuously develop adaptive leadership competencies to remain agile. Achlison et al.’s (2019)
study produces a model that aligns adaptive leadership skills with the respective leadership
principle and behavioral competency. The model was applied to a total of 12 faculty and staff,
and results demonstrated the validity and reliability of the connection between the competencies
leaders need to identify new ideas for improving units in the industry and the defined adaptive
leadership competencies.
Northouse (2016) contends that to become an effective leader, one must be able to
examine, comprehend, and apply a robust set of approaches across elements of leadership
theories and models. When developing a definition and approach to leadership, training
coordinators can focus on four approaches (authentic, spiritual, servant, and adaptive) that
contribute to a concept of leadership that focuses on motivating others to achieve shared goals,
as well as acknowledge each leadership theory's potential role in informing and directing the
practice of leadership, depending on the organization's specific needs. To motivate others and
define success in the context of shared goals, leaders use compassion and connectedness
(authentic), establish a sense of calling and membership (spiritual), focus on followers' needs
(servant), and encourage followers to adapt to change (adaptive) (Northouse, 2016).
Seminal research conducted by scholars such as Northouse (2016) and Rachmawati (2014) can
enable training coordinators to explore all key aspects within the field of leadership within
sources designed to be consumable by educators and leadership program coordinators.
Importance of Evaluation
Data utilization and accompanying measurement approaches should reflect the dynamic,
multifaceted nature of leadership while also being credible and practical to administer and
interpret results (Northouse, 2016). While the study of training programs has been conducted for
years (Kahn, 1990), research is scant that focuses on training results and application of learning
back on the job (Austin et al., 2006; Schindler & Burkholder, 2016). Within Kaiser and
Curphy’s (2014) study of leadership development programs and their impact across the 20th
century, researchers found only 200 programs with evaluations or use of interventions. Of those
200 instances, nearly all lacked assessment of behavior changes, and evaluations focused only on
reaction and knowledge acquisition. Little evidence existed demonstrating the investment in
leadership training produced effective leaders or that skills were used within the workplace,
relying instead on participant feedback on whether they enjoyed training (Kaiser & Curphy,
2014).
Effective measurement approaches identify the knowledge, motivational, and
organizational barriers to the on-the-job application of learning (Clark & Estes, 2008). Strong
alignment with the needs of the business also enables coordinators to demonstrate a return on
investment (Barnett & Mattox, 2010). However, the Association for Talent Development
(American Society for Training & Development, 2015) reported that organizations devote little
effort to measuring and demonstrating training results despite millions of dollars invested in training.
Research further indicates organizations widely offer leadership training but lack clarity on how
participants apply new knowledge and skills (Black & Earnest, 2009; Chiaburu et al., 2013;
Chiaburu et al., 2010; Craun, 2014).
Research shows a dearth of evaluation methods available (Black & Earnest, 2009), and a
majority of talent development leaders find it difficult to demonstrate their programs' return on
investment (Craun, 2014; DeGrosky, 2013). Self-report tactics are common with leadership
training measurement (Solansky, 2010) but often lack components beyond satisfaction.
Measurement approaches inconsistently examine design factors impacting knowledge and skill
outcomes as well as behavioral outcomes (Ford et al., 2018).
Research literature provides additional examples of evaluation focused on cost-effective
approaches to training measurement. Black and Earnest's (2009) research study produced a self-
assessment approach for leadership training coordinators that focused on measuring outcomes.
Researchers deployed a Likert-scale-based instrument to nearly 200 participants that measured
the leadership program's targeted outcomes, produced data at the individual and organizational
levels, and informed decisions that impacted statewide leadership development programs. As a
result, the organization studied received data to inform decision-making using a practical,
scalable, repeatable approach.
In another example evaluating learning outcomes, Ford et al.'s (2018)
literature review included a study of 359 firms over 12 years that demonstrated the impacts of
training on business performance metrics such as increased productivity and profit growth.
Research showed that the amount of internal training investment over time was significantly
related to firm profit growth via the impact of that training on labor productivity (Kim &
Ployhart, 2014). Results further showed in both aviation and health-care settings that training led
to more effective team performance as measured by observer ratings. Like Black and Earnest's
(2009) study, Ford et al.'s (2018) synthesis of several meta-analytic studies and
recent empirical work produced data demonstrating that training investments were related to a
variety of important business outcomes and can contribute substantively to competitive
advantage.
Establishing Accountability
The program effectiveness data collected and analyzed, and the accompanying insights
produced, are necessary to determine whether expectations are being met, where enablers of and
barriers to meeting expectations exist, and which stakeholders are impacted (Clark & Estes, 2008).
Accountability establishes consequences, imposed by others, for not meeting expectations,
which enables the efficient operation of coordinated activities (Hall et al., 2017).
Effective use of measurement data can expose which contractual relationships need to be
examined: relationships that Hentschke and Wohlstetter (2004) define as driven by values,
decision rights, and information, and as existing between a service provider and a recipient of
that service who has the authority to recognize or discipline the provider for their actions.
Dubnick (2014) argues that relational accountability is foundational to any structural contract
capable of producing sustainable governance.
Dubnick (2020) identifies relationality, spatiality, temporality, ethicality, and constitutive
as the five common elements of the ontology of accountability. For leadership development
programs, relationality (Dubnick, 2020) is reflected through accountability relationships not only
with senior leaders expecting a return on investment but also with training attendees expecting a
positive, valuable learning experience. Further, within the workplace, those interacting with
leaders expect more effective employees able to guide the organization in delivering results. This
multi-level organizational accountability structure impacts spatiality, where formal reporting
conversations should occur within a structured business review, and individual-level
conversations require one-on-one feedback between student and mentor, manager, or other
support participants. Documented processes provide a similar structure to temporality, the third
ontological characteristic, and bind timeframes of accountability conversations (Dubnick, 2020).
From a social cognitive perspective, effective learning occurs within a dynamic context of social
interactions and experiences (Bandura, 2012), and accountability exists in similar terms as webs
of relationships, reflecting a social network approach (Hall et al., 2017). As such, effective
measurement strategies target all areas where accountability relationships are needed for
program operations to enable talent development leaders to assess a wide variety of program
elements and learners (Koohang et al., 2017).
Conceptual Framework
Maxwell (2013) argues that a conceptual framework is a system of concepts,
assumptions, expectations, beliefs, and theories that supports and informs research. It explains
the key factors, concepts, or variables of the study and the relationships among them. The study
intends to use the Clark and Estes (2008) model to assess gaps and diagnose the barriers to data
use. Figure 1 illustrates Clark and Estes’ (2008) gap analytic framework as a systematic process
used to identify whether leaders have adequate knowledge, motivation, and organizational
support to achieve program goals. The conceptual framework will also incorporate the
Kirkpatrick (2016) model for learning and talent measurement. The Kirkpatrick (2016) model
takes a results-driven approach to measuring the effectiveness of training and is widely accepted
as one of the most credible and practical methods for learning measurement (Bates, 2004).
Figure 1
Clark and Estes (2008) Gap Analysis Process
Clark and Estes’ (2008) gap analysis process examines social and behavioral research on
people, how they learn, and what motivates them to learn in order to diagnose issues and prescribe
solutions. It will be used to study individuals’ knowledge and skills related to theories and
models that define leadership effectiveness, as well as credible methods for measurement,
motivation to achieve goals, and organizational barriers such as lack of necessary resources and
missing or inefficient work processes. Figure 2 presents a model to assess gaps and diagnose the
barriers to the use of data as conceptualized within key theoretical principles of knowledge,
motivation, and organization that enable the data-driven decision-making process (Marsh, 2015).
Figure 2
Conceptual Framework Model
[Figure 2 depicts knowledge influences (conceptual, procedural, metacognitive), motivation
influences (active choice, persistence, mental effort, self-efficacy), and organization influences
(culture, outcomes, accountability mechanisms) converging on effective data utilization, set
within the context of leadership and program effectiveness, the measurement framework, and
accountability relationships.]
Knowledge
To effectively utilize data to measure program effectiveness, coordinators need
knowledge of what defines leadership and program effectiveness and of theory-based methods
for measurement and data analysis. In addition to factual knowledge (facts and information),
Krathwohl (2002) defines knowledge to include conceptual, procedural, and metacognitive
knowledge, the last being self-awareness of one's own understanding.
(Costanza et al., 2016; Northouse, 2016; Rachmawati, 2014) establishes a conceptual definition
for effective leadership as a complex set of theories and processes with multiple dimensions used
in real situations and includes a collection of instruments that can assess leaders’ ability to
demonstrate and apply knowledge. Anderson and Krathwohl's (2001) revision of Bloom's
taxonomy further establishes conceptual and procedural knowledge categories alongside
cognitive processes spanning remembering, understanding, applying, analyzing, evaluating, and
creating. Metacognitively, each
stakeholder needs to understand their impact on the organizational culture and the influence on
creating organizational change (Schein, 2004).
Table 1 shows the knowledge influences and knowledge types associated with effectively
designing and measuring leadership development programs.
Table 1
Knowledge Influences

Organizational mission: The mission of the Leadership, Learning and Organizational Capability
(LLOC) is to develop and deliver training programs that produce the talent needed to achieve
business results.

Organizational performance goal: To ensure training programs effectively position the
organization to meet its mission, LLOC will execute a measurement strategy, including business
impact metrics, that evaluates all leadership development programs by 2022.

Knowledge influence 1: Managers need to demonstrate knowledge of effective leadership
theories. Knowledge type: declarative (conceptual). Assessment: responses to interview
questions to gauge knowledge.

Knowledge influence 2: Managers need to create an effective learning measurement strategy,
including instruments, scorecards, and other data analysis tools. Knowledge type: declarative
(procedural). Assessment: responses to interview questions to gauge knowledge.

Knowledge influence 3: Managers need to be proficient in analyzing their existing skills to
conduct data analytics, make informed decisions, and lead organizational change. Knowledge
type: metacognitive. Assessment: responses to interview questions to gauge knowledge.
Talent management leaders must also possess the knowledge and skills needed to drive
organizational culture change. In large organizations such as Century Manufacturing, Inc.,
culture shifts can be slow. Leaders will need to motivate stakeholders across their organization to
work towards a common vision (Burke, 2012), which can create and sustain a culture that
promotes measurement and data-driven decision-making. With the right knowledge, skills, and
abilities, leaders can also produce the appropriate behaviors needed to motivate change
(McGee & Johnson, 2015).
Motivation
Motivation impacts individuals' behaviors and actions, leading to favorable or
unfavorable outcomes (Elliott et al., 2017). The behavioral approach to motivation (McGee &
Johnson, 2015) links the successful achievement of goals to identifying the knowledge, skills,
and abilities needed to produce the appropriate behaviors. The research's conceptual framework
includes active choice, persistence, and mental effort, which Clark and Estes (2008) argue are the
three key concepts that comprise motivation, and incorporates theories of effort, self-efficacy,
and influences on learning (Dweck, 2013).
Table 2 lists motivational theories and influences required to drive organizational change.
Table 2
Motivational Influences

Motivational influence: Leaders desire achievement and competence towards specific outcomes
and experiences.
  Motivational type: Expectancy value theory
  Assessment: Responses to interview questions gauge motivation.

Motivational influence: Leaders believe in their capability to lead learning measurement and
organizational change.
  Motivational type: Self-efficacy
  Assessment: Responses to interview questions gauge motivation.

Motivational influence: Leaders value data-driven decision-making and the utility of adapting
their current measurement approaches.
  Motivational type: Utility value
  Assessment: Responses to interview questions gauge motivation.
Examining these influences will clarify how motivation shapes the achievement of
stakeholder goals and the use of data to drive organizational change (McGee & Johnson, 2015).
Organization
Research demonstrates that the right organizational culture is needed to motivate
followers to accomplish goals (Schein, 2004). Schein (2004) outlines several reinforcement
mechanisms that leaders can use to embed and transmit culture and draws attention to the
importance of consistent, systematic measurement of key initiatives such as management
development. A culture that embraces the use of data to guide decision-making is a critical
enabler but may not be enough to sustain use (Marsh, 2012). To drive sustainability, Shahin and
Zairi (2006) advocate for the use of accountability mechanisms within a measurement strategy,
such as visible artifacts (Schein, 2004) like a balanced scorecard that monitors both current
performance and improvement efforts.
Kirkpatrick and Kirkpatrick (2016) argue that strong learning measurement cultures
define the external and internal outcomes targeted as a result of training, the metrics and methods
used to track them, and the critical behaviors, goals, and objectives intended to drive overall
program performance. This equips leaders at all levels with comprehensive, credible data to
guide the difficult conversations that often arise when holding people accountable (Lencioni, 2007).
Table 3 shows the cultural and environmental influences for leadership development
coordinators.
Table 3
Organizational Influences

Organizational influence: Organizational leaders embrace the use of data to define and measure
program performance, as well as provide autonomy for decision-making.
  Organizational type: Culture
  Assessment: Responses to interview questions gauge influence.

Organizational influence: The organization has external and internal outcomes defined as
measures of program performance.
  Organizational type: Outcomes
  Assessment: Responses to interview questions gauge influence.

Organizational influence: The organization uses mechanisms at all levels, such as visible
artifacts, to monitor both current performance and improvement efforts.
  Organizational type: Accountability
  Assessment: Responses to interview questions gauge influence.
The conceptual framework follows Maxwell’s (2013) philosophy to qualitative research
by using a dialectical approach, which combines ontological realism and epistemological
constructivism, that focuses on expansion and deepening of one’s understanding, rather than
simply confirm. Using the Clark and Estes (2008) gap analysis framework, the conceptual
framework integrates the key theoretical principles within the knowledge, motivation, and
organizational area that enable or hinder data use to measure leadership development program
effectiveness. The approach can uncover actionable and relevant improvement opportunities to
close gaps in stakeholder knowledge and skills related to leadership effectiveness theories and
29
models and measurement strategies, motivation to achieve goals, and organizational culture and
resources.
Summary
By developing and executing a holistic measurement approach, organizations can
increase the effectiveness of training and glean insights that better inform decision-making
(Kragt, 2018). The review of literature suggests that for organizations to effectively execute data-
driven decision-making within leadership development programs, three core elements must
exist: clear criteria for leadership and program effectiveness, a research-based framework for
measurement, and well-defined accountability relationships. To study the extent to
which the target organization, Century Manufacturing, Inc., has the right knowledge, motivation,
and organizational elements needed to establish the core elements within their programs, the
researcher will use a conceptual framework grounded in Clark and Estes (2008) gap analytic
framework. The use of the Clark and Estes (2008) gap analytic framework enables the researcher
to construct a study approach that encompasses data collection on a wide range of barriers and
enablers. This holistic method can enhance the validity and credibility of the study (Lincoln &
Guba, 1985) through the use of strategies to identify organizational barriers to the application of
data, such as lack of resources and missing or inadequate work processes, as well as validation of
whether people possess the right knowledge, skills, and motivation to use data. Clark and Estes’
(2008) gap analytic framework also incorporates the use of individual interviews in support of
this study’s qualitative methodology.
Chapter Three: Methodology
The purpose of this study was to understand whether key leadership development
program leaders at Century Manufacturing, Inc. use data to inform decisions. An assumption
underlying the problem of practice was that talent development organizations do not establish a
culture that fosters data for decision-making. In the context of social cognitive theory, Bandura
(1977) contends that a person's cognition, behavior, and environment work in dynamic and
reciprocal interaction. Therefore, understanding knowledge, motivational, and organizational
conditions tied to a leader's use of data can provide insight into enablers and barriers to meeting
outcome expectations.
The study used a qualitative methodology to seek meaning and understanding of the
knowledge, motivational, and organizational conditions that exist for the use of data-driven
decision-making within the context of LDPs (Creswell & Creswell, 2018). This chapter
addresses the research’s design and is divided into eight sections. The first section will present
the research questions. An overview of the design is described in the second section. The third
section addresses the research setting. The fourth section acknowledges the researcher as the
research instrument. Specific data sources, including methods, participants, instrumentation, and
procedures used to collect and analyze data, are addressed in the fifth section. The sixth section
presents activities to ensure validity and reliability. Finally, the seventh section addresses ethical
concerns, and the eighth section overviews the limitations.
Research Questions
The research project explored what social conditions and institutional practices serve as
enablers to individual agency using the lens of Bandura's (2005) social cognitive theory. As
such, the research questions address socio-structural and personal determinants of data use at
Century Manufacturing, Inc. They also necessitate an understanding of the accountability
relationships required to ensure program goals are met.
1. How are LDP coordinators collecting and utilizing data within each element of the
leadership development program?
2. Do systemic barriers prevent LDP coordinators, managers, and executives from using
data to make programmatic decisions? If so, how?
3. How is data used to identify, define, and manage the accountability relationships
required for LDP participants to be successful?
Overview of Design
This study used a qualitative methodology to seek meaning and understanding of existing
conditions through the collection and analysis of empirical data in the form of words, text,
actions, or interactions between people or artifacts (Creswell & Creswell, 2018). A qualitative
study is the preferred research method when rich descriptions of phenomena are desired
(Sandelowski, 2000). Because the purpose was to describe whether leadership development
program leaders use data to drive decision-making, including individual, social, and
environmental characteristics, a qualitative study was an appropriate approach. Qualitative
research uses an inductive approach that emphasizes the development of insights and
generalizations out of a rich set of data to reach an understanding (Creswell & Creswell, 2018).
A qualitative approach was needed to understand each unique human experience of collecting
and utilizing data, perceived systemic barriers preventing the use of data to make programmatic
decisions, and the network of accountability relationships that influence behavior.
The research design incorporated open-ended, semi-structured interviews of participants
and document analysis related to targeted high-potential LDPs. This methodology supports a
phenomenological approach, which provides a detailed description of human experiences and
meaning. It uses specialized methods of participant selection, data collection with an interview
component, and systematic data analysis to assemble a final report (Creswell & Creswell, 2018).
Table 4 presents how interviews and document reviews were used as data sources for each
research question.
Table 4
Data Sources

RQ1: How are LDP coordinators collecting and utilizing data within each element of the
leadership development program?
  Data sources: Interviews; document reviews
RQ2: Do systemic barriers exist that prevent LDP coordinators, managers, and executives from
using data to make programmatic decisions?
  Data sources: Interviews
RQ3: How is data used to identify, define, and manage the accountability relationships required
for LDP participants to be successful?
  Data sources: Interviews; document reviews
Data collection methodology also supports the study's paradigm of inquiry,
constructivism. Creswell and Creswell (2018) describe the constructivist worldview as one in
which individuals construct understanding through interactions within the world in which they
work and live. Interviews and document reviews construct meaning by capturing participants'
perceptions of how LDP leaders collect and utilize data, what barriers exist that prevent leaders
from using data to make decisions, and the nature of accountability relationships. Multiple
participants and artifacts within the data sources capture the dynamic situations needed to
establish an ontology of multiple, socially constructed realities (Bandura, 2005).
Finally, a detailed audit trail of notes was incorporated to address issues of validity and
credibility. According to Lincoln and Guba (1985), researchers can strengthen confirmability by
keeping detailed fieldnotes and other data that trace back to original sources. In addition to
multiple forms of data, the researcher can also utilize peer researchers during interviews and data
analysis to increase credibility.
Research Setting
Century Manufacturing, Incorporated (Inc.) is one of the world's largest publicly traded
manufacturing companies and has over 140,000 employees worldwide. The company's corporate
headquarters is located in the United States, and it has locations in the United Kingdom,
continental Europe, Asia, and Australia. The company aspires to be the best in aerospace and an
enduring global industrial champion. The eight company-defined organizational goals required
to achieve this aspiration are (a) market leadership; (b) top-quartile performance and returns; (c)
growth fueled by productivity; (d) design, manufacturing, and services excellence; (e)
accelerated innovation; (f) global scale and depth; (g) best team, talent, and leaders; and (h) top
corporate citizen. Goals are measured through an operational excellence balanced scorecard that
includes the six key performance indicators of (a) product and services safety; (b) quality; (c)
workplace safety; (d) schedule; (e) cost; and (f) customer. All eight organizational goals are
interdependent and integral to achieving the mission. Having the right leaders in place is
essential for ensuring accountability for how all members of the organization meet the targets
(Avolio et al., 2018).
While the company has an established, enterprise-wide learning organization, many
business and functional organizations own a stake in ensuring Century Manufacturing, Inc.
develops leaders effectively.
Organizational Capability (LLOC), works closely with senior leaders, corporate strategists, and
the Human Resources (HR) department to develop enterprise strategies and deliver training
programs that produce the talent needed to achieve business results. In addition to leadership
training, LLOC is also responsible for providing workforce development, organizational
capability, and cultural enablement products and services to the businesses, functions, and the
company's customers and partners. The company also owns and operates a corporate leadership
center located in the Midwest region of the United States, which serves 20,000 employees a year,
and offers a wide variety of week-long leadership training programs and hosts executive
leadership development events.
Multiple programs also operate under different internal organizations, which creates a
decentralized approach to leadership training development and implementation and distributed
oversight of performance. The company's largest business unit employs over 60,000 employees
worldwide and divides into six functional organizations, some of which operate their leadership
development programs. For example, rotational leadership programs exist independently within
the business, engineering, IT, and HR functions. Finally, the company’s largest labor union
operates development programs for its 600,000 currently employed and retired members and
provides up to $3,000 per year for external training opportunities.
The Researcher
An assumption underlying the problem of practice was that talent development
organizations do not establish a culture that fosters data for decision making, which can create
structural barriers that result in insufficient resources to execute learning strategy priorities
(Barnett & Mattox, 2010). Given that the purpose of the interviews was to learn without passing
judgment, the researcher ensured this assumption did not lead to confirmation bias influencing
data collection. Pilot interview sessions and adherence to the interview questions within the
guide helped keep the exercise focused on learning (Weiss, 1994). The guide, along with a script
outlining the purpose of the research and emphasizing that feedback is confidential, helped
mitigate social desirability bias among interviewees.
The researcher's job roles as a program management leader within global learning and
development and business analytics organizations positioned her to understand and demonstrate
the value of data to inform strategy, but also raised a potential bias of thought. Previous roles
created a predisposed expectation that learning and development leaders should possess the
knowledge, skills, and mindsets to support the successful use of data-driven leadership
development strategies. The researcher's use of a phenomenological methodology decreased the
risk of bias influencing the study by focusing the researcher's lens on a description of human
experiences and meaning rather than a judgment of its quality (Creswell & Creswell, 2018).
Finally, the researcher possesses a passion for seeking diverse perspectives to understand
complex issues before seeking solutions. As a mixed-race female, the researcher's diverse
heritage drives a personal norm of seeking a wide range of perspectives and experiences, even if
they differ from her own, to approach problem-solving holistically. Professionally, this translates
into a career path filled with roles that constantly challenge embedded processes, tools, and other
structures, determined to break down barriers and integrate diverse teams. Job roles and positions
provided experiences across various teams and organizations, such as information technology
(IT), management consulting, digital business analytics, and manufacturing and engineering.
Data Sources
This section will address each method of data collection, participants and
instrumentation involved in the study, and data collection and analysis procedures.
Interviewing
Interviews of participants used a semi-structured approach. Creswell and Creswell (2018)
and Bandura (2005) share the assumption that meaning-making is dynamic and that participants'
perceptions of situations establish an ontology of multiple, socially constructed realities. A
semi-structured approach allowed for an examination of how factors interact to create
understanding within the world in which individuals work and live (Creswell & Creswell, 2018).
Open-ended, semi-structured interviews of seven to 10 participants lasted 60 to 90 minutes. The
interview instrument included questions addressing knowledge, motivational, and organizational
elements.
Document Review
The research intended to include analyses of documents related to targeted high-potential
LDPs that are identified through interviews. Although the researcher requested artifacts from
interview participants, none were submitted. Artifacts solicited included documents describing
development programs' structure and objectives, measurement instruments such as surveys or
assessments, scorecards that include metrics and performance, and documentation detailing
decisions and interventions identified as a result of program performance.
Participants
Participants were purposefully sampled for the study (Merriam & Tisdell, 2016) and
comprised organizational actors currently engaged in a targeted LDP, with perspectives on data
use for decision making that influence the LDP. The scope of the study sought to include at least
two to three enterprise- or functional-level programs. To support a phenomenological research
approach (Creswell & Creswell, 2018), participants were distributed amongst roles that hold
ownership or responsibility for leadership development programs or evaluate program
participants' performance. These roles include organizational sponsors of training, leadership
development program coordinators, and managers of program participants. The researcher
utilized established relationships with the LDP coordinators to recruit the research participants.
As the high-potential LDPs have a limited number of participants, the target of seven to 10 LDP
research participants represented the targeted population. Further increasing credibility in the
sampling strategy is the inclusion of multiple stakeholder perspectives across program elements
and organizational layers, which allows for triangulation of data (Lincoln & Guba, 1985). The
study also targeted the inclusion of multiple LDPs, strengthening the transferability of results.
Instrumentation
Knowledge, experience, and behavior questions were included in the interview guide
(Appendix A) because the interviews intended to learn what a person knows regarding the
program elements or what they do or did (Merriam & Tisdell, 2016; Patton, 2007). Questions
focused on gathering data on how measurement and data collection are currently approached by
the organization, what barriers exist and what is done to remove them, and how collected data is
used for decision making. These focus areas aligned with the research questions and sought to
inform answers and the conceptual framework, enabling the identification of barriers and
enablers within knowledge, motivation, and organizational conditions.
Table 5 details how each question of the interview guide aligned with the study's research
questions and conceptual framework.
Table 5
Research Questions Alignment

Interview question / Research question(s) addressed / Conceptual framework item (Clark &
Estes, 2008)
1. What leadership development program (LDP) elements are measured? / RQ1 / Knowledge
2. How are the LDP elements measured? / RQ1, RQ2 / Knowledge
3. What tools are used for measurement? / RQ1, RQ2 / Knowledge
4. If areas of the program are not measured, why not? / RQ1, RQ2 / Knowledge
5. When barriers to measurement are identified, what is done to remove them? / RQ1, RQ2, RQ3
/ Knowledge; Opinion
6. How was the measurement strategy developed? / RQ1, RQ2 / Knowledge; Experience;
Behavior
7. What decisions are made using the data? / RQ3 / Knowledge; Experience; Behavior
8. How does your organization set goals? / RQ3 / Knowledge; Experience; Behavior
9. How does your organization allocate resources for learning measurement each year? / RQ1,
RQ2, RQ3 / Knowledge; Experience; Behavior
10. How are participants held accountable for performance? / RQ3 / Knowledge; Experience;
Behavior
11. How is success defined for each element within the program? / RQ3 / Knowledge
Data Collection
The researcher conducted open-ended, semi-structured interviews of participants lasting
up to 60 minutes within a period of two to three weeks. Interviews were selected because they
are the preferred method in most phenomenological studies, as they present the best opportunity
to develop an in-depth understanding through participants' experiences (Creswell & Creswell,
2018). Interviews were held using Zoom, as limitations caused by the COVID-19 global
pandemic negated face-to-face opportunities, and were recorded with each interviewee's
permission. The researcher took notes via laptop, but recording offered increased fidelity in
transcribing the material (Weiss, 1997). Additionally, notes captured early insights, pointed to
key points or quotations during the interview, and served as a backup in case the recording
malfunctioned (Patton, 2002).
The interview guide was available to the researcher and integrated into the notetaking
template to enhance data capture with better details and uncover where and why gaps truly exist
(Bogdan & Bilken, 2007). To reinforce mindfulness during the interviews, the researcher noted
key questions and concepts to cover, as well as initial findings and interpretations (Patton,
2002). Additionally, probes were used to focus the interviewees' attention on questions that
leverage their subject matter expertise (Bogdan & Bilken, 2007).
Data Analysis
Data was analyzed by identifying major themes and categories within the data collected
through interviews and document review. Steps from Creswell and Creswell's (2018) data
analysis process include organizing and preparing the data for analysis, reading or looking at all
the data, coding the data, generating descriptions and themes, and representing those descriptions
and themes.
Interviews
The researcher employed simultaneous procedures for qualitative data analysis of
interview data, which allowed for analysis of interviews conducted earlier and write-up of
reports before all interviews were concluded (Creswell & Creswell, 2018). Winnowing of the
data focused on some data and disregarded other parts that were unnecessarily rich or detailed
for the final report. The researcher organized and prepared data by typing up interviews and
transcripts and looked at all final reflection data before coding. Coding involved Tesch's
eight-step approach (Creswell & Creswell, 2018), in which the researcher read all typed material
carefully, selected two to three documents to record initial thoughts, made a list of common
topics, abbreviated topics into codes, created categories from topics, made final decisions on
code abbreviations, assembled data into one place to begin analysis, and, when necessary,
recoded data.
Finally, using an inductive approach, coded data was grouped into categories based on
descriptions of the setting or people and by theme. Descriptions were derived from information
regarding people, places, or events, while themes appeared as major findings that can
interconnect to produce narratives (Creswell & Creswell, 2018). The three sources the researcher
looked to in deriving categories were the researcher's own insights, the participants' exact words,
and the literature surrounding the research questions; the researcher ensured the categories were
exhaustive, mutually exclusive, sensitive to the data, and conceptually congruent (Merriam &
Tisdell, 2016).
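As a purely illustrative aid (not the study's actual analysis; the codebook and transcript segments below are invented), a minimal Python sketch of this coding-and-categorizing workflow might look like the following:

    # Hypothetical sketch of a Tesch-style coding pass: tag transcript
    # segments with topic codes, then group the codes into categories.
    from collections import defaultdict

    # Invented codebook: code -> (keyword to match, broader category)
    codebook = {
        "SURV": ("survey", "Measurement methods"),
        "SCOR": ("scorecard", "Measurement methods"),
        "MENT": ("mentor", "Program elements"),
    }

    # Invented transcript segments: (participant, text)
    segments = [
        ("P1", "Surveys go out to participants a few times a year."),
        ("P2", "A balanced scorecard goes to the organizational sponsors."),
        ("P3", "My mentor met with me at least once per month."),
    ]

    # Tag each segment with matching codes and group them by category.
    categories = defaultdict(list)
    for participant, text in segments:
        for code, (keyword, category) in codebook.items():
            if keyword in text.lower():
                categories[category].append((code, participant))

    for category, coded in categories.items():
        print(f"{category}: {len(coded)} coded segment(s)")

In practice, qualitative coding is interpretive and iterative rather than keyword driven; the sketch only illustrates the bookkeeping of moving from coded segments to categories.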
Document Review
Analysis of artifact data followed the same qualitative data analysis approach as used for
interviews. The researcher winnowed data to focus on relevant information, typed up notes, and
looked at all data for final reflection before coding. Coded data was grouped into categories that
aligned with those established as part of the interviewing data analysis. As needed, permission
was secured for any visual images recorded during the analysis and final reporting of findings
(Merriam & Tisdell, 2016).
Validity and Reliability
The researcher used triangulation, respondent validation, and an audit trail. Triangulation
is the method of examining evidence from multiple sources and using it to build a coherent
justification for themes (Creswell & Creswell, 2018). This was achieved through the inclusion of
multiple stakeholder perspectives across program elements and organizational layers, which
increased credibility in the sampling strategy (Lincoln & Guba, 1985). Respondent validation,
also known as member checks, ensured internal validity and credibility by soliciting feedback on
preliminary findings from interviewees (Merriam & Tisdell, 2016). During each interview, the
researcher presented initial findings and interpretations to the interviewee for respondent
validation (Merriam & Tisdell, 2016). Finally, the research design kept a detailed audit trail of
notes to strengthen validity and credibility (Lincoln & Guba, 1985).
Ethics
As the research involved human subjects, there was a responsibility to execute all study
activities ethically (Creswell & Creswell, 2018). Following procedures in alignment with IRB
processes (Creswell & Creswell, 2018), informed consent statements were provided in advance
of and during interviews, and participants could follow up with questions at any time.
Confidentiality was managed by removing personally identifiable information from all
transcriptions, field notes, and data analysis artifacts. To mitigate the risk of undue influence
from power dynamics or coercion, the researcher was transparent about her positionality and the
purpose of the study and explained the possible benefits of the output for use by leaders and the
organization to inform program decisions or improvements (Creswell & Creswell, 2018).
Limitations and Delimitations
While diligence was taken to develop a comprehensive research approach, there were still
some limitations to the study. Accessibility to research participants was challenged by recent
waves of individual retirements and company layoffs. Expanding the participant scope to include
former LDP leaders increased the overall population size to sample but limited eligibility of
former leaders to those separated within the last one to two years so as not to erode applicability
of findings (transferability). Misinterpretation of data could pose another limitation. The quality
of data depends on the researcher's ability to minimize the impact
of any biases to collect, analyze, and interpret the meaning of participants' inputs. Finally, the
limited length of study could result in participants' perceptions changing and evolving.
When reflecting upon ethical implications of the research and potential limitations that
could result, several axiological assumptions were presented within the study regarding the
extent to which participants' values influence the research process (Saunders, 2019).
Organizational executives, LDP coordinators, and the researcher could perceive potential
benefits from the research conducted. Research findings could be used to improve the experience
for all LDP stakeholders, produce better company leaders, and result in positive recognition for
LDP coordinators if findings exceed expectations. However, the same stakeholders who
potentially benefit could also experience potential harm if results do not meet preconceived
expectations (Creswell & Creswell, 2018). This can pose an ethical challenge to ensuring
participants answer questions truthfully, as their perspective and values inform the study's results
(Hinga, 2020) and influence the interaction between them, the researcher, and the environment
(Saunders, 2019). Further, as results will be published in a dissertation available outside of the
organization, the researcher practiced continued diligence to IRB standards of confidentiality
(Creswell & Creswell, 2018) to honor all ethical commitments made to participants.
Chapter Four: Results or Findings
The purpose of this study was to understand whether key leadership development
program leaders at Century Manufacturing, Inc. use data to inform decisions; if so, how and to
what extent data is used; and if not, what barriers exist and how those barriers impede the
organization from effectively achieving its goals. It also necessitated an understanding of the
accountability relationships required to ensure program goals are met. The following questions
guided the research:
1. How are LDP coordinators collecting and utilizing data within each element of the
leadership development program?
2. Do systemic barriers exist that prevent LDP coordinators, managers, and executives
from using data to make programmatic decisions? If so, how?
3. If data is available, how is it used to identify, define, and manage the accountability
relationships required for LDP participants to be successful?
Using the lens of Bandura’s (2005) social cognitive theory, this research sought to
explore what social conditions and institutional practices serve as enablers to individual agency.
Through a review of literature, assumed attributes were developed that could lead to an
understanding of whether or not LDP leaders use data to inform decisions. As a result, this
assumed that talent development organizations that fostered a data-driven decision-making
culture could demonstrate they collected and utilized data with leadership development program
(LDP) elements, identify and mitigate systemic barriers for using data to make programmatic
decisions, and had established accountability mechanisms required to be successful. More
specifically, it is assumed that LDP coordinators who effectively collected and use data had
structured measurement methodology; used credible tools; and collected meaningful data used
45
for decision making. It was also assumed that barriers, if any, would fall within knowledge,
motivational, and organizational categories. Finally, a data-driven talent development
organization would possess a clear strategy, decision-making capability, set performance goals,
and accountability relationships amongst all stakeholders (Barnett & Mattox, 2010). If an
absence of these attributes was experienced during interviews, the researcher used the Clark and
Estes (2008) model to assess gaps and diagnose the barriers to data use.
The study used a qualitative methodology to seek meaning and understanding (Creswell
& Creswell, 2018) and included eight individual interviews and one small focus group interview.
Interviews were conducted in three phases to build on and refine each subsequent phase. This
chapter presents the research findings, which are organized by research question and the relevant
theme that emerged.
Participants
As the purpose of the interviews was to gather data from perspectives on data use for
decision making that influence the LDPs, participants were purposefully sampled (Merriam &
Tisdell, 2016) and represent roles that hold ownership or responsibility for leadership
development programs or evaluate program participants’ performance. These roles include
organizational sponsors of training, leadership development program coordinators, and managers
of program participants. The additional role of LDP participant emerged during interviews as
multiple LDP coordinators were once participants of LDPs. Organizational sponsors are
responsible for program strategy and are the primary stakeholder and decision-maker for their
respective LDP, which included approving the program budget for resources and other
organizational support mechanisms. LDP coordinators are responsible for the management and
implementation of leadership development learning initiatives in support of the overall
leadership development strategy. LDP coordinators lead the measurement of learning
effectiveness and related reporting. Managers of participants are responsible for monitoring the
LDP participant’s development and performance management back on the job, as well as
providing feedback to organizational sponsors and LDP coordinators. Finally, LDP participants
are responsible for using data to create and manage performance on individual development
plans and performance management plans.
Overall, nine participants were interviewed for this study and represented each of the
roles targeted. Table 6 presents the number of each role category included in the study and a
brief description of their data use-related responsibilities.
Table 6
Composition of Sample and Data Use-Related Responsibilities

Organizational sponsor (n = 1): Create and manage LDP strategy. Serve as primary stakeholder
and decision-maker for LDPs. Manage budget approvals for resources and other organizational
support mechanisms.

Leadership development program (LDP) coordinator (n = 8): Responsible for the overall
delivery and performance of program activities, which includes logistics, event planning and
delivery, and participant communication and ongoing engagement. Supports all aspects of
program operations (technology, scheduling, contracts, expenses, consultants, etc.), as well as
overall marketing of the programs, including all recruitment-related activities. Supports the
measurement of learning effectiveness and related reporting.

Manager of LDP participant (n = 1): Incorporate program measures into the participant's
development and performance management plans. Provide feedback to organizational sponsors
and LDP coordinators on the performance of LDP participant(s).

LDP participant (n = 4): Create and manage performance on individual development plans and
performance management plans.
Of note, as some participants qualified for multiple roles, the numbers represented in
Table 6 do not sum to nine. One manager of an LDP participant and two LDP coordinators were
also LDP participants and counted in each category. Additional participant information can be
found in Table 7.
Table 7
Participants by Experience and Role

Amelia (6 years of LDP experience): LDP coordinator; LDP participant
Cassie (3 years): LDP coordinator
Douglas (4 years): LDP coordinator
Jesse (11 years): LDP coordinator
Kennedy (6 years): LDP coordinator; LDP participant
Michael (3 years): LDP coordinator; LDP participant; manager of participant
Peter (10 years): LDP coordinator; LDP participant; organizational sponsor
Riley (8 years): LDP coordinator
Century Manufacturing, Inc.’s organizational structure is tiered, with three business units,
comprised each of multiple functional organizations, rolling up to one enterprise-wide level. The
company offers leadership development programs at both the enterprise- and functional-level. As
such, participants were selected to represent each area. Enterprise-level programs encompass all
three business units at the company. Functional-level programs operate within one of the
multiple functional organizations at the company. The scope of the study focused on one
business unit that is comprised of 14 functions. Employees are eligible for enterprise-level
leadership development programs, as well as LDPs within their respective functions. Overall, the
nine participants interviewed for this study represented eight enterprise-level programs across
two business units and 18 functional-level programs within two functional organizations
Results for Research Question 1
As discussed in Chapter Two, organizations need a set of conditions to be successful at
utilizing data to measure leadership and program effectiveness. Northouse (2016) asserted that a
measurement framework and approach should be credible and practical and reflect the dynamic,
multifaceted nature of leadership. To demonstrate this, Century Manufacturing, Inc. is expected
to measure multiple programs and their elements, have a structured method with a robust tool
set, and collect data that appropriately informs the decisions needed to maintain program and
leadership effectiveness. Research findings that address each of these areas are outlined below.
Program Elements Measured
Leadership development programs at Century Manufacturing, Inc. comprised four major
elements: (a) experiential, project-based learning; (b) instructor-led courses; (c) online learning;
and (d) mentoring. Interview participants provided multiple examples of the collection and
utilization of data across each element at both the enterprise and functional program levels.
Also common across all program elements, success was defined and measured with participant
performance back on the job as the focal point, rather than the composition or delivery of the
program element. Stated another way, data was collected on the performance of individual
participants and the organizational area (e.g., team, business unit, function) in which they were
located, and less on the program element itself.
Experiential, Project-Based Learning
A key theme that emerged across all interviews was the extensive use of experiential,
project-based learning within leadership development programs. These learning experiences
emphasize practical hands-on activities that can generate real-world verifiable results (Bertoni &
Bertoni, 2020). LDPs were typically multi-year, rotation assignments designed to holistically
develop leadership knowledge, skills, and capability in participants. In some instances,
participants were also provided mentors and coaches.
Leadership development programs followed what Bertoni and Bertoni (2020) described
as the Experiential Learning Cycle (ELC), which is comprised of a four-stage holistic model that
included ‘concrete experience’, ‘reflective observation’, ‘abstract conceptualization’, and ‘active
experimentation’. Within an early career foundational program, Amelia described that concrete
experiences are provided through special projects and followed by reflective observation of
assessing where participants are at, which then are used for future assignments:
Participants rotate every four months and are given performance scores every four
months. The participants take time to reflect on their experience and assessment results,
as well as other things such as involvement in their cohort and career network. This helps
with placement in future opportunities where participants put what they learned into
practice.
This learning cycle approach allows for periodic measurement that is aligned with the
unique experience and skill gaps targeted (Bertoni & Bertoni, 2020). As such, LDP coordinators
used a semi-structured approach to measure these program offerings. Interview participants
described using a strategy that focuses on the measurement of performance against leadership
behaviors, as identified at the corporate level, and targeted business metrics. For instance, in the
example provided above, Amelia stated that assessments were comprised of elements drawn
from the corporate level leadership behaviors, as well as functional knowledge and skills.
Individual performance was defined primarily in terms of an individual’s leadership capability as
observed by their hosting manager, mentor, and coach, if assigned. Jesse shared that his program
“created development plan from both assessment results and feedback from participants hosting
managers”, while Kennedy’s enterprise program observed performance through quarterly
presentations:
Participants would give a deep dive to the group on the project and focus on the program
management best practice perspective. The group then questioned the participant to
understand what they learned and what value was delivered to the business, then gave
feedback to the person for improvement.
These examples demonstrate that programs are structured in a way to allow participants
to not only carry out particular actions but also understand and interpret the impact of the action
in real-life situations, which Bertoni and Bertoni (2020) state are essential within experiential
learning. As such, LDP performance, within the lens of experiential learning, was defined in
terms of achievement of business targets within the participant’s assigned organization, as well
as several human resources metrics, such as attrition. A balanced scorecard was presented to
organizational sponsors, but it did not break out results by program element.
Instructor-Led and Online Learning
Traditional learning and development options, such as instructor-led or online learning,
were used to supplement experiential learning. Leadership development programs frequently
included multi-day workshops offered at either the participant's home site or the corporate
leadership center. The enterprise program Kennedy oversaw included onsite offerings at the
company's official leadership center. Peter, as a former LDP participant, said, "I went to the
leadership center a few times, but also went to a few workshops at the corporate headquarters
and in my building". Common weeklong programs developed leadership skills and could target
early- or mid-career leaders or be functionally focused within a home organization. Finally,
enterprise LDPs presented weeklong
experiences that incorporated instructor-led training with real-world simulations with the intent
of preparing leaders to solve common business problems within the industry. Surveys were
distributed to participants to gather feedback on Satisfaction (Level 1), Learning (Level 2), and
Application (Level 3) metrics (Kirkpatrick, 2016). However, LDP coordinators interviewed
stated these were used minimally to evaluate leader or program effectiveness.
Online courses are offered via the internal learning management system (LMS). Douglas
stated that “participants could take courses in the LMS as part of their learning plan or to earn
certifications”, while Riley and Cassie shared their program participants also occasionally take
courses online. While the system provides access for any employee to enroll and complete
training, multiple functional-level leadership programs have developed “college-type”
approaches with curriculum built to establish tracks for leaders to follow. Foundational courses
establish general knowledge and understanding of leadership principles. As leaders progress
through the tracks, Douglas described that coursework becomes more advanced and focused on
content appropriately aligned to the business culture and needs of their function. Level 1-3
surveys are also utilized to measure the effectiveness of training and used to refine the tracks.
Mentoring and Coaching
Program participants were routinely assigned one or more mentors as part of their
development. Within both Michael and Kennedy's enterprise programs, as part of a participant's
immersive, multi-year experience, the program arranges a structured mentorship experience for
each participant. Peter stated that mentors can start as someone within the same functional
organization, or within the organization hosting the immersive experience, but can also change.
Within Amelia’s program, mentors can rotate one or more times during the multi-year immersive
experience, as well as include alumni and peer mentorships.
Enterprise programs also assign a coach to each participant. In one enterprise executive
leadership program, Kennedy shared that participants work on a focused project and are
supported by a coach whose team supports the leadership skill development of each participant.
Each coach can be assigned to support one or more program participants. Coaches also support
the execution of the LDP from a program management perspective and can be asked to gather
individual and business performance data to report insights to executive sponsors. Alumni and
peer mentoring were also used in functional LDPs managed by Amelia.
Mentors and coaches are not measured for effectiveness, and Amelia, Peter, and
Kennedy, who also were LDP participants, did not recall ever providing feedback to the program
on their mentoring or coaching experience. Mentors and coaches, however, can provide
individual input to program participant performance using a 360-format offered through the
company’s performance management plan. As Amelia described:
As a participant, you can have three to four mentors that you worked with and meet with
quarterly to talk through competencies. There’s an assumption that the mentors have
proficiency in the core competencies, but it does end up being more subjective. But you
do have a cohort that you work with to tell you if you're way off.
Peter added that "the performance management system limits feedback to an open-ended,
qualitative format, and is structured to focus only on pre-identified leadership attributes". These
approaches bound the feedback collected, which could create barriers to the comprehensive
measurement of all program aspects.
Measurement Methods
Leadership development program (LDP) coordinators used a variety of measurement
methods they determined to be most appropriate for each program element. Surveys were used to
measure the effectiveness of traditional training offerings, such as instructor-led and online
learning courses. Skills-based assessments were used by one functional LDP to measure
knowledge, skills, and abilities before and after the completion of training experiences. Finally,
multiple programs tracked and reported on business results data for multiple areas of the
organization. Table 8 details the specific use of measurement method by program element,
including how each was used.
Table 8
Use of Measurement Method by Program Element

Experiential learning
  Measurement methods: Assessments; business results
  Description of use: Semi-structured data collection on an individual's leadership capability as
  observed by their hosting manager, mentor, and coach, if assigned. Business results data
  collected via existing company systems and tools from within the participant's assigned
  organization, as well as several human resources metrics, such as attrition; a balanced scorecard
  was presented to organizational sponsors.

Instructor-led learning
  Measurement method: Surveys
  Description of use: Collects post-event, Level 1-3 data; data included in a balanced scorecard.

Online learning
  Measurement method: Surveys
  Description of use: Collects post-event, Level 1-3 data; data included in a balanced scorecard.

Mentoring/coaching
  Measurement method: Assessments
  Description of use: Via a 360-style format; collects performance data for leadership behaviors.
Data
For effective data collection and utilization, data must be sufficient to provide feedback
on performance, identify interventions, and enable other important decision-making (Marsh,
2015). According to interview participants, quantitative and qualitative data were collected and
used to measure program and leadership effectiveness. For feedback on program performance,
operational data was collected on the number of program activities delivered, the number of
leaders participating in the programs, and delivery costs. Jesse described using data on "the
number of participants in each program, cost per participant, and how many leadership trainings
we were doing"; Riley described a similar approach to operational cost measurement within her
programs. This data was combined with satisfaction (Level 1), learning (Level 2), and
application (Level 3) data (Kirkpatrick, 2004) collected via surveys, and business impact (Level
4) metric data collected via existing business systems and mechanisms. For example, Amelia
shared that her program has "surveys sent out to participants a few times asking about
experience, resources needed to be successful, and how they're using the training". Business
impact data included productivity, cost, and quality, as well as overall attrition.
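For illustration only (the metric names and figures below are invented, not drawn from the study's data), a minimal Python sketch shows how Kirkpatrick Level 1-3 survey scores and operational cost data of this kind might be rolled up into a simple scorecard summary:

    # Hypothetical scorecard roll-up: average survey score per Kirkpatrick
    # level plus cost per participant. All values are invented examples.
    from statistics import mean

    survey_responses = [
        {"level": 1, "score": 4.5},  # Level 1: satisfaction (1-5 scale)
        {"level": 1, "score": 4.2},
        {"level": 2, "score": 3.9},  # Level 2: learning
        {"level": 3, "score": 3.4},  # Level 3: on-the-job application
    ]
    total_cost = 180_000   # hypothetical annual delivery cost (USD)
    participants = 60      # hypothetical number of program participants

    scorecard = {
        f"level_{lvl}_avg": round(
            mean(r["score"] for r in survey_responses if r["level"] == lvl), 2
        )
        for lvl in sorted({r["level"] for r in survey_responses})
    }
    scorecard["cost_per_participant"] = total_cost / participants

    for metric, value in scorecard.items():
        print(f"{metric}: {value}")

A roll-up of this kind mirrors the balanced-scorecard reporting participants described, where per-level survey averages and operational costs sit alongside business impact metrics.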
Individual performance data was a mix of qualitative feedback gathered from participant
managers, hosting managers, mentors, and coaches, as well as ratings provided through bi-annual
performance management mechanisms. One-on-one conversations identified interventions for
individual participants, which usually came in the form of additional mentoring or coaching,
assigned training, or rotation to another experiential learning assignment. Kennedy shared that
assigned mentors meet with her program participants at least once per month, but all
interviewees stated that formal performance rating was conducted annually via the company
performance management process.
To drive decision-making, analysis and findings were reported through balanced
scorecards presented during recurring review meetings. Competency data was also collected
through assessments within four functional LDPs and used to make decisions regarding changes
in an individual’s development plan, as well as identifying adjustments to the overall LDP. In
one enterprise mid-career development program, competencies were assessed quarterly.
Kennedy, who led the program, stated that "participants were required to demonstrate proficiency
in three of the program's five core competencies to graduate".
coordinators and organizational sponsors used competency data to inform what experiential
learning opportunities were needed for each cohort, as well as what mentors and coaches were
needed to support growth. For example, Douglas shared that his team, who manage multiple
functional LDPs, conducted job analyses to identify critical skills gaps and Peter used internal
and external benchmarking to guide the development of participants and measurement of
program effectiveness within his enterprise program.
These research findings showed that Century Manufacturing, Inc. effectively collected
and used data for decision-making across multiple programs at the enterprise and functional
levels. LDP coordinators had a structured methodology that used a robust tool set to collect data
that measured program and leadership effectiveness. While all interview participants were able
to articulate the program elements measured, how methods and tools were used, and how data
was used within their respective LDPs, participants also acknowledged that gaps existed in their
approaches, which impacted the ability to make programmatic decisions. These gaps are detailed
in the findings for Research Question 2.
Results for Research Question 2
Using Clark and Estes' (2008) gap analytic framework, the findings for Research
Question 2 discuss what barriers exist to leaders having adequate knowledge, motivation, and
organizational support to achieve program goals. The framework assumes conceptual and
procedural knowledge of leadership and program effectiveness and theory-based methods for
measurement and data analysis, as well as self-awareness of one's understanding (Krathwohl,
2002). Assumed motivational influencers are expectancy value theory, self-efficacy, and utility
value. Finally, conditions should also be influenced by organizational culture, outcomes, and
accountability mechanisms.
Analysis of interview data revealed that the three main barriers LDP coordinators
encountered to data-driven decision making were inconsistent leadership definitions
(knowledge), perceived value disconnects (motivation), and lack of clear outcomes
(organization).
Knowledge
To effectively use data for decision making, it was assumed that LDP coordinators were
using appropriate learning strategies to address each of the four knowledge dimensions: factual,
conceptual, procedural, and metacognitive knowledge (Krathwohl, 2002).
Across each knowledge type, interview participants demonstrated robust understanding and
application of the conceptual, procedural, and metacognitive knowledge needed to use data to
measure leadership and program effectiveness. Table 9 provides specific examples of each
knowledge influence.
Table 9
Knowledge Influences—Participant Interview Results

Declarative (conceptual): Demonstrates knowledge of effective leadership theories.
  Example of use: "We use various categories when conceptualizing leadership, these can include
  leadership skills, as well as mindsets and values. For example, showing culture agility,
  understanding customers/suppliers/competitors, having business acumen, and motivating the
  team for technical excellence"

Declarative (procedural): Creates an effective learning measurement strategy, including
instruments, scorecards, and other data analysis tools.
  Example of use: "Our programs use training surveys based on the Kirkpatrick model. It is hard
  to do Level 4, but the organization doesn't demand Level 4-5 much. So, we focus a lot on Level
  2 learning and consumption"

Metacognitive: Ability to analyze their existing skills to conduct data analytics, informed
decision making, and lead organizational change.
  Example of use: "We have a pretty clear ROI, and I can demonstrate how much it costs per
  participant. However, we know value isn't just ROI, so we're constantly asking ourselves and
  assessing our capability to change and influence culture, how to better serve the purpose of the
  stakeholder, and how we can use both quantitative and qualitative data to tell the story. We're
  always trying to get better"
As Table 9 reveals, interviews provided qualitative evidence that research participants
possessed and applied each knowledge influence. Declaratively, Amelia provided specific
examples of theoretical leadership concepts of programs applying culture agility, understanding
internal and external influences, applying business acumen, and motivating employees
(Northouse, 2016). Procedurally, Peter demonstrated the application of Kirkpatrick's (2004)
59
principles within measurement tools. Metacognitively, Jesse shared a critical self-assessment of
his team’s ability to effectively measure and demonstrate ROI. Further, Peter shared that he
compares his team’s skills and capability externally by “doing benchmarking and attending
conferences to see what other businesses are doing to make sure we’re using the right
measurement approach here”. These interview findings suggest that a lack of adequate
conceptual, procedural, and metacognitive knowledge is not a barrier for LDP coordinators at
the organization.
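For orientation, the cost-per-participant and ROI figures that Jesse references follow the conventional formulations shown below. These are generic definitions, not formulas reported by study participants:

\[
\text{cost per participant} = \frac{\text{total program cost}}{\text{number of participants}}, \qquad
\text{ROI} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100\%
\]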
Further, it is assumed that talent management leaders must also possess the knowledge
and skills needed to drive organizational culture change. Interview participants demonstrated
these skills through the description of multiple established relationships with those in decision-
making positions, clearly defined visions for their leadership programs, established forums for
use of data-driven decision-making activities, and mechanisms for communication to
stakeholders (Burke, 2012).
Although knowledge of leadership theories was not identified as a barrier, interview
participants described measurement approaches that varied in the application of leadership
theories and principles across the various LDPs. The knowledge influences in Table 9 also
revealed examples of different applications of leadership concepts within measurement
approaches by LDP coordinator and the LDPs they manage.
While Amelia’s programs build a measurement strategy based on various leadership
behaviors (Northouse, 2016), Peter’s rely on a Kirkpatrick (2016) levels-of-measurement
approach that, he explained, “does not always translate well to the experiential learning format
these LDPs largely use”. This lack of consistent translation of theory to real-world application is
further highlighted within Douglas’ functional programs. He conducted multiple job role analysis
exercises to develop a set of leadership core competencies (Achlison et al., 2019; Hall et al.,
2014) but has been unable to apply these competencies to measurement approaches. While some
adoption was piloted and approved by executive leaders to scale further, Douglas explained:
“translating these into crafted employee engagement plans under each senior director is still a
year away”. Tools such as competency-based assessments can be valuable (Thomas et al., 2019),
but the absence of a clear, consistent measurement strategy across the organization, one built on
a common definition of leadership aligned to organizational goals, risks a poor return on
investment as well as insufficient resources to execute program priorities (Barnett & Mattox, 2010).
Participants recognized the need to cultivate an organizational definition of leadership
and development approach aligned to business needs (Senge, 2005). However, efforts are
independent and decentralized, creating barriers to achieving alignment. As Peter explained:
“gaps exist in using data to create and refine a consistent set of leadership skills and behaviors
aligned with the needs of the business, so we end up with different results that change too
frequently to be effective”. Some LDP coordinators, such as Kennedy, stated this contributed to
a feeling that their measurement and data use work was not defining what makes a leader a good
one, particularly before the organization starts identifying leaders for development programs.
Others, such as Amelia, use gap assessments to determine “where the participant is at and
those needs so the program can identify the opportunities for development, such as different
projects and leadership styles training”. These varied approaches contribute to a barrier in
developing and applying a consistent organizational definition of leadership and development
that Senge (2005) argues is needed for programs to be effective.
Motivational
Assumed motivational influencers in this study were identified as expectancy value theory,
self-efficacy, and utility value. Analysis of interview data revealed that while individuals had a
strong belief in their capability to lead measurement and change, LDP coordinators encountered
disconnects between their expectancy and utility value and that of their organization. These
disconnects, left unresolved, presented a risk in eroding the self-efficacy of the LDP coordinator
to ultimately deliver success.
Expectancy Value Theory
The four constructs that define expectancy value theory are attainment,
utility value, intrinsic value, and perceived cost (Eccles, 2006). Motivation to remain engaged in
a task also requires long-term value or extrinsic reasons (Eccles, 2006; Rueda, 2011). All
interview participants demonstrated self-recognition (attainment) and strong intrinsic value in the
use of data to measure the effectiveness of LDPs and their participants. As Peter stated:
My time in LDPs goes back over 10 years. I’ve been a participant, led skill development
across enterprises, and worked hard to collect and use data to ensure job roles are tailored
to jobs. Even externally, I’m doing benchmarking and attending conferences to see what
other businesses are doing to make sure we’re using the right measurement approach
here.
For those who did struggle with inconsistent measurement approaches, the common
cause was a lack of extrinsic value. According to expectancy value theory, choices are motivated
by a combination of people's expectations for success and the value of the task (Eccles, 2006).
To interview participants, the perceived time, money, and effort required to produce reporting
was not worth the perceived value, or lack thereof, it delivered to decision making. As Peter
explained:
If the needle didn’t move, it was demotivational. I was satisfying my customer in getting
the leaders the quarterly reporting they wanted, but I had to chase down so many people
to get it. Some didn’t ever provide it to me, so I had to go around them.
Further, Amelia shared that within the programs she manages, “while there are no
barriers to getting the data, executive sponsors of the program don't always care outside of
quarterly reviews where conversations were more high level”. Access to data is not enough to
drive motivation to sustain the use of data (Marsh, 2015), nor does it ensure organizations will
build comprehensive measurement strategies that also sustain a data-driven culture.
Utility Value
Clark and Estes (2008) stated that utility value focuses on the benefit of completing a task
rather than the task itself. Further, incentivized tasks can significantly increase an individual’s
willingness to persist at or complete a task. One barrier to utility value was described by both
Peter and Kennedy as a lack of perception that their measurement and data use work was shaping
a definition of leadership. As Peter described, the program should be “defining what makes a
leader a good one before [the organization] starts identifying and developing”, while Kennedy
spoke of “gaps in using data to create and refine a consistent set of leadership skills and
behaviors aligned with the needs of the business”. This need to cultivate an organizational
definition of leadership and development approach aligned to business needs is therefore unmet,
negatively impacting motivation (Senge, 2005).
Motivation was further impacted by ambiguity about whether the work of LDP
coordinators was accomplishing the bigger goal. Instead, as Amelia stated, “there is not a set
quantification for ‘value’”, or, as Kennedy and Peter characterized it, value creation is not tied to
specific leadership behaviors but rather “people are providing [the program] headcount and
budget from their organization”. Motivation to remain engaged in a task requires long-term value or
extrinsic reasons (Eccles, 2006; Rueda, 2011). When value is not consistently demonstrated
across the organization, measurement approaches can become irrelevant, stale, and static,
thereby missing opportunities to optimize program performance, which further decreases the
perceived value of data-driven decision-making (Marsh, 2012). Left unresolved, these
disconnects risk eroding the self-efficacy LDP coordinators need to ultimately deliver success
(Bandura, 2012).
Organizational
In addition to knowledge and motivational barriers, this study evaluated three
organizational influencers assumed to motivate followers to use data for decision making.
Organizational leaders use these assumed mechanisms to embed and transmit culture and draw
attention to the importance of consistent, systematic measurement of key initiatives (Schein,
2004). The analysis of these assumed organizational influences revealed that while Century
Manufacturing, Inc.’s data-driven culture translates to the measurement of leadership
development program effectiveness, gaps exist in aligning stakeholder and organizational
performance goals, as well as optimizing accountability. In particular, while measurement
mechanisms existed, there was not necessarily a clear, consistent connection between success
criteria and measured outcomes of leadership and program performance.
Culture
Overall, few cultural barriers were shared by interview participants. All LDP
coordinators felt they had the right levels of leadership support to be successful. Riley noted that:
Organizational sponsors and other executives are actively part of our program, which has
helped build strong relationships over 30 years with pipeline colleges and universities.
There are KPIs we can show, but there is something built into the culture where they just
know that people are good.
Organizational sponsors felt they received the right data and insights to inform decision-
making. In fact, when gaps between available data and decisions were encountered, “good
healthy discussions happened” and “even with an audience of engineers, it wasn’t about being
precise, the conversation was able to look at the big picture rather than attacking numbers”.
Situations did occur where organizational sponsors and executive leaders would want to
scale successful efforts quickly, and LDP coordinators would have “limited resources and
multiple concurrent initiatives”. However, accessibility to decision-makers was “rarely an issue,
and removing barriers was quick”.
Outcome
While a culture that embraces the use of data to guide decision-making is a critical
enabler, Marsh (2012) cautions it may not be enough to optimize data use, as clear outcomes are
needed to define success. Interviews showed that while measurement mechanisms existed, there
was not necessarily a clear, consistent connection across programs between success criteria and
measured outcomes of leadership and program performance. Riley shared that:
We need to better craft the value proposition. For instance, we just went through lots of
activities that reduced management layers and it created a perception that you had to
remove management positions. We had to show that you get better leaders through the
programs. Now, we’re trying to build out the visuals and other communications to show
this and change that perception.
When LDP coordinators are not consistently mindful of aligning program goals and value
propositions with data collection, analysis, and reporting, a barrier is created to data-driven
decision-making (Ravitch, 2019). Jesse further characterized this barrier, adding:
[It] feels like there is so much to track in terms of value. There are qualitative things like
the attraction and brand of [the company]. And then there are things that we can quantify,
and that we don't take enough time to quantify the benefits. We know the ROI is there, it
makes sense, we don't need to spend a lot of time measuring. We need to do a better job
of the good things that come out, otherwise, we can shortchange ourselves.
Without a clear connection between measurement approaches and targeted outcomes,
LDP coordinators create a barrier to intentionally using data as the evidence needed to
support findings and assertions and enable decision making (Ravitch, 2019), and they cannot feel
confident that the data and insights truly measure whether programs were successful at meeting
objectives.
The variability of experiential learning assignments was identified as a common
contributor to the lack of connection between measurement approaches and demonstrating
program success. As Kennedy described:
It can sometimes depend on the importance and/or size of the role and organization. The
participant can hit all his or her goals, but not move the needle because the role wasn’t
that important to the business. However, it could still provide the right opportunity for
that participant to develop the skills they needed.
Michael also explained that “each rotation is different, and with different people in
different orgs it’s hard to compare to one another” to define what is successful. This inability to
create clear external and internal outcomes targeted as a result of learning can present a barrier to
establishing and cultivating a strong learning measurement culture (Kirkpatrick, 2016). It can
also create barriers to maintaining effective accountability mechanisms such as standardized
balanced scorecards and formal guardrails for holding participants answerable for low
performance, both of which can impede program effectiveness (Shahin & Zairi, 2006).
While the size of the company makes it challenging to consistently define success at an
individual and program level, there is an opportunity to leverage approaches across programs to
streamline the development and deployment of measurement and data utilization.
Accountability
While all leadership development programs described within interviews use mechanisms,
such as visible artifacts, to monitor both current performance and improvement efforts (Shahin &
Zairi, 2006), established accountability relationships were less consistent. Value was frequently
defined as “what the participant brings to the hosting manager” and it was sometimes hard to
determine if it was a performance issue or just a “poor fit”. Programs relied heavily upon early
conversations between the participant and his or her manager to set expectations for
performance.
While this helped set individual goals between the participant and direct manager, there
was rarely any formal mechanism for accountability outside of the standard company-wide
performance management process. This creates gaps in accountability relationships between the
participant and the LDP coordinator, who ultimately is responsible for program performance.
The performance management process did include “leadership behaviors” defined at the
company level, as well as other “business-related performance metrics”, which could vary based
on assignment. Further, data collected by LDP coordinators and reported up through leadership
did not typically include performance management data. Jesse described a scorecard approach
that was similar to other LDP coordinators: “we report KPIs, such as performance, retention,
diversity, and investment, and sometimes include survey data from participants”. However,
coordinators and participants both perceived that there were no formal guardrails for holding
participants accountable for low performance. Kennedy shared that low participant performance
was met with inconsistent intervention:
Senior Directors that did performance management would have one-on-one conversations
but never kicked anyone out. You put in what you get out. A lot of success was perceived
as ‘what kind of position did you get as you rotated? Was it solicited for or did you have
to find it on your own?’. Some did hang out in the program because they couldn't be
placed. Some got offers in 1st year. They can accept the offer and stay in the program and
not be immersive.
These research findings reflect a lack of clarity in expectations for performance, which
establishes ambiguity within accountability relationships between program stakeholders
(Dubnick, 2020), particularly when intervention is needed.
When asked why accountability mechanisms were lacking, interview
participants cited a lack of incentives for change. Organizational sponsors were “paying attention
and have meetings and steering team forums for evaluation” but many interview participants felt
a “structure was in place that presented barriers to change”. To counter this status quo, Douglas
stated that one functional program is requiring all senior directors to have established plans with
defined success actions. While the aim is to have a long-term accountability mechanism that
includes comparison to peers, he admitted that once it is deployed, those that are behind may be
too far behind to catch up. While a challenge, this function’s approach is one other LDP leaders
can use to overcome the barrier of inertia and establish accountability mechanisms
within a measurement strategy that drives overall performance and sustainability (Shahin & Zairi,
2006).
Results for Research Question 3
To have effective accountability relationships, leadership development programs need to
set clear expectations, manage performance in alignment with those expectations, and establish
consequences for not meeting expectations (Hall et al., 2017). Based on the review of
literature, assumed accountability influencers are an organization’s ability to create a
measurement strategy that clearly defines and measures success, sets goals that are specific and
measurable, uses mechanisms that facilitate data-driven decision making (Marsh, 2017), and
establishes accountability relationships between program stakeholders (Dubnick, 2020).
Strategy
When developing a measurement strategy for their programs, LDP coordinators and
business sponsors at Century Manufacturing, Inc. developed missions and visions that tied
leadership skills and behaviors to specific organizational needs (Hentschke &
Wohlstetter, 2004).
success in terms of stakeholders’ long-term interests, which Jesse stated included “business
metrics such as productivity and attrition, as well as leadership and technical competencies”.
Michael shared that “a major priority for us as a company was to grow program management
excellence, so we defined value and reporting performance using both business goals and
objectives and learning goal elements”. Having a clear strategy aligned with short- and long-term
business needs ensures programs are gathering the right data and creating meaningful insights for
change (Barnett & Mattox, 2010; Marsh, 2012).
Measurement tools, such as surveys and assessments, fed balanced scorecards reviewed
at standard, recurring cadences to continuously evolve the strategy. Peter stated this enabled him
to manage his program in “consistent alignment between our mission, what we measured, and
how we reported performance that could evolve as our needs evolved”. Douglas noted that his
team “conducted several job analyses to identify which gaps existed in critical areas”, which was
used to inform updates to their program strategy. Using measurement tools that incorporate
targeted program behaviors derived from the critical skills leaders need is an effective
way for LDP coordinators to align the organizational definition of leadership with
measures of success (Training Industry, 2013).
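To illustrate how recurring survey and business-metric readings could feed such a balanced scorecard, the sketch below aggregates observations against targets. The KPI names, targets, and values are hypothetical, not drawn from Century Manufacturing, Inc.'s actual tools:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class KPI:
    """One scorecard entry; names, targets, and observations are illustrative."""
    name: str
    target: float
    observations: list  # recurring survey or business-metric readings

    def actual(self) -> float:
        # The average of the period's readings stands in for the reported "actual"
        return mean(self.observations)

    def on_track(self) -> bool:
        return self.actual() >= self.target

# Hypothetical quarterly scorecard for one leadership development program
scorecard = [
    KPI("participant retention rate", 0.90, [0.93, 0.88, 0.91]),
    KPI("leadership-behavior rating (1-5)", 4.0, [4.2, 3.9, 4.1]),
    KPI("host-manager satisfaction (1-5)", 4.0, [3.6, 3.8, 3.7]),
]

for kpi in scorecard:
    status = "on track" if kpi.on_track() else "needs intervention"
    print(f"{kpi.name}: actual {kpi.actual():.2f} vs target {kpi.target:.2f} -> {status}")
```

A recurring review cadence of this kind is what allows the strategy to evolve as needs evolve, as Peter described.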
Goal Setting
Effective measurement strategies have a clear alignment with organizational business
goals (Barnett & Mattox, 2010). To demonstrate a strong learning measurement culture, LDPs
need to identify both external and internal outcomes targeted as a result of the training, as well as
the critical behaviors, goals, and objectives intended to drive overall program performance
(Kirkpatrick, 2004). Several programs had defined key performance indicators and goals for both
program and participant performance. Program performance goals were measured in operational
investment, business impact within the hosting organization, and retention and diversity of
participants. Participant goals were set at an individual level and measured through existing
performance management mechanisms like 360s. Kennedy described the use of metrics that
included leadership behaviors such as “having the right mindset, culture agility, business
acumen, and functional/technical acumen".
One functional organization conducted a manager capability exercise to identify a core
set of individual competencies to ground LDP programs. Over three years, the organization
sent surveys to managers across levels to solicit which of a set of 80+ capabilities they
felt they needed to be successful leaders. Before this, human resources (HR)
primarily defined what programs needed to be built upon.
needs of the business by, as Cassie stated, “digging into their needs” and as Riley stated,
“hearing from the business, not relying solely on HR as the conduit”. External benchmarking and
focus groups were then used to refine and validate the list, and customize it based on areas of the
company, as functional leaders existed in a variety of business units and environments. This
approach demonstrated a disciplined methodology for ensuring training teaches core
competencies (Austin et al., 2011) and resulted in program participants possessing common
leadership behaviors such as goal setting, resilience, and integrity (Alina, 2013; Silva, 2016).
Decision Making
Within their operating rhythm, interview participants described broad use of visible
artifacts, such as balanced scorecards, to monitor performance against goals. Century
Manufacturing, Inc’s use of accountability mechanisms, such as recurring business performance
reviews (BPRs) and quarterly cohort meetings, can drive accountability within a measurement
strategy (Shahin & Zairi, 2006). Within one executive leadership development program he
managed, Michael stated that:
The entire team, which included hosting managers, managers of participants, program
participants, mentors and coaches, and the project management team, would meet on a
monthly basis to discuss performance and make decisions regarding change. Executive
organizational sponsors that managed programs met quarterly to review a balanced
scorecard for program performance and discuss each participant’s performance.
Decisions were made related to budget and resources, improvement efforts needed to
close program and individual performance gaps and address any additional help needed’s
that were elevated. Finally, an offsite was held during the first quarter for level setting
that established the mission, vision, and success criteria for the upcoming year.
This operational rhythm for decision-making forums was consistent across the LDP
coordinators interviewed. Kennedy highlighted that each quarter, her
program would have participants present on on-the-job experiences and learnings:
Each participant gives a deep dive to the group on his or her project and focuses on a
program management best practice perspective. What are those best practices and what
has the growth opportunity meant to you?
Active participation of program participants in reviews not only provides decision-
makers with a first-hand demonstration of core competencies (Alina, 2013; Silva, 2016) but also
strengthens a culture of accountability by embedding it within leaders at all levels (Shahin &
Zairi, 2006).
Accountability
Clear accountability relationships are needed to sustain and continuously improve
program operations through the assessment of a wide variety of LDP elements (Koohang et al.,
2017). For leadership development programs, this is assumed to manifest in relationality,
spatiality, and temporality elements (Dubnick, 2004). With regard to relationality,
organizational sponsors and hosting managers expected business impact and a
return on investment as a result of the participant’s involvement in the program. Participants
expected a positive, valuable learning experience that made them stronger candidates for future
career advancement opportunities. Oftentimes, disconnects arose between these sets of
expectations over what took priority when determining program success. For example, Peter stated
that attrition can be driven by a lack of perceived career advancement opportunities at the
company, and therefore program participants find little incentive to work hard to achieve the
business impact that leaders expect. However, it was unclear to Peter what organizational
sponsors and executive leaders did to remove blockers to advancement.
Further gaps in accountability existed at the individual participant level. No interview
participant was able to provide a clear consequence for low individual participant performance
that resulted in removal from the program. Low performers were provided multiple opportunities
for success, such as rotation to other project-based learning opportunities, until their
predetermined tenure in the program concluded, which on average was two years.
Century Manufacturing, Inc.’s measurement strategy demonstrated temporality and
spatiality through the use of monthly and quarterly business reviews to conduct formal reporting
conversations, as well as individual-level conversations through recurring one-on-one feedback
sessions between the participant and his or her mentor, manager, or other support people (Dubnick, 2020).
Through this system of reviews and performance discussions, program coordinators and sponsors
take a social network approach (Hall et al., 2017) and account for the web of relationships that
exists within the dynamic context of social interactions and experiences of a leadership
development program (Bandura, 2012).
Summary
The study’s research intended to uncover how Century Manufacturing, Inc. and its LDP
coordinators were demonstrating knowledge and behaviors that indicated a data-driven decision-
making culture within their leadership development programs. Research participants provided
multiple, detailed examples of the collection and use of data across all elements of LDPs. They
had access to measurement tools and mechanisms (Kirkpatrick & Kirkpatrick, 2016), technology
and resources to gather quantitative and qualitative data for analysis (Marsh, 2015), and built
comprehensive measurement strategies. These strategies used a definition for effective
leadership that incorporated internal and external benchmarking, identified business technical
and leadership skill needs, and aligned with company mission, vision, and values (Senge, 2005).
Further, research participants demonstrated they possess the knowledge and motivation to
collect, organize, and analyze data in a way that results in actionable knowledge (Marsh, 2015).
Organizationally, the company’s culture supports data-driven decision-making and provides the
resources needed for measurement. LDP coordinators felt they had a clear line of sight to
appropriate decision-makers to request and secure support.
Where barriers exist, the primary opportunity for improvement was clarifying and
strengthening accountability relationships. One gap Century Manufacturing, Inc. could focus on
is the inconsistent alignment of expectations for individual participant performance and
consequences for not meeting expectations. Setting clear accountability relationships, including
consequences for not meeting expectations, enables efficient program operation (Hall et al., 2017).
A second gap to focus on is more efficient mechanisms to adapt program priorities, particularly
through leadership changes and as overall company strategy evolves. For example, when
leadership changes occurred, multiple interview participants stated there could be a shift in
expectations for both program participants and LDP support team priorities. This is sometimes
mitigated by mechanisms LDP coordinators build in to evolve leadership behaviors in tune with
the business. As one LDP coordinator described it: “[it] depends on the needs of business, but those
change. Change is ok, but mechanisms should be in place to adapt, and we don’t always have
those”. Recommendations on how to approach closing gaps will be presented in Chapter 5.
Chapter Five: Recommendations
This study examined whether key leadership development program leaders at
Century Manufacturing, Inc. use data to inform decisions, using the Clark and Estes (2008) gap
analysis framework. Semi-structured interviews were used to understand how leadership
development program (LDP) coordinators collected and utilized data. Interviews also sought to
determine what systemic barriers existed that prevent key stakeholders from using data to make
programmatic decisions, and how data was used to identify, define, and manage accountability
relationships. Gaps and enablers in knowledge, motivation, and organizational influences on data
use were identified. In this chapter, recommendations will be provided to address the gaps
identified in Chapter Four and are presented through the lens of the conceptual framework used
by the study.
Discussion of Findings
For Century Manufacturing, Inc. (CMI) to achieve its aspiration to become the best in its
industry and an enduring global industrial champion, it must identify and continuously develop
the best leaders (Avolio et al., 2010). According to Bandura's (2005) social cognitive theory,
effective leadership is conceptualized as a complex process involving multiple dimensions.
Evaluating program performance needs to take a similar approach and incorporate all program
elements into a measurement strategy. Interviews conducted with key stakeholders revealed a
strong culture of data-driven decision-making at Century Manufacturing, Inc. To operationalize
this culture into a comprehensive measurement approach will require LDP coordinator
knowledge of how to apply conceptual and procedural approaches to assessing program
effectiveness and the ability to reflect and adjust. Operationalization will also require adequate
levels of motivation across all stakeholders, not just LDP coordinators. Stakeholder motivation is
necessary to resolve current disconnects between LDP coordinator expectancy and utility value
and that of their organization. Resolving disconnects results in the strong self-efficacy needed for
LDP coordinators to be confident in their ability to use data to drive decision-making. Finally,
the broader organization will need to set the right context for change. This involves
strengthening the alignment between program and organizational goals, the collaboration
between LDP stakeholders, and establishing clear accountability relationships and mechanisms.
Recommendations for addressing knowledge, motivational, and organizational influences
will also align with what Chapter Two's literature review showed as necessary to establish a
data-driven culture. First, to establish clear, standard definitions and measurement mechanisms,
LDP coordinators will be given training and support on how to advance through Krathwohl's
(2002) taxonomy for learning, teaching, and assessing. This enables LDP coordinators to
compare and explain components of leadership development programs, and then apply and
analyze learnings across programs for reflection and improvement. Executive sponsors will also
establish a community of practice where LDP coordinators, participants, and other stakeholders
can prioritize knowledge-sharing, establish standards, and define collective accountability, which
will increase efficacy and confidence. Through these communities of practice, the organization
establishes a broad support network for consistent use of data to measure program success.
Recommendations for Practice
The study used the Clark and Estes (2008) model to assess gaps and diagnose the barriers
to data use. The systematic process used within the framework traces knowledge, motivation,
and organizational influences to root causes and needs from which recommended solutions are derived.
This section will present recommendations using the Clark and Estes (2008) model and include
data validating existing conditions that support the use of data to measure learning effectiveness.
Knowledge Recommendations
Through interviews, LDP coordinators demonstrated solid conceptual and procedural
knowledge of leadership skills and principles, as well as of measurement. Table 10 outlines the
knowledge influences, whether each was validated as an asset or identified as a gap, and a
recommendation aligned with findings.
Table 10
Knowledge Assets and Gaps
Knowledge influence: LDP coordinators demonstrate knowledge of effective leadership theories (declarative)
Asset or gap: Asset
Recommendation: Create a community of practice (COP) to serve as a forum to collaborate, conduct knowledge sharing, establish standards, and create mechanisms for shared accountability.

Knowledge influence: LDP coordinators create an effective learning measurement strategy, including instruments, scorecards, and other data analysis tools
Asset or gap: Gap
Recommendations: Establish a shared approach to defining leadership. Provide training on adapting common measurement approaches, such as surveys and assessments, to an experiential learning approach.

Knowledge influence: LDP coordinators analyze their existing skills to conduct data analytics, make informed decisions, and lead organizational change
Asset or gap: Gap
Recommendation: Create opportunities for observation, application, discussion, and reflection of measurement approaches.
Strengthen Existing Knowledge and Capability Through Community-Building
While interviews demonstrated that LDP coordinators were knowledgeable about both
leadership theories and principles and the strategies and tools for learning measurement,
coordinators did not often interact with their peers in developing common approaches.
siloed environment results in multiple participants establishing multiple socially constructed
realities (Bandura, 2005). It also results in a missed opportunity to utilize internal resources to
increase knowledge and capability.
Strengthening knowledge and capability should include incorporating the environment's
influence on a person's learning and subsequent behavior (Bandura, 2005). As Peter shared
during his interview, Century Manufacturing, Inc. currently utilizes communities of practice
(COP) as forums for learning and problem-solving. COPs should have established authority and
autonomy to implement change, which is facilitated through the inclusion of an executive-level
leader as a sponsor and promoting members as change champions (USAID, 2014). An
enterprise-level leader within the Leadership, Learning and Organizational Capability (LLOC)
department is recommended to be assigned as a sponsor. COPs engage and empower
communities most impacted by change, which is essential in creating equitable and inclusive
approaches (Bentley, 2019). COPs serve as a safe space for dialogue to collaborate, conduct
knowledge sharing, establish standards, and create mechanisms for shared accountability
(USAID, 2014).
The LLOC can facilitate collaboration, as well as work with the community to establish
shared standards for the knowledge needed to successfully manage leadership development
across the company. Marsh (2015) asserts that these professional learning communities often
serve as "data teams" associated with data-driven reform initiatives and successfully improve
data use through colleagues clarifying and correcting analysis errors and having positive effects
on leader beliefs, understandings, and practice.
Establish Clear, Standard Definitions and Measurement Mechanisms
Building a clear, consistent definition for leadership and program effectiveness is a
common challenge for leadership development program coordinators (Avolio & Hannah, 2008;
Yukl et al., 2002); Century Manufacturing, Inc. is no exception. While LDP coordinators
described disciplined approaches to developing criteria for measurement, gaps in procedural
knowledge exist for evaluating the quality of all LDP components. Creating adequate criteria for
leadership and program efficacy, both considered critical foundational elements of measurement
strategies, starts with a clear definition of leadership.
Leadership development programs at Century Manufacturing, Inc. use a set of company-
level defined leadership behaviors but vary in assessing competency. Several are investing years
and resources in a balanced approach between functional leadership and business competency.
Coordinators reportedly take the right approach, using surveys and focus group interviews to
construct program-based definitions (Yukl et al., 2002). However, there was limited evidence
that cross-program collaboration was occurring that established company-wide standards. This is
both inefficient and risks variation in measures for effectiveness (Yukl et al., 2002). To create a
model that produces an organizational definition of leadership tied to company mission, vision,
goals, and business priorities, LDP coordinators must work together through mechanisms like the
COP (Senge, 2005). LDP coordinators should also practice ongoing stakeholder inclusion within
their respective businesses to define unique elements appropriate for each program (Wheeler &
Sillanpää, 1998). This bottom-up approach moves away from the bureaucratic model of a
central authority for definition and direction and enables the programs to be more agile (Fuqua &
Newman, 2005).
LDP coordinators face another adaptability challenge in applying research-based
approaches to measuring the effectiveness of multiple program components. As described in
Chapter Four, LDPs primarily use experiential learning assignments to develop leaders. These
experiences use multi-year assignments that provide on-the-job training, new hosting managers,
mentors, weeklong workshops, and traditional in-person and online learning courses. That
experience is multifaceted, yet evaluation can be limited to competency assessments and one-on-
one performance management discussions with a manager. LDP coordinators should receive
training on how to apply common measurement mechanisms, such as competency assessments,
surveys, and observations, within the experiential learning context. As Marsh (2012) described,
organizations benefit from a sociocultural learning perspective that builds data use and
intervention identification capacity. Sociocultural theory assumes learning is embedded within
social events, so to effectively assess program performance, one must focus on measurement
approaches that include the everyday, authentic activities individuals participate in, involving
peers, activities, and artifacts (Marsh, 2012).
While establishing standards is a known challenge across industries, Century
Manufacturing, Inc. appears to benefit from having an existing knowledge base and approaches
to leverage and scale. For example, Jesse and Riley worked together within their functional
organization on a three-year effort to distill down over 80 competencies to a core set. By shifting
from a human resources-driven approach to one that engaged management across business
environments, the process created more understanding and ownership of the definitions (Fuqua
& Newman, 2005). Further, it resulted in a more experiential-based approach that provides
multiple pathways to scale across the organization. Examples such as these can be shared,
adapted, and applied by other LDP coordinators, drawn from collaboration within the COP, to
build a more aligned, efficient approach to establishing consistent data use strategies.
Create Opportunities for Observation, Application, Discussion, and Reflection
For LDP coordinators to master complex procedural knowledge, they need to use
metacognitive strategies such as assessing needs, planning one's approach, and monitoring
progress (Krathwohl, 2002). According to Krathwohl (2002), effective learning, teaching, and
assessing move beyond remembering, understanding, and applying conceptual and procedural
knowledge to metacognitively analyze, evaluate, and create knowledge. Thus, experiences that
create opportunities for LDP coordinators to observe one another's use of measurement, apply
learnings to unfamiliar tasks, and analyze, discuss, and reflect on results can strengthen critical
thinking and create new alternative solutions to complex problems. LDP coordinators can use the
COP to identify opportunities, and organizational sponsors and other executives should also
prioritize cross-organizational opportunities. Creating cross-organizational approaches can
increase alignment between program initiatives and strategic or business goals shared across the
organizations (Barnett & Mattox, 2010).
Motivation Recommendations
The study's conceptual framework incorporates Clark and Estes' (2008) three key
motivational concepts of active choice, persistence, and mental effort and incorporates theories
of effort, self-efficacy, and influences on learning according to Dweck (2013). Interviews
revealed threats to LDP coordinator self-efficacy created by disconnects between their
expectancy and utility value and that of their organization. Based on findings, LDP coordinators
lack confidence that the organization values the effort put into using data for decision-making.
Additionally, organizational sponsors and other decision-makers need to decrease ambiguity in
how the work of LDP coordinators accomplishes more significant goals.
Increase Transparency of Data Usage
Motivation to remain engaged in a task requires long-term value or extrinsic reasons
(Eccles, 2006; Rueda, 2011). According to the expectancy-value theory, for LDP stakeholders to
remain extrinsically motivated, they should see and understand how data delivers value (Eccles,
2006). It is not enough to have LDP coordinators intrinsically motivated to sustain an
organizational culture (Marsh, 2015). Interview findings revealed that the use of measurement
data is limited to leadership-level reviews and one-on-one conversations and not consistently
shared with LDP participants, mentors, or other members of the LDP support system. The use of
data and artifacts to drive decision-making, such as standardized balanced scorecards, needs to
be visible to all stakeholders to demonstrate its utility value (Clark & Estes, 2008). Further,
transparency into the value of data use can support effective accountability mechanisms that hold
all stakeholders answerable for performance (Shahin & Zairi, 2006).
Build Clear Connections Between Data Usage and Business Outcomes
Training initiatives risk failure when they do not have alignment with the strategic or
business goals of their organization (Barnett & Mattox, 2010) or use measurement approaches
that do not connect clear definitions for program effectiveness to the collection of key data for
decision-making (Kirkpatrick, 2006). Value creation should be tied to specific leadership
behaviors (Eccles, 2006; Rueda, 2011), building an understanding of how the effort going into
data-driven decision-making accomplishes bigger organizational goals (Marsh, 2012). For
example, Douglas’ program is mapping leadership knowledge, skills, and abilities, as well as the
tools that leaders need, to the key business priorities of safety, quality, and production system
performance. By creating these clear connections, LDP coordinators align program performance
with business performance, demonstrating the value of measurement and creating buy-in across
stakeholder groups (Eccles, 2006). As LDP coordinators continue to create and refine a
consistent set of leadership skills and behaviors, they move closer to the organizational
definition of leadership and development approach aligned to business needs that currently goes
unmet and negatively impacts motivation (Senge, 2005).
Organizational Recommendations
Organizational culture directs the effectiveness of leadership on organizational
performance by creating a set of ideas, values, norms, and institutions (Arun et al., 2020).
Cultural models are important because they shape how people think about issues, which
solutions they see as effective, and, ultimately, organizational behaviors.
(2008) model, leaders require adequate knowledge, motivation, and organizational support to
achieve program goals. Based on interview findings, Century Manufacturing, Inc.'s cultural
model values the idea of data-driven decision-making but lacks a clear, consistent connection
between success criteria and measured outcomes of leadership and program performance.
Further, established accountability relationships were inconsistent or ambiguous. As such,
recommendations focused on creating clear alignment between stakeholder performance and
organizational goals (Clark & Estes, 2008) and establishing accountability relationships that set
clear expectations for performance (Dubnick, 2020).
Align Stakeholder Performance and Organizational Goals
A fundamental component of a strong learning measurement culture is alignment
between program stakeholder performance and that of the business (Kirkpatrick, 2016).
According to Clark and Estes (2008), performance goals are effective when they cascade from
organizational goals. Aligning performance and business goals requires creating descriptions of
tasks or objectives that individuals and teams must accomplish according to specific deadlines
and criteria (Clark & Estes, 2008). Century Manufacturing, Inc. will improve its current
measurement approaches by connecting the day-to-day activities of stakeholders to
organizational goals. Without this, individuals will tend to focus instead on tasks that advance
their careers rather than what is needed for organizational success (Clark & Estes, 2008). Clear
communication of value between individual contributions and organizational success is also
needed to maintain motivation (Rueda, 2011).
Through the alignment of stakeholder and organizational goals, LDP coordinators can
better diagnose gaps and frame solutions through a lens of broader organizational improvement
(Clark & Estes, 2008). Further, Bandura's (2005) social-cognitive approach to learning and
measurement asserts that all stakeholders should participate in goal setting to build stronger
commitment. A mechanism, such as the LDP Community of Practice (COP), should be
established to serve as a forum for goals to be set, consistently monitored, and changed. When
creating goals, Clark and Estes (2008) suggest leaders create concrete, challenging, and current
goals, meaning short-term ones, as daily or weekly goals are more effective than
longer-term goals set monthly or annually. In this way, LDP decision-makers can better evaluate
how employee skill development and behavior change lead to the accomplishment of short-term,
individual, and team goals, that are then aligned to business needs (Bourda, 2014).
Establish Accountability Relationships for All Program Stakeholders
Accountability relationships are driven by the three dimensions of values, decision rights,
and information (Hentschke & Wohlstetter, 2004). While alignment largely exists in the value of
the use of data to measure program effectiveness, decision rights are less established across all
stakeholder relationships, and information sharing may be limited to select forums with limited
audiences. The epistemological accountability of Century Manufacturing, Inc. is managerial and
bureaucratic because its products demand a strong culture of safety and compliance. However,
within the context of developing leaders, the organization’s lack of standards for holding people
responsible challenges accountability relationships and prevents the use of the same
epistemological approach.
Dubnick (as cited in Hall et al., 2017) discusses accountability in terms of webs of relationships,
reflecting a social network approach. While the network created within the LDP experience can
define domains of accountability, role responsibilities can often be vague (Chittenden, 2014).
This results in ambiguity within established decision rights, as well as in how to hold stakeholders
accountable for success. To establish the first ontological characteristic of relationality, which
decreases ambiguity, the organization should clearly define relationships between roles and make
expectations for success transparent (Dubnick, 2020). For example, organizational sponsors
expect a return on investment, while LDP program participants expect positive, valuable learning
experiences and quality mentorships. Hosting managers expect value delivered to their
organization and mentors expect program participants to be engaged in the development process.
Further, within the workplace, those interacting with leaders should expect more effective
employees able to guide the organization in delivering results. Unless these expectations for
success are clearly communicated, and standards for holding individuals to account established,
stakeholders will find it difficult to deliver on commitments (Chittenden, 2014).
Expanding stakeholder inclusion within LDP information sharing and decision-making
forums will establish stronger accountability relationships. Currently, formal reporting
conversations occur within structured business reviews, and individual-level conversations
involve one-on-one feedback via performance management plan discussions. These
mechanisms begin to define accountability between LDP coordinators, hosting managers, and
their organizational sponsors but leave gaps between LDP coordinators, program participants,
and mentors. Increased transparency of formal reporting conversations would reduce
accountability gaps within the organization's leadership (Hall, 2017) and better establish
spatiality (Dubnick, 2020). Access to information and discussions that drive decision-making
allows stakeholders to define goals as measures of accountability and identify appropriate
accountability agents to resolve performance disconnects (Burke, 2004). Agents are better
informed to indicate necessary informational inputs and outputs for decisions and establish
governance structures to sustain the approach (Burke, 2004). When future accountability
deficiencies or gaps arise, the LDP Community of Practice (COP) can be used to identify and
create additional structures and processes needed to establish and foster account-giving
relationships and associated behavior (Dubnick, 2014).
Integrated Implementation and Evaluation Plan
Building organizational capacity for change requires mechanisms to embed and transmit
culture and draw attention to the importance of consistent, systematic measurement of key
initiatives (Schein, 2004). To hold stakeholders accountable for LDP performance, coordinators
need to measure whether targeted outcomes were achieved (Kirkpatrick & Kirkpatrick, 2016).
For the LDP, the primary targeted outcome is a strong feeder pool for leadership roles delivered
across the enterprise. The program also needs to deliver strong LDP participant performance and
LDP mentor/manager performance to enable this outcome. Internally, the primary targeted
outcome is functional excellence. Finally, to sustain the approach, change agents need to
understand their impact on the organizational culture and the influence on creating organizational
change (Schein, 2004).
Implementation and Evaluation Framework
The Kirkpatrick (2016) model takes a results-driven approach to measuring the
effectiveness of training (Barnett & Mattox, 2010). Leadership development program (LDP)
designers must begin with desired results (Level 4), determine what behavior (Level 3) is needed
to accomplish them, identify the attitudes, knowledge, and skills (Level 2) that are necessary to
bring about the desired behavior(s), and present the content in a way that enables the participants
to learn what they need to know and react favorably to the program (Level 1). Within the
monitor and adjust period that encompasses Levels 3 and 4, LDP designers should also include
required drivers, such as job aids, coaching, and recognition mechanisms, that monitor, reinforce,
encourage, and reward the performance of essential behaviors on the job (Kirkpatrick, 2016). A
precise measurement strategy aligned with the business's needs better positions companies to
secure sufficient resources to execute their learning strategy priorities (Barnett & Mattox, 2010).
Through the use of the Kirkpatrick (2016) framework, talent development leaders enhance their
strategy to include evaluation of program effectiveness on business results and return on
investment and can demonstrate the impact of training on business performance metrics.
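As a minimal sketch of the backward-design sequence just described, the following encodes the four levels in the order the Kirkpatrick model prescribes. The task descriptions are illustrative paraphrases, not CMI program content:

```python
# Backward design per the Kirkpatrick model: begin with Level 4 results and
# work down to Level 1. Entries are illustrative paraphrases, not CMI content.
DESIGN_SEQUENCE = [
    (4, "Results", "Define the targeted outcomes, e.g., a strong leadership feeder pool"),
    (3, "Behavior", "Identify the critical on-the-job behaviors that produce those results"),
    (2, "Learning", "Specify the attitudes, knowledge, and skills behind each behavior"),
    (1, "Reaction", "Present content so participants learn it and react favorably"),
]

def plan_program() -> None:
    """Walk the design steps in the order the model prescribes (Level 4 first)."""
    for level, name, task in DESIGN_SEQUENCE:
        print(f"Level {level} ({name}): {task}")

plan_program()
```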
Organizational Context
Unlike providers of externally delivered products, LDP coordinators and organizational sponsors are
not held to specific professional or regulatory standards within their LDPs. This creates several
challenges that impact the accountability binary between LDP participants (provider) and LDP
coordinator (director). Selection to the program often adds additional work to existing job duties
for participants yet does not automatically lead to a promotion, upgrade, or monetary
compensation of any kind regardless of performance. As such, weak incentives are immediately
established for the participants to complete the extra work or risk possible failure during
challenging, high-profile stretch assignments (Hentschke & Wohlstetter, 2004). LDP coordinators
have limited decision rights, as providers themselves in relationships to Century Manufacturing,
Inc. executives, to hold program participants accountable. No standards or measurement criteria
exist for maintaining participation in the program, nor a centralized entity for periodic review, due
to decentralized authority (Hentschke & Wohlstetter, 2004).
Level 4: Results and Leading Indicators
Metrics and methods are required to serve as indicators for progress, as well as
interventions for improvements, to determine whether targeted outcomes occur as a result of the
training (Kirkpatrick & Kirkpatrick, 2016). Table 11 outlines metrics that function as key
performance indicators for each external and internal outcome.
Table 11
Outcomes, Metrics, and Methods for External and Internal Outcomes
External outcomes

Outcome: Strong feeder pool for leadership roles exists
Metric(s): Number of 1st-year participants; number of 2nd-year participants; number of recent LDP graduates
Method(s): Demographic data collection

Outcome: Increased LDP participant performance
Metric(s): Results on key questions covering capability focus skills, experiential learning activities, learning through relationships, and formal training activities
Method(s): Kirkpatrick surveys (L4)

Outcome: Increased LDP mentor/manager performance
Metric(s): Results on key questions covering experiential learning activities and learning through relationships
Method(s): Kirkpatrick surveys (L4)

Internal outcomes

Outcome: Increased functional excellence
Metric(s): Results on key questions covering enterprise common leader process criteria, candidate selection criteria, and communication roadshow; shared accountability model scorecard results
Method(s): Kirkpatrick surveys (L4); assigned and enrolled manager mentor and progress checks (utilizing Kirkpatrick L4 surveys); recurring LDP COP reviews (L4); quarterly LDP visibility reviews (L4); bi-annual leadership team reviews (L4)
The measurement strategy also needs to incorporate the right behaviors that drive
external and internal outcomes to determine their resulting impact (Kirkpatrick & Kirkpatrick,
2016).
Level 3: Behavior
Critical Behaviors
According to interview participants, successful program graduates are expected to
demonstrate a defined set of behaviors. For purposes of this study, a sample list is provided (see
Table 12) and used to provide examples of the application of recommendations. However, LDP
stakeholders should work together to develop a common list of core standards and an approach
to adding program-specific behaviors that align with the needs of their unique business
conditions. The sample list is based on data provided from interviews with Michael and
Kennedy, who both oversee Enterprise programs.
Table 12
Critical Behaviors, Metrics, Methods, and Timing for Evaluation
For each critical behavior below, the participant and his or her manager complete a Kirkpatrick L3 survey containing a Likert-based scale measuring behavior effectiveness, twice per year (mid-year and end-of-year).

1. Conflict resolution: Leads people and teams through opposing views to reach a common goal.
2. Directing, inspiring, and motivating others: Leads tasks and people effectively, solicits and considers others’ opinions, and inspires and motivates others.
3. Performance to operational plans: Delivers products and services that consistently meet or exceed expectations.
4. Team effectiveness: Effectively participates in accomplishing team goals, creates group cohesion, values the contributions of people from diverse backgrounds, and involves others in decisions that affect them.
5. Coaching/teaching: Guides and coaches others to improve skills and achieve challenging goals.
6. Operational and financial acumen: Demonstrates knowledge of how the business is run.
7. Political savvy: Effectively gains support for his or her ideas and builds coalitions.
8. Strategic agility: Senses and responds to opportunities or obstacles without losing momentum.
Required Drivers
While LDP participants maintain primary accountability for managing their careers,
managers and key support people play influential roles in the success of stakeholder outcomes.
Clark and Estes (2008) state that organizational barriers, which can inform required drivers, can
be identified through additional feedback collected from surveys and interviews. See Table 13
for details.
Table 13
Required Drivers to Support Critical Behaviors

Reinforcing
• LDP guidebook including program information for models, definitions, processes, and tools. Timing: Ongoing. Critical behaviors supported: All.
• On-the-job (OTJ) work assignment job aid listing activities available by critical behavior for conflict resolution, team effectiveness, political savvy, and strategic agility. Timing: Quarterly. Critical behaviors supported: 1, 4, 7, 8.
• Competency assessment tool. Timing: Annually. Critical behaviors supported: Focus on the three lowest-rated critical behaviors.

Encouraging
• Feedback and coaching from the manager. Timing: Ongoing. Critical behaviors supported: 2, 5, 6, 7.
• Feedback and coaching from the assigned LDP mentor. Timing: Ongoing. Critical behaviors supported: 2, 5, 6, 7.

Rewarding
• Annual capstone event offering an opportunity to share success stories and receive formal awards. Timing: Annually. Critical behaviors supported: 2, 3, 4, 5.
• Completed experiential learning assignments providing recognition for achievement of a business target and/or stretch goal. Timing: Project-based. Critical behaviors supported: Project-based, but likely to target all critical behaviors.

Monitoring
• Completion of an experiential learning assignment with an assigned LDP mentor or advocate, chosen from a list of internal opportunities. Timing: Quarterly. Critical behaviors supported: Project-based and aligned with critical behaviors from the competency assessment tool.
• Opportunities for LDP participants to share experiences, including success stories, key challenges, and lessons learned, with peers. Timing: Monthly. Critical behaviors supported: All.

Note. Critical behavior numbers refer to the numbered list in Table 12.
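Table 13's annual competency assessment focuses reinforcement on the three lowest-rated critical behaviors. As a minimal sketch of how those behaviors could be identified from Kirkpatrick L3 survey ratings, the following Python fragment is offered; the ratings, labels, and data layout are illustrative assumptions, not artifacts from the study.

```python
from statistics import mean

# Hypothetical Kirkpatrick L3 ratings (1-5 Likert) keyed by the critical
# behavior numbers from Table 12; scores are illustrative assumptions.
l3_ratings = {
    "1 Conflict resolution": [4, 3, 4],
    "2 Directing, inspiring, motivating": [3, 3, 2],
    "3 Performance to operational plans": [5, 4, 4],
    "4 Team effectiveness": [4, 4, 5],
    "5 Coaching/teaching": [2, 3, 3],
    "6 Operational and financial acumen": [3, 2, 3],
    "7 Political savvy": [4, 3, 3],
    "8 Strategic agility": [4, 4, 4],
}

def lowest_rated(ratings, n=3):
    """Return the n lowest-rated behaviors, i.e., the focus areas for the
    annual competency assessment listed under 'Reinforcing' in Table 13."""
    averages = {behavior: mean(scores) for behavior, scores in ratings.items()}
    return sorted(averages, key=averages.get)[:n]

print(lowest_rated(l3_ratings))  # behaviors 2, 5, and 6 with these samples
```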
Organizational Support
For the LDP organization to be held accountable for implementing drivers that support
critical behaviors continuously, Kirkpatrick and Kirkpatrick (2016) state clear leadership
direction and regular communication are required. Clear leadership direction is provided through
the sponsorship of executive-level leaders aligned with each program. With the new
accountability approach established by the recommended measurement strategy, managers,
mentors, and advocates assigned to LDP participants are held accountable for executing
activities that drive critical behaviors.
Regular communication of LDP information, requirements, and performance will provide organizational support for critical behaviors. The LDP Guidebook should be available internally to all Century Manufacturing, Inc. employees and include information about all job aids. Orientation and internal social media sites enable frequent communication among LDP participants. The LDP support network assigned to each participant, including the direct manager, mentor, advocate, and human resources generalist, is held accountable for providing regular, intentional communication to the emerging leader.
Level 2: Learning
Although human resources leaders cite identifying and developing leadership talent for organizational growth as their top challenge, only 10 to 20% of organizations evaluate their leadership development programs (Avolio et al., 2010). Hauser et al. (2020) estimate that only about one-third of organizations measure learning-level activities. Century Manufacturing, Inc.'s LDP currently exemplifies this problem: data is not used to hold leaders accountable for desired program participant performance. To bring about the desired leadership behaviors and drive accountability, learning goals should define the degree to which LDP participants have acquired the intended knowledge, skills, abilities, and confidence levels based on participation in training (Kirkpatrick & Kirkpatrick, 2016).
Learning Goals
LDP coordinators can use learning goals to create an answerability accountability
approach by defining a set of tasks and expectations within roles for learning and performance
(Dubnick, 2003). Learning goals will also strengthen alignment within the measurement
approach to address the accountability problem of the lack of a data-driven strategy for
leadership development programs (Barnett & Mattox, 2010).
Utilizing Anderson and Krathwohl's (2001) revised model of Bloom's Taxonomy, below is a list of learning goals for LDP participants that align with the sample list of behaviors provided earlier:
• Implements strategies that mediate opposing team viewpoints to reach a common goal.
(Conflict Resolution)
• Checks for others' opinions regularly and involves others in decisions that affect them to directly influence motivation. (Directing, inspiring, and motivating others)
• Recognizes output that consistently meets or exceeds expectations to improve operational
performance. (Performance to operational plans)
• Recognizes the value of contributions from people with diverse backgrounds. (Team
Effectiveness)
• Critiques and guides others to improve skills and performance. (Coaching/ Teaching)
• Interprets business data and explains how business is run to demonstrate operational and
financial acumen. (Operational and financial acumen)
• Applies political capabilities in support of one's ideas by organizing partnerships. (Political Savvy)
• Strategizes to help interpret situational conditions for barriers, opportunities, and
potential solutions. (Strategic Agility)
Program
Bandura's (2012) social cognitive theory emphasizes that knowledge acquisition includes
elements outside direct personal agency, such as social conditions and institutional practices. To achieve the learning goals, it is recommended that the LDP launch a cohort using its current selection process and predetermined program duration (usually two years).
delivery method supports the current approach LDPs use, which is a blend of on-the-job
assignments, in-person and online training courses, single- and multi-day leadership workshops,
capstone events, and mentoring sessions. The LDP will integrate the specific learning goals and
measurement strategy into this blended learning experience for high-potential employees capable
of rising to and succeeding in a more senior, critical role.
Evaluation of the Components of Learning
In alignment with Kirkpatrick and Kirkpatrick's (2016) New World Model, the measurement strategy includes the methods and activities listed in Table 14 to measure the degree to which participants acquired the intended knowledge, skills, attitude, confidence, and commitment.
Table 14
Evaluation of the Components of Learning for the Program

Declarative knowledge ("I know it.")
• Kirkpatrick survey using Likert scale items. Timing: Conclusion of all learning events.
• Knowledge checks during events. Timing: Periodically during in-person events, such as workshops or mentoring sessions.

Procedural skills ("I can do it right now.")
• Kirkpatrick survey using Likert scale items. Timing: On-the-job via follow-up surveys connected to specific learning events.
• Demonstration individually and in groups. Timing: During learning events, such as workshops.
• Demonstration on-the-job via special assignments or 360 ratings. Timing: Documented through observation notes.

Attitude ("I believe this is worthwhile.")
• Kirkpatrick survey using Likert scale items. Timing: Conclusion of all learning events.
• Knowledge checks during events. Timing: Periodically during in-person events, such as workshops or mentoring sessions.

Confidence ("I think I can do it on the job.")
• Kirkpatrick survey using Likert scale items. Timing: Conclusion of all learning events.
• Knowledge checks during events. Timing: Periodically during in-person events, such as workshops or mentoring sessions.

Commitment ("I will do it on the job.")
• Discussion during events. Timing: At the end of in-person events, such as workshops or mentoring sessions.
• Documented on-the-job work assignment. Timing: Discussion during a capstone event.
Level 1: Reaction
According to Kirkpatrick and Kirkpatrick (2016), Level 1 feedback provides valuable insight into whether learners found training favorable, engaging, and relevant to their jobs. Table 15 outlines how LDP coordinators can approach gathering reaction data from program participants.
Table 15
Components to Measure Reactions to the Program

Engagement
• Learning event evaluation. Timing: Two weeks after the conclusion of the event.
• Attendance. Timing: Collected during the event (each day, if multi-day).
• Online course completions. Timing: Ongoing, within the online LDP system.
• Data analytics of program events. Timing: Ongoing, within the online LDP system.
• Observations by LDP coordinators. Timing: During in-person learning events, such as workshops.

Relevance
• Learning event evaluation. Timing: Two weeks after the conclusion of the event.
• On-the-job work assignment evaluation. Timing: After each relevant experience.

Customer satisfaction
• Learning event evaluation. Timing: Two weeks after the conclusion of the event.
• Program evaluation. Timing: Bi-annual (2x/yr).
• Mentorship experience evaluation. Timing: Bi-annual (2x/yr).
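As one illustration of the engagement analytics available within the online LDP system (Table 15), the sketch below computes an online course completion rate from a hypothetical system export; the record layout is an assumption, not an actual system schema.

```python
# Hypothetical export from the online LDP system listing assigned online
# courses per participant; the record layout is an assumption.
course_records = [
    {"participant": "P-001", "course": "Leading Teams", "completed": True},
    {"participant": "P-001", "course": "Business Acumen", "completed": False},
    {"participant": "P-002", "course": "Leading Teams", "completed": True},
]

def completion_rate(records):
    """Share of assigned online courses completed (Table 15, Engagement)."""
    if not records:
        return 0.0
    return sum(1 for r in records if r["completed"]) / len(records)

print(f"Online course completion: {completion_rate(course_records):.0%}")  # 67%
```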
Evaluation Tools
Evaluations and performance assessments are intended to assess the extent to which programs achieve their stated goals and objectives (Ebrahim, 2010). To measure the effectiveness of leadership development programs, evaluations should be administered both immediately after a learning event and after a set period of time has passed to allow for the application of learning (Barnett & Mattox, 2010).
Immediately Following the Program Implementation
The recommended LDP Post Event Survey will capture Level 1 and Level 2 data for program learning events. The instrument, located in Appendix B, contains ten questions and uses a five-point Likert-based scale in which answer options range from five (strongly agree) to one (strongly disagree). The instrument considers motivational measures of extrinsic (value) and cost (worth) for learning and reaction (Clark & Estes, 2008) in addition to knowledge-based facts (information) and concepts (skills) (Krathwohl, 2002). One open-ended question is included to gather insight into the organizational culture of support for the training. The formative assessment can be administered on an ongoing basis using an internal Century Manufacturing, Inc. survey tool with analytics capability. A link would be sent via email to participants immediately following the conclusion of the training event, allowing two weeks for completion.
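As a minimal sketch of how the post event survey responses could be aggregated once collected, the following Python fragment averages each Likert item and flags items falling below an internal benchmark; the item keys, response values, and the 3.5 threshold are illustrative assumptions, not elements of the study's instruments.

```python
from statistics import mean

# Hypothetical responses to the ten Likert items of the LDP Post Event
# Survey (Appendix B); only four items are shown, and the keys and
# values are illustrative assumptions.
responses = [
    {"L1_engaged": 5, "L1_relevant": 4, "L2_learned": 4, "L2_confident": 3},
    {"L1_engaged": 4, "L1_relevant": 4, "L2_learned": 5, "L2_confident": 3},
]

def item_means(responses):
    """Average each survey item across respondents."""
    return {item: mean(r[item] for r in responses) for item in responses[0]}

def flag_low_items(means, threshold=3.5):
    """Flag items falling below an assumed internal benchmark."""
    return [item for item, score in means.items() if score < threshold]

means = item_means(responses)
print(means)                  # item-level averages
print(flag_low_items(means))  # ['L2_confident'] with these sample values
```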
Delayed for a Period After the Program Implementation
LDP coordinators should deploy the LDP follow-up survey 60 to 90 days following the conclusion of a program training event. Delaying administration of the follow-up survey allows leadership knowledge and skills to develop and be applied within context, so that competency can be assessed and environmental elements can be factored into the assessment of effectiveness (Hall et al., 2014). Follow-up survey data can be utilized by program participants during mentor or peer support sessions to establish communal norms of accountability. Such norms place greater emphasis on helping remove barriers that impede participants' performance without expectation of immediate benefit to the mentor or peer (Wang et al., 2019). The LDP follow-up survey, located in Appendix C, can be administered via participant email. When researchers and practitioners partner to develop and administer evaluations, research shows that more effective training programs can result (Hauser et al., 2020).
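One way coordinators might compare the two instruments to gauge learning transfer is sketched below; the matched item names and mean scores are hypothetical, and a negative delta is read only as a prompt to investigate barriers, not as proof of failure.

```python
# Hypothetical mean scores for matched items on the post event survey
# (Appendix B) and the 60-90 day follow-up survey (Appendix C); the item
# names and values are illustrative assumptions.
post_event = {"coach_direct_reports": 4.2, "interpret_business_data": 3.9}
follow_up = {"coach_direct_reports": 3.6, "interpret_business_data": 4.1}

def transfer_deltas(post, follow):
    """Difference between applied (follow-up) and intended (post-event)
    scores; a negative delta prompts investigation of on-the-job barriers."""
    return {item: round(follow[item] - post[item], 2) for item in post}

for item, delta in transfer_deltas(post_event, follow_up).items():
    status = "investigate barriers" if delta < 0 else "transfer holding"
    print(f"{item}: {delta:+.2f} ({status})")
```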
Data Analysis and Reporting
Century Manufacturing, Inc.'s culture embraces the use of data to guide decision-making, which is a critical enabler but may not be enough to sustain use (Marsh, 2012). LDP coordinators should establish a monitoring and evaluation approach that includes a balanced scorecard for tracking current performance and improvement efforts (Shahin & Zairi, 2006). The scorecard should encompass the external and internal outcomes targeted as a result of the training, their metrics and methods, and the critical behaviors, goals, and objectives intended to drive overall program performance (Kirkpatrick & Kirkpatrick, 2016). Appendix D contains the LDP Shared Accountability Model Scorecard with a list of recommended metrics grouped within the key performance indicator categories of financial, customer, internal process, and learning and growth. This holistic approach includes metrics such as cost per program participant (financial), candidate selection criteria quality (internal process), LDP post-event survey scores (learning and growth), and bi-annual leadership team reviews (customer) to monitor performance and inform decision making more effectively (Shahin & Zairi, 2006).
Additionally, LDP coordinators should set internal benchmarks to use as targets for improvement and link short-term operational goals with long-term vision and business strategy. Defining clear tasks, role expectations, and obligations, and formalizing them within improvement plans, will reinforce the answerability accountability type for ongoing development (Dubnick, 2003). Data analysis can then identify critical success factors and cause-and-effect relationships for process improvement action plans.
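The following sketch illustrates how scorecard values could be checked against internal benchmarks to surface gaps that feed improvement plans; the categories follow Appendix D, but all metric names, values, and targets are illustrative assumptions, not study data.

```python
# Hypothetical scorecard values and internal benchmarks grouped by the four
# KPI categories in Appendix D; all names, values, and targets are
# illustrative assumptions.
scorecard = {
    "financial": {"cost_per_participant": 12500},
    "internal_process": {"selection_criteria_quality": 0.82},
    "learning_and_growth": {"post_event_survey_mean": 4.1},
    "customer": {"leadership_review_score": 3.4},
}
benchmarks = {
    "cost_per_participant": 15000,       # stay at or below
    "selection_criteria_quality": 0.80,  # stay at or above
    "post_event_survey_mean": 4.0,
    "leadership_review_score": 4.0,
}
LOWER_IS_BETTER = {"cost_per_participant"}

def benchmark_gaps(scorecard, benchmarks):
    """List metrics that miss their internal benchmark, to feed the
    improvement plans described above."""
    gaps = []
    for category, metrics in scorecard.items():
        for metric, value in metrics.items():
            target = benchmarks[metric]
            missed = value > target if metric in LOWER_IS_BETTER else value < target
            if missed:
                gaps.append((category, metric, value, target))
    return gaps

print(benchmark_gaps(scorecard, benchmarks))
# [('customer', 'leadership_review_score', 3.4, 4.0)]
```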
Improvement plans will be developed at the individual and organizational levels. Clear contractual accountability relationships will be established between LDP participants (providers) and mentors (directors), as well as between LDP coordinators (providers) and Century Manufacturing, Inc. leaders (directors), for meeting performance targets. Set capstone reviews, such as recurring mentoring sessions, will serve as forums for individual performance reviews, while quarterly LDP visibility reviews and bi-annual leadership team reviews will serve as forums for organizational reviews. Utilizing data for decision making, strategic planning, and individual and organizational change can then become a more informed, effective activity (Marsh, 2012).
Summary
Kirkpatrick and Kirkpatrick (2016) state that the three main reasons for evaluating training programs are improving programs, maximizing learning transfer and organizational results, and demonstrating the value of training. Century Manufacturing, Inc. is currently making major investments in leadership development programs and requires help in all three of those areas to achieve its organizational goals of providing leadership development and building organizational capability. As leadership, skills, and human capital play integral roles in building organizational capacity (Cox et al., 2008), it is critical that LDPs maintain high-quality leadership development programs.
The Kirkpatrick and Kirkpatrick (2016) model grounds the LDP measurement strategy in culture, knowledge, and capability and establishes a blueprint for creating standardized, specific, measurable goals. The plan was constructed using the Clark and Estes (2008) gap analysis approach to identify and incorporate stakeholders' knowledge and motivation influences alongside Century Manufacturing, Inc. priorities, holding all stakeholders accountable for meeting defined goals and objectives. The implementation approach represents each level of Kirkpatrick and Kirkpatrick's (2016) New World Model: reaction (Level 1), learning (Level 2), critical behaviors (Level 3), and results and leading indicators (Level 4). Finally, the evaluation plan includes survey instruments for data collection and analysis, a scorecard for reporting, and an operating rhythm for frequent communication of summary explanations.
Through frequent monitoring and ongoing adjustment conversations, all stakeholders engage in integrated implementation and evaluation of the program measurement strategy. The expectation is that root cause analysis and corrective action take place continuously and that identified interventions return value by encouraging, reinforcing, and rewarding the critical behaviors needed to produce desired results (Kirkpatrick & Kirkpatrick, 2016). Self-efficacy, motivation, and developmental readiness can enhance the effectiveness of training (Avolio et al., 2010; Clark & Estes, 2008). With the defined learning goals, measurement instruments and activities, and balanced scorecard for monitoring, the LDP has the tools needed to improve, maximize learning transfer and results, and demonstrate the value of its training (Kirkpatrick & Kirkpatrick, 2016).
Limitations and Delimitations
In addition to the limitations identified in Chapter Three, two limitations were identified during the study. First, interview participants provided few artifacts, which limited the ability to examine approaches in greater depth. Incorporating multiple forms of data collection, such as interviews and document reviews, strengthens research validity and credibility (Lincoln & Guba, 1985). The lack of artifacts limited the use of document reviews to construct meaning, which narrowed the understanding of participants' perceptions of how LDP leaders collect and utilize data. For example, while interview participants referred to the use of scorecards and leadership presentations, the researcher was unable to view actual copies of the artifacts to validate descriptions. The lack of artifacts also limited the ability to identify gaps between what tangibly exists and what are socially constructed realities (Bandura, 2005).
Second, limited resources caused by a drastic reduction in employee headcount in 2020 delayed implementation of several planned initiatives. As a result, emerging data use approaches should not be generalized as evidence of knowledge, motivation, or organizational influences. Without caution, behaviors and activities that are only in preliminary stages increase the risk of unwarranted assumptions regarding motivation and accountability. For example, while Douglas’ program requires senior directors to craft employee engagement plans using competency-based assessments, implementation is still a year away. This demonstrates intent to connect business success with measured outcomes of leadership and program performance but lacks the evidence to determine whether motivation is sustained to deliver results. As such, recommendations for increasing motivation and closing gaps between organizational business goals and LDP goals can still be considered for emerging measurement strategy elements.
Recommendations for Future Research
Opportunities for additional research emerged that present chances to examine the study’s findings more deeply. Experiential learning is a core component of leadership development programs, but little research exists regarding the application of measurement strategies within that development approach (Arun et al., 2020). Additionally, continuing research on emerging efforts can assess their long-term effectiveness.
More research is recommended on the application and effectiveness of learning measurement within experiential learning environments. It is important to investigate how leaders adapt and apply learnings within different organizational contexts to truly demonstrate effectiveness (Arun et al., 2020). LDP coordinators can use data to understand which aspects of learning are emphasized within experiential learning activities, which experiences are best suited for participants based on learning needs, and how to adjust experiences to maximize learning (Bertoni & Bertoni, 2020). Currently, however, leadership effectiveness knowledge and insights are somewhat limited when researching within different cultural contexts (Arun et al., 2020). Continuing to explore how companies measure experiential learning effectiveness will contribute to the general body of research. Finally, additional research can also create a deeper understanding of how organizational culture influences the effect of leadership on organizational outcomes (Arun et al., 2020).
The second recommendation is to continue researching Century Manufacturing, Inc.’s
leadership development programs to further assess the use of data. As noted in the limitations
section, additional research is needed to determine the long-term effectiveness of emerging
efforts. Research should include the study of stakeholder motivation and organizational support,
both of which are needed for continuing development and implementation of measurement
strategies (Clark, 2005). Although interviews provided insight into the direction LDP
coordinators are leading their programs’ use of data, additional research can reveal to what
extent activities delivered the results intended. Potential outputs include competency-based
leadership definitions, alignment of LDP goals to business goals, and accountability
mechanisms, such as tailored employee engagement plans. Further, the company-wide emphasis on strengthening diversity, equity, and inclusion presents implications for further adjustment to how LDPs function and operate.
Conclusion
Leadership development programs at Century Manufacturing, Inc. intend to develop the best leaders in each area of the business who, according to the company’s website, can deliver superior value to customers, employees, shareholders, communities, and partners. The company, which employs over 140,000 people worldwide, has multiple leadership programs that exist independently within one of three enterprise-level business units or 18 functional organizations. The decentralized approach to measuring effectiveness creates critical knowledge, motivation, and organizational gaps that present barriers to establishing standards and organizational capacity for change. Forming a shared vision can motivate stakeholders across the organization to create and sustain a culture that promotes measurement and data-driven decision-making (Burke, 2012). Building on an already strong data-driven decision-making culture and the quality knowledge LDP coordinators hold in leadership theory, practice, and measurement tools and strategies, company leaders can use these recommendations to sharpen and accelerate the development of their leaders.
References
Achlison, U., Soegito, A. T., Samsudi, & Sugiyo. (2019). Adaptive leadership to potential industry for competencies development: A case study. International Journal on Leadership, 7(1), 13–21.
Alina, M. D. (2013). Leadership between skill and competency. Manager, 17, 208-214.
https://docplayer.net/27708148-Leadership-between-skill-andcompetency.html
Anderson, L., Krathwohl, D., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., Raths, J.,
Wittrock, M., & Bloom, B. (2001). A taxonomy for learning, teaching, and assessing: A
revision of Bloom's Taxonomy of Educational Objectives (Complete edition).
Arun, K., Şen, C., & Okun, O. (2020). How does leadership effectiveness related to the context?
Paternalistic leadership on non-financial performance within a cultural tightness-
looseness model? Journal for East European Management Studies, 25(3), 503–529.
https://doi.org/10.5771/0949-6181-2020-3-503
Association for Talent Development. (2015). State of the industry. https://www.td.org/Professional-Resources/State-Of-The-Industry-Report
Association for Talent Development. (2018). Bridging the skills gap: Workforce development and the future of work. ATD Press.
Association for Talent Development. (2020). State of the industry report: 2019. ATD Press.
Austin, M. J., Regan, K., Samples, M. W., Schwartz, S. L., & Carnochan, S. (2011). Building managerial and organizational capacity in nonprofit human service organizations through a leadership development program. Administration in Social Work, 35(3), 258–281. https://doi.org/10.1080/03643107.2011.575339
Austin, M. J., Weisner, S., Schrandt, E., Glezos-Bell, S., & Murtaza, N. (2006). Exploring the
transfer of learning from an executive development program for human services
managers. Administration in Social Work, 30(2), 71-90.
http://www.haworthpress.com/web/ASW
Avolio, B., Avey, J., & Quisenberry, D. (2010). Estimating return on leadership development
investment. The Leadership Quarterly, 21(4), 633–644.
https://doi.org/10.1016/j.leaqua.2010.06.006
Avolio, B. J., & Hannah, S. T. (2008). Developmental readiness: Accelerating leader
development. Consulting Psychology Journal: Practice and Research, 60(4), 331-347.
doi:10.1037/1065-9293.60.4.331
Avolio, B., Wernsing, T., & Gardner, W. (2018). Revisiting the Development and Validation of
the Authentic Leadership Questionnaire: Analytical Clarifications. Journal of Management,
44(2), 399–411. https://doi.org/10.1177/0149206317739960
Baker, L. (2006). Metacognition. http://www.education.com/reference/article/metacognition/
Bandura, A. (2005). The evolution of social cognitive theory. In K. G. Smith & M. A. Hitt
(Eds.), Great Minds in Management (pp. 9–35). Oxford University.
Bandura, A. (2012). On the functional properties of perceived self-efficacy revisited. Journal of
Management, 38(1), 9-44. doi: 10.1177/0149206311410606
Barnett, K., & Mattox, J. R. (2010). Measuring success and ROI in corporate training. Journal of
Asynchronous Learning Networks JALN, 14(2), 28–44.
https://doi.org/10.24059/olj.v14i2.157
Bates, R. (2004). A critical analysis of evaluation practice: the Kirkpatrick model and the
principle of beneficence. Evaluation and Program Planning, 27(3), 341-347.
Bennis, W. (2009). On becoming a leader: The leadership classic. Basic Books.
Bentley, T. (2019). The challenge of negotiating race in cross-sector spaces. Stanford Social Innovation Review. https://ssir.org/articles/entry/the_challenge_of_negotiating_race_in_cross_sector_spaces
Bertoni, M., & Bertoni, A. (2020). Measuring experiential learning: An approach based on lessons learned mapping. Education Sciences, 10(1), Article 11. https://doi.org/10.3390/educsci10010011
Black, E. (2009). Measuring the Outcomes of Leadership Development Programs. Journal of
Leadership & Organizational Studies, 16(2), 184–196.
https://doi.org/10.1177/1548051809339193
Bolman, L., & Deal, T. (2017). Reframing organizations: Artistry, choice and leadership. John
Wiley & Sons, Incorporated.
Bourda, T. B. (2014). Developing leaders in the nonprofit sector: Evaluating the influence of
executive leadership training on collaboration. ProQuest Dissertations Publishing.
Campbell, D. J., & Campbell, K. M. (2011). Impact of decision-making empowerment on
attributions of leadership. Military Psychology, 23(2), 154-179.
doi:http://dx.doi.org.libproxy1.usc.edu/10.1080/08995605.2011.550231
Chiaburu, D. S., Huang, J. L., Hutchins, H. M., & Gardner, R. G. (2013). Trainees' perceived
knowledge gain unrelated to the training domain: the joint action of impression
management and motives. International Journal of Training and Development, 18(1), 37-
52. https://doi.org/10.1111/ijtd.12021
Clark, R. E. (2005). Research-tested team motivation strategies. Performance Improvement, 4(1),
13–16.
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Information Age Publishing, Inc.
Costanza, D., Blacksmith, N., Coats, M., Severt, J., & DeCostanza, A. (2016). The Effect of
Adaptive Organizational Culture on Long-Term Survival. Journal of Business
Psychology, 31(3), 361-381.
Craun, J. (2014). Qualitative case study on the progress of contract management leadership
development program graduates obtaining leadership positions. ProQuest Dissertations
Publishing.
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed
methods approaches. Sage Publications.
Culver, K.C., Harper, J., & Kezar, A. (2021). Design for equity in higher education. Pullias
Center for Higher Education. https://pullias.usc.edu/download/design-for-equity-in-
higher-education/
DeGrosky, M. T. (2013). Transfer of knowledge, skills, and abilities from leadership
development training. ProQuest Dissertations Publishing.
Dematthews, D., & Izquierdo, E. (2017). Authentic and Social Justice Leadership: A Case Study
of an Exemplary Principal along the U.S.-Mexico Border. Journal of School Leadership,
27(3), 333–360. https://doi.org/10.1177/105268461702700302
Dubnick, M. (2014). Accountability as cultural keyword. In M. Bovens, R. E. Goodin, & T.
Schillemans (Eds.), Oxford handbook of public accountability (pp. 23–28). Oxford
University Press.
Schillemans (Eds.), Oxford handbook of public accountability (pp. 649-654). Oxford University
Press.
Dubnick, M. J. (2020). Week 2 · Ontological Accountability and Systems Thinking. [Webinar].
2SC. https://2sc.rossieronline.usc.edu/ap/courses/2112/sections/7175ed21-8fd3-4ee6-
8760-89667bdf0563/coursework/module/79727815-2f9c-48eb-926e-
fb4ba045ad29/segment/01dd3ecc-0415-4727-8b3b-e3a824214248
Dweck, C. (2013). Self-theories: Their Role in Motivation, Personality, and Development. In
Self-theories. Taylor and Francis. https://doi.org/10.4324/9781315783048
Eccles, W. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53(1),
109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153
Faraci, P., Lock, M., & Wheeler, R. (2013). Assessing leadership decision-making styles:
Psychometric properties of the leadership judgment indicator. Psychology Research and
Behavior Management, 6, 117-123. doi:10.2147/PRBM.S53713
Ford, J., Baldwin, T., & Prasad, J. (2018). Transfer of Training: The Known and the Unknown.
Annual Review of Organizational Psychology and Organizational Behavior, 5(1), 201–
225.
Hall, A. T., Frink, D. D., & Buckley, M. R. (2017). An accountability account: A review and
synthesis of the theoretical and empirical research on felt accountability. Journal of
Organizational Behavior, 38(2), 204-224. https://doi.org/10.1002/job.2052
Hall, R., Grant, D., & Raelin, J. (2014). Leadership Development and Practice. SAGE
Publications Ltd. https://doi.org/10.4135/9781473915374
Hentschke, G. C., & Wohlstetter, P. (2004). Cracking the code of accountability. University of
Southern California Urban Education, Spring/Summer, 17–19.
Heuston, M. M., Leaver, C., & Harne-Britner, S. (2021). Using data from a 360° leadership assessment to enhance nurse manager transformational leadership skills. The Journal of Nursing Administration, 51(9), 448–454.
https://doi.org/10.1097/NNA.0000000000001044
Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at
work. Academy of Management Journal, 33(4), 692-724.
https://journals.aom.org/doi/abs/10.5465/256287
Kaiser, R. B., & Curphy, G. (2014). Leadership development: The failure of an industry and the
opportunity for consulting psychologists. Consulting Psychology Journal, 65(4), 294-
302. https://doi.org/10.1037/a0035460
Kirkpatrick J. (2008). The new world level 1 reaction sheets.
http://www.kirkpatrickpartners.com/Portals/0/Storage/The%20new%20world%20level%
201%20reaction%20sheets.pdf
Kirkpatrick, J., & Kirkpatrick, W. (2016). Kirkpatrick's four levels of training evaluation. In
Kirkpatrick's four levels of training evaluation. atd press.
Koohang, A., Paliszkiewicz, J., & Goluchowski, J. (2017). The impact of leadership on trust,
knowledge management, and organizational performance. Industrial Management +
Data Systems, 117(3), 521–537. https://doi.org/10.1108/IMDS-02-2016-0072
Kragt, D., & Guenter, H. (2018). Why and when leadership training predicts effectiveness.
Leadership & Organization Development Journal, 39(3), 406–418.
Krathwohl, D. (2002) A Revision of Bloom’s Taxonomy: An Overview. Theory into Practice,
41(4), 212-218, https://doi.org/10.1207/s15430421tip4104_2
Lacerenza, C., Reyes, D., Marlow, S., Joseph, D., Salas, E., & Lacerenza, C. (2017). Leadership
training design, delivery, and implementation: A meta-analysis. The Journal of Applied
Psychology, 102(12), 1686–1718. https://doi.org/10.1037/apl0000241
Lencioni, P. (2007). The Five Dysfunctions of a Team: A Leadership Fable. In The Five
Dysfunctions of a Team (1. Aufl.). Jossey-Bass.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Sage Publications.
Ling, B., Guo, Y., & Chen, D. (2018). Change Leadership and Employees’ Commitment to
Change: A Multilevel Motivation Approach. Journal of Personnel Psychology, 17(2),
83–93. https://doi.org/10.1027/1866-5888/a000199
Marsh, J. A. (2012). Interventions promoting educators' use of data: Research insights and gaps.
Teachers College Record, 114(11), 1–48.
Marsh, J., & Farrell, C. (2015). How leaders can support teachers with data-driven decision
making: A framework for understanding capacity building. Educational Management
Administration & Leadership, 43(2), 269–289.
https://doi.org/10.1177/1741143214537229
Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). SAGE Publications.
McGee, H. M., & Johnson, D. A. (2015). Performance motivation as the behaviorist views it.
Performance Improvement, 54, 15–21. doi:10.1002/pfi.21472
McKinsey & Company (2014). Why leadership-development programs fail.
https://www.mckinsey.com/featured-insights/leadership/why-leadership-development-
programs-fail
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
implementation. (4th ed.). Jossey-Bass.
Milliman, J., & Ferguson, J. (2008). In search of the "spiritual" in spiritual leadership: A case study of entrepreneur Steve Bigari. The Business Renaissance Quarterly, 3(1), 19.
Moores, L. E. (2013). The U.S. Army medical corps leadership development program. United
States Army Medical Department Journal, 7, 4-5.
https://www.thefreelibrary.com/The+US+Army+Medical+Corps+leadership+developme
nt+program.-a0342568761
Moynihan, D., & Pandey, S. (2007). The role of organizations in fostering public service
motivation. Public Administration Review, 67(1), 40-53.
http://www.jstor.org.libproxy2.usc.edu/stable/4624539
Northouse, P. G. (2016). Leadership: theory and practice (7th ed.).
Ogidan, A. (2014). Leadership development programs and employee retention: A
phenomenological study (Vol. 76, Issue 4). ProQuest Dissertations Publishing.
Pajares, F. (2002). Gender and Perceived Self-Efficacy in Self-Regulated Learning. Theory into
Practice, 41(2), 116–125. https://doi.org/10.1207/s15430421tip4102_8
Rachmawati, A., & Lantu, D. (2014). Servant Leadership Theory Development & Measurement.
Procedia - Social and Behavioral Sciences, 115(C), 387–393.
Reinelt, C., Foster, P., & Sullivan, S. (2002). Evaluating outcomes and impacts: A scan of 55 leadership development programs. W. K. Kellogg Foundation. http://www.racialequitytools.org/resourcefiles/reinelt.pdf
Sandelowski, M. (2000). Focus on research methods: Whatever happened to qualitative
description? Research in Nursing & Health, 23(4), 334-340.
Schindler, L., & Burkholder, G. J. (2016). A mixed methods examination of the influence of
dimensions of support on training transfer. Journal of Mixed Methods Research, 10(3),
292-310. https://doi.org/10.1177/1558689814557132
Senge, P. M. (2005). Missing the boat on leadership. Leader to Leader, 2005(38), 28-30.
https://doi.org/10.1002/ltl.150
Shahin, A., & Zairi, M. (2006). Strategic Management, Benchmarking and The Balanced Score
Card (BSC): An Integrated Methodology. International Journal of Applied Strategic
Management, 2(2), 1-10.
Solansky, S. (2010). The evaluation of two key leadership development program components:
Leadership skills assessment and leadership mentoring. The Leadership Quarterly, 21(4),
675-681.
Stiehl, S., Felfe, J., Elprana, G., & Gatzka, M. (2015). The role of motivation to lead for
leadership training effectiveness. International Journal of Training and Development,
19(2), 81–97.
Thomas, E., Wells, R., Baumann, S., Graybill, E., Roach, A., Truscott, S., Crenshaw, M., &
Crimmins, D. (2019). Comparing Traditional Versus Retrospective Pre-/Post-assessment
in an Interdisciplinary Leadership Training Program. Maternal and Child Health Journal,
23(2), 191–200.
2013 Training Industry Report. (2013). Training Magazine, 50(6), 22-35.
https://trainingmag.com/2013-training-industry-report
USAID. (2014). LGBT vision for action: Promoting and supporting the inclusion of lesbian, gay, bisexual, and transgender individuals. https://www.usaid.gov/sites/default/files/documents/1861/LGBT_Vision_For_Action_May2014.pdf
Veenman, M. V. J. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14. https://doi.org/10.1007/s11409-006-6893-0
Wheeler, D., & Sillanpa'a, M. (1998). Including the stakeholders: The business case. Long Range
Planning, 31(2), 201-210.
Wilkinson, D. (2018, March) There are too many leadership concepts… but which ones are
redundant? The Oxford Review Briefings. https://www.oxford-review.com/leadership-
concepts/
Woltring, C., Constantine, W., & Schwarte, L. (2003). Does leadership training make a difference? The CDC/UC Public Health Leadership Institute: 1991–1999. Journal of Public Health Management and Practice, 9(2), 103–122. https://thefrontofthejersey.files.wordpress.com/2013/03/5_public_health_mgmt_and_practice.pdf
Yukl, G., Gordon, A., & Taber, T. (2002). A hierarchical taxonomy of leadership behavior: Integrating a half century of behavior research. Journal of Leadership and Organizational Studies, 9(1), 15–31.
Appendix A: Interview Protocol
Research Questions:
• RQ1. What factors do organizational leaders identify as successful for promoting the use
and application of data?
• RQ2. What systemic barriers may or may not exist that prevent LDP coordinators,
managers, and Executives from using data to make programmatic decisions?
• RQ3. How is data used to identify, define, and manage accountability relationships
required to make the LDP successful?
Respondent Type: One LDP coordinator and one LDP participant for a high potential
leadership development program lasting two years
Introduction to the Interview:
➢ The introduction will be used for both the LDP Coordinators and LDP Participants. One
difference is noted at the end of the first paragraph.
Thank you for agreeing to the interview today. My name is Gina Minniti and I am
currently a student in the USC Rossier School of Education Organizational Change and
Leadership program. As part of my dissertation, I am conducting research to learn how learning
and development coordinators are utilizing data to inform decision-making. The study does not
aim to evaluate you or your organization's techniques or experiences. Rather, I am trying to learn
more about measurement strategy and data utilization, and hopefully learn about practices that
help improve leadership development experiences for participants. You were selected to speak with me because:
➢ For LDP Coordinators: Your role as a LDP coordinator will provide valuable insight into
how leaders collect and analyze data to measure the effectiveness of their programs, as
well as what barriers and enablers may or may not exist within your organization.
➢ For LDP Participants: Your role as a LDP participant will provide valuable insight into
how leaders collect data within their programs, as well as how that data is used to make
decisions that may or may not impact your experience as a program participant.
I appreciate that you provided your signed interview consent form before today's
interview, which outlined that (1) all information will be held confidential, (2) your participation
is voluntary and you may stop at any time if you feel uncomfortable, and (3) we do not intend to
inflict any harm. As the interview information packet I sent to you earlier described, this
interview will be recorded through the Zoom platform to facilitate notetaking. Only peer
researchers on the project and I will have access to the recording which will be destroyed after it
is transcribed.
This interview will last no longer than 90 minutes. The interview will have several
questions that I would like to cover. If time begins to run short, I will pause the interview and
discuss any request for possible follow-up at a later time. Before we begin, do you have any
questions?
Warm-up questions:
• How long have you been in your present position? At this organization?
• Briefly describe your role as it relates to measuring programs for effectiveness (if
appropriate).
Interview Questions and Probes
1. What leadership development program (LDP) elements are measured?
a. Do you have in-person training, mentoring, online modules, and other training
elements that would potentially be measured? (O)
2. How are the LDP elements measured?
a. Do you use surveys? Interviews? Who completes the surveys? Who is
interviewed? (O)
3. What tools are used for measurement?
a. What is used for data collection? What is used for data analysis? (O)
4. If areas of the program are not measured, why not?
a. Are there specific causes that prevent measurement or make it difficult? (K,M,O)
5. When barriers to measurement are identified, what is done to remove them?
a. How is help requested? Who is empowered to make decisions to change things?
(K,M,O)
6. How was the measurement strategy developed?
a. Who was involved in the development? How were aspects of theory and industry
best practices incorporated? (K,O)
7. What decisions are made using the data?
a. Who is involved in making the decisions? Who is data shared with? How is it
presented? (K,O)
8. How does your organization set goals?
a. How frequently are these goals reviewed? How frequently are they adjusted?
What are the criteria for adjustment? What happens when goals are not met?
(K,M,O)
9. How does your organization allocate resources for learning measurement each year?
a. How is data used to inform this? Does this include people and tools? (M,O)
10. How are participants held accountable for performance?
a. What happens when expectations are not met? (M)
Conclusion to the Interview:
Before we conclude the interview, is there anything more you would like to share about using data for decision-making with regard to your LDPs?
I want to thank you again for your time and feedback today. As the next step in the study, I will be concluding all interviews shortly and will then analyze and synthesize the information collected for key themes and findings. This will be incorporated into my dissertation for further
review by peer researchers. If you wish, I can notify you of when my dissertation is officially
approved and published, per the process outlined within my program. As a reminder, all
interview material collected is confidential and no personally identifiable information will be
included in the dissertation to preserve anonymity.
Until then, please do not hesitate to reach out to me with any additional thoughts,
comments, or questions.
Appendix B: LDP Post Event Survey
Please answer the questions below using the following scale:
1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree
Level 1 – Reaction
The methods used by the instructor engaged me in the learning process.
The content of the training was relevant to my job.
Overall, I was satisfied with the training.
What I learned in the workshop will be valuable for implementing strategies that mediate opposing team viewpoints.
Level 2 – Learning
I learned new knowledge and skills during this training.
I am confident I can apply the knowledge I learned during this training.
This training was a worthwhile use of my time.
I will be able to more effectively coach my direct reports after attending training.
I will be better able to interpret business data after attending training.
Organizational
What barriers do you anticipate to applying knowledge, skills, and behavior on the job? (open-
ended)
Appendix C: LDP Follow-up Survey
Please answer the questions below using the following scale:
1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree
Level 1 – Reaction
What I learned in the workshop has been valuable for implementing strategies that mediate opposing team viewpoints.
Level 2 – Learning
I applied the knowledge I learned during this training.
This training was a worthwhile use of my time.
I was able to more effectively coach my direct reports after attending training.
I am better able to interpret business data after attending training.
Level 3 – Behavior
Material provided during training (e.g. job aids) was useful back on the job.
My direct reports confirm that I actively solicit their feedback.
Level 4 – Results
I identify potential solutions to barriers with little to no assistance when they occur on the job.
I effectively recognize output that consistently meets or exceeds expectations.
Organizational
What barriers did you encounter to applying knowledge, skills, and behavior on the job? (open-
ended)
Appendix D: LDP Shared Accountability Model Balanced Scorecard
Financial
Cost per program participant
Number of participants by year
Number of recent LDP graduates
Internal process
Candidate selection criteria quality
Enterprise common process quality
Quarterly LDP visibility reviews
Learning and growth
LDP post event survey scores
LDP follow-up survey scores
Critical behaviors surveys
Customer
Communication roadshow surveys
Bi-annual leadership team reviews