ESTABLISHING A SYSTEMATIC EVALUATION OF AN ACADEMIC NURSING
PROGRAM USING A GAP ANALYSIS FRAMEWORK
by
Matthew Durkin
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2018
Copyright 2018 Matthew Durkin
ABSTRACT
This study applies the gap analysis framework (Clark & Estes, 2008) to understand the
experiences of faculty members in a graduate college of nursing. The purpose of this study was
to conduct a needs analysis in the areas of knowledge and skill, motivation, and organizational
resources necessary for faculty members to achieve their organizational performance goal of
establishing measures and methods to evaluate program outcomes. Faculty members’
knowledge and skills, motivation, and organizational resources were assessed using survey and
interview data. Survey data were analyzed using descriptive statistics, and interviews were analyzed to establish whether performance gaps existed. Gaps related to
declarative factual knowledge; procedural knowledge; metacognitive knowledge; self-efficacy;
organizational resources; policies, processes, and procedures; cultural setting; and cultural model
were identified. This study led to the development of recommended activities to improve
knowledge and skills, motivation, and organizational components of a single college of nursing.
Additionally, this study established measures and methods for evaluating implementations of the
recommended activities.
ACKNOWLEDGEMENTS
Over the years I have been fortunate to be supported by incredible people. I would like to
thank all of my family, colleagues, and mentors that have helped get me to where I am today.
To my wife Rachel: Thank you for taking care of me and our family as I worked through
this program. I am constantly in awe of how much of yourself you give to our family. I would
not be who I am without you.
To my girls Lily and Olivia: Thank you for always making me smile. It has been a joy
watching you grow and learn. You motivate me to be my best.
To my parents Sue and Dave: You have supported me in anything I ever chose to be
involved in. Thank you for always being there for me and believing in me. My mother passed away in November 2017, but I know she is still with us in our hearts.
To my committee Dr. Ken Yates, Dr. Melora Sundt, and Dr. Shannon Faris: Thank you
for the wisdom, revisions, and laughs you have shared with me. You made the dissertation
process as enjoyable and valuable as it could have been.
Table of Contents

Acknowledgements
List of Tables
List of Figures
Chapter One: Introduction
    Introduction of the Problem of Practice
    Organizational Context and Mission
    Organizational Performance Status/Need
    Related Literature
    Importance of the Organizational Innovation
        Importance of Continuous Improvement
        Importance of Satisfying Accreditation Standards
    Organizational Performance Goal
    Stakeholders and Stakeholders’ Performance Goals
    Stakeholder for the Study and Stakeholder Performance Gap
        Stakeholders’ Critical Behaviors
    Purpose of the Project and Questions
    Methodological Framework
    Definitions
    Organization of the Study
Chapter Two: Review of the Literature
    Conceptual Framework
    Stakeholder Knowledge, Motivation and Organizational Factors
        Knowledge and Skills
        Motivation
        Summary of assumed motivation influences
        Organization
    Summary
Chapter Three: Methodology
    Purpose of the Project and Questions
    Conceptual and Methodological Framework
    Assessment of Performance Influences
        Knowledge Assessment
        Motivation Assessment
        Organization/Culture/Context Assessment
    Participating Stakeholders and Sample Selection
        Sampling
        Recruitment
    Instrumentation
        Survey Design
        Interview Protocol Design
        Validity
        Data Collection
        Surveys
        Interviews
    Data Analysis
        Surveys
        Interviews
    Trustworthiness of Data
    Role of Investigator
    Limitations
Chapter Four: Results and Findings
    Participating Stakeholders
    Data Validation
    Results and Findings for Knowledge Causes
        Declarative Factual Knowledge
        Declarative Conceptual Knowledge
        Procedural Knowledge
    Results and Findings for Motivation Causes
        Value
        Self-Efficacy
    Results and Findings for Organization Causes
        Resources
        Policies, Processes, & Procedures
        Cultural Settings
        Cultural Models
    Summary of Validated Influences
        Knowledge
        Motivation
        Organization
Chapter Five: Recommendations and Evaluation
    Purpose of the Project and Questions
    Recommendations to Address Knowledge, Motivation, and Organization Influences
        Knowledge Recommendations
        Motivation Recommendations
        Organization Recommendations
        Summary of Knowledge, Motivation and Organization Recommendations
    Integrated Implementation and Evaluation Plan
        Organizational Purpose, Need and Expectations
        Implementation and Evaluation Framework
        Level 4: Results and Leading Indicators
        Level 3: Behavior
        Level 2: Learning
        Level 1: Reaction
        Evaluation Tools
        Data Analysis and Reporting
        Summary of the Implementation and Evaluation
    Limitations and Delimitations
    Recommendations for Future Research
    Conclusion
References
Appendices
    Appendix A: Survey
    Appendix B: Interview Items
    Appendix C: Informed Consent/Information Sheet
    Appendix D: Sample Recruitment Letter
    Appendix E: Level 1 and Level 2 Evaluation Instrument Immediately Following Program Implementation
    Appendix F: Evaluation Tool Delayed for a Period After the Program Implementation
LIST OF TABLES

Table 1 Organizational Mission, Organizational Goal and Stakeholder Performance Goals
Table 2 Summary of Assumed Knowledge Influences on Faculty Members’ Ability to Evaluate Program Outcomes
Table 3 Summary of Assumed Motivation Influences on Faculty Members’ Ability to Evaluate Program Outcomes
Table 4 Summary of Assumed Organization Influences on Faculty Members’ Ability to Evaluate Program Outcomes
Table 5 Summary of Knowledge Influence and Method of Assessment
Table 6 Summary of Motivation Influences and Method of Assessment
Table 7 Summary of Organization Influences and Method of Assessment
Table 8 Five Operational Groups Within Yvonne College of Nursing
Table 9 Race and Ethnicity of Survey Respondents
Table 10 Academic Rank of Survey Respondents
Table 11 Declarative Knowledge Validation: Which of the Following is a Commission on Collegiate Nursing Education (CCNE) Requirement Regarding Program Outcome Data Analysis?
Table 12 Declarative Knowledge Validation: True or False: For the Purposes of CCNE Accreditation, Alumni Satisfaction Can Serve as an Indicator of Program Effectiveness.
Table 13 Declarative Knowledge Validation: Select the Option That Best Defines the Following Terminology as They Pertain to Higher Education. Program Outcome
Table 14 Declarative Knowledge Validation: Select the Option That Best Defines the Following Terminology as They Pertain to Higher Education. Learning Outcome
Table 15 Declarative Knowledge Validation: Select the Option That Best Defines the Following Terminology as They Pertain to Higher Education. Institutional Outcome
Table 16 Declarative Knowledge Validation: Select the Option That Best Defines the Following Terminology as They Pertain to Higher Education. Assessment
Table 17 Declarative Knowledge Validation: Select the Option That Best Defines the Following Terminology as They Pertain to Higher Education. Evaluation
Table 18 Value Validation: Rank the Following Processes in Order of Importance for Ensuring a Successful Program (1 is Highest).
Table 19 Self-Efficacy Validation: I am Confident in my Ability to Analyze Program Outcome Data.
Table 20 Self-Efficacy Validation: I am Confident in my Ability to use Program Outcome Data to Inform Decision-making.
Table 21 Resources Validation: I Have Access to the Aggregated Program Outcome Data I Need to Make Informed Decisions.
Table 22 Resources Validation: I Have Access to the Disaggregated Program Outcome Data I Need to Make Informed Decisions.
Table 23 Policies, Processes, & Procedures Validation: The Policies of the College Support my Ability to Affect Change at the College.
Table 24 Policies, Processes, & Procedures Validation: The Processes of the College Support my Ability to Affect Change at the College.
Table 25 Policies, Processes, & Procedures Validation: The Procedures of the College Support my Ability to Affect Change at the College.
Table 26 Policies, Processes, & Procedures Validation: I Know What Pertinent Data Needs to be Collected and Analyzed on an Annual Basis.
Table 27 Cultural Setting Validation: Reports Generated by the College Exemplify its Commitment to Achieving its Mission.
Table 28 Cultural Setting Validation: Reports Disseminated by the College Exemplify its Commitment to Achieving its Mission.
Table 29 Cultural Model Validation: How Would you Primarily Categorize the College's Culture Regarding Accountability Practices?
Table 30 Cultural Model Validation: The College Supports a Culture of Compliance.
Table 31 Cultural Model Validation: The College Supports a Culture of Continuous Improvement.
Table 32 Cultural Model Validation: The Goals of the Organization Related to Program Outcome Evaluation are Communicated Effectively.
Table 33 Cultural Model Validation: Key Stakeholders are Informed Regarding the College’s Progress in Meeting its Mission.
Table 34 Summary of Assumed Knowledge Gaps Validated
Table 35 Summary of Assumed Motivation Causes Validation
Table 36 Summary of Assumed Organization Causes Validation
Table 37 Summary of Knowledge Influences and Recommendations
Table 38 Summary of Motivation Influences and Recommendations
Table 39 Summary of Organization Influences and Recommendations
Table 40 Outcomes, Metrics, and Methods for External and Internal Outcomes
Table 41 Critical Behaviors, Metrics, Methods, and Timing for Evaluation
Table 42 Required Drivers to Support Critical Behaviors
Table 43 Evaluation of the Components of Learning for the Program
Table 44 Components to Measure Reactions to the Program
LIST OF FIGURES

Figure 1. Explanatory Sequential Process Adapted from Merriam and Tisdell (2016).
Figure 2. Gap Analysis Process Adapted from Clark and Estes (2008).
CHAPTER ONE: INTRODUCTION
Introduction of the Problem of Practice
Although there is a wealth of information regarding evaluation measures and instruments
in nursing education programs, there are fewer discussions on how the process of reviewing
program outcomes should be implemented in nursing education programs (Horne & Sandmann,
2012; Lewallen, 2015). Many nursing education programs adopt program outcome evaluation
plans as a way to satisfy regional and professional accreditors as well as Boards of Nursing
(Horne & Sandmann, 2012; Lewallen, 2015). Evaluating program outcomes to meet
accountability requirements is shortsighted.
The goal of outcome evaluation should be to foster program improvement, rather than, as
is often seen in practice, to meet accreditation or accountability requirements (Horne &
Sandmann, 2012). Nursing education programs are not able to fully understand their quality
(what they are doing well and what they need to improve) without reflection on program
outcome data (Lewallen, 2015). Additionally, the quality of nursing education can impact
quality of patient care and help reduce the frequency of preventable deaths (Howard, 2010;
James, 2013). This study examines the nature of program outcome evaluation practices at a
specific college of nursing.
Organizational Context and Mission
The Yvonne College of Nursing (YCN) [a pseudonym] is one of seven health professions
colleges within Grace-Rose University of Health Sciences (Grace-RoseU) [a pseudonym]. The
remaining colleges within the university are: the College of Medicine, College of Pharmacy,
College of Dentistry, College of Podiatry, College of Public Health, and College of
Sciences. Grace-RoseU serves approximately 4,000 graduate students, and the YCN is home to
approximately 350 of those students.
YCN awards Master of Science in Nursing degrees, Doctor of Nursing Practice degrees
and a Family Nurse Practitioner certificate. There are multiple entry-points for students within
YCN including a Master’s Entry option for students with a bachelor’s degree in a non-nursing
field, a post-associate’s degree program, a post-bachelor’s degree program, a post-master’s
degree program, and a post-master’s certificate program.
The mission of the Yvonne College of Nursing (YCN) is: “…to progress nursing
education and prepare future nurses by developing relationships, encouraging collaboration in
practice, innovating, and exploring academic/service partnerships to improve health systems to
optimize health and healthcare for individuals, populations, and communities.” This mission
drives the curriculum and operations of the YCN. To achieve this mission, YCN must ensure
that all students have the knowledge, skills, and attitudes necessary to meet the needs of patients
and individuals throughout health systems.
Organizational Performance Status/Need
The YCN currently lacks systematic processes for establishing, collecting, and evaluating
program outcomes to demonstrate program effectiveness and inform continuous improvement
initiatives. As a result, the college is not able to make meaningful data-driven decisions
regarding program outcomes. Additionally, the lack of a systematic process for evaluating
program outcome data was cited as a compliance concern by a professional accreditor.
Related Literature
Program outcomes represent the attributes and competencies students obtain through a
course of study (Maki, 2010). Program outcomes should align with broader institutional
outcome statements. Institutional outcomes, in turn, serve as a way of facilitating accountability
for institutions to realize their mission statement (Maki, 2010).
Evaluating program outcomes and institutional outcomes can help practitioners determine
how well the content delivered through coursework and clinical activities aligns with the
intended, consensus goals of the program or institution (Maki, 2010). Prioritizing program
outcome and institutional outcome evaluation is a way for leaders to promote shared purpose and
value throughout their organization. Promoting shared purpose and value throughout an
organization is considered a best practice for education leaders (Goldberg & Morrison, 2003).
Additionally, the quality of nursing education affects the future quality of patient care
(Howard, 2010). James (2013, p. 125) estimates that hospital-related errors result in 210,000
deaths each year in the United States. These deaths are caused by preventable errors related to
issues such as medication errors, communication failures, and misuse of technology. The quality of nursing education programs influences future generations of nurses and has the potential to prepare individuals capable of reducing the frequency of errors that lead to outcomes such as hospital-related deaths.
Importance of the Organizational Innovation
It is important for the YCN to establish a process for evaluating program outcome data
for two primary reasons: to ensure continuous improvement and to satisfy accreditation requirements. Evaluating program outcomes will help the college better understand the
experience of students, and will help inform decision-making. The process will also help the
college meet accreditation requirements - specifically Standard IV of the Commission on
Collegiate Nursing Education (CCNE) Standards for Accreditation of Baccalaureate and
Graduate Nursing Education Programs (2013).
Importance of Continuous Improvement
Continuous improvement efforts may be driven by internal or external organizational
factors. Grubb and Badway (2005) explain that institutions that support internal continuous
improvement practices are capable of improving educational methods and creating supportive
organizational environments. Conversely, Grubb and Badway note that institutions engaging in
continuous improvement efforts driven by external parties can encounter hostility and limited
compliance among staff and faculty members.
Analyzing program outcome data can help inform discussions and decisions. Dowd,
Malcolm, Nakamoto, and Bensimon (2012) contend that by using data to inform discussions,
practitioners can better understand student experiences and issues faced by students. Dowd et al.
(2012) state that in the absence of pertinent data, uninformed and possibly false anecdotal
knowledge can negatively influence discussions among faculty members and
administration. Misinformed discussions can lead to decisions or policies that may fail to
achieve their intended purposes, or worse, decisions or policies that negatively affect what they
intended to help. Program outcome evaluation contributes to continuous improvement efforts by
helping students and those involved in an academic program better understand one another’s
expectations.
Program outcome evaluation, and continuous improvement initiatives in general, can also be used to help ensure that professional and public needs are being met. Higher education institutions and programs are held accountable to professional standards and public
demand (Blass, 2008). Similarly, Zemsky, Wegner, and Massy (2006) contend that commercial
and public interests are currently the leading forces in defining educational quality. Institutions
must continuously improve and evaluate their practices to ensure professional currency and that
the needs of the public are met.
Importance of Satisfying Accreditation Standards
According to Volkwein (2010), higher education accreditation traditionally comes in
three forms: institutional accreditation, programmatic accreditation, and individual
accreditation. Each form of accreditation serves a separate purpose. Volkwein explains that
institutional accreditation is a voluntary process (including regional and national accreditation)
of self-review and insight from peer evaluators, programmatic accreditation is focused on
specific professional standards, and individual accreditation focuses on licensing or credentialing
requirements. The YCN is accountable to all three types of accreditation. For institutional
accreditation Grace-Rose University is accredited by the WASC Senior College and University
Commission (WSCUC), for programmatic accreditation the YCN is accredited by the CCNE,
and on an individual level the YCN is accountable to the California Board of Registered Nursing
(BRN). Failure to meet institutional accreditation requirements can result in an institution losing
its ability to receive federal financial aid, while failure to meet programmatic or individual
accreditation requirements can result in an institution losing its privilege to prepare students for
specific careers.
Traditionally, accrediting bodies have been interested in verifying effectiveness of
institutions (Ewell, 2009). Accreditation processes generally rely on professional peers to serve
as reviewers to assess the effectiveness of a given institution. Although accreditation is an
external process, Grubb and Badway (2005) stress the importance of internally driven continuous
improvement initiatives as ways of ensuring institutional effectiveness. Furthermore, Grubb and
Badway state that institutions that engage in internally driven efforts often meet most of the
demands of external entities - specifically accrediting bodies.
Bardo (2009) contends that institutions seeking to attain or retain their accreditation
status must fiscally invest in processes that support accreditation. Costs related to accreditation
include salaries for dedicated staff, faculty members, and administrators, as well as costs of
technical services and products. Failure to meet accreditation standards may result in institutions or programs being required to participate in more rigorous or more frequent accreditation activities (such as additional self-studies or site visits). Participating in more frequent accreditation activities can lead to increased human resource costs and costs of additional services.
Organizational Performance Goal
By December 2018, YCN will establish an evaluation system to measure program outcomes and demonstrate the program effectiveness of its graduates. The evaluation system will utilize program outcome data in a manner that promotes continuous improvement for the
YCN. The college faculty and staff agreed on this goal after reflecting on the recommendations
that followed a CCNE accreditation visit in March of 2016. As a systematic process does not
currently exist, achievement of this goal will be actualized when the evaluation system is
established.
Stakeholders and Stakeholders’ Performance Goals
Stakeholder groups that will be responsible for developing the evaluation system include
faculty members, educational technology staff, and program-specific support staff
members. Each stakeholder group has goals that contribute towards the organization meeting its
overarching performance goal. These individuals work most closely with the curricular,
instructional, and evaluative processes that are related to program outcome evaluation. The work
of all of these individuals contributes towards the college’s performance goals, as well as its
overarching mission. The mission, performance goal of interest to this study, and stakeholder
goals related to the performance goal are summarized in Table 1.
Table 1
Organizational Mission, Organizational Goal and Stakeholder Performance Goals

Organizational Mission: The mission of the Yvonne College of Nursing is to progress nursing education and prepare future nurses by developing relationships, encouraging collaboration in practice, innovating, and exploring academic/service partnerships to improve health systems to optimize health and healthcare for individuals, populations, and communities.

Organizational Performance Goal: By December 2018, YCN will establish an evaluation system to measure program outcomes and demonstrate the program effectiveness of its graduates.

Stakeholder Performance Goals:

Faculty Members: By December 2018, create and implement an evaluation program that includes measures and methods for evaluating YCN program outcomes that aligns with standards set by the accrediting bodies and the college’s leadership.

Educational Technology Staff: By December 2018, enable functionality of the learning management system to report on established program outcome assessments.

Program Support Staff: By December 2018, create and implement a plan to report on program outcomes that exist outside of the formal learning management system.
Stakeholder for the Study and Stakeholder Performance Gap
Faculty members will be the most involved in the development of the evaluation, as well
as implementation of the process. Faculty members of YCN participate in shared governance,
and are responsible for directing the continuous improvement efforts of the college. As such,
faculty members will be the stakeholders of focus for this gap analysis.
Currently, a program outcome evaluation plan does not exist at the YCN. The
organizational goal is that 100% of faculty members will implement a program outcome
evaluation plan. The gap in performance, therefore, is 100%.
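Expressed as simple arithmetic (an illustrative restatement of the figures above, not a formula taken from Clark and Estes, 2008):

\[
\text{Performance gap} = \text{Goal} - \text{Current performance} = 100\% - 0\% = 100\%
\]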
Stakeholders’ Critical Behaviors
Critical behaviors are observable actions that help show transfer from learning to practice
(Kirkpatrick & Kirkpatrick, 2016). Four critical behaviors were identified as indicators
associated with faculty members achieving their goal in this study. The four critical behaviors
were:
1. Faculty members align the program outcomes with accreditation standards.
2. Faculty members determine the program outcome data to be collected.
3. Faculty members collect and analyze program outcomes data.
4. Faculty members use program outcome data to influence decision-making for
improvement.
Each critical behavior was aligned with one or more of the assumed knowledge, motivation, or
organizational gaps identified in Chapter Two.
Purpose of the Project and Questions
The purpose of this project was to conduct a needs analysis in the areas of knowledge
and skill, motivation, and organizational resources necessary for YCN faculty to achieve their
performance goal. The analysis began by generating a list of possible gaps and transitioned to examining gaps systematically to focus on actual or validated needs. While a complete needs analysis would have focused on all stakeholders, for practical purposes the stakeholders of focus in this analysis were faculty members within the Yvonne College of Nursing at
Grace-Rose University of Health Sciences.
As such, the questions that guide this study are the following:
1. What are the knowledge, motivation, and organizational needs necessary for the YCN
faculty members to achieve their goal of creating and implementing an evaluation
program that includes measures and methods for evaluating academic program outcome
data by December 2018 that aligns with standards set by the accrediting bodies and the
program’s leadership?
2. What are the recommended knowledge, motivation, and organizational solutions to those
needs?
Methodological Framework
The methodological framework used in this study was Clark and Estes’ (2008) gap analysis framework, which was developed to “help organizations make effective decisions about performance, products, and services” (p. 1). The framework leads practitioners through systematic steps including understanding
organizational goals, examining assumed knowledge and skills, motivation, and organizational
deficiencies, recommending interventions to address deficiencies, and evaluating the
effectiveness of interventions. Evidence-based solutions and definitions are included throughout
the gap analysis framework.
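To make the sequence of these steps concrete, the following sketch models the cycle as plain data structures. It is purely illustrative: Clark and Estes (2008) describe a conceptual process, not software, and every class, function, and influence named below is a hypothetical example.

```python
# Illustrative sketch only: the gap analysis cycle (goals -> assumed KMO
# influences -> validation -> recommendations) modeled as simple data
# structures. All names here are hypothetical, not from Clark & Estes (2008).
from dataclasses import dataclass, field

@dataclass
class Influence:
    """An assumed knowledge, motivation, or organizational (KMO) influence."""
    category: str            # "knowledge", "motivation", or "organization"
    description: str
    validated: bool = False  # set True when survey/interview data confirm a gap

@dataclass
class GapAnalysis:
    organizational_goal: str
    assumed_influences: list = field(default_factory=list)

    def validate(self, evidence):
        """Mark each assumed influence as a validated gap (or not) using
        evidence keyed by influence description; return the validated gaps."""
        for influence in self.assumed_influences:
            influence.validated = evidence.get(influence.description, False)
        return [i for i in self.assumed_influences if i.validated]

# Hypothetical usage mirroring the structure of this study:
study = GapAnalysis(
    organizational_goal="Establish a program outcome evaluation system",
    assumed_influences=[
        Influence("knowledge", "Faculty know accreditation requirements"),
        Influence("motivation", "Faculty value program outcome evaluation"),
    ],
)
validated_gaps = study.validate({"Faculty know accreditation requirements": True})
# Recommending interventions and evaluating their effectiveness (the remaining
# framework steps) would then follow for each validated gap.
```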
Definitions
Assessment: A systemic and systematic process of examining student work against standards of
judgment (Maki, 2010, p. 3).
Evaluation: The process of determining the merit, worth, or value of something, or the product
of that process (Scriven, 1991, p. 1).
Institutional outcome: Institution-specific content or learning parameters - what students should
learn, understand, or appreciate because of their studies (Maki, 2010).
Learning outcome: Statement that translates learning into action, behaviors, and other texts from
which observers can draw inferences about the depth and breadth of student learning (Maki,
2010, p. 89).
Program outcome: Program-specific content or learning parameters - what students should
learn, understand, or appreciate because of their studies (Maki, 2010).
Organization of the Study
Five chapters are used to organize this study. This chapter provided the reader with the
key concepts and terminology commonly found in a discussion about program outcome
evaluation. The organization’s mission, goals and stakeholders as well as the initial concepts of
gap analysis adapted to needs analysis were introduced.
Chapter Two provides a review of current literature surrounding the scope of the study.
Topics of accreditation, data analysis, evaluation processes, and continuous improvement are
addressed in Chapter Two. Additionally, Chapter Two provides an overview of learning theories
focused on knowledge, motivation, and organizations.
Chapter Three details the assumed gaps related to the organization’s performance.
Additionally, methodological considerations are discussed in Chapter Three. Chapter Four
presents the details of the results of the inquiry methods presented in Chapter Three.
Chapter Five includes recommendations based on findings, as well as a detailed
evaluation plan. The evaluation plan focuses on the effectiveness of the prescribed
recommendation-focused implementation plan. Finally, limitations and potential for future
research are discussed in Chapter Five.
CHAPTER TWO: REVIEW OF THE LITERATURE
This chapter explores the knowledge, motivation, and organizational factors that could
impact faculty members’ ability to enact a systematic plan of analyzing and acting on program
outcomes. Indices related to knowledge, motivation, and organizational factors were examined
an effort to better understand what is working well, what gaps exist, and what action should be
taken to ensure continuous improvement. Definitions and examples of knowledge, motivation,
and organizational factors are discussed in this chapter.
Conceptual Framework
Clark and Estes’ (2008) gap analysis framework guided the conceptual framework of this
study. The gap analysis framework is used to help practitioners better understand knowledge,
motivation, and organizational factors that either contribute to or hinder an organization’s ability
to achieve its goals. Assumed knowledge, motivation, and organizational indices regarding the
problem were proposed and subsequently examined in the gap analysis framework. Knowledge, motivation, and organizational factors are better understood through literature on promising practices and trends. Literature in education research will be used to better understand the
phenomena that exist within the Yvonne College of Nursing.
Stakeholder Knowledge, Motivation and Organizational Factors
Knowledge and Skills
Anderson and Krathwohl (2001) recognize four specific types of knowledge: declarative
factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive
knowledge. Multiple knowledge types allow individuals to better organize their understanding
and skills regarding a given phenomenon (Anderson & Krathwohl, 2001). All four knowledge
types are examined when using the Clark and Estes (2008) gap analysis framework.
Declarative knowledge refers to facts, ideas, and standardized definitions that cannot be
disputed (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010). Individuals must have a
command of basic facts and ideas related to their work in order to be successful practitioners
(Grubb & Badway, 2015). In successful organizations, individuals are able to speak and work
with colleagues using agreed upon declarative knowledge in the form of definitions and
concepts.
The relationship between multiple sources of information is conceptual knowledge
(Rueda, 2011). Conceptual knowledge is used to organize multiple principles or definitions in
order to address a novel issue (Kumaran, Summerfield, Hassabis, & Maguire, 2009). Individuals
must be capable of organizing information from various sources in order to effectively
implement organizational practices such as program outcome assessment.
Procedural knowledge includes the application of methods or procedures in a stepwise
manner (Ambrose et al., 2010). Individuals must understand the requirements of each step as
well as the chronological order of each step needed to exhibit procedural knowledge. A given
procedure may be carried out by an individual, or it may require the coordination of multiple parties.
Metacognitive knowledge is distinct from other knowledge types as it focuses on
individuals’ awareness of cognition (Krathwohl, 2002). Metacognitive knowledge requires
individuals to justify a given position or way of knowing, and helps foster a rich understanding
of topics and how they work. Practitioners should be cognizant of their abilities while
performing tasks related to their work.
Assumed declarative factual knowledge influences.
Faculty members need to know accreditation requirements regarding program
outcome evaluation. Program outcome evaluation can contribute to an organization’s ability to
establish or maintain accreditation (Wentland, 2012). Knowledge of accreditation requirements
is factual, declarative knowledge, as the requirements are standardized and non-disputable
(Ambrose et al., 2010). However, accreditation standards while non-disputable in their
existence, allow for interpretation. Additionally, accreditation standards are publicly available,
and are described in detail within the literature of accrediting organizations. Primarily, the YCN
is concerned with three accreditation organizations representing institutional accreditation
(WASC Senior College and University Commission), programmatic accreditation (Commission
on Collegiate Nursing Education), and individual accreditation (California Board of Registered
Nursing).
For faculty members to engage with the accreditation process, they must be
knowledgeable of the requirements (standards and expectations). Emenike, Raker, and Holme
(2013) found that faculty members in science-related disciplines can lack knowledge of
assessment and evaluation terminologies. Calegari, Sibley, and Turner (2015) stress the
importance of faculty members being knowledgeable of accreditation requirements and
processes. Faculty members who are knowledgeable of accreditation requirements and involved
in accreditation processes are more engaged than faculty members who work in institutions that
are less collaborative and dictate in a unilateral fashion (Calegari et al., 2015).
Faculty members need to know the purpose of program outcome evaluation. In
addition to satisfying accreditation requirements, program outcome assessment contributes to the
continuous improvement efforts of institutions (Maki, 2010). Continuous improvement efforts
include considerations such as student success, operational costs, faculty-to-student ratios, and
instructional design. Faculty members need to know that the goal of program outcome
assessment extends beyond compliance requirements. Faculty members and staff agreeing on
goals and operational processes is a hallmark of a well-performing organization (Birnbaum,
1988).
Assumed conceptual knowledge influences.
Faculty members need to know the relationship between program outcome evaluation
and the mission of the college. Specifically, faculty members must be able to understand the
relationship between program outcome evaluation and the operations of the college. Program
outcomes are generally established through collaborative work, and are based on the mission of
the institution, college or specific program (Maki, 2010). Faculty members should be mindful of
the relationship that exists between program outcomes, program mission, and the mission of the
college.
Assumed procedural knowledge influences.
Faculty members need to know how to analyze program outcomes. Most faculty members are academically prepared through an accredited graduate program. Many of these programs do not include formal training to become educators or to analyze program outcomes (Banta & Pike, 2012). Analyzing program outcomes is a stepwise
procedural knowledge process. Calegari et al. (2015) and Dowd et al. (2012) stress the need for
faculty members to be engaged in each step of outcome analysis processes to identify areas for
potential improvement. Faculty members must be capable of following the correct steps in the
correct order to effectively interpret assessment data.
Faculty members need to be able to follow the steps of the program outcome evaluation
process. Just as faculty members may not be prepared to analyze program outcome data, their academic preparation may not have equipped them to follow the steps of an evaluation process (Banta & Pike, 2012). Evaluation plans generally include steps to establish outcome measures, collect data, analyze data, and disseminate any changes or reflections to applicable constituents (Maki, 2010). Each step is important, and faculty members
must be prepared to execute the proper steps in the proper order.
Assumed metacognitive knowledge influences.
Faculty members need to be able to reflect on their own ability to evaluate program
outcomes. Rickards, Abromeit, Mentkowski, and Mernitz (2016) explain that three questions
generally shape the practice of evaluation in higher education: “How well did students perform?
How trustworthy is the instrument or procedure? And how are the findings taken up by faculty?”
(p. 54). Rickards et al. (2016) further discuss that there is limited scholarship and attention
regarding how faculty members use and interpret evaluation data compared to scholarship and
attention on student performance and instrument validity and reliability. This suggests that
faculty members may not be capable of reflecting on their ability to analyze program outcome
data, or simply that they may not know why the activity is important.
Summary of assumed knowledge influences.
Assumed influences associated with declarative factual knowledge, conceptual
knowledge, procedural knowledge, and metacognitive knowledge were identified. In total, six
assumed knowledge gaps were identified. Table 2 provides a summary of assumed knowledge
influences.
Table 2
Summary of Assumed Knowledge Influences on Faculty Members’ Ability to Evaluate Program Outcomes

Declarative Factual (terms, facts, concepts)
Faculty members need to know the accreditation requirements regarding program outcomes evaluation. (Ambrose et al., 2010; Calegari et al., 2015; Emenike et al., 2013; Grubb & Badway, 2005; Wentland, 2012)

Faculty members need to know the purpose of program outcome evaluation. (Ambrose et al., 2010; Birnbaum, 1988; Maki, 2010)

Declarative Conceptual
Faculty members need to know the relationship between program outcome evaluation and the operations of the college. (Kumaran et al., 2009; Maki, 2010; Rueda, 2011)

Procedural
Faculty members need to know how to analyze and discuss program outcomes. (Ambrose et al., 2010; Banta & Pike, 2012; Calegari et al., 2015; Dowd et al., 2012; Maki, 2010)

Faculty members need to be able to follow the steps of the program outcome evaluation process. (Ambrose et al., 2010; Banta & Pike, 2012; Maki, 2010)

Metacognitive
Faculty members need to be able to reflect on their own ability to evaluate program outcome data. (Krathwohl, 2002; Rickards et al., 2016; Rueda, 2011; Suskie, 2009)
Motivation
Motivational indices of mental effort related to value, self-efficacy, and mood will also
be examined. Examining these indices can provide additional support for better understanding
the current gaps in performance. Each index will be defined and examined alongside current literature regarding program outcome evaluation.
Value is determined by a combination of the level of importance an individual assigns to
a task, as well as their expectation of success (Wigfield & Eccles, 2000). Furthermore,
importance and expectation are shaped by an individual’s previous experiences, aptitudes,
beliefs, and culture. This combination of influential factors makes a given individual’s value
unique, and task-specific. Understanding an individual’s value of a practice is an integral step in
understanding how to best support that individual in achieving a goal.
Similarly, self-efficacy can influence an individual’s willingness to participate in
activities to achieve a goal. Pajares (2006) explains that self-efficacy encompasses an
individual’s beliefs regarding their ability to successfully perform a given task. Self-efficacy is
generally established through: mastery learning, observing others, social persuasions, and
physiological reactions (Pajares, 2006). Of these types, mastery learning is the most common
method that impacts an individual’s self-efficacy (either positively or negatively depending on
experience).
An individual’s self-efficacy is shaped by their past experiences as well as factors that are
influencing them at a specific time. Mood is a specific physiological reaction that can alter how
an individual responds to an event (Pajares, 2006). Examining individuals’ moods will help in
understanding both their overall self-efficacy and willingness to engage in a specific practice.
Assumed value influences.
Faculty members need to see value in program outcome evaluation. Practitioners
commonly undertake program outcome evaluation as a means of satisfying accreditation
requirements (Maki, 2010). Despite the need to meet accreditation requirements, achievement of
accreditation should not be the driving value for implementing program outcome evaluation. An
attitude of valuing compliance over continuous improvement can have a deleterious impact on
the work of faculty members (Andrade, 2011; Bresciani, 2006; Maki, 2010).
When compliance-focused accountability measures such as accreditation are prioritized
over process improvement, institutions fail to reap the full benefits of their evaluation work
(Maki, 2010). Bresciani (2006) explains that an accreditation-focused approach to outcomes
evaluation is a “defensive posture” and that such a stance “obscures the true purpose of
assessment” (p. 16). Furthermore, Andrade (2011) contends that a compliance-focused
approach to outcomes evaluation can lead to organizations establishing undemanding goals that
do not help foster improvement. Based on the recommendations from Andrade (2011), Bresciani (2006), and Maki (2010), when establishing program outcome evaluation processes, institutions of higher education must balance the need to meet accreditation and accountability standards with practices that lead to continuous improvement.
Assumed self-efficacy influences.
Faculty members need to have confidence in their ability to analyze program outcome
data to inform decision-making. Faculty members must be confident in their ability to analyze
program outcome data. As previously discussed, faculty members may have limited
opportunities to learn how to be educators prior to becoming faculty members (Banta & Pike,
2012). Faculty members may choose not to engage in program outcome data analysis if they are
not confident in their ability to be successful (Pajares, 2006).
Faculty members need to have confidence in their ability to make data-informed
decisions. Similarly, faculty members must be confident in their ability to make data-informed
decisions. Motivation has been identified as an indicator of an organization or an individual’s
ability to engage in data-informed decision-making practices (Peck & McDonald, 2014). Faculty
members’ self-efficacy for data-informed decision making must be high if they are to be
expected participants and leaders in data-informed decision-making processes.
Assumed mood influences.
Faculty members need to feel positive about the potential impact program outcome
evaluation can have on the operations of the college. Faculty members should feel positive
about the potential impact of program outcome assessment. The process of program outcome
evaluation may make faculty members feel threatened (Suskie, 2009). Faculty members may
fear that program outcome evaluation results will be used in a punitive fashion, or that the
results may not accurately represent their experiences. A faculty member who has a positive
mood regarding program outcome assessment may engage more willingly and more frequently
with the process than one who feels threatened by it.
Summary of assumed motivation influences. Assumed influences associated with
value, self-efficacy, and mood were identified, four in total. Table 3 provides a summary of
assumed motivation influences.
Table 3
Summary of Assumed Motivation Influences on Faculty Members' Ability to Evaluate
Program Outcomes

Value
Faculty members need to value program outcome evaluation. (Pintrich, 2003; Wigfield & Eccles, 2000)

Self-Efficacy
Faculty members need to have confidence in their ability to analyze program outcome data. (Mayer, 2011; Banta & Pike, 2012; Pajares, 2006)
Faculty members need to have confidence in their ability to make data-informed decisions. (Mayer, 2011; Banta & Pike, 2012; Peck & McDonald, 2014)

Mood
Faculty members need to feel positive about the potential impact program outcome evaluation can have on the operations of the college. (Pajares, 2006; Suskie, 2009)
Organization
Organizational factors that influence an organization's performance include resources;
policies, processes, and procedures; cultural settings; and cultural models (Clark & Estes,
2008). Organizational factors can either promote or inhibit learning and an organization's ability
to achieve its mission, vision, and goals (Bensimon, 2005). Understanding the impact that
structural and cultural factors have on an organization is an important step in helping that
organization improve its operations.
Resources. Inadequate resources or tools contribute to organizational issues that can
disrupt an organization’s ability to achieve its goals (Clark & Estes, 2008). Appropriate
procurement and use of resources can help overcome organizational barriers, and save time and
money. Specifically, technology has served as a tool that can help organizations better meet
their needs while reducing cost and effort (Shaffer, 2015).
Policies and procedures. Policies and procedures are important aspects of education as
they influence an organization's practices. Organizations can align desired practices with their
mission through clear policy and procedure statements that help ensure accountability and
compliance (Conley, 2015). For example, policies and procedures can promote equitable
outcomes by incentivizing the practices that lead to them (Conley, 2015). Policies and procedures
should be purposely structured to help organizations and individuals achieve their goals.
Cultural setting. Cultural settings are aspects that contribute to an organization's
operations including the setup of meeting spaces, classrooms, offices, buildings, reports, and
promotional materials (Rueda, 2011). These physical aspects of an organization communicate
some of the organization's values and operational practices. Visible cultural settings have a
reciprocal relationship with cultural models, the non-visible manifestations of an organization's
values (Rueda, 2011).
Cultural models. Rueda (2011) states that cultural models have a tremendous influence
on the operations of an organization. Cultural models affect seemingly automated processes
such as individuals’ behavior, values, and priorities. Cultural models are developed over time,
and are relative and specific to a given organization (Rueda, 2011). Though cultural models can
be difficult to observe, examining them provides vital insight into an organization's practices.
Assumed resource influences.
Faculty members need the resources to be able to review aggregated program outcome
data. Dowd et al. (2012) claim that many institutions lack the capacity to analyze data in a way
that will improve student experiences. They specifically identified that many institutions lack
the human resource allocations necessary for student outcome data analysis and argue that
organizations of higher education must provide ample time and resources if they intend to use
program outcome data in a meaningful way.
The quality of information itself is also an important resource when examining program
outcomes. Survey data, for example, can be influenced by the way a survey is administered:
delivery method (physical or computer-based) can affect response rates (Lalla & Ferrari,
2011; LaRose & Hsin-yi, 2014) and the demographics of respondents (Lin, Hewitt, & Videras,
2017; Reisenwitz, 2016; Rüdig, 2008).
Faculty members need the resources to be able to review disaggregated program
outcome data. The ability to analyze disaggregated student performance data is an important
component of developing an equitable and inquiry-driven institution (Witham & Bensimon,
2012). Analysis of disaggregated program outcome data can help faculty members better
understand the experience of students, identify trends, and plan for improvement. Data to be
used in program outcome evaluation must be representative. As previously discussed, various
data collection methods can influence the quality of the data (Lalla & Ferrari, 2011; LaRose &
Hsin-yi, 2014; Lin et al., 2017; Reisenwitz, 2016; Rüdig, 2008). Without access to accurate,
disaggregated program outcome data, faculty members are unable to make informed decisions
in this regard.
Assumed policy and procedure influences.
The organization has processes that allow faculty members to effect change at the
college. Rickards et al. (2016) stress the importance of practices that allow faculty members to
reflect on student performance data and effect change at the course, program, and organizational
level. Similarly, Calegari et al. (2015) explain that faculty members who participate in decision-
making attain greater understanding of initiatives than those who do not. Faculty members
should be able to use program outcomes as the basis for informing decision-making. From an
organizational perspective, faculty members must not only have access to the information but
also have the autonomy to make decisions that will effect change at the college.
Programs are held accountable for collecting pertinent data related to program
outcomes. Alkin (2011) stresses the importance of individuals in programs understanding what
data needs to be collected and how data will be used. Regarding program outcome data, faculty
members should know where, when, and how data is collected. Additionally, faculty members
should understand how data will be used and plan for future needs.
Assumed cultural setting influences.
The organization provides evidence of program outcome evaluation. Suskie (2009)
encourages institutions to disseminate assessment and evaluation data in transparent, widely
available ways. A report of program outcomes and analysis can serve as a physical
representation of an organization's commitment to improvement. Additionally, organizations are
more likely to be accountable to their mission as well as information they have made publicly
available (Suskie, 2009).
Similarly, Jankowski and Reese Cain (2015) explain that communication of program
outcomes should include information regarding the process of establishing outcomes, analyzing
outcomes, and institutional responses to analysis. One of the primary goals of communicating
outcome data according to Jankowski and Reese Cain (2015) is to engage with internal and
external parties to ensure their needs are met. Failure to effectively communicate information
regarding outcomes could result in unmet needs of an internal or external party.
Assumed cultural model influences.
The organization supports a culture of commitment to continuous
improvement. Accountability measures such as accreditation requirements have the potential to
create cultures of compliance, cultures of improvement, or a combination of the two (Grubb &
Badway, 2005). As previously mentioned in the values section of this discussion, compliance-
focused initiatives risk sacrificing authenticity (Bresciani, 2006), can lead to undemanding goals
(Andrade, 2011), and can foster efforts that do not lead to improvement (Maki, 2010). A
challenge in establishing demanding, improvement-focused processes is that some practitioners
are hesitant to present information that could be construed as negative (Dowd et al., 2012).
Ideally, the college will foster a culture of commitment to continuous improvement.
The goals of the organization related to program outcome evaluation are
communicated effectively. Grubb and Badway (2005) express that program or organizational
goals should be communicated effectively throughout the organization. Well-communicated
goals support an organization's ability to achieve them, while poor communication undermines
that ability. Specifically, poor communication can lead to misunderstanding or misuse of information
that may negatively affect student experiences (Palmer, 2012).
Summary of assumed organization influences.
Assumed influences associated with resources; policies, processes, and procedures; and
culture were identified, seven in total. Table 4
provides a summary of assumed organization influences.
Table 4
Summary of Assumed Organization Influences on Faculty Members' Ability to Evaluate
Program Outcomes

Resources
Faculty members need the resources to be able to review aggregated program outcome data. (Clark & Estes, 2008; Dowd et al., 2012; Lalla & Ferrari, 2011; LaRose & Hsin-yi, 2014; Lin et al., 2017; Reisenwitz, 2016; Rüdig, 2008; Shaffer, 2015)
Faculty members need the resources to be able to review disaggregated program outcome data. (Clark & Estes, 2008; Lalla & Ferrari, 2011; LaRose & Hsin-yi, 2014; Lin et al., 2017; Reisenwitz, 2016; Rüdig, 2008; Shaffer, 2015; Witham & Bensimon, 2012)

Policies, Processes, & Procedures
The organization has processes that allow faculty members to effect change at the college. (Calegari et al., 2015; Conley, 2015; Rickards et al., 2016)
Programs are held accountable for collecting pertinent data related to program outcomes. (Alkin, 2011; Conley, 2015)

Culture
Cultural Setting
The organization provides evidence of program outcome evaluation. (Jankowski & Reese Cain, 2015; Rueda, 2011; Suskie, 2009)

Cultural Model
The organization supports a culture of commitment to continuous improvement. (Andrade, 2011; Bresciani, 2006; Dowd et al., 2012; Grubb & Badway, 2005; Maki, 2010; Rueda, 2011)
The goals of the organization related to program outcome evaluation are communicated effectively. (Grubb & Badway, 2005; Palmer, 2012)
Summary
Knowledge, motivation, and organizational factors related to the YCN faculty members’
ability to systematically analyze program outcome data have been identified. Some factors
discussed are closely interrelated while others exist in relative isolation. Chapter Three outlines
the inquiry processes that were used to better understand the knowledge, motivation, and
organizational indices identified and discussed in Chapter Two.
CHAPTER THREE: METHODOLOGY
Purpose of the Project and Questions
The purpose of this study was to conduct a needs analysis in the areas of knowledge and
skill, motivation, and organizational resources necessary for YCN faculty members to achieve
their organizational performance goal. The analysis began by generating a list of possible needs
and then moved to examining these systematically to focus on actual or validated needs.
While a complete needs analysis would focus on all stakeholders, for practical purposes the
stakeholders of focus in this analysis were faculty members within the Yvonne College of
Nursing at Grace-Rose University of Health Sciences.
The questions that guided this gap analysis were the following:
1. What are the knowledge, motivation, and organizational needs necessary for the YCN
faculty to achieve their goal of creating and implementing an evaluation program that
includes measures and methods for evaluating academic program outcome data by
December 2018 that aligns with standards set by the accrediting bodies and the
program’s leadership?
2. What are the recommended knowledge, motivation, and organizational solutions to
those needs?
Conceptual and Methodological Framework
Clark and Estes’ (2008) Gap Analysis framework guided the inquiry processes for this
study. Assumed knowledge, motivation, and organizational indices were examined using
surveys, interviews, or a combination of both surveys and interviews. Indices that were assessed
through surveys and interviews utilized an explanatory sequential method. The explanatory
sequential method guides practitioners to collect and interpret quantitative data first, and
subsequently seek qualitative data using questions based on the initial quantitative findings
(Merriam & Tisdell, 2016).
Figure 1. Explanatory sequential process adapted from Merriam and Tisdell (2016).
Clark and Estes’ (2008) Gap Analysis framework was adapted for the purposes of this
study to accommodate the needs of the organization (Figure 2). This study used an innovation
adaptation of the Gap Analysis framework, as an overall process for program outcome evaluation
did not exist at the site at the time of the study. However, it was assumed that aspects of the
process existed. Figure 2 outlines the process of inquiry that was used in this study.
Figure 2. Gap analysis process adapted from Clark and Estes (2008).
Assessment of Performance Influences
Chapter Two introduced assumed knowledge, motivation, and organizational factors that
may contribute to the innovation of a program outcomes evaluation plan for the YCN. Each
knowledge, motivation, and organizational factor introduced in Chapter Two was examined
using individualized methods of inquiry. The selected methods of inquiry are presented along
with rationale for their use in this chapter (Chapter Three).
Knowledge Assessment
Assumed performance indices related to declarative factual, declarative conceptual,
procedural, and metacognitive knowledge discussed in Chapter Two were assessed. The
specific assessment procedures described are guided by literature related to knowledge,
assessment, and inquiry. Table 5 provides an overview of the methods that were used, along
with sample survey and interview items.
Declarative factual knowledge assessment. To assess an individual's declarative factual
knowledge, Krathwohl (2002) recommends inquiry methods that require participants to
demonstrate specific knowledge. Multiple-choice questions were used to assess faculty
members' declarative factual knowledge. Because declarative factual knowledge questions have
defined correct and incorrect answers (Ambrose et al., 2010), multiple-choice questions were
used to effectively assess whether or not an individual could demonstrate their knowledge.
Additionally, interviews were used to help better understand any discrepancies that arose in the
multiple-choice declarative factual knowledge assessments.
Declarative conceptual knowledge assessment. Individuals demonstrate declarative
conceptual knowledge when they discuss the relationships that exist between categories or
principles (Krathwohl, 2002). Open-ended interview questions asking participants to explain
relationships between concepts and practices were used to assess declarative conceptual
knowledge. A qualitative method best fits this dimension as qualitative responses can elicit
information regarding quality and understanding (Merriam & Tisdell, 2016).
Procedural knowledge assessment. Krathwohl (2002) also supports the use of
qualitative methods to assess an individual’s procedural knowledge. Faculty members responded
to open-ended interview questions asking them to articulate the steps required to enact specific
tasks. Responses to open-ended procedural questions helped demonstrate what procedural
knowledge exists, and what procedural knowledge may be missing.
Metacognitive knowledge assessment. Metacognitive knowledge is assessed by
eliciting and analyzing an individual's reflection on their actions (Krathwohl, 2002). In this
study, metacognitive knowledge was assessed through open-ended interview questions in which
faculty members were asked to discuss the mental activities and processes they use to complete
tasks.
A summary of knowledge influences and their assessment is listed in Table 5.
Table 5
Summary of Knowledge Influences and Method of Assessment

Declarative Factual
Faculty members need to know the accreditation requirements regarding program outcome evaluation.
Survey items: Multiple choice regarding basic facts, information, and terminology.
Faculty members need to know the purpose of program outcome evaluation.
Interview item: Can you explain the purpose of program outcome assessment?

Declarative Conceptual
Faculty members need to know the relationship between program outcome evaluation and the operations of the college.
Interview item: Can you give examples in your own words of how program outcome evaluation aligns with the operations of the college?

Procedural
Faculty members need to know how to analyze and discuss program outcome data.
Interview item: Articulate in your own words how programs should work through the process of analyzing and discussing program learning outcome data.
Faculty members need to be able to follow the steps of the program outcome evaluation process.
Interview item: Articulate in your own words the steps in an effective program evaluation process.

Metacognitive
Faculty members need to be able to reflect on their own ability to assess program outcome data.
Interview item: Present a scenario and then ask the participant to self-analyze the thought process used during the scenario.
Motivation Assessment
Assumed performance indices related to value, self-efficacy, and mood discussed in
Chapter Two were assessed. The specific assessment procedures described are guided by
literature related to motivation, assessment, and inquiry. Table 6 provides an overview of the
methods that were used, along with sample survey and interview items.
Value assessment. Rueda (2011) supports the use of self-reported data to assess
individuals’ value of a practice. Specifically, scales can be used to have individuals rank
practices in order of importance. By using a scaled ranking, individuals can communicate how
they value a practice in relation to other practices. Additionally, individuals can discuss their
experiences and expectations to communicate their value.
Faculty members were asked to rank program outcome data use among other factors
related to student success. This information helps illustrate how much of a priority program
outcome assessment is. Qualitatively, faculty members were asked to discuss reasons why the
college should engage in program outcome data analysis. This qualitative process allowed
faculty members to describe their own specific value and motivation.
Self-efficacy assessment. Similar to assessing value, assessing faculty members’ self-
efficacy was accomplished using a mixed-methods approach. Clark and Estes (2008) support the
use of questions focused on understanding individuals’ confidence in order to assess self-
efficacy. Survey items asked faculty members to select an ordinal value of their confidence
regarding aspects of program outcome evaluation, and interview questions asked faculty
members to describe how they feel about their abilities related to program outcome evaluation.
Mood assessment. Finally, interview questions that focus on emotional reactions were
used to assess mood. Clark and Estes (2008) support the use of open-ended questions to better
understand individuals’ personal experiences. Open-ended interview questions encouraged
interviewees to express their personal perspectives.
Table 6
Summary of Motivation Influences and Method of Assessment

Value
Faculty members need to value program outcome assessment.
Survey item: List various processes including program outcome evaluation and ask faculty members to rank them in order of importance for student success.
Interview item: Could you discuss the reasons the YCN should engage in program outcome assessment?

Self-Efficacy
Faculty members need to have confidence in their ability to analyze program outcome data to inform decision-making.
Survey items (Likert scale): I am confident in my ability to analyze program outcome data. I am confident in my ability to use program outcome data to inform decision-making.
Interview items: How do you feel about your ability to analyze program outcome data? How do you feel about your ability to use program outcome data to inform decision-making?

Mood
Faculty members need to feel positive about the potential impact program outcome assessment can have on the operations of the college.
Interview item: How do you feel about the college increasing expectations related to program outcome assessment?
Organization/Culture/Context Assessment
Assumed performance indices related to resources; policies, processes, and procedures;
cultural setting; and cultural model discussed in Chapter Two were assessed. The specific
assessment procedures described are guided by literature related to organizations, assessment,
and inquiry. Table 7 provides an overview of the methods that were used, along with sample survey
and interview items.
Assessing resources. Clark and Estes (2008) support the use of surveys and interviews
to better understand the impact of available resources. Specifically, Clark and Estes (2008)
encourage evaluators to understand how resources impact individuals’ ability to achieve their
goals. Survey items were used in order to better understand whether or not individuals felt that
resources were sufficient, and interview items were used to better understand ideal resources.
Assessing policies, processes, and procedures. Similarly, the goal of assessing policies,
processes, and procedures is to understand how they contribute to individuals' ability to achieve
their goals (Clark & Estes, 2008). The organizational effect of policies, processes, and
procedures was assessed by means of survey and interview items. Survey items were used to
help determine whether or not policies, processes, and procedures are conducive to individuals'
ability to effect change at the college, and interview items focused on how policies, processes,
and procedures affect individuals' performance.
Assessing cultural setting. Assessing cultural settings can help institutions better
understand how they serve their constituents (Gallimore & Goldenberg, 2001). As outlined in
Chapter Two, cultural settings are physical examples of institutions’ values (Rueda,
2011). Since cultural settings are more observable than other organizational aspects, they were
assessed solely by means of survey items.
Assessing cultural model. Similar to the value of assessing cultural settings, Gallimore
and Goldenberg (2001) state that analyzing cultural models can help institutions better serve
their constituents. Since cultural models are more difficult to observe than cultural settings
(Rueda, 2011), both survey items and interview items were used to assess cultural
models. Survey items were used to help categorize faculty perceptions of cultural models, and
interview items were used to understand lived experiences of the YCN’s cultural model.
Table 7
Summary of Organization Influences and Method of Assessment

Resources
Faculty members need the resources to be able to review aggregated program outcome data.
Survey item (Likert scale): I have access to the aggregated data I need to make informed decisions.
Interview item: What specific information do you need to make informed decisions? Follow up: To what extent do you have what you need?
Faculty members need the resources to be able to review disaggregated program outcome data.
Survey item (Likert scale): I have access to the disaggregated program outcome data I need to make informed decisions.
Interview item: What specific information do you need to make informed decisions? Follow up: To what extent do you have what you need?

Policies, Processes, & Procedures
The organization has processes that allow faculty members to effect change at the college.
Survey item (Likert scale): The policies, processes, and procedures of the YCN support my ability to effect change at the college.
Interview item: To what extent do policies, processes, and procedures of the YCN impact your ability to effect change at the college?
Programs are held accountable for collecting pertinent data related to program outcomes.
Survey item (Likert scale): I know what pertinent data needs to be collected and analyzed on an annual basis.

Culture
Cultural Setting
The organization provides physical examples of its commitment to program learning outcome assessment.
Survey items (Likert scale): Reports generated and disseminated by the YCN exemplify its commitment to achieving its mission. Stakeholders are informed regarding the YCN's progress in meeting its mission.

Cultural Model
The organization supports a culture of commitment to continuous improvement.
Survey item (multiple choice): How would you describe the YCN's accountability practices? A culture of compliance or a culture of continuous improvement.
Survey items (Likert scale): The YCN supports a culture of continuous improvement. The YCN supports a culture of compliance.
Interview items: Describe the YCN's accountability practices. What factors drive accountability practices?
The goals of the organization related to program outcome evaluation are communicated effectively.
Survey item (Likert scale): The goals of the organization related to program outcome evaluation are communicated effectively.
Participating Stakeholders and Sample Selection
The stakeholder group of focus for this study was faculty members within the Yvonne
College of Nursing who do not hold administrative positions within the college. Faculty
members are responsible for both designing and implementing continuous improvement efforts
in the college, which include program outcome evaluation. The college employs 21 full-time
faculty members who teach throughout two degree programs and a post-master’s certificate. Of
the 21 faculty members, 8 faculty members also hold administrative roles within the
college. The 8 individuals who hold administrative roles within the college include the dean,
two assistant deans, three directors of various portions of the MSN curriculum, one director of
the DNP curriculum, and two assistant directors.
The YCN offers two degree programs and a post-master's certificate. However, the
MSN degree program has three entry points: students can enter the program with an associate's
degree in nursing, a bachelor's degree in nursing, or a bachelor's degree in a non-nursing
field. Additionally, the MSN curriculum includes four concentration areas. The complexity of
the college has resulted in five operational groups that in total represent all of the college’s
offerings. Table 8 outlines the five operational groups within the college.
Table 8
Five Operational Groups Within the Yvonne College of Nursing

MSN-E Pre-Licensure
Potential entry points: Bachelor's degree or higher in a non-nursing field.
Potential exit points: Transition to post-licensure, distance-based curriculum (either Ambulatory Care or HSL).
Faculty members teaching within operational group: 13

FNP
Potential entry points: Bachelor's degree or higher in a nursing field.
Potential exit points: Master's degree in nursing with FNP certificate, or post-master's FNP certificate.
Faculty members teaching within operational group: 11

DNP
Potential entry points: Master's degree or higher in a nursing field.
Potential exit points: Doctor of Nursing Practice degree.
Faculty members teaching within operational group: 9

Ambulatory Care
Potential entry points: Associate's degree in nursing or higher, or transition from MSN-E Pre-Licensure.
Potential exit points: Master of Science in Nursing with concentration in Ambulatory Care.
Faculty members teaching within operational group: 6

HSL
Potential entry points: Associate's degree in nursing or higher, or transition from MSN-E Pre-Licensure.
Potential exit points: Master of Science in Nursing with concentration in HSL.
Faculty members teaching within operational group: 6

Note. Faculty members teach throughout multiple operational groups.
Sampling
In evaluating programs, it is important to know the stakeholders who can provide the best
quality of information (Alkin, 2011). Key stakeholders (faculty members without administrative
roles) were assessed by means of surveys only, interviews only, and combined use of surveys
and interviews. Each method utilized individualized sampling considerations.
Survey only. Sampling helps make large groups of respondents more manageable for
those involved in inquiry processes (Fink, 2016). However, due to the small number of full-time
faculty members who do not hold administrative roles within the college, sampling was not
necessary for indices that were assessed exclusively by survey. Surveys assessing these indices
were sent to all faculty members who do not hold administrative roles (13 individuals) within
the college.
Interview only. The purpose of this study, understanding a specific site, required the
use of purposive sampling of interview subjects. Purposive samples are selected individually by
investigators in an effort to identify individuals who have crucial information for the study
(Merriam & Tisdell, 2016). Additionally, the intended use of purposive samples is to gain rich
understanding about specifically identified groups (Merriam & Tisdell, 2016).
Due to the size of the stakeholder group, all faculty members who do not hold
administrative roles were asked to participate in interviews. The goal of interviews was to reach
saturation of data. Saturation occurs when similar themes are elicited from interviewees and it
appears as though no novel information is being discovered (Merriam & Tisdell, 2016).
Recruitment
All faculty members were recruited to participate in the surveys and interviews.
Recruitment for the survey took place through an email invitation that included a link to the
survey. Recruitment for interviews intended to include at least one
representative from each of the five operational groups (Table 8). If saturation of responses was
not met through the initial interviews, additional e-mails would have been sent to encourage
participation. The content of responses could have warranted the recruitment of additional
individuals within a specific operational group, or recruitment of additional faculty members
regardless of operational group.
Instrumentation
The instrumentation used for this study was surveys and interviews. Survey and
interview designs were guided by the assessment procedures introduced previously in this
chapter (Chapter Three). Each of the assumed indices identified in Chapter Two was assessed
through survey and interview instruments.
Survey Design
The survey design included the use of ordinal Likert scale items, categorical items that
force respondents to pick between two items, and items that ask respondents to select all that
apply from a pre-determined list. The survey methods used in this study followed the standards
discussed in the knowledge assessment, motivation assessment, and organizational assessment
sections of this chapter. The works of Ambrose et al. (2010), Clark and Estes (2008),
Krathwohl (2002), and Rueda (2011) guided both the assessment activities and the development of
specific survey items. Survey items were developed using assumed causes identified by the
literature presented in Table 5, Table 6, and Table 7. Survey items are included in Appendix A.
Interview Protocol Design
Interviews in this study utilized a case study approach. Case studies are used when
examining a specific site or phenomenon (Merriam & Tisdell, 2016). Because the research
questions in this study intended to elucidate the experiences of faculty members at a specific
site, the case study approach fit the study well. Interview questions are included in Appendix B.
Interview questions utilized a semi-structured format. Semi-structured interview formats
utilize specific questions but allow for flexibility that may lead to a rich understanding of
phenomena (Merriam & Tisdell, 2016). The semi-structured approach was especially useful for
indices that were assessed using explanatory mixed-methods, since their intent was to clarify and
elucidate information.
Validity
Utilizing defined terminology and literature-based items helped in establishing content
validity (Creswell, 2014). Both surveys and interviews utilized items designed specifically for
this study. Survey and interview items were developed based on relevant literature and
definitions associated with the problem in an effort to help establish content validity.
Data Collection
Following University of Southern California Institutional Review Board (IRB) approval,
as well as approval from the site of the study (Grace-Rose University – a pseudonym),
participants were solicited by e-mail to participate in surveys and interviews. Non-
administrative faculty members' contact information was publicly available and was used to
solicit participation. All communications regarding surveys and interviews came directly from
the investigator.
Surveys
Surveys were sent electronically using Qualtrics ®, and could be completed within the
Qualtrics ® application. Individuals who did not respond to the survey received reminders every
five days over a period of twenty-one days. Reminder e-mails were sent automatically through
the Qualtrics ® system. The surveys remained open for a period of twenty-one days.
Interviews
The investigator directly administered the interviews. Interviews took place in person or
through the videoconferencing software Zoom®. Interviewees were asked for permission to
record interviews. In-person interviews were recorded using a voice recorder, and
videoconference interviews were recorded using the record function within the software. The
length of interviews varied based on the semi-structured nature of each interview.
Data Analysis
Surveys
Survey data analysis focused on the utilization of descriptive statistics. Descriptive
statistics elucidate information regarding “frequencies or frequency distributions, measures of
central tendency, and measures of variation” (Fink, 2016, p. 137). Descriptive statistics helped
establish summaries of the knowledge, motivation, and organizational indices affecting faculty
members’ ability to evaluate program outcomes.
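To make the descriptive approach concrete, the following is a minimal illustrative sketch (not part of the study's instruments or analysis) of how frequencies, a mean, and a standard deviation might be computed for a single survey item; the responses shown are hypothetical, and the coding of 1 ("strongly disagree") through 5 ("strongly agree") is an assumption.

    from statistics import mean, stdev
    from collections import Counter

    # Hypothetical responses to one Likert item, coded 1 ("strongly disagree")
    # through 5 ("strongly agree"); N = 12 mirrors the surveyed population.
    responses = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4, 2, 4]

    frequencies = Counter(responses)   # frequency distribution
    item_mean = mean(responses)        # measure of central tendency
    item_sd = stdev(responses)         # measure of variation

    for value in sorted(frequencies):
        pct = 100 * frequencies[value] / len(responses)
        print(f"Response {value}: n = {frequencies[value]} ({pct:.0f}%)")
    print(f"M = {item_mean:.2f}, SD = {item_sd:.2f}")

Summaries of this kind underpin the frequencies, means, and standard deviations reported for survey items in Chapter Four.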
Interviews
Merriam and Tisdell (2016) support the use of open, axial, and selective codes in order to
organize qualitative data. Open codes are the most specific and plentiful codes, selective codes
represent the broad themes, and axial codes help bridge the gap between open codes and
selective codes (Merriam & Tisdell, 2016). Interviews were coded using open, axial, and
selective codes. A codebook was created in order to help facilitate the organization and
interpretation of the data.
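As an illustration only, a coding hierarchy of this kind can be represented as nested categories, with selective codes at the top, axial codes beneath them, and open codes at the most specific level. The sketch below uses hypothetical code names; it does not reproduce the study's actual codebook.

    # Hypothetical codebook fragment: selective -> axial -> open codes.
    codebook = {
        "continuous improvement": {            # selective code (broad theme)
            "use of outcome data": [           # axial code (bridging category)
                "compares results to goals",   # open codes (most specific)
                "modifies courses after review",
            ],
            "barriers to evaluation": [
                "unsure of the process",
                "limited time for analysis",
            ],
        },
    }

    # Tally open codes per selective theme to see where evidence accumulates.
    for theme, axials in codebook.items():
        count = sum(len(opens) for opens in axials.values())
        print(f"{theme}: {count} open codes across {len(axials)} axial codes")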
Trustworthiness of Data
The investigator maintained ethical standards throughout the study. Glesne (2011)
explains that researchers must ensure that participants are well informed regarding their
participation in a study. The investigator utilized ethical principles espoused by Glesne,
including informing interviewees and survey respondents that they could end their participation
at any time, and exploring a research question that would benefit the organization in ways that
outweigh any potential risk.
The individuals selected for surveys and interviews were appropriate as they all serve as
faculty members who do not hold administrative roles at YCN. The investigator was confident
that each interviewee was open and honest in their discussion regarding their experience as a
faculty member. In an effort to encourage open dialogue, survey responses were collected
anonymously, and any identifying factors were removed from survey and interview data prior to
data analysis.
The investigator utilized triangulation in order to further ensure trustworthiness of the
data collected. Triangulation is achieved by comparing multiple sources of data to help
strengthen or dispel themes within data sets (Miles, Huberman, & Saldaña, 2014). The
researcher examined the relationship between multiple data sources throughout the coding and
analysis process.
Role of Investigator
The investigator in this study was the director of assessment within the Yvonne College
of Nursing. The participants were not subordinates to the investigator. Additionally, the
investigator had no direct role in evaluating performance or administering punitive actions upon
the participants.
Limitations
The site-specific design intended to yield rich data and recommendations for the YCN.
Due to the nature of this study, the findings were not generalizable. Any individuals or
institutions interested in using information discussed in this study must consider the site-
specific nature of the study.
Additionally, the scope of this study was limited to understanding the experiences of the
single stakeholder group most directly associated with the study. This focus on a single
stakeholder group was an additional limitation of this study. A more robust study would include
information from faculty members that hold administrative roles, students, administrators,
practitioners at peer institutions, and additional applicable stakeholders.
CHAPTER FOUR: RESULTS AND FINDINGS
Assumed knowledge, motivation, and organization gaps related to faculty members’
perceptions of program outcome evaluation identified in Chapter Two and Chapter Three were
assessed. Surveys were deployed to collect quantitative data, and interviews were conducted in
order to collect qualitative data. The results of the surveys and interviews have been organized
into corresponding assumed knowledge, motivation, and organization domains.
Participating Stakeholders
Faculty members who do not hold administrative positions within YCN were the
stakeholders of interest in this study, and as such were the stakeholders who provided the survey
and interview data. The population surveyed included thirteen faculty members; of the thirteen,
twelve completed the survey (response rate of 92%). Additionally, six faculty members
participated in interviews. Race and ethnicity data of survey respondents is presented in Table 9
and academic rank of survey respondents is presented in Table 10.
Table 9
Race and Ethnicity of Survey Respondents
Race/ethnicity Total Percentage
White/Caucasian 8 67%
Asian 2 17%
African American 1 8%
Hispanic or Latino 1 8%
Table 10
Academic Rank of Survey Respondents
Academic Rank Total Percentage
Assistant Professor 9 75%
Associate Professor 2 17%
Professor 1 8%
Data Validation
Survey and interview data were collected in order to assess faculty members’ knowledge,
motivation, and perceptions of organizational support as they apply to program outcome
evaluation. As discussed in Chapter Three, some assumed gaps were assessed strictly using
surveys, some assumed gaps were assessed strictly by using interviews, and some assumed gaps
were assessed using an explanatory sequential method of using survey item analysis to inform
the need for additional or revised interview items (Merriam & Tisdell, 2016).
Each survey item was analyzed individually. Frequency data were analyzed in order to
display conflicting or complementary faculty viewpoints. Means and standard deviations are
presented when applicable.
Interview responses were collected until saturation was reached for the majority of
responses. In most cases, themes emerged through interview responses. However, within some
assumed causes, faculty members' responses varied to a high degree; in those cases, it was
determined that saturation could not be reached.
Assumed gaps that were assessed by both survey and interview data were analyzed by
comparing and contrasting the two data sources. Assumed gaps that were assessed using the
explanatory sequential method provided an opportunity for triangulation of survey and interview
data. In some cases this analysis found survey and interview data that supported one another,
and in other cases it found survey and interview data that conflicted.
The purpose of data analysis was to determine whether or not an assumed gap could be
validated. For the purpose of this study, a validated gap is a gap that exists within the
organization that needs to be addressed in order to improve performance. Determining whether
or not gaps were validated followed a semi-structured format at the discretion of the researcher.
Assumed gaps that were assessed by surveys were considered validated if at least 25% of
responses indicated that a gap existed. However, depending on the magnitude of responses,
assumed causes that met the 25% criterion may not have been validated. Validation decisions
also considered the mean and standard deviation of responses in order to gauge their magnitude.
In some cases, faculty members' survey responses presented wide ranges, with some groups of
faculty members responding "strongly disagree" to the same item to which other groups
responded "strongly agree". Considering the standard deviation and mean of survey items that
included a wide range of faculty responses supported informed decision-making.
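The survey decision rule described above can be sketched as follows. This is an illustrative formalization only, assuming five-point Likert items in which responses of 1 or 2 indicate a gap; the 25% threshold comes from the text, while the reported mean and standard deviation support the researcher's discretionary magnitude review rather than a fixed cutoff.

    from statistics import mean, stdev

    def gap_indicated(responses, threshold=0.25):
        # Flag a gap if at least 25% of responses (coded 1-5) fall at 1 or 2,
        # then return the mean and SD to support the magnitude review.
        gap_share = sum(1 for r in responses if r <= 2) / len(responses)
        return gap_share >= threshold, mean(responses), stdev(responses)

    # Hypothetical item with a wide range of responses.
    item = [1, 1, 5, 5, 4, 2, 5, 4, 1, 5, 4, 4]
    flagged, m, sd = gap_indicated(item)
    print(f"Gap flagged: {flagged} (M = {m:.2f}, SD = {sd:.2f})")

Here four of twelve responses (33%) indicate a gap, so the item would be flagged for closer review of its mean and spread.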
Each interview was transcribed and analyzed as it applied to the assumed gaps.
Assumed gaps that were assessed by interviews were validated when multiple participants
expressed agreement with the assumed cause. However, in some cases a single voice was
considered capable of validating a gap if the interviewee provided relevant examples.
Finally, assumed gaps that were validated using both survey and interview data were
examined by comparing and contrasting data. Survey and interview items complemented one
another in some cases and presented disparate findings in other cases. If either type of data
strongly suggested that an assumed gap existed, then the gap was validated.
Results and Findings for Knowledge Causes
Faculty members’ knowledge was assessed by surveys and interviews. Results of
surveys and interviews are presented for each assumed cause within the categories of declarative
factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge.
These results were used to determine whether or not assumed gaps were present
within the YCN.
Declarative Factual Knowledge
Surveys and interviews were used to assess faculty members’ declarative knowledge.
Faculty members were surveyed on their knowledge of accreditation requirements, and were
interviewed in order to assess their knowledge of the purpose of program outcome evaluation.
Results have been organized and evaluated in order to determine whether or not the assumed
causes were validated.
Faculty members need to know the accreditation requirements regarding program
outcome evaluation. Surveys were used to assess faculty members’ declarative knowledge
regarding program outcome evaluation. Specifically, faculty members responded to prompts
intended to assess their knowledge of accreditation requirements and terminology related to
program outcome evaluation. Each prompt contained multiple-choice responses with a single
correct answer.
Survey results. Twenty-five percent of faculty members did not demonstrate knowledge
of accreditation requirements. Within the survey prompt displayed in Table 11, there appears to
be a misunderstanding among faculty members regarding how program outcomes are to be
established and defined. An intervention may be needed to help increase faculty members’
knowledge within this domain.
Table 11
Declarative Knowledge Validation: Which of the Following Is a Commission on Collegiate
Nursing Education (CCNE) Requirement Regarding Program Outcome Data Analysis?
Response Count Percentage
Program outcomes are defined by the university
and incorporate expected levels of achievement.
0 0.0%
Program outcomes are defined by the program
and incorporate expected levels of
achievement.*
9 75.0%
Program outcomes are defined by CCNE and
incorporate expected levels of achievement.
3 25.0%
Notes. N=12
*Designates correct response
However, in an additional item regarding accreditation requirements, all faculty members
responded correctly. Faculty members understand that certain measures can serve as indicators
of program effectiveness (Table 12). Faculty members do not require additional knowledge
within this portion of the declarative knowledge domain.
Table 12
Declarative Knowledge Validation: True or False: For the Purposes of CCNE Accreditation,
Alumni Satisfaction Can Serve as an Indicator of Program Effectiveness.
Response Count Percentage
True* 12 100%
False 0 0%
Notes. N=12
*Designates correct response
Faculty members’ knowledge was also assessed regarding terminology related to
accreditation requirements. Faculty members' knowledge varied in defining different terms.
Greater than 80% of faculty members accurately identified definitions for program
outcomes (Table 13), learning outcomes (Table 14), and institutional outcomes (Table 15).
Table 13
Declarative Knowledge Validation: Select the Option That Best Defines the Following
Terminology as They Pertain to Higher Education. Program Outcome
Response Count Percentage
Institution-specific content or learning parameters
- what students should learn, understand, or
appreciate because of their studies.
0 0.0%
Program-specific content or learning parameters -
what students should learn, understand, or
appreciate because of their studies.*
11 91.7%
Statement that translates learning into action,
behaviors, and other texts from which observers
can draw inferences about the depth and breadth
of student learning.
1 8.3%
Notes. N=12
*Designates correct response
Table 14
Declarative Knowledge Validation: Select the Option That Best Defines the Following
Terminology as They Pertain to Higher Education. Learning Outcome
Response Count Percentage
Institution-specific content or learning
parameters - what students should learn,
understand, or appreciate because of their
studies.
0 0.0%
Program-specific content or learning
parameters - what students should learn,
understand, or appreciate because of their
studies.
2 16.7%
Statement that translates learning into
action, behaviors, and other texts from
which observers can draw inferences
about the depth and breadth of student
learning.*
10 83.3%
Notes. N=12
*Designates correct response
Table 15
Declarative Knowledge Validation: Select the Option That Best Defines the Following
Terminology as They Pertain to Higher Education. Institutional Outcome
Response Count Percentage
Institution-specific content or
learning parameters - what students
should learn, understand, or
appreciate because of their
studies.* 12 100%
Program-specific content or
learning parameters - what students
should learn, understand, or
appreciate because of their studies. 0 0%
Statement that translates learning
into action, behaviors, and other
texts from which observers can
draw inferences about the depth
and breadth of student learning. 0 0%
Notes. N=12
*Designates correct response
However, survey data revealed that most faculty members cannot discern the difference
between assessment and evaluation. Faculty members were asked to define assessment and
evaluation using multiple-choice options. Less than 10% of faculty members accurately defined
"assessment" (Table 16), while 33% of faculty members accurately defined "evaluation" (Table
17).
Table 16
Declarative Knowledge Validation: Select the Option That Best Defines the Following
Terminology as They Pertain to Higher Education. Assessment
Response Count Percentage
The process of determining the merit,
worth, or value of something, or the
product of that process. 2 16.7%
A systemic and systematic process of
examining student work against standards
of judgment. * 1 8.3%
The collection, analysis, and interpretation
of information regarding an aspect of a
program in order to judge effectiveness.
9 75.0%
Notes. N=12
*Designates correct response
Table 17
Declarative Knowledge Validation: Select the Option That Best Defines the Following
Terminology as They Pertain to Higher Education. Evaluation
Response Count Percentage
The process of determining the merit, worth, or
value of something, or the product of that process.* 4 33.3%
A systemic and systematic process of examining
student work against standards of judgment.
2 16.7%
Measuring the academic readiness, learning
progress, skill acquisition, of students 4 33.3%
The systematic basis for making inferences about the
learning and development of students. 2 16.7%
Notes. N=12
*Designates correct response
Interview findings. Interviews were not used to assess this gap.
Summary. Findings suggest that faculty members can demonstrate declarative factual
knowledge of some accreditation requirements regarding program outcome evaluation, but are
lacking in their knowledge of others. Specifically, most faculty members were not able to
demonstrate knowledge of terminology related to assessment and evaluation. Additionally, a
quarter of faculty members could not accurately identify how program outcomes are defined.
The need to improve faculty members' knowledge of assessment and evaluation
terminology, as well as accreditation requirements, has been validated.
Faculty members need to know the purpose of program outcome evaluation.
Survey results. Surveys were not used to assess this gap.
Interview findings. Interviews were used to assess faculty members’ knowledge
regarding the purpose of program outcome assessment. Faculty members accurately articulated
many aspects of the purpose of program outcome evaluation, including demonstrating
attainment of goals, quality control, and data collection for continuous improvement. All faculty
members who participated in interviews accurately identified aspects of the purpose of program
outcome evaluation.
Additionally, all faculty members discussed using program outcome evaluation as a
means for establishing and measuring performance in relation to goals, and collecting
information to use for continuous improvement efforts. In regard to using program outcome
evaluation to measure performance in relation to goals, one faculty member stated,
“…but most of all it shows that we're doing the right thing, that we're actually accomplishing
what we say we're going to accomplish.” Similarly, another faculty member described the need
for program outcome evaluation:
“...you have to know if all of your goals that you have out there and objectives are being
met. So if you don't analyze any of those outcome data components you have no idea if all
of your goals and have been attained or are even unrealistic. That's essential.”
Faculty members' knowledge of program outcome evaluation aligned with aspects of
Maki's (2010) explanation of program outcome evaluation; specifically, both faculty members and
Maki emphasize the impact program outcome evaluation has on directing continuous improvement
efforts. Five of six faculty members (83%) expressed that the purpose and value of
program outcome evaluation was to inform continuous improvement efforts. One faculty
member stated, “I think the value is that you can modify and change the program based on the
evaluations. You know if there was any identified areas that needed improvement based on your
evaluation you could change it at that point.” Similarly, another faculty member stated, “…it
gives us information in how we’re performing and also it lets us know whether we need to make
any modifications to actually meet whatever goal it may be.”
Summary. Faculty members demonstrated declarative factual knowledge regarding the
purpose of program outcome evaluation. The assumed gap of a lack of knowledge regarding the
purpose was not validated. No intervention will be needed in order to improve faculty
knowledge regarding the purpose of program outcome evaluation.
Declarative Conceptual Knowledge
Faculty members’ conceptual knowledge was assessed by means of interviews. Faculty
members were asked to describe the relationship between program outcome evaluation and the
operations of the college. Results have been organized and analyzed in an effort to validate the
conceptual knowledge gap.
Faculty members need to know the relationship between program outcome
evaluation and the operations of the college.
Survey results. Surveys were not used to assess this gap.
Interview findings. Three of six faculty members (50%) expressed that program
outcome evaluation supports the operations of the college. One faculty member stated: “We
incorporate it (program outcome evaluation) into our normal process.” Similarly, other faculty
members expressed that operational processes and the goals of both the college and university
should be aligned to help reduce duplicative work, and ensure goals are being met. One faculty
member stated:
“We've got the objectives of the program and also the university objectives and the
outcomes so if we start off with them intertwining then the objective should be met in
there. And we just have to evaluate to be sure they're being met. So first of all we have to
make sure that they're aligned to begin with and that we're meeting the goals we need.”
All faculty members consistently mentioned the relationship between program outcome
evaluation and continuous improvement efforts within the college. In discussing continuous
improvement efforts, faculty members demonstrated conceptual knowledge of how the process
of program outcome evaluation relates to the operations of the college. Faculty members
described a positive correlation between program outcome evaluation and improved college
operations.
However, one faculty member shared that they felt that some newer faculty members
might experience challenges related to program outcome evaluation, and in seeing the
connection between evaluation and the operations of the college. Specifically, this interviewee
felt that inexperienced faculty members may cause challenges for the operations of the college:
“…if you have newer faculty that maybe aren't fully socialized or assimilated or whatever we
want to call it and they're not at the level yet. There's always that risk, that it could affect
program outcomes.” Although experienced faculty members may have the required declarative
conceptual knowledge for this aspect of program outcome evaluation, it appears as if
inexperienced faculty members may not.
Summary. The assumed gap of faculty members’ conceptual knowledge regarding the
relationship between program outcome evaluation and the operations of the college was not
validated. Faculty members accurately described the relationship between program outcome
evaluation and operations. However, one faculty member felt it was important to ensure that
new faculty understand the relationship between program outcome evaluation and operations of
the college.
Procedural Knowledge
Procedural knowledge was assessed using interview questions. Faculty members were
asked to discuss how to analyze program outcome data, as well as describe the steps of program
outcome evaluation. Responses were mixed: some faculty members (67%) confidently described the steps of program outcome analysis and evaluation, while others (33%) either could not articulate the process or stated they were unsure.
Faculty members need to know how to analyze and discuss program outcome data.
Survey results. Surveys were not used to assess this gap.
Interview findings. Four of six (67%) faculty members accurately described the
process of analyzing program outcome data. Faculty members who accurately described the
process of evaluating program outcome data explained steps including analyzing the results in
relation to goals, analyzing outcomes within the context of how the content was delivered
(within a course), analyzing outcomes based on any extenuating circumstances, seeing how
results fit within the scope of the program, incorporating student input, and contributing to
continuous improvement efforts.
Two of six (33%) faculty members did not accurately describe the process of
analyzing program outcome data. Those that were not able to accurately describe the process
stated they were unsure of what they would do or described a lack of participation in the process.
One faculty member stated: “I’m not sure since I haven’t been part of the process in a long
time.”
Summary. The assumed gap of faculty members’ ability to analyze program outcome
data was validated. Although some faculty members (67%) accurately described the steps of
data analysis, others (33%) were not able to. Interventions will be needed to help faculty
members obtain and demonstrate this aspect of procedural knowledge.
Faculty members need to be able to follow the steps of the program outcome
evaluation process.
Survey results. Surveys were not used to validate this gap.
Interview findings. Results regarding faculty members’ ability to follow the steps of a
program outcome evaluation process were similar to those regarding their ability to describe how
program outcomes are to be analyzed. When asked to discuss the steps of evaluating program
outcomes interviewees gave dissimilar responses. Four of six faculty members (67%)
appropriately discussed the required steps. One faculty member provided an exemplary
overview of the process of program outcome evaluation:
“I guess the first thing would be, depending on how that annual report looks is to see if
the plans and goals and objectives were met. So, you have to go and you know go
backwards to see what all the things that were laid out in the beginning and whatever
strategic planning event that were met or not. And then if they weren't then you have to
look at external and internal variables to see what the preventive barriers that prevented
you from being able to achieve those goals. If there is, you know if there was a hurricane
and you don't have power. Or whatever, if there were changes in the profession. So you
have to make sure that there isn't sort of an anomaly on the screen. And then if there
were internal variables that interfered that you know really should not have happened
then you know plans need to be put in place to prevent that from happening again and to
make sure that those variables don't occur again.”
However, the other two of six (33%) faculty members failed to appropriately discuss pertinent steps within the process. Additionally, some faculty members admitted that they were unaware of the appropriate steps. The lack of knowledge in this regard may prompt the need
for additional education for faculty members.
Summary. The assumed gap of faculty members’ ability to follow the steps of a program
outcome evaluation plan was validated. Although some faculty members (67%) accurately
described the steps of the evaluation process, others (33%) were not able to. Interventions will
be needed to help faculty members obtain and demonstrate this aspect of procedural knowledge.
Metacognitive Knowledge
Interviews were also used to assess metacognitive knowledge. Faculty members were
asked to reflect on their ability to evaluate program outcome data. Similar to the findings for
procedural knowledge, responses were split between faculty members who could clearly
articulate their abilities, and faculty members who were unsure or lacking confidence.
Faculty members need to be able to reflect on their own ability to evaluate program
outcome data.
Survey results. Surveys were not used to assess this gap.
Interview findings. Faculty members who were able to effectively reflect on their
abilities (33%) expressed the importance of reflecting on how results fit within a given context,
and on the need for individuals to reflect on the process of program outcome evaluation. As one
respondent noted:
“So sometimes when you say these are our goals and then we didn't meet those goals,
maybe those weren't good goals. And so sometimes people are fallible, and are
influenced sometimes in the planning process by incorrect forces or whatever and or the
person wasn't the right person to lead whatever the situation.”
This response exemplifies the ability to be reflective throughout the program outcome
evaluation process.
Alternatively, the majority of faculty members (67%) were not able to reflect on their abilities. These faculty members either stated that they did not know or demonstrated only a tenuous understanding of their abilities. The responses suggest that an intervention may be needed in order to improve metacognitive knowledge related to program outcome evaluation.
Summary. Although some faculty members (33%) demonstrated an exemplary ability to reflect on their knowledge of program outcome evaluation, others (67%) did not. The assumed
gap of faculty members’ capacity to reflect on their abilities to evaluate program outcome data
was validated. Faculty members’ capacity to reflect on their abilities will need to improve.
Results and Findings for Motivation Causes
Faculty members’ motivation was assessed by surveys and interviews. Results of
surveys and interviews are presented for each assumed cause within the categories of value, self-
efficacy, and mood. Results of surveys and interviews are used to validate whether or not
assumed gaps are present within the YCN.
Value
Faculty members’ value of program outcome evaluation was assessed by means of surveys and
interviews. Through surveys, faculty members were asked to what degree they valued program
outcome evaluation, and to rank program outcome evaluation within the context of other
academic practices. Interviewed faculty members were asked to discuss how they value program
outcome evaluation.
Faculty members need to value program outcome evaluation.
Survey results. Through survey data, faculty members expressed a high value for
program outcome evaluations. Faculty members were asked to rank various aspects of
educational operations in order of importance for ensuring a successful program.
Among six options, “Systematic analysis of program outcome data” received the second highest
ranking among respondents. Table 18 displays the available choices, as well as results of this
survey item.
Table 18
Value Validation: Rank the Following Processes in Order of Importance for Ensuring a
Successful Program (1 is Highest).
Response Rank
Faculty to student advising 1
Systematic analysis of program outcome data 2
Direct assignment-specific feedback to students 3
Faculty scholarship efforts 4
Student to student advising 5
Clinical evaluations 6
Note. N=12
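Table 18 reports only the final ordering; individual respondents' rankings are not shown. Purely as an illustration of how such an ordering can be derived, the minimal Python sketch below sorts items by their mean rank across respondents. The ballot data is hypothetical, and mean-rank aggregation is an assumption, not necessarily the method used in this study.

    # Illustrative only: aggregate hypothetical rank ballots by mean rank.
    # Each inner list is one respondent's ranking of the item (1 = most important).
    from statistics import mean

    ballots = {
        "Faculty to student advising":                 [1, 2, 1, 1],
        "Systematic analysis of program outcome data": [2, 1, 2, 3],
        "Direct assignment-specific feedback":         [3, 3, 4, 2],
        "Faculty scholarship efforts":                 [4, 5, 3, 4],
        "Student to student advising":                 [5, 4, 6, 5],
        "Clinical evaluations":                        [6, 6, 5, 6],
    }

    # Items with the lowest mean rank are the most important overall.
    overall = sorted(ballots, key=lambda item: mean(ballots[item]))
    for position, item in enumerate(overall, start=1):
        print(position, item, round(mean(ballots[item]), 2))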
Interview findings. Similarly, interviews produced responses from faculty members that
represented a value for program outcome evaluation. All faculty members expressed that they
valued the practice of program outcome evaluation. One faculty member expressed the value of
using program outcome evaluation as a way to ensure that program offerings were current and
relevant, and that the program would remain fiscally sustainable: “Well if you want to stay in
business, to me it just makes sense that you should look at what you're doing periodically not just
kinda keep doing it because you've always done that.” Another faculty member discussed the value of using program outcome evaluation for continuous improvement efforts:
“Our goal should be to get better at whatever we’re seeking to accomplish. So it’s a good way in
continuing to improve.”
However, in interviews one faculty member expressed that without accreditation
requirements, program outcome evaluation may not be valued as highly as it currently is. When
discussing the possibility of operations without accreditation oversight, one faculty member
stated: “I think a lot of assessment processes would get back-burnered. And not in a negligent
way but in a ‘I just haven't gotten to it yet’ way.” These findings suggest that the value faculty members place on program outcome evaluation may depend in part on accreditation requirements.
Summary. The assumed gap of faculty members’ value of program outcome evaluation
was not validated. Overall, faculty members expressed a high value for program outcome
evaluation by means of surveys and interviews. No intervention will be needed to improve
faculty members’ value of program outcome evaluation.
Self-Efficacy
Self-efficacy was assessed using survey items. Faculty members were asked about their
confidence in their ability to analyze program outcome data. Additionally, faculty members
were asked about their ability to use program outcome data as a means of informing decision-making.
Faculty members need to have confidence in their ability to analyze program
outcome data to inform decision-making.
Survey results. Survey results presented disparate levels of self-efficacy among faculty
members. Over 40% of respondents stated that they did not feel confident in their ability to
analyze program outcome data (Table 19), or in their ability to use program outcome data to
inform decision-making (Table 20). However, approximately 50% of respondents expressed strong confidence in their abilities to perform these tasks.
Table 19

Self-Efficacy Validation: I am Confident in my Ability to Analyze Program Outcome Data.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   1         8.3%
3                                   1         8.3%
4 Neither disagree nor agree        3        25.0%
5                                   1         8.3%
6                                   4        33.3%
7 Strongly agree                    2        16.7%

Note. N=12; Mean=5; StDev=1.6
Table 20

Self-Efficacy Validation: I am confident in my ability to use program outcome data to inform decision-making.

Response                        Count   Percentage
1 Strongly disagree                 1         9.1%
2                                   1         9.1%
3                                   1         9.1%
4 Neither disagree nor agree        1         9.1%
5                                   1         9.1%
6                                   4        36.4%
7 Strongly agree                    2        18.2%

Note. N=11; Mean=4.8; StDev=2
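The Mean and StDev values reported beneath the Likert tables in this chapter can be reproduced directly from the response counts. The minimal Python sketch below does so for Table 19; it assumes, consistent with the reported figures, that StDev is the sample (n - 1) standard deviation.

    # Reproduce the summary statistics beneath Table 19 from its counts.
    from statistics import mean, stdev

    counts = {1: 0, 2: 1, 3: 1, 4: 3, 5: 1, 6: 4, 7: 2}  # Table 19, N = 12

    # Expand the frequency table into one entry per respondent.
    responses = [value for value, n in counts.items() for _ in range(n)]

    print(len(responses))              # 12
    print(round(mean(responses), 1))   # 5.0
    print(round(stdev(responses), 1))  # 1.6 (sample standard deviation)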
Interview findings. Interviews were not used to validate self-efficacy gaps.
Summary. The assumed gap of faculty members’ self-efficacy regarding their ability to
utilize program outcome evaluation to inform change was validated. Although the majority of
faculty members expressed confidence in their abilities, a substantial portion (approximately 40%) did not. These findings align with findings related to faculty members’ procedural knowledge.
Mood
Finally, faculty members’ moods were assessed as they relate to motivation and program
outcome evaluation. Mood was assessed using interview questions. Faculty members responded
favorably to the impact of program outcome evaluation and operations of the college.
Faculty members need to feel positive about the potential impact program outcome
assessment can have on the operations of the college.
Survey results. Surveys were not used to assess this gap.
Interview findings. All faculty members felt that program outcome evaluation can help
foster informed decision-making and contribute towards continuous improvement. While the
overall attitude regarding program outcome evaluation was positive, faculty members did discuss
some concerns and challenges related to program outcome evaluation. Specifically, low survey
response rates were one area of concern for faculty members. As one faculty member discussed:
“If you only have a couple (of responses) then it's like it's kind of hard to generalize.” Some
identified indicators of program outcomes come from alumni surveys, which have response rates
around 30%. Low response rates were a concern for half of the faculty members interviewed; specifically, they worried that some data may not be representative of the student experience as a whole.
Summary. Faculty members expressed value and a positive mood towards program
outcome evaluation. The assumed gap of faculty members’ moods as they relate to program
outcome evaluation was not validated. However, faculty members expressed concern for the
potential of using misrepresentative information in program outcome evaluation. Although this
concern is not directly aligned with the assumed gap of faculty members’ moods as they relate to
program outcome evaluation, it should be considered as part of improved organizational
operations.
Results and Findings for Organization Causes
Faculty members’ perceptions of program outcome evaluation as it applies to the
organization of the YCN were assessed by surveys and interviews. Results of surveys and
interviews are presented for each assumed cause within the categories of resources; policies, processes, and procedures; cultural models; and cultural settings. Results of surveys and
interviews are used to validate whether or not assumed causes are present within the YCN.
Resources
Faculty members responded to surveys and interviews in order to assess their perception
of program outcome evaluation resources. Surveys and interviews were used to assess whether
or not faculty members had access to the resources they felt they needed to enact program
outcome evaluation. Faculty members were asked to respond to prompts regarding access to
aggregated data as well as disaggregated data necessary to inform decision-making.
Faculty members need the resources to be able to review aggregated program
outcome data.
Survey results. The majority of faculty members surveyed (70%) expressed that they had
access to aggregated program outcome data needed to make decisions (Table 21). Additionally,
60% of survey respondents responded towards the “Strongly agree” end of the Likert scale.
Survey findings suggest that faculty members’ access to aggregated program outcome data is not
an issue.
Table 21

Resources Validation: I have access to the aggregated program outcome data I need to make informed decisions.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   1        10.0%
3                                   1        10.0%
4 Neither disagree nor agree        1        10.0%
5                                   1        10.0%
6                                   3        30.0%
7 Strongly agree                    3        30.0%

Note. N=10; Mean=5.3; StDev=1.8
Interview findings. Interview responses provided additional insight into faculty
members’ opinions on having access to the information they need. Overall, when asked to
discuss what information they needed to make informed decisions, faculty members discussed
information that they indeed had access to. However, half of the interviewees expressed concern about the quality of the information they had access to.
Specifically, faculty members identified response rates, and quality of responses as areas
that could negatively affect the resource of program outcome data. In discussing the quality of
program outcome data that is available, one faculty member stated: “For me what's important is
how many responses you got.” The same faculty member later continued: “You can't just say
‘well three people said this so we have to do it.’”
Summary. The assumed gap of faculty members’ having access to the necessary
aggregated data to inform decision-making was validated. Although faculty members responded
favorably to survey items related to their access to aggregated data, interview responses
presented faculty members’ concern for the quality of data. Interventions will be necessary to
ensure faculty members have access to quality, representative data.
Faculty members need the resources to be able to review disaggregated program
outcome data.
Survey results. Faculty members were asked by survey whether they felt they had access
to the necessary disaggregated data needed to inform decision-making. Sixty percent of faculty
members agreed that they had access to the disaggregated data they needed to make informed
decisions (Table 22). Additionally, those that agreed responded towards the “Strongly agree”
end of the Likert scale.
Table 22

Resources Validation: I have access to the disaggregated program outcome data I need to make informed decisions.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   1        10.0%
3                                   1        10.0%
4 Neither disagree nor agree        2        20.0%
5                                   0         0.0%
6                                   3        30.0%
7 Strongly agree                    3        30.0%

Note. N=10; Mean=5.2; StDev=1.8
Interview findings. Interviews provided additional insight into how faculty members felt
about the access they have to disaggregated program outcome data. Similar to when asked about
access to aggregated data, half of the faculty members expressed the importance of
representative data with high response rates. Faculty members expressed the importance of
knowing demographic information regarding individuals represented in program outcome data:
“You need to have information about like not specifically who, but about what categories
respond. I think all of that, to me the details make a difference in what you're getting.
Even if you don't know who they are - where are they from? What's their influence?”
Summary. The assumed gap of faculty members’ having access to the necessary
disaggregated data to inform decision-making was validated. Similar to faculty members’
responses to prompts related to aggregated data, faculty members responded favorably to survey
items related to their access to disaggregated data. However, interview responses presented
faculty members’ concern for the quality of data. Interventions will be necessary to ensure
faculty members have access to disaggregated quality, representative data.
Policies, Processes, & Procedures
The organization has processes that allow faculty members to affect change at the
college. Discrete survey items were used to collect information regarding faculty members’
perceptions of policies, processes, and procedures of the college. Additionally, faculty members
who were interviewed were asked to discuss how they felt the policies, processes, and
procedures of the college supported or did not support their ability to influence change at the
college level. Responses to each of the three survey items, as well as interview responses, revealed conflicting viewpoints among faculty members.
Survey results. Regarding policies, over 35% of faculty members strongly disagreed that the college’s policies support their ability to affect change (Table 23). However, approximately 45% of survey respondents strongly agreed that the college’s policies support their ability to affect change. A large standard deviation (2.3) further illustrates the disparate perceptions among faculty members.
Table 23

Policies, Processes, & Procedures Validation: The policies of the college support my ability to affect change at the college.

Response                        Count   Percentage
1 Strongly disagree                 2        18.2%
2                                   2        18.2%
3                                   0         0.0%
4 Neither disagree nor agree        0         0.0%
5                                   2        18.2%
6                                   4        36.4%
7 Strongly agree                    1         9.1%

Note. N=11; Mean=4.3; StDev=2.3
Regarding processes, nearly 60% of faculty members do not agree that the processes of the college support their ability to affect change at the college (Table 24). This exemplifies the difference that exists between the college’s idealized version of itself (its policies) and what actually exists (its processes). A larger percentage of individuals disagree that the processes of the college are supportive (~60% do not agree) compared to those that disagree that policies are supportive (~35% do not agree).
Table 24

Policies, Processes, & Procedures Validation: The processes of the college support my ability to affect change at the college.

Response                        Count   Percentage
1 Strongly disagree                 2        18.2%
2                                   1         9.1%
3                                   1         9.1%
4 Neither disagree nor agree        2        18.2%
5                                   3        27.3%
6                                   1         9.1%
7 Strongly agree                    1         9.1%

Note. N=11; Mean=3.9; StDev=2
Nearly 40% of faculty members do not feel that the procedures of the college support
their ability to affect change (Table 25). Results of this item are similar to those of the “policies”
item. Faculty members may feel that there is alignment between policies and procedures, or may not discern a difference.
Table 25

Policies, Processes, & Procedures Validation: The procedures of the college support my ability to affect change at the college.

Response                        Count   Percentage
1 Strongly disagree                 1         9.1%
2                                   1         9.1%
3                                   2        18.2%
4 Neither disagree nor agree        0         0.0%
5                                   4        36.4%
6                                   2        18.2%
7 Strongly agree                    1         9.1%

Note. N=11; Mean=4.4; StDev=1.9
Interview findings. Interview findings differed from survey results, as all interview responses uniformly described policies, processes, and procedures as being supportive of faculty
members. As one faculty member noted:
“I don't think that they (policies, processes, and procedures) affect it negatively. The
reason is because it's the culture that we have here seems to be open to change and
revision and modification you know. So I don't think that it negatively impacts it. I know
that that they're always open to looking at the policies and changing them.”
Similarly, another faculty member shared: “I never felt that there are any policies or procedures that hinder what we do. I think we have an ability to change and we have.”
All interviewees expressed that processes support faculty members’ ability to have a
voice that can affect change. One faculty member shared: “I think I can have a voice and I can
use creativity or anything that I think might make things better.” Similarly, another faculty
member shared “I would have to say yes I feel like my voice is being heard and that I could
affect positive change.”
Summary. The assumed gap of supportive processes was validated. Although
interviewees discussed supportive organizational processes, survey results presented conflicting
viewpoints among faculty members. The large portion of faculty members that do not agree that
processes are supportive validates this assumed gap.
Programs are held accountable for collecting pertinent data related to program
outcomes.
Survey results. Survey data shows that approximately 45% of faculty members do not
know what pertinent data needs to be collected and analyzed on an annual basis (Table 26).
Results related to this survey item ranged throughout the middle of the Likert scale, as no
respondents selected either the lowest end of the scale (strongly disagree) or the highest end of
the scale (strongly agree). The average response (4.5) skewed slightly towards the “agree” side
of the scale.
Table 26

Policies, Processes, & Procedures Validation: I know what pertinent data needs to be collected and analyzed on an annual basis.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   0         0.0%
3                                   4        36.4%
4 Neither disagree nor agree        1         9.1%
5                                   2        18.2%
6                                   4        36.4%
7 Strongly agree                    0         0.0%

Note. N=11; Mean=4.5; StDev=1.4
Interview findings. Interview responses elucidated that two of six interviewees (33%) were unsure about what information needed to be collected. One faculty
member shared:
“I think a lot of that stuff in the past I’ve left up to my directors and my dean. But then
we talk about something, we’ll say “oh is that OK with the BRN?” so I’ve learned and
looked into the regulatory statutes more, and I think we could all benefit from learning
more about that accreditation process. But in the past in our particular program
because of our boss was so dynamic, she did a lot of the stuff herself, and I think that
limited us from growing in that aspect.”
Another interviewee expressed uncertainty regarding the necessary data that needed to be
collected.
“I think we're accountable to the BRN mainly and I'm not sure who else after that. It's
even more than that. There is the in-between stuff. I don't know the between stuff and
I’m not really familiar with the way that works in this college.”
Summary. The assumed gap of programs being accountable for collecting pertinent data
was validated. Both survey results and interview findings presented uncertain faculty responses.
An intervention to help increase certainty and understanding of expectations will help improve
performance in this area.
Cultural Settings
Faculty members’ perception of the cultural setting was assessed using surveys and
interviews. Faculty members were asked if they felt that reports generated by and disseminated
by the college were representative of the college’s mission. Additionally, interviewed faculty
members were asked to describe ideal ways the college could communicate the achievement of its goals.
The organization provides physical examples of its commitment to program learning
outcome assessment. Survey results illustrate conflicting views among faculty members. More
than 25% of faculty members do not feel that reports generated by the college exemplify its
commitment to achieving its mission (Table 27). Similarly, more than 25% of faculty members
do not feel that reports disseminated by the college are representative of the college achieving its
mission (Table 28).
Table 27

Cultural Setting Validation: Reports Generated by the College Exemplify its Commitment to Achieving its Mission.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   3        27.3%
3                                   0         0.0%
4 Neither disagree nor agree        0         0.0%
5                                   2        18.2%
6                                   3        27.3%
7 Strongly agree                    3        27.3%

Note. N=11; Mean=5; StDev=2.1
Table 28

Cultural Setting Validation: Reports Disseminated by the College Exemplify its Commitment to Achieving its Mission.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   2        18.2%
3                                   1         9.1%
4 Neither disagree nor agree        0         0.0%
5                                   1         9.1%
6                                   4        36.4%
7 Strongly agree                    3        27.3%

Note. N=11; Mean=5.2; StDev=1.9
Interview findings. Interviewed faculty members provided varying responses regarding
the ideal types of information and means of communication for reporting achievement. When
asked to describe the ideal means of communicating achievement to outside stakeholders, faculty
members identified a variety of methods, including e-mail communication, social media, physical publications, web-based advertisements, one-on-one discussions, and open public forums.
Summary. The assumed gap related to physical examples communicating the college’s achievement of its mission was validated. Over 25% of survey respondents did not feel that the college effectively communicated achievement of its mission. Additionally, interview responses
displayed a lack of agreement among faculty regarding what constitutes effective communication of the college’s achievement of its mission.
Cultural Models
Cultural model was assessed using survey items and interviews. The college’s
accountability practices were an important aspect of the cultural model that was examined using
surveys and interviews. Additionally, faculty members responded to survey prompts regarding communication of the goals of program outcome evaluation.
The organization supports a culture of commitment to continuous improvement.
Survey results. Three survey items were used to assess faculty members’ perception of
the accountability culture of the college. Faculty members were asked to categorize whether the
college utilized a culture of compliance or culture of continued improvement (Table 29), if they
agreed that the college supports a culture of compliance (Table 30), and if they agreed that the
college supports a culture of continuous improvement (Table 31).
Table 29

Cultural Models Validation: How Would you Primarily Categorize the College's Culture Regarding Accountability Practices?

Response                               Count   Percentage
A culture of compliance                    4        33.3%
A culture of continuous improvement        8        66.7%

Note. N=12
Table 30

Cultural Model: The College Supports a Culture of Compliance.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   2        18.2%
3                                   0         0.0%
4 Neither disagree nor agree        2        18.2%
5                                   3        27.3%
6                                   2        18.2%
7 Strongly agree                    2        18.2%

Note. N=11; Mean=4.8; StDev=1.7
Table 31

Cultural Model Validation: The College Supports a Culture of Continuous Improvement.

Response                        Count   Percentage
1 Strongly disagree                 1         9.1%
2                                   2        18.2%
3                                   1         9.1%
4 Neither disagree nor agree        0         0.0%
5                                   3        27.3%
6                                   2        18.2%
7 Strongly agree                    2        18.2%

Note. N=11; Mean=4.5; StDev=2.1
A third of survey respondents primarily categorize the college’s culture as a culture of
compliance. Additionally, faculty members responded similarly when asked to rank their
agreement that the college supports a culture of compliance to when they were asked if the
college supports a culture of continuous improvement. These results align with and support
interview findings.
Interview findings. Interviews elucidated similar results regarding faculty members’
perceptions of the college’s cultural model. Interestingly, some faculty members (33%) who
described the college’s practices as being focused on compliance spoke favorably about the idea
of compliance. These individuals focused on explaining the value of compliance in terms of
quality control: the standards set by nursing accrediting bodies were developed in a manner to ensure a quality educational experience.
As one faculty member noted: “I'd worry the quality would go down without
accountability (without compliance).” Another faculty member supported this sentiment:
“I think having that bar is always good. Because ya know when you’re doing the limbo
it's good to know where the bar is. You have to know what the standard is and if we meet
it or exceed it and I think that’s important. I think the college has very high standards,
but having the external forces does still make us approach things in a certain way.”
Summary. The assumed cause of the organization needing to support a culture of
continuous improvement was validated. Both survey and interview findings included large
portions of individuals who categorized the college’s culture as a culture of compliance.
Interventions will be necessary to help strengthen the college’s culture of continuous
improvement.
The goals of the organization related to program outcome evaluation are
communicated effectively. Communication regarding program outcome evaluation was
assessed using surveys and interviews. Survey items assessed faculty members’ beliefs
regarding how goals related to program outcome evaluation are communicated within the
college, as well as how they are communicated to external stakeholders. Interview items
assessed faculty members’ opinions regarding what information the college should use to
communicate achievement of its goals.
Survey results. Approximately 45% of faculty members do not feel that the goals of
program outcome evaluation are communicated effectively (Table 32). These findings align
with the purpose of the study: that there is a need to establish stronger practices related to
program outcome evaluation. Similarly, 40% of faculty members do not agree that key
stakeholders are informed regarding the college’s progress in meeting its mission (Table 33).
Table 32

Cultural Model Validation: The Goals of the Organization Related to Program Outcome Evaluation are Communicated Effectively.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   2        18.2%
3                                   3        27.3%
4 Neither disagree nor agree        0         0.0%
5                                   1         9.1%
6                                   4        36.4%
7 Strongly agree                    1         9.1%

Note. N=11; Mean=4.5; StDev=1.9
Table 33

Cultural Model Validation: Key Stakeholders are Informed Regarding the College’s Progress in Meeting its Mission.

Response                        Count   Percentage
1 Strongly disagree                 0         0.0%
2                                   0         0.0%
3                                   2        20.0%
4 Neither disagree nor agree        2        20.0%
5                                   3        30.0%
6                                   1        10.0%
7 Strongly agree                    2        20.0%

Note. N=10; Mean=4.9; StDev=1.5
Interview findings. Interviewees provided insight regarding what is lacking in current
communication practices, and ideal ways of engaging with key stakeholders. The most
commonly identified indicators that could be shared with external parties were related to alumni
success (67% of faculty members described this method). As one faculty member noted: “I think
showing how successful our alumni are is a great way to do that (communicate achievement of
goals).” Similarly, another faculty member shared:
“And what I know from experience that people out there use they look at our pass rates,
they look at our graduates’ ability to get jobs and the kind of jobs they’re able to get and
the kinds of feedback that we get from the employers - saying that these people are
independent thinkers that they know - they've hired our graduates as their employees.
And they're completely different animals because they can think on their feet and they
don't have to you know be given specific direction.”
Summary. Surveys and interviews validated the gap related to effective communication of the goals of program outcome evaluation. A large portion (greater than 40%) of faculty members felt that goals and outcomes were not being communicated effectively. Interviews provided insight regarding how goals could be communicated more effectively in the future.
Summary of Validated Influences
Knowledge
Gaps in four of the six assumed knowledge influences were validated through surveys
and interviews. Table 34 presents an overview of the results of the assessment of each assumed
knowledge influence. Recommendations to improve each validated cause will be discussed in
Chapter Five.
Table 34

Summary of Assumed Knowledge Gaps Validated

Assumed Knowledge Influences                                                     Gap Validated?

Declarative Factual
Faculty members need to know the accreditation requirements regarding
program outcome evaluation.                                                      Yes
Faculty members need to know the purpose of program outcome evaluation.          No

Declarative Conceptual
Faculty members need to know the relationship between program outcome
evaluation and the operations of the college.                                    No

Procedural
Faculty members need to know how to analyze and discuss program outcome data.    Yes
Faculty members need to be able to follow the steps of the program outcome
evaluation process.                                                              Yes

Metacognitive
Faculty members need to be able to reflect on their own ability to evaluate
program outcomes.                                                                Yes
Motivation
A gap in one of the three assumed motivation influences was validated through surveys and interviews. Table 35 presents an overview of the results of the assessment of each assumed motivation influence. Recommendations to improve each validated cause will be discussed in
Chapter Five.
Table 35

Summary of Assumed Motivation Causes Validation

Assumed Motivation Influences                                                    Gap Validated?

Value
Faculty members need to value program outcome evaluation.                        No

Self-Efficacy
Faculty members need to have confidence in their ability to analyze program
outcomes to inform decision-making.                                              Yes

Mood
Faculty members need to feel positive about the potential impact program
outcome evaluation can have on the operations of the college.                    No
Organization
Gaps in all seven assumed organization influences were validated. Table 36 presents an
overview of the results of the assessment of each assumed organization influence.
Recommendations to improve each validated cause will be discussed in Chapter Five.
Table 36

Summary of Assumed Organization Causes Validation

Assumed Organization Influences                                                  Gap Validated?

Resources
Faculty members need the resources to be able to review aggregated program
outcome data.                                                                    Yes
Faculty members need the resources to be able to review disaggregated
program outcome data.                                                            Yes

Policies, Processes, & Procedures
The organization has processes that allow faculty members to affect change
at the college.                                                                  Yes
Programs are held accountable for collecting pertinent data related to
program outcomes.                                                                Yes

Culture
Cultural Setting
The organization provides physical examples of its commitment to program
outcome evaluation.                                                              Yes
Cultural Model
The organization supports a culture of commitment to continuous improvement.     Yes
The goals of the organization related to program outcome evaluation are
communicated effectively.                                                        Yes
Chapter Five examines proposed solutions for each validated cause. Each proposed
solution utilizes evidence-based recommendations identified through relevant academic
literature. Proposed solutions will be shared with faculty members and administration of the
Yvonne College of Nursing.
CHAPTER FIVE: RECOMMENDATIONS AND EVALUATION
Purpose of the Project and Questions
The purpose of this project was to conduct a needs analysis in the areas of knowledge and
skill, motivation, and organizational resources necessary for YCN faculty to achieve their
performance goal of establishing program outcome evaluation processes and procedures. In
order to better understand the needs, one of the primary questions guiding this project was: What
are the knowledge, motivation, and organizational needs necessary for the YCN faculty to
achieve their goal of creating and implementing an evaluation program that includes measures
and methods for evaluating academic program outcome data by December 2018 that aligns with
standards set by the accrediting bodies and the program’s leadership? Additionally,
recommendation and evaluation procedures were guided by the question: What are the
recommended knowledge, motivation, and organizational solutions to those needs?
Recommendations to Address Knowledge, Motivation, and Organization Influences
Knowledge, motivation, and organizational influences related to program outcome
evaluation processes and procedures were examined to determine YCN’s strengths and areas for
improvement. This chapter provides details regarding recommendations for each validated
knowledge, motivation, and organizational gap that has been identified. Three sections are used
to organize knowledge, motivation, and organizational gaps respectively. Each section begins
with a table summarizing the knowledge, motivation, or organization cause; its priority; the evidence-based principles supporting the recommendation; and a brief statement outlining the context-specific recommended solution.
Knowledge Recommendations
Introduction. Data collection and analysis elucidated one gap in faculty members’
factual knowledge, two gaps in faculty members’ procedural knowledge, and one gap in faculty
members’ metacognitive knowledge. No gaps were identified regarding faculty members’
conceptual knowledge. A summary of knowledge influences, gaps, and recommendations is
presented in Table 37. Detailed descriptions of each recommendation follow Table 37.
Table 37

Summary of Knowledge Influences and Recommendations

Factual
Assumed Influence: Faculty members need to know the accreditation requirements regarding program outcome evaluation.
Priority: High
Principle and Citation: Explicitly linking new material to prior knowledge improves retention of new knowledge (Ambrose et al., 2010). Showing where analogies break down improves understanding of new knowledge (Ambrose et al., 2010).
Context-Specific Recommendation: Present information regarding assessment and evaluation, and link discussion to specific processes of the organization.

Conceptual
No “conceptual” gaps were validated.

Procedural
Assumed Influence: Faculty members need to know how to analyze and discuss program outcome data.
Priority: High
Principle and Citation: Frequent practice for short periods of time helps cognitive integration of learning (Stanovich, 2003).
Context-Specific Recommendation: Faculty members will work through short activities to practice their program outcome data analysis skills.

Assumed Influence: Faculty members need to be able to follow the steps of the program outcome evaluation process.
Priority: High
Principle and Citation: Frequent practice for short periods of time helps cognitive integration of learning (Stanovich, 2003).
Context-Specific Recommendation: Faculty members will work through short activities to practice their program outcome data analysis skills.

Metacognitive
Assumed Influence: Faculty members need to be able to reflect on their own ability to evaluate program outcome data.
Priority: High
Principle and Citation: The use of metacognitive strategies facilitates learning (Baker, 2006).
Context-Specific Recommendation: Faculty members will have opportunities to debrief regarding their experiences.
Declarative knowledge solutions. A gap in faculty members’ declarative knowledge
related to program outcome evaluation was identified and validated. Faculty members must
know accreditation requirements that the college is accountable to. Faculty members
demonstrated knowledge of some standards and terminology, but performed poorly when
defining education terminology related to evaluation and assessment. Ambrose et al. (2010)
recommends linking new material to prior knowledge, and exploring analogies as methods to
improve declarative knowledge.
Emenike et al. (2013) found that faculty members in science-related disciplines could
lack knowledge of assessment and evaluation terminologies. In examining methods to improve
faculty members’ knowledge of assessment and evaluation terminologies, Emenike et al. (2013)
found that presenting faculty members with assessment and evaluation terminology and potential
analogous terms from scientific disciplines helped faculty members better demonstrate
knowledge of educational terminology. YCN can enact Ambrose et al.’s (2010) principles of
linking new knowledge to prior knowledge, as well as practices espoused by Emenike et al.
(2013) of comparing assessment and evaluation terminologies to scientific terminologies to close
the gap in faculty members’ declarative knowledge.
Procedural knowledge solutions. Two procedural knowledge gaps were identified and
validated. Faculty members need to be able to analyze and discuss program outcome data, and
faculty members need to be able to follow the steps of a program outcome evaluation process.
Stanovich (2003) suggests implementing frequent practice for short periods of time to help
establish or improve procedural knowledge.
Calegari et al. (2015) utilized Kotter’s (1996) model of organizational change to improve
faculty members’ assessment and evaluation practices within a single college. Specific to
improving procedural knowledge, Calegari et al. (2015) described implementing multiple
opportunities for faculty members to practice their skills with increasingly difficult activities and
experiments. YCN can implement Stanovich’s (2003) recommendation of frequent practice
experiences, as well as the practices espoused by Calegari et al. (2015) to provide continuous
support for faculty development. Frequent practice will help individuals self-assess their
progress and provide constant reminders of the college’s focus on improving its shared effort to
increase program outcome evaluation.
Metacognitive knowledge solutions. One metacognitive gap was identified and
validated. Faculty members need to be able to reflect on their ability to evaluate program
outcomes. Baker (2006) recommends individuals engage in metacognitive strategies to enhance
learning, such as debriefing activities focused on reflection.
Calegari et al. (2015) discuss the use of structured reflective discussions as a part of their
efforts to improve faculty members’ knowledge and skills related to assessment and
evaluation. Providing a space for faculty members to discuss their experiences makes it more likely that they will reflect on their abilities. Discussions can help identify areas of faculty strengths as
well as areas that may need additional support. Additionally, Dowd et al. (2012) encourage
organizations that are establishing or improving the use of outcome-based decision making to
engage in discussions about the analysis process. Dowd et al. (2012) explain that discussing
outcomes is important because it helps practitioners determine whether or not conclusions based
on outcomes are appropriate.
Efforts to address metacognitive knowledge gaps can work in concert with efforts to
address procedural knowledge gaps. Faculty members will participate in frequent practice of program outcome analysis activities. Subsequently, they will engage in discussions regarding their experiences in the process.
Motivation Recommendations
Introduction. Data collection and analysis elucidated one gap in faculty members’ self-
efficacy. No gaps were identified regarding faculty members’ value or mood as they relate to
program outcome evaluation. A summary of motivation influences, gaps, and recommendations
is presented in Table 38. Detailed descriptions of each recommendation follow Table 38.
Table 38

Summary of Motivation Influences and Recommendations

Value
No “value” gaps were identified.

Self-Efficacy
Assumed Influence: Faculty members need to have confidence in their ability to analyze program outcome data to inform decision-making.
Priority: High
Principle and Citation: Modeled behaviors improve learning (Denler, Wolters, & Benzon, 2006).
Context-Specific Recommendation: Expert faculty members will discuss their journey towards mastery of analyzing program outcome data.

Mood
No “mood” gaps were identified.
Self-Efficacy solutions. Faculty members should collaborate with peers in order to
improve their self-efficacy as it relates to program outcome evaluation. Faculty members that
maintain high self-efficacy as it relates to program outcome evaluation should be identified and
used as model cases in discussions regarding establishing a program outcome evaluation process.
Models who describe their attainment of skills and knowledge help increase the self-efficacy of
their peers (Denler et al., 2006).
Calegari et al. (2015) describe establishing a “guiding team” (p. 37) to serve as models of
excellence in assessment and evaluation activities. The guiding team established by Calegari et
al. (2015) included representatives from each academic rank within the college. In soliciting
leadership from multiple ranks, Calegari et al. increased the likelihood that faculty members
outside of the guiding team saw the models as peers. The YCN can solicit nominations from
various ranks to serve as leaders in the program in order to establish representative peer models
for those who do not self-nominate for leadership opportunities.
Organization Recommendations
Introduction. Data collection and analysis elucidated organizational gaps. Validated gaps
included: two gaps related to cultural models, one gap related to cultural setting, two gaps related
to policies and procedures, and two gaps related to resources. A summary of organization
influences, gaps, and recommendations is presented in Table 39. Detailed descriptions of each
recommendation follow Table 39.
Table 39

Summary of Organization Influences and Recommendations

Cultural Models
Assumed Influence: The organization supports a culture of commitment to continuous improvement.
Priority: High
Principle and Citation: Effective change efforts utilize feedback to determine when/if improvement is happening (Clark & Estes, 2008).
Context-Specific Recommendation: Commit operational efforts towards continuous improvement practices.

Assumed Influence: The goals of the organization related to program outcome evaluation are communicated effectively.
Priority: High
Principle and Citation: Effective organizations ensure that organizational messages, rewards, policies, and procedures that govern the work of the organization are aligned with, or are supportive of, organizational goals and values (Clark & Estes, 2008).
Context-Specific Recommendation: Collaborate with faculty members, staff, and administration to establish a calendar of key dates for the program outcome evaluation process.

Cultural Settings
Assumed Influence: The organization provides physical examples of its commitment to program learning outcome evaluation.
Priority: High
Principle and Citation: Effective organizations ensure that organizational messages, rewards, policies, and procedures that govern the work of the organization are aligned with, or are supportive of, organizational goals and values (Clark & Estes, 2008).
Context-Specific Recommendation: Collaborate with faculty members to develop a communication plan for program outcome evaluation processes and outcomes.

Policies and Procedures
Assumed Influence: The organization has processes that allow faculty members to affect change at the college.
Priority: High
Principle and Citation: Effective organizations ensure that organizational messages, rewards, policies, and procedures that govern the work of the organization are aligned with, or are supportive of, organizational goals and values (Clark & Estes, 2008).
Context-Specific Recommendation: Collaborate with faculty members and administration to create standard times and places for faculty members to have input.

Assumed Influence: Programs are held accountable for collecting pertinent data related to program outcomes.
Priority: High
Principle and Citation: Effective organizations ensure that organizational messages, rewards, policies, and procedures that govern the work of the organization are aligned with, or are supportive of, organizational goals and values (Clark & Estes, 2008).
Context-Specific Recommendation: Collaborate with faculty members and administration to create a standard process and timeline for collecting and analyzing program outcome data.

Resources
Assumed Influence: Faculty members need the resources to be able to review aggregated program outcome data.
Priority: High
Principle and Citation: Effective change efforts ensure that everyone has the resources needed to do their job (Clark & Estes, 2008).
Context-Specific Recommendation: Director of Assessment to provide representative program outcome data on a scheduled basis to faculty members.

Assumed Influence: Faculty members need the resources to be able to review disaggregated program outcome data.
Priority: High
Principle and Citation: Effective change efforts ensure that everyone has the resources needed to do their job (Clark & Estes, 2008).
Context-Specific Recommendation: Director of Assessment to provide representative program outcome data on a scheduled basis to faculty members.
Cultural model solutions. Two gaps related to cultural models were identified and
validated: the organization must support a culture of commitment to continuous improvement,
and the goals of the organization related to program outcome evaluation must be communicated
effectively. In order to eliminate these organizational gaps, revised organizational processes
should be developed and implemented. Committing operational efforts towards continuous
improvement and establishing clear policies, procedures, and goals will help improve
organizational practices (Clark & Estes, 2008).
YCN can improve its operational practices by better establishing a culture of continuous
improvement. Continuous improvement efforts typify Clark and Estes’ recommendation of
utilizing feedback to ensure improvement is happening. Dowd et al. (2012) warn that some
practitioners may be hesitant to engage in continuous improvement efforts because they do not
want to report anything that could be construed as negative. Additionally, Grubb and Badway
(2005) share that faculty members may be hostile towards continuous improvement efforts if
they feel that the practice is required and driven by an external party. YCN must assure faculty members that continuous improvement efforts are not punitive and are directed by internal
entities in order to avoid the potential negative aspects associated with continuous improvement
efforts. Although faculty members may be hesitant to report information linked to improvement
needs, continuous feedback of aspects of program outcome evaluation that can be improved will
be critical for organizational success.
Practices such as committing to continuous improvement are further supported by clear
visions, goals, and measures. Clark and Estes (2008) explain that clear vision, goals, and
measures are “the key elements for successful change” (p. 117). By establishing clear
expectations, faculty members will better understand their responsibilities. Additionally,
providing a clear definition of the goals of program outcome evaluation can help faculty
members understand the value of the practice (Calegari et al., 2015). Finally, Palmer (2012)
notes that establishing clear measures of progress will help ensure that responsible parties are
held accountable.
Cultural settings solutions. The YCN must provide examples of its commitment to
program outcome evaluation. Clark and Estes (2008) suggest communications related to
organizational progress should be frequent, straightforward, and aligned with the goals of the
organization. YCN can engage with relevant stakeholders on a regular basis to keep them
updated on progress related to program outcome evaluation.
Palmer (2012) explains that clarity is a priority when communicating outcome data with
external entities. Clear communication efforts reduce the possibility of ambiguity or
misrepresentation of information. Furthermore, Jankowski and Reese Cain (2015) explain that
institutions communicating outcomes
“need to effectively communicate and share information with both external and internal
audiences in ways that provide context, answer questions, inform decisions, and respond
to audience needs. Additionally, they need to communicate about the process of
assessment, the outcomes of student learning, and the institutional response to those
outcomes” (p. 202).
Through interviews, faculty members identified relevant stakeholders including hospital
partners, current students, and potential students. Additionally, interviewees discussed a range of
options for communicating program outcome evaluation data including: email updates, video
posts on social media, and physical forums held on campus. These suggestions can serve as a
starting point for the full faculty to discuss and agree upon an ideal plan for communication.
Policies and procedures solutions. Two policy and procedure gaps were identified and
validated. A large portion of faculty felt that they did not have the ability to affect change at the
college, and that program faculty members are not held accountable for collecting pertinent
program outcome data. Although policies exist that intend to meet these governance and
accountability needs, faculty members felt that policies are not enacted effectively.
Clark and Estes (2008) as well as Rueda (2011) espouse the practice of aligning
organizational structures with organizational goals. By applying Clark and Estes’s (2008) and Rueda’s (2011) recommendations, YCN can revise policies and procedures to ensure faculty
members are active members within the structure, and require faculty members to be accountable
for collecting pertinent program outcome data. Ensuring faculty members are embedded within
governance and accountability structures should alleviate organizational gaps and increase
collaboration throughout the college.
Improving the ways in which faculty members participate in shared governance can
improve operations and the motivation of faculty members. Rickards et al. (2016) explain that
to best engage faculty members in decision-making processes, the outcomes of decision-making
processes must be meaningful to faculty members. Additionally, Calegari et al. (2015) explain
that for faculty members “participation in decision-making about - and implementation of -
organizational initiatives fosters greater understanding and acceptance of those initiatives” (p.
31). As faculty members’ participation in decision-making increases, their willingness to engage
in accountability practices should increase as well.
Resources solutions. Clark and Estes (2008) stress the importance of ensuring that
organizations provide adequate resources to achieve goals. Surveys and interviews identified
gaps regarding faculty members’ access to aggregated program outcome data and disaggregated
program outcome data. Specifically, faculty members expressed concerns regarding the low
survey response rates and ultimately the integrity of the data they receive. YCN must provide
representative data to faculty members to ensure they have adequate resources.
Survey delivery method (physical or computer-based) can impact response rates (Lalla &
Ferrari, 2011; LaRose & Hsin-yi, 2014) and the demographics of respondents (Lin et al., 2017;
Reisenwitz, 2016; Rüdig, 2008). Additionally, Fink (2016) supports the use of incentives to help
increase response rates. YCN can experiment with different survey methods and incentives to
find the approaches that elicit the most representative responses.
However, as efforts are made to improve response rates, faculty members should have
realistic expectations. Dumford and Miller (2015) explain that alumni response rates are often
low due to factors such as survey fatigue, inaccurate contact information, and the general burden
of completing surveys online. Faculty members should not use low response rates as an excuse
for not engaging with the data, and should be prepared to analyze data within the context of the
response rate. Establishing and discussing realistic expectations can take place during all-faculty
meetings as well as during faculty orientation activities.
Summary of Knowledge, Motivation and Organization Recommendations
Implementing knowledge recommendations will need to be done in a purposeful manner.
Faculty members’ will be presented with information and asked to engage in discussion in order
to improve their factual knowledge. Additionally, faculty members will engage in frequent
practice to improve their procedural knowledge, and receive feedback in order to improve their
metacognitive knowledge.
Motivation recommendations will be focused on improving faculty members’ self-
efficacy. Actualized recommendations will require faculty members to engage in mastery
experiences. Because the recommendations are interconnected, knowledge recommendations can
be implemented in a manner consistent with mastery experiences. Additionally, throughout the development
process, faculty members will be able to rely on and engage with faculty members who already
maintain high self-efficacy.
Organizational recommendations include establishing initiatives, timelines, and practices
that are agreed-upon by all college faculty members. Additionally, recommendations have been
made to help improve the quality and integrity of data collected by the college. Improving the
quality of data collected will require broad organizational improvements rather than faculty
member input.
Integrated Implementation and Evaluation Plan
Organizational Purpose, Need and Expectations
The mission of Yvonne College of Nursing (YCN) is: “…to progress nursing education
and prepare future nurses by developing relationships, encouraging collaboration in practice,
innovating, and exploring academic/service partnerships to improve health systems to optimize
health and healthcare for individuals, populations, and communities.” This mission drives the
curriculum and operations of YCN. To achieve this mission, YCN must ensure that all students
have the knowledge, skills, and attitudes necessary to meet the needs of patients and individuals
throughout health systems.
YCN currently lacks systematic processes for establishing, collecting, and evaluating
program outcome data to demonstrate program effectiveness and inform continuous
improvement initiatives. As a result, the college is not able to make meaningful data-driven
decisions regarding program outcomes. Additionally, the lack of a systematic process for
evaluating program outcome data was cited as a compliance concern by a professional
accreditor.
The desired outcome for this project is to establish policies and procedures for program
outcome evaluation practices. The stakeholders of focus for this project are faculty members.
Aligned with this desired outcome are the goals of ensuring that faculty members have the
requisite knowledge, motivation, and organizational support to engage in program outcome
evaluation practices.
Implementation and Evaluation Framework
Recommendations to improve faculty members’ knowledge, motivation, and YCN’s
organizational practices will be implemented and evaluated utilizing the New World Kirkpatrick
Model (Kirkpatrick & Kirkpatrick, 2016). The New World Kirkpatrick Model is used to monitor
the impact that training or new learning activities have on an organization in broad, mid-level,
and narrow terms. The model categorizes outcomes into four levels: Level 1 - Reaction; Level 2 -
Learning; Level 3 - Behavior; and Level 4 - Results and Leading Indicators (Kirkpatrick &
Kirkpatrick, 2016).
The New World Kirkpatrick Model differs from the traditional Kirkpatrick Model
(Kirkpatrick, 2006) most notably in its focus on specificity (Kirkpatrick & Kirkpatrick, 2016).
For example, in the traditional Kirkpatrick Model, Level 1 outcomes consisted solely of
satisfaction, whereas The New World Kirkpatrick Model introduces the need to establish
measures of engagement and relevance in Level 1. Similarly, Level 2 outcomes in The New
World Kirkpatrick Model build on previous iterations of the Kirkpatrick Model by adding
dimensions of motivation and commitment. The addition of evaluating measures of motivation
and commitment aligns with Clark and Estes's (2008) gap analysis framework.
Despite the chronological order of levels 1-4, Kirkpatrick and Kirkpatrick (2016) suggest
developing evaluation efforts starting with Level 4, and progressing in reverse order to Level 1.
Level 4 outcomes are broad and are associated with the mission of the organization while Level
1 outcomes focus on participants' reactions to the implementation of training.
Level 4: Results and Leading Indicators
Kirkpatrick and Kirkpatrick (2016) explain that for non-profit organizations such as
YCN, Level 4 results exemplify the organization’s ability to “accomplish the mission while
responsibly using the resources available" (p. 12). Due to the difficulty of aligning discrete
activities with broad organizational mission statements, Kirkpatrick and Kirkpatrick suggest
identifying leading indicators that signify an organization’s progress towards actualizing its
mission. Leading indicators are short-term, measurable aspects of performance that are aligned
with an organization’s mission.
Leading indicators may be formative in nature (Kirkpatrick & Kirkpatrick, 2016). Using
formative data is especially useful within the context of this project, as the focus of the project is
improvement and innovation. By constantly analyzing and reacting to formative data, the
organization can ensure implemented recommendations are effective.
An important external indicator identified in this study is communication of the college's
attainment of its mission. Communicating this information exemplifies the college's commitment
to achieving its mission, and can also allow the college to exemplify its commitment to analyzing
program outcome data. Table 40
includes an overview of the outcomes, metrics, and methods for this external outcome.
Internal indicators include faculty members’ confidence in their abilities to utilize
program outcome data to inform decision-making, as well as an increase in faculty members’
trust that they can have input in decision-making. Additionally, faculty members' use of
course evaluation data, course assessment data, and survey data
will serve as internal indicators. Faculty members will be asked to provide input regarding any
change in their activities as they relate to these internal indicators. Table 40 includes information
regarding the outcomes, metrics, and methods for the internal indicators.
Table 40

Outcomes, Metrics, and Methods for External and Internal Outcomes

External Outcomes

Outcome: Improve external communication regarding the college actualizing its mission.
Metric(s): Annually, increased frequency of collaborative partnerships with external organizations.
Method(s): Compile annual program outcome evaluation data and determine external stakeholders to share information with.

Internal Outcomes

Outcome: Increased confidence in ability to utilize program outcomes to inform decision-making.
Metric(s): Monthly, average faculty confidence scores related to ability to utilize program outcomes to inform decision-making increase month-over-month.
Method(s): Solicit input from faculty members at each college-wide monthly meeting.

Outcome: Increased use of course evaluation data to inform decision-making.
Metric(s): Once per semester, faculty members will describe any changes in use of course evaluation data to inform decision-making.
Method(s): Solicit input from faculty members at each end-of-semester report meeting.

Outcome: Increased use of class assessment data to inform decision-making.
Metric(s): Once per semester, faculty members will describe any changes in use of class assessment data to inform decision-making.
Method(s): Solicit input from faculty members at each end-of-semester report meeting.

Outcome: Increased use of survey data to inform decision-making.
Metric(s): Once per semester, faculty members will describe any changes in use of survey data to inform decision-making.
Method(s): Solicit input from faculty members at each end-of-semester report meeting.

Outcome: Increased trust that faculty members have input in decision-making.
Metric(s): Once per semester, faculty members will describe any changes in their ability to have input in decision-making.
Method(s): Solicit input from faculty members at each end-of-semester report meeting.
Level 3: Behavior
Critical behaviors. Kirkpatrick and Kirkpatrick's (2016) Level 3 outcomes are used to
understand the degree to which individuals demonstrate newly attained knowledge and attitudes.
Kirkpatrick and Kirkpatrick describe the importance of identifying and assessing critical
behaviors. Critical behaviors are observable actions that help show transfer from learning to
practice (Kirkpatrick & Kirkpatrick, 2016). The four critical behaviors identified in this project
are outlined in Table 41.
Table 41

Critical Behaviors, Metrics, Methods, and Timing for Evaluation

Critical Behavior 1: Faculty members align the program outcomes with accreditation standards.
Metric(s): The number of curriculum maps rejected by the curriculum committee.
Method(s): The curriculum committee reviews submitted course maps to ensure proper alignment between program outcomes and accreditation standards.
Timing: During first 90 days of implementation; thereafter, once every two years or as needed per curriculum revision policy.

Critical Behavior 2: Faculty members determine the program outcome data to be collected.
Metric(s): The number of course reports submitted with a complete data set.
Method(s): The program directors for each program review each course report form to ensure all applicable program outcome data has been collected.
Timing: Once per semester.

Critical Behavior 3: Faculty members collect and analyze program outcome data.
Metric(s): The number of course reports submitted with appropriate analysis.
Method(s): The program directors for each program review each course report form to ensure all applicable program outcome data has been analyzed appropriately.
Timing: Once per semester.

Critical Behavior 4: Faculty members use program outcome data to influence decision-making for improvement.
Metric(s): The number of completed end-of-semester action plans including program outcome data analysis.
Method(s): The director of assessment reviews end-of-semester action plans to ensure program outcome analysis.
Timing: Once per semester.
Required drivers. Required drivers support attainment and implementation of critical
behaviors. Specifically, required drivers are "processes and systems that reinforce, monitor,
encourage, and reward performance of critical behaviors" (Kirkpatrick & Kirkpatrick, 2016, p.
53). Required drivers displayed in Table 42 will support the critical behaviors identified in Table
41.
Table 42

Required Drivers to Support Critical Behaviors

Reinforcing
- Job aid including accreditation requirements and definitions. Timing: Ongoing. Supports critical behavior 1.
- Job aid including all program learning outcomes. Timing: Ongoing. Supports critical behaviors 1, 2, 3, 4.
- Job aid including checklist for program outcome evaluation process. Timing: Ongoing. Supports critical behaviors 2, 3, 4.
- College-wide meetings to discuss goals of program outcome evaluation process. Timing: Monthly. Supports critical behaviors 1, 2, 3, 4.

Encouraging
- Peer mentoring and collaboration. Timing: Ongoing. Supports critical behaviors 2, 3, 4.
- Feedback from directors and curriculum committee. Timing: Once per semester. Supports critical behaviors 1, 2, 3, 4.

Rewarding
- Acknowledgement at end-of-semester meeting when reports are accurately produced. Timing: Once per semester. Supports critical behaviors 2, 3, 4.

Monitoring
- Audit of courses to ensure data-collection mechanisms are in place. Timing: Ongoing. Supports critical behaviors 1, 2, 3, 4.
- Audit of repository of course reports. Timing: Ongoing. Supports critical behaviors 1, 2, 3, 4.
- Audit of curriculum maps. Timing: Ongoing. Supports critical behaviors 1, 2, 3, 4.
Organizational support. Reinforcing required driver processes will include job aids as
well as collaborative meetings. Job aids will be developed to help establish and improve
declarative knowledge in regard to accreditation standards, as well as procedural knowledge
related to program outcome evaluation. Additionally, college-wide meetings will be used as
required drivers in order to help improve organizational aspects including cultural models and
cultural settings.
Additional required drivers will help encourage faculty members’ performance. Peer
mentoring activities align with previously discussed recommendations to boost self-efficacy.
Faculty members will also receive feedback from directors and the curriculum committee.
Providing specific feedback also aligns with previously discussed recommendations for
improving metacognitive knowledge.
Faculty members’ efforts will be rewarded through acknowledgement at each end-of-
semester meeting. End-of-semester meetings are attended by all faculty members, and are a rare
opportunity for the college to collaborate and celebrate. Celebrating program outcome
evaluation progress also aligns with previously discussed recommended organizational actions
related to cultural models and policies and procedures of the college.
Finally, progress made in attaining and enacting critical behaviors will be monitored.
Course data, end-of-semester reports, and curriculum maps can all be analyzed to determine
whether or not necessary data exists. These processes can be audited in an ongoing manner to
ensure that efforts are consistently efficient.
Level 2: Learning
Faculty members will be asked to acquire new knowledge and develop new attitudes related
to program outcome evaluation. Kirkpatrick and Kirkpatrick (2016) explain that learning is "the
degree to which participants acquire the intended knowledge, skills, attitude, confidence, and
commitment based on their participation in the training" (p. 42). The degree to which faculty
members acquire the intended attributes will be positively correlated with the quality of the
learning goals, program, and evaluation.
Learning goals. Based on validated gaps identified in Chapter Four, six learning goals
have been developed. Although learning goals are listed individually, they do not exist in
isolation. For example, as faculty members' ability to carry out the steps of analyzing data
improves, their confidence and ability to reflect on their own abilities should also improve.
Ultimately, learning goals will be used to help YCN evaluate the effectiveness of its
programming. Upon completion of the recommended solutions, faculty members will be able to:
1. Recognize accreditation requirements regarding program outcome evaluation.
2. Carry out the steps of analyzing program outcomes.
3. Carry out the steps of a program outcome evaluation process.
4. Reflect on their own abilities to evaluate program outcome data.
5. Appraise their confidence in their abilities to analyze program outcome data.
6. Develop organizational processes, policies and procedures to support program outcome
evaluation practices.
Program. A program will be implemented in order to actualize recommendations
outlined in Table 37, Table 38, and Table 39. The program will be focused on faculty members’
attainment and demonstration of the identified learning goals. The program will focus on faculty
members’ knowledge and motivation, as well as faculty members’ input regarding organizational
practices.
The program will be implemented during monthly all-faculty meetings. Predominantly,
faculty members attend all-faculty meetings in person; however, some participate synchronously
using webcam, audio, and screen-sharing technologies. The program will be offered in a format
that meets the needs of both in-person participants and virtual participants.
The program will be offered over a timeframe of four months. Throughout the four
months of the program, faculty members will engage in activities that allow for evaluation of the
six established learning goals. Kirkpatrick and Kirkpatrick (2016) stress the importance of
evaluation as this process allows programs to demonstrate their value.
Evaluation of the components of learning. Kirkpatrick and Kirkpatrick (2016) espouse
the use of formative and summative evaluation methods including discussions, knowledge
checks, group demonstrations, surveys, and action plans. Each of these methods will be
employed in order to monitor faculty members’ attainment of the established learning goals.
Table 43 outlines the methods and activities that will be evaluated throughout the program.
Table 43

Evaluation of the Components of Learning for the Program

Declarative Knowledge "I know it."
- Discussions focusing on the differences between evaluation and assessment. Timing: During workshop.
- Knowledge checks using multiple-choice items related to accreditation requirements. Timing: At the conclusion of workshop.

Procedural Skills "I can do it right now."
- Small-group demonstrations focusing on ability to analyze program outcome data. Timing: During workshop.
- Small groups discuss and teach back the steps of the program outcome evaluation process. Timing: During workshop.

Attitude "I believe this is worthwhile."
- Open-ended survey item regarding the value of program outcome evaluation. Timing: At the conclusion of workshop.

Confidence "I think I can do it on the job."
- Faculty members discuss abilities to engage in program outcome evaluation in a one-on-one format. Timing: During workshop.

Commitment "I will do it on the job."
- Create action plan with input from all faculty members to establish a calendar of key dates and actions for the program outcome evaluation process. Timing: During workshop.
- Create action plan with input from all faculty members to develop a communication plan for program outcome evaluation processes and outcomes. Timing: During workshop.
Level 1: Reaction
Level 1 evaluations focus on participants’ reactions (Kirkpatrick & Kirkpatrick, 2016).
Important aspects of Level 1 evaluations include participants’ engagement, the relevance of the
program, and participants’ satisfaction. Each aspect of Level 1 will be evaluated both in
formative and summative manners.
During workshops, participants will be observed and asked to participate in discussions.
One week after each workshop, participants will be asked to respond to a survey regarding their
reactions. Table 44 presents an overview of proposed components to measure reactions.
Table 44

Components to Measure Reactions to the Program

Engagement
- Instructor observation. Timing: During workshops.
- Post-workshop survey. Timing: One week after each workshop.

Relevance
- Pulse-check discussion. Timing: During workshops.
- Post-workshop survey. Timing: One week after each workshop.

Customer Satisfaction
- Instructor observation. Timing: During workshops.
- Pulse-check discussion. Timing: During workshops.
- Post-workshop survey. Timing: One week after each workshop.
Evaluation Tools
Immediately following the program implementation. Faculty members who
participate in the program will be asked to engage in evaluation activities regarding the impact
the program had on their knowledge, motivation, and practice within the organization.
Kirkpatrick and Kirkpatrick (2016) suggest evaluating the impact of a program immediately
following its implementation, as well as after a period of time. Each evaluation serves a specific
purpose.
The purpose of the evaluation immediately following the implementation of the program
is to evaluate Level 1 (engagement, relevance, and customer satisfaction) and Level 2
(knowledge and motivation) outcomes (Kirkpatrick & Kirkpatrick, 2016). The evaluation
immediately following the implementation of the program at YCN will utilize multiple-choice
survey items, a ranking scale, and open-ended prompts. The proposed evaluation tool to be used
immediately following the program implementation is presented in Appendix E.
Delayed for a period after the program implementation. Kirkpatrick and Kirkpatrick
(2016) suggest evaluating the impact of the program after a period of time has passed since
implementation of the program. Delaying the evaluation allows participants to reflect more on
the impact of the program, and also allows time for participants to apply what they have learned.
For the purpose of this program, the delayed evaluation will be sent out 16 weeks after the final
workshop.
The evaluation following a delayed period of time addresses Level 1 (reaction), Level 2
(learning), Level 3 (behavior), and Level 4 (results and leading indicators) outcomes (Kirkpatrick
& Kirkpatrick, 2016). Open-ended questions and multiple-choice survey items elicit feedback
from participants regarding changes that have occurred as a result of the program. The proposed
evaluation tool to be used 16 weeks after the program implementation is presented in Appendix
F.
Data Analysis and Reporting
An analysis of findings will be presented to YCN faculty members, students, and
leadership, as well as applicable administrators at Grace-Rose University. Findings will include
results and themes from immediate evaluation and delayed evaluation, as well as internal and
external outcomes from Level 4, and metrics related to critical behaviors from Level 3. The
presentation of findings will celebrate the progress made up until that point in time.
During the celebration of progress, faculty members will be asked to record video
testimonials regarding their experience learning about and practicing program outcome
evaluation. Faculty members will be asked to envision new ways the college can use data to
inform decision-making in the future, and to discuss any organizational needs that could
potentially help the college improve its operations. The videos will be posted within a secured
forum for faculty members to review.
Summary of the Implementation and Evaluation
The New World Kirkpatrick Model provides an advantageous framework for planning
and implementing evaluative efforts (Kirkpatrick & Kirkpatrick, 2016). The model helps
establish meaningful and measurable indicators from the first day of training through months
after the conclusion of a program. Furthermore, the model provides a framework for identifying
measurable outcomes related to broad, mission-centered indicators, as well as smaller-in-scope
learning outcomes and reactions.
The value of utilizing the New World Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2016) in this specific project is
that it will provide YCN faculty and leadership with information regarding the college’s ability
to actualize its mission, as well as information regarding the effectiveness of the recommended
program. Providing evidence of the college actualizing its mission will help faculty members
and college leadership understand the contribution of program outcome evaluation to the success
of the college. Furthermore, evaluating the quality of the program has the potential to elucidate
the need for additional programming, or help inform future programming in other aspects of the
college’s operations.
Limitations and Delimitations
The focus of this study was to better understand the knowledge, motivation, and
organizational gaps that impact faculty members' ability to participate in program outcome
evaluation. Limitations and delimitations of this study include a small group of participants,
which also led to a lack of input from various stakeholders. Additionally, no effort was made
to understand the impact of varying levels of experience among participants.
This study focused on the experiences of non-administrative, full-time faculty members.
Focusing on a single group was advantageous, as this particular group of participants had limited
knowledge of administrative practices and responsibilities. For this reason, it was important not
to group input from non-administrative faculty members with administrative faculty members, as
the two groups likely experience processes at the college in different ways.
Similarly, the focus on a single group (non-administrative, full-time faculty members)
excluded other key stakeholders who could have provided additional insight to better understand
the needs and gaps within the college. Specifically, adjunct faculty members, students (including
potential students and alumni), university administration, as well as community partners all could
have contributed to this study and provided rich information regarding areas for improvement.
However, the decision to focus on a single stakeholder group was necessary in order to maintain
the feasibility of this study.
Furthermore, this study failed to examine the impact of demographic data including
race/ethnicity, age, and gender. Additionally, there was no analysis of employment-specific
information such as number of years working as an educator, previous administrative experience,
and modality of teaching (predominantly face-to-face or online). A lack of analysis of this
information was purposeful, as the population being examined was small (N = 14) and any
demographic information could potentially identify a participant.
Recommendations for Future Research
Future research initiatives can address limitations identified in this study. A wider scope
within Yvonne College of Nursing can examine the impact of various demographics and levels
of experience on faculty members’ knowledge, motivation, and practice within organizational
structures. Additionally, widening the scope of the study could include input from key
stakeholders such as students, administrative faculty members, university administrators, and
external stakeholders. Increasing the scope of the study within the college could provide
additional information to close knowledge and motivation gaps and improve organizational
operations.
Additionally, future research could examine faculty members’ knowledge, motivation,
and experience with organizational processes throughout the Grace-Rose University campus.
Examining factors at the university level could potentially uncover knowledge, motivation, and
organizational gaps that affect the entire university. If such gaps are identified, university
resources can be used collaboratively to help close the gaps.
Finally, future research could also focus on faculty members at other schools of nursing.
As program outcome evaluation is a required component for Commission on Collegiate Nursing
Education accreditation, there may be some institutions that have established exemplary
practices, and others that have significant gaps. Comparing gaps among multiple institutions
may uncover gaps that need to be addressed at a regional or national level.
Conclusion
In 2016, the Commission on Collegiate Nursing Education recommended that the Yvonne
College of Nursing establish and enact program outcome evaluation processes. This study
examined the knowledge, motivation, and organizational aspects that most impact faculty
members at YCN in their ability to establish and enact program outcome evaluation processes.
Gaps in knowledge, motivation, and organizational processes were identified.
Developmental activities were established using evidence-based educational methods
and the New World Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2016). Recommendations
addressing each identified gap were discussed and situated within the New World Kirkpatrick
Model. Evaluation activities will take place across a range of performance outcomes.
The college will embark on implementing and evaluating recommendations. Fortunately,
survey and interview data showed that faculty members at YCN do in fact value the idea of
program outcome evaluation. With this value established, the college will be capable of enacting
the recommendations and reaping the benefits of continuous improvement practices.
REFERENCES
Alkin, M. C. (2011). Evaluation essentials: From A to Z. New York, NY: Guilford Press.
Ambrose, S., Bridges, M., DiPietro, M., Lovett, M., & Norman, M. (2010). How learning works:
Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing.
New York, NY: Longman.
Andrade, M. S. (2011). Managing change: Engaging faculty in assessment opportunities.
Innovative Higher Education, 36(4), 217-233.
Baker, L. (2006). Metacognition in comprehension instruction: What we've learned since NRP.
In C. C. Block & S. R. Parris (Eds.), Comprehension instruction: Research-based best
practices (2nd ed., pp. 65-79).
Banta, T. W., & Pike, G. R. (2012). The bottom line: Will faculty use assessment findings? In C.
Secolsky & D. B. Denison (Eds.), Handbook on measurement, assessment, and
evaluation in higher education (pp. 47-56). New York, NY: Routledge.
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual
college or university: Five trends and their implications. New Directions for Higher
Education, 145, 47-58.
Bensimon, E. (2005). Closing the achievement gap in higher education: An organizational
learning perspective. New Directions for Higher Education, 2005(131), 99-111.
Birnbaum, R. (1988). How colleges work: The cybernetics of academic organization and
leadership. San Francisco, CA: Jossey-Bass.
Blass, E. (2008). Professional learning and work-based learning: Divergence in rhetoric,
convergence in reality. International Journal of Learning, 14(9), 59-66.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review. Sterling,
VA: Stylus Publishing.
Calegari, M. F., Sibley, R. E., & Turner, M. E. (2015). A roadmap for using Kotter's
organizational change model to build faculty engagement in accreditation. Academy of
Educational Leadership Journal, 19(3), 31-43.
Clark, R., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Charlotte, NC: Information Age Publishing.
Conley, D. (2015). A new era for educational assessment. Education Policy Analysis Archives,
23(8).
Creswell, J. (2014). Research design: Qualitative, quantitative, and mixed methods approaches
(4th ed.). Thousand Oaks, CA: SAGE Publications.
Denler, H., Wolters, C., & Benzon, M. (2006). Social cognitive theory. Retrieved from
http://www.education.com/reference/article/social-cognitive-theory/
Dowd, A. C., Malcolm, L., Nakamoto, J., & Bensimon, E. M. (2012). Institution researchers as
teachers and equity advocates: Facilitating organizational learning and change. In E. M.
Bensimon & L. Malcolm (Eds.), Confronting equity issues on campus: Implementing the
Equity Scorecard in theory and practice (pp. 191-212). Sterling, VA: Stylus Publishing.
Dumford, A. D., & Miller, A. L. (2015). Are those rose-colored glasses you are wearing?
Student and alumni survey responses. Research & Practice in Assessment, 10(2), 5-14.
Emenike, M., Raker, J., & Holme, T. (2013). Validating chemistry faculty members' self-
reported familiarity with assessment terminology. Journal of Chemical Education, 90,
1130-1136. doi:10.1021/ed400094j
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension.
Retrieved from
http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf
Fink, A. (2016). How to conduct surveys: A step-by-step guide (6th ed.). Thousand Oaks, CA:
SAGE.
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist,
36(1), 45-56.
Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Boston, MA:
Pearson Education Inc.
Goldberg, B., & Morrison, D. M. (2003). Co-Nect: Purpose, accountability, and school
leadership. In J. Murphy & A. Datnow (Eds.), Leadership lessons from comprehensive
school reforms (pp. 57-82). Thousand Oaks, CA: Corwin Press.
Grubb, N., & Badway, N. (2005). From compliance to improvement: Accountability and
assessment in California Community Colleges.
Horne, E., & Sandmann, L. (2012). Current trends in systematic program evaluation of online
graduate nursing education: An integrative literature review. Journal of Nursing
Education, 51(10), 570-580. doi:10.3928/01484834-20120820-06
Howard, J. (2010). The missing link: Dedicated patient safety education within top-ranked US
nursing school curricula. Journal of Patient Safety, 6(3), 165-171.
James, J. T. (2013). A new, evidence-based estimate of patient harms associated with hospital
care. Journal of Patient Safety, 9(3), 122-128.
Jankowski, N., & Reese Cain, T. (2015). From compliance reporting to effective communication.
In G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. Reese Cain, P. T. Ewell, P.
Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher
education. San Francisco, CA: Jossey-Bass.
Kirkpatrick, D. (2006). Seven keys to unlock the four levels of evaluation. Performance
Improvement, 45(7), 5-8.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Four levels of training evaluation. Alexandria,
VA: Association for Talent Development.
Kotter, J. P. (1996). Leading change. Boston, MA: Harvard Business School Press.
Krathwohl, D. (2002). A revision of Bloom's Taxonomy: An overview. Theory Into Practice,
41(4), 212-218.
Kumaran, D., Summerfield, J., Hassabis, D., & Maguire, E. (2009). Tracking the emergence of
conceptual knowledge during human decision making. Neuron, 63(6), 889-901.
Lalla, M., & Ferrari, D. (2011). Web-based versus paper-based data collection for the evaluation
of teaching activity: Empirical evidence from a case study. Assessment & Evaluation in
Higher Education, 36(3), 347-365. doi:10.1080/02602930903428692
LaRose, R., & Hsin-yi, S. T. (2014). Completion rates and non-response error in online surveys:
Comparing sweepstakes and pre-paid cash incentives in studies of online behavior.
Computers in Human Behavior, 34, 110-119.
Lewallen, L. P. (2015). Practical strategies for nursing education program evaluation. Journal of
Professional Nursing, 31(2), 133-140.
doi:http://dx.doi.org/10.1016/j.profnurs.2014.09.002
Lin, W., Hewitt, G., & Videras, J. (2017). 'I'm meltiiiiiing...': The decline of response rates and
the impact of nonresponse bias on the results of national surveys at small colleges. New
Directions for Institutional Research, 173, 51-62. doi:10.1002/ir.20212
Maki, P. (2010). Assessing for learning (2nd ed.). Sterling, VA: Stylus Publishing.
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
implementation (4th ed.). San Francisco, CA: Jossey-Bass.
Miles, M., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods
sourcebook (3rd ed.). Thousand Oaks, CA: SAGE.
Pajares, F. (2006). Self-efficacy theory. Retrieved from
http://www.education.com/reference/article/self-efficacy-theory/
Palmer, J. C. (2012). The perennial challenges of accountability. In C. Secolsky & D. B. Denison
(Eds.), Handbook on measurement, assessment, and evaluation in higher education. New
York, NY: Routledge.
Peck, C. A., & McDonald, M. A. (2014). What is a culture of evidence? How do you get one?
And... should you want one? Teachers College Record, 116(3), 1-27.
Reisenwitz, T. H. (2016). Student evaluation of teaching: An investigation of nonresponse bias
in an online context. Journal of Marketing Education, 38(1), 7-17.
doi:10.1177/0273475315596778
Rickards, W. H., Abromeit, J., Mentkowski, M., & Mernitz, H. (2016). Engaging faculty in an
evaluative conversation. New Directions for Evaluation, 151, 53-68.
Rüdig, W. (2008). Assessing nonresponse bias in activist surveys. Qual Quant, 44, 173-180.
doi:10.1007/s11135-008-9184-9
Rueda, R. (2011). The 3 dimensions of improving student performance. New York, NY:
Teachers College Press.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: Sage.
Shaffer, K. (2015). Improving operations with online enrollment and registration. School
Business Affairs, 81(5), 37-38.
Stanovich, K. E. (2003). The fundamental computational biases of human cognition: Heuristics
that (sometimes) impair decision making and problem solving. In J. E. Davidson & R. J.
Sternberg (Eds.), The psychology of problem solving (pp. 291-342). Cambridge, England:
Cambridge University Press.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco,
CA: Jossey-Bass.
Volkwein, J. F. (2010). The assessment context: Accreditation, accountability, and performance.
New Directions for Institutional Research, S1, 3-12.
Wentland, E. (2012). Survey use in academic contexts: Considerations and guidelines. In C.
Secolsky & D. B. Denison (Eds.), Handbook on measurement, assessment, and
evaluation in higher education. New York, NY: Routledge.
Wigfield, A., & Eccles, J. S. (2000). Expectancy value theory of motivation. Contemporary
Educational Psychology, 25, 68-81. doi:10.1006/ceps.1999.1015
Witham, K. A., & Bensimon, E. M. (2012). Creating a culture of inquiry around equity and
student success. In S. D. Museus & U. M. Jayakumar (Eds.), Creating campus cultures:
Fostering success among racially diverse student populations (pp. 46-67). New York,
NY: Routledge.
Zemsky, R., Wegner, G. R., & Massy, W. F. (2006). Remaking the American university:
Market-smart and mission-centered. New Brunswick, NJ: Rutgers University Press.
Appendix A
Survey
Nursing Evaluation Gap Analysis Survey
You are invited to participate in a research study. Research studies include only people
who voluntarily choose to take part. This document explains information about this study.
You should ask questions about anything that is unclear to you. This study has been
approved by the University of Southern California Institutional Review Board (Protocol
Number: UP-17-00666). Additionally, an IRB Authorization Agreement has been made
between the University of Southern California and Grace-Rose University regarding this study.
PURPOSE OF THE STUDY This study aims to conduct a needs analysis in the areas of
knowledge and skill, motivation, and organizational resources necessary for faculty
members to achieve their performance goal of establishing practices that support program
outcome data analysis.
PARTICIPANT INVOLVEMENT Participation in this study is voluntary. If you agree to
take part in this study, you will be asked to participate in a survey. Subsequently, a
request to participate in an interview will be sent.
CONFIDENTIALITY There will be no identifiable information obtained in connection
with this study. Your name, address or other identifiable information will not be
collected. The members of the research team and the University of
Southern California’s Human Subjects Protection Program (HSPP) may access the data.
The HSPP reviews and monitors research studies to protect the rights and welfare of
research subjects. When the results of the research are published or discussed in
conferences, no identifiable information will be used. INVESTIGATOR CONTACT
INFORMATION The Principal Investigator is Matt Durkin (mpdurkin@usc.edu,
518.727.1902). The Faculty Advisors are Melora Sundt (sundt@usc.edu, 310.403.6671) and
Kenneth Yates (kennetay@usc.edu, 310.963.0946). IRB CONTACT INFORMATION
University Park Institutional Review Board (UPIRB), 3720 South Flower Street #301, Los
Angeles, CA 90089-0702, (213) 821-5272 or upirb@usc.edu
The purpose of the first section of this survey is to assess your knowledge regarding key terms
and policies in regard to program outcome assessment and evaluation.
Select the option that best defines the following terminology as it pertains to higher education.
Q1 Program outcome:
o Institution-specific content or learning parameters - what students should learn,
understand, or appreciate because of their studies (1)
o Program-specific content or learning parameters - what students should learn, understand,
or appreciate because of their studies (2)
o Statement that translates learning into action, behaviors, and other texts from which
observers can draw inferences about the depth and breadth of student learning (3)
Q2 Learning outcome:
o Institution-specific content or learning parameters - what students should learn,
understand, or appreciate because of their studies. (1)
o Program-specific content or learning parameters - what students should learn, understand,
or appreciate because of their studies. (2)
o Statement that translates learning into action, behaviors, and other texts from which
observers can draw inferences about the depth and breadth of student learning. (3)
Q3 Institutional outcome:
o Institution-specific content or learning parameters regarding what students should learn,
understand, or appreciate because of their studies. (1)
o Program-specific content or learning parameters regarding what students should learn,
understand, or appreciate because of their studies. (2)
o Statement that translates learning into action, behaviors, and other texts from which
observers can draw inferences about the depth and breadth of student learning. (3)
Q4 Assessment:
o The process of determining the merit, worth, or value of something, or the product of that
process. (1)
o A systemic and systematic process of examining student work against standards of
judgment. (2)
o The collection, analysis, and interpretation of information regarding an aspect of a program
in order to judge effectiveness. (3)
Q5 Evaluation:
o The systematic basis for making inferences about the learning and development of students.
(4)
o The process of determining the merit, worth, or value of something, or the product of that
process. (1)
o A systemic and systematic process of examining student work against standards of
judgment. (2)
o Measuring the academic readiness, learning progress, and skill acquisition of students (3)
Q6 Which of the following is a Commission on Collegiate Nursing Education
(CCNE) requirement regarding program outcome data analysis?
o Program outcomes are defined by the university and incorporate expected levels of
achievement (1)
o Program outcomes are defined by the program and incorporate expected levels of
achievement (2)
o Program outcomes are defined by CCNE and incorporate expected levels of achievement
(3)
Q7 True or False: For the purposes of CCNE accreditation, alumni satisfaction can serve as an
indicator of program effectiveness.
o True (1)
o False (2)
Thank you for your responses. The next section will ask you to share information regarding your
personal motivation as well as your opinion on organizational structures and processes.
Q8 Rank the following processes in order of importance for ensuring a successful program (1 is
highest):
______ Faculty to student advising (1)
______ Student to student advising (2)
______ Direct assignment-specific feedback to students (3)
______ Systematic analysis of program outcome data (4)
______ Clinical evaluations (5)
______ Faculty scholarship efforts (6)
Q9 How would you primarily categorize the college's culture regarding accountability practices?
o A culture of compliance (1)
o A culture of continuous improvement (2)
Q10 Please rate your disagreement or agreement with the following prompts (1 = Strongly
disagree, 4 = Neither disagree nor agree, 7 = Strongly agree, 8 = Don't know/Cannot evaluate):

1. I am confident in my ability to analyze program outcome data.
2. I am confident in my ability to use program outcome data to inform decision-making.
3. I have access to the aggregated program outcome data I need to make informed decisions.
4. I have access to the disaggregated program outcome data I need to make informed decisions.
5. The policies of the college support my ability to effect change at the college.
6. The processes of the college support my ability to effect change at the college.
7. The procedures of the college support my ability to effect change at the college.
8. Reports generated by the college exemplify its commitment to achieving its mission.
9. Reports disseminated by the college exemplify its commitment to achieving its mission.
10. Key stakeholders are informed regarding the college's progress in meeting its mission.
11. The college supports a culture of compliance.
12. The college supports a culture of continuous improvement.
13. I know what pertinent data needs to be collected and analyzed on an annual basis.
14. The goals of the organization related to program outcome evaluation are communicated effectively.
Appendix B
Interview Items
1. Can you explain the purpose of program outcome assessment?
2. What is the value of program outcome assessment?
3. Can you give examples in your own words of how program outcome evaluation aligns with
operations of the college?
4. Can you articulate in your own words the process of analyzing and discussing program learning
outcome data?
5. Can you articulate in your own words what an ideal process of evaluating student learning
outcome data would look like?
6. When you receive student program learning outcome data, what steps would you take in
analyzing it? What is your thought process throughout this practice?
7. Could you discuss the reasons YCN should engage in program outcome assessment?
8. How do you feel about your ability to analyze program outcome data?
9. How do you feel about your ability to use program outcome data to inform decision-making?
10. How do you feel about increasing program outcome assessment at the college?
11. What specific information do you need to make informed decisions regarding program
outcomes?
12. Tell me, how do the policies, processes, and procedures of YCN support your ability to effect
change at the college?
13. Can you describe the YCN’s accountability practices?
14. In your opinion, what is the difference between a culture of compliance and a culture of
continuous improvement? Is one better than the other?
15. What does YCN do internally that drives accountability practices?
16. How does YCN utilize external accountability standards?
17. Does YCN primarily use internal or external standards?
18. What would YCN do differently without internal accountability requirements?
19. What type of report or information would effectively show that YCN is achieving its
mission?
a. What would be the ideal way to communicate this to key stakeholders?
Appendix C
Informed Consent/Information Sheet
University of Southern California
Rossier School of Education
3470 Trousdale Pkwy, Los Angeles CA, 90089
ESTABLISHING A SYSTEMATIC EVALUATION OF AN ACADEMIC NURSING
PROGRAM USING A GAP ANALYSIS FRAMEWORK
You are invited to participate in a research study. Research studies include only people who
voluntarily choose to take part. This document explains information about this study. You should
ask questions about anything that is unclear to you.
PURPOSE OF THE STUDY
This study aims to conduct a needs analysis in the areas of knowledge and skill, motivation, and
organizational resources necessary for faculty members to achieve their performance goal of
establishing practices that support program outcome data analysis.
PARTICIPANT INVOLVEMENT
If you agree to take part in this study, you will be asked to participate in a survey and interview.
CONFIDENTIALITY
There will be no identifiable information obtained in connection with this study. Your name,
address or other identifiable information will not be collected.
The members of the research team, the funding agency, and the University of Southern
California’s Human Subjects Protection Program (HSPP) may access the data. The HSPP reviews
and monitors research studies to protect the rights and welfare of research subjects.
When the results of the research are published or discussed in conferences, no identifiable
information will be used.
INVESTIGATOR CONTACT INFORMATION
The Principal Investigator is Matt Durkin (mpdurkin@usc.edu, 518.727.1902).
The Faculty Advisors are Melora Sundt (sundt@usc.edu, 310.403.6671) and Kenneth Yates
(kennetay@usc.edu, 310.963.0946).
IRB CONTACT INFORMATION
University Park Institutional Review Board (UPIRB), 3720 South Flower Street #301, Los
Angeles, CA 90089-0702, (213) 821-5272 or upirb@usc.edu
Appendix D
Sample Recruitment Letter
Hello,
I am seeking your input through surveys and interviews for my dissertation study - Establishing a
Systematic Evaluation of an Academic Nursing Program Using a Gap Analysis
Framework. The data collected through surveys and interviews will be used to help better
understand the knowledge, motivation, and organizational indices that impact faculty members at
this college.
Please complete the electronic survey by following the link below:
Qualtrics Link
The survey contains 16 questions and should take about 20 minutes to complete. The survey
will remain open until xx/xx/xxxx.
Additionally, I would appreciate it if I could schedule a subsequent interview between the dates
of xx/xx/xxxx and xx/xx/xxxx. Interviews will be scheduled for 30 minutes. Please reply to this
e-mail with a date and time that will work for you within that timeframe.
Results of surveys and interviews will be used for the purposes of the dissertation study -
Establishing a Systematic Evaluation of an Academic Nursing Program Using a Gap Analysis
Framework. No information presented throughout this project will include identifiable
information.
If you have any questions or would like to be removed from future communications regarding
this study, please reply to this e-mail accordingly.
Appendix E
Level 1 and Level 2 Evaluation Instrument Immediately Following Program Implementation
Level 1 - Reaction
The purpose of the following questions is to evaluate your reaction to the program outcome
evaluation workshop.
Q1 The information presented during the workshop was engaging.
(Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 =
Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q2 Please provide any comments regarding how engaged you felt with the workshop.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q3 The information presented during the workshop aided in my learning about program outcome
evaluation. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly
Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q4 Please provide any comments regarding how you felt the workshop aided in your learning.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q5 The information presented during the workshop was relevant to my current or future role.
(Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 =
Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q6 Please provide any comments regarding how you felt the workshop was relevant to your
current or future role.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q7 I am satisfied with the quality of the workshop that was presented.
(Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 =
Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q8 Please provide any comments regarding the quality of the workshop.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Level 2 - Learning
The purpose of the following questions is to evaluate what you learned from the program
outcome evaluation workshop.
Q9 Please define the following term as it applies to a higher education setting:
Assessment
o The process of determining the merit, worth, or value of something, or the product of that
process. (1)
o A systemic and systematic process of examining student work against standards of judgment.
(2)
o The collection, analysis, and interpretation of information regarding an aspect of a program
in order to judge effectiveness. (3)
Q10 Please define the following term as it applies to a higher education setting:
Evaluation
o The process of determining the merit, worth, or value of something, or the product of that
process. (1)
o A systemic and systematic process of examining student work against standards of judgment.
(2)
o The systematic basis for making inferences about the learning and development of students.
(3)
o Measuring the academic readiness, learning progress, and skill acquisition of students (4)
Q11 Sort the following steps of the program outcome evaluation process into the appropriate
order:
______ Analyze program outcome data (1)
______ Agree with program faculty on the demonstrated indicator of program outcome (2)
______ Discuss implications of data on potential future changes (3)
______ Communicate with key stakeholders regarding next steps (4)
______ Determine the expected level of performance (5)
______ Collect performance data (6)
Q12 Please describe the value of evaluating program outcome data:
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q13 I am more confident in my ability to evaluate program outcome data now than I was prior to
the workshop.
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q14 Please explain how your confidence in your ability to evaluate program outcome data has changed.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q15 I understand my roles and responsibilities in the established program outcome evaluation
process.
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Appendix F
Delayed Evaluation Tool Administered After Program Implementation
Level 4 – Results and Leading Indicators
The purpose of the following questions is to evaluate performance results and leading indicators since completion of the program outcome evaluation workshops.
Q1 External communication regarding the college actualizing its mission has increased since completion of the program outcome evaluation workshops. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q2 Please describe how external communication regarding the college actualizing its mission has
changed since completion of the program outcome evaluation workshops.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q3 I am more confident in my ability to utilize data to inform decision-making since completion of the program outcome evaluation workshops. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q4 I have increased my use of program outcome data since completion of the program outcome evaluation workshops.
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q5 Please describe how you have increased your use of program outcome data since completion
of the program outcome evaluation workshops.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Level 3 – Behavior
The purpose of the following questions is to evaluate changes in behavior since completion of
the program outcome evaluation workshops.
Q6 Since completion of the program outcome evaluation workshops, have you engaged in any of
the following activities?
For each activity, respond Yes (1), No (2), or Don't know (3):
o Align program outcomes with accreditation standards. (1)
o Determine the program outcome data required to be collected. (2)
o Collect and analyze program outcome data. (3)
o Use program outcome data to inform decision-making. (4)
Q7 Since completion of the program outcome evaluation workshops, have you noticed any
changes in your behavior at work? If so, what changes have you noticed?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Level 2 – Learning
The purpose of the following questions is to evaluate what you learned from the program
outcome evaluation workshops.
Q8 Please describe the value of program outcome evaluation:
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q9 Please describe your role and responsibilities in the program outcome evaluation process.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q10 I am more confident in my ability to utilize program outcome data to inform decision-making since completion of the program outcome evaluation workshops. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Level 1 – Reaction
The purpose of the following questions is to evaluate your reaction to the program outcome
evaluation workshops.
Q11 I have been able to utilize information I learned in the program outcome evaluation workshop. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q12 Please provide a specific example of how you have applied what you learned during the
program outcome evaluation workshop.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q13 The information presented during the program outcome evaluation workshop prepared me to perform my job at a higher level. (Please rate your agreement with this statement on a scale of 1-7 – 1 = Strongly Disagree, 7 = Strongly Agree)
o Strongly disagree (1)
o (2)
o (3)
o Neither disagree nor agree (4)
o (5)
o (6)
o Strongly agree (7)
Q14 Please describe how the information presented during the program outcome evaluation
workshops helped or did not help you perform your job at a higher level:
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
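Note: The 7-point items throughout these instruments lend themselves to simple descriptive summaries. The following is a minimal sketch, assuming responses are coded as integers from 1 (Strongly disagree) to 7 (Strongly agree); the response values are placeholders invented for the example, not study data.

```python
# Minimal sketch: descriptive summary of one 7-point Likert item.
# The responses below are placeholder values for illustration, not study data.
from collections import Counter
from statistics import mean, median, stdev

responses = [7, 6, 6, 5, 7, 4, 6, 5]  # coded 1 = Strongly disagree ... 7 = Strongly agree

print(f"n      = {len(responses)}")
print(f"mean   = {mean(responses):.2f}")
print(f"median = {median(responses)}")
print(f"sd     = {stdev(responses):.2f}")
print("counts =", dict(sorted(Counter(responses).items())))
```

Reporting the full response distribution alongside the mean, as done here, guards against over-interpreting a single summary statistic for ordinal data.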