IMPROVING THE EVALUATION METHOD OF A MILITARY UNIT:
A GAP ANALYSIS
by
John R. Harrison
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2020
Copyright 2020 John R. Harrison
DEDICATION
To my wife and two sons.
Thanks
ACKNOWLEDGMENTS
To Dr. Yates, Dr. Foulk, Dr. Donato, and the USC community.
Thanks
TABLE OF CONTENTS
DEDICATION
ACKNOWLEDGMENTS
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER ONE: INTRODUCTION
    Introduction to the Problem of Practice
    Organizational Context and Mission
    Organizational Performance Status
    Related Literature
    Importance of Addressing the Problem
    Organizational Performance Goal
    Description of Stakeholder Groups
    Stakeholder Groups' Performance Goals
    Purpose of the Project and Questions
    Methodological Framework
    Definitions
    Organization of the Project
CHAPTER TWO: REVIEW OF THE LITERATURE
    Evaluation Methods Assess Job Performance
        The Army's Evaluation Methods
        Evaluation Methods External to the Army
    Attributes of Successful Individuals
        Attributes of Successful Leaders
            Cognitive capacity
            Psychological profile
            Job-specific performance
            High-stress environments
    Explanation of Clark and Estes' (2008) Framework
    Leader Experts' Knowledge and Motivation Influences
        Knowledge and Skills
            Factual knowledge influence
            Conceptual knowledge influence
            Procedural knowledge influence
        Motivation
            Self-efficacy theory
            Utility value construct
        Organization
            General organization theory
            Cultural models
                Culture statement
                Performance orientation
            Cultural settings
                Performance feedback
                Ranked amongst other Experts
    Conceptual Framework: The Interaction of Knowledge, Motivation, and Organizational Context
    Conclusion
CHAPTER THREE: METHODOLOGY
    Participating Stakeholders
        Survey and Interview Sampling Criteria and Rationale
            Criterion 1
            Criterion 2
        Survey Sampling (Recruitment) Strategy and Rationale
        Interview Sampling (Recruitment) Strategy and Rationale
        Explanation of Choices
    Data Collection and Instrumentation
        Surveys
        Interviews
        Observation
        Documents and Artifacts
    Data Analysis
    Credibility and Trustworthiness
    Validity and Reliability
    Ethics
    Limitations and Delimitations
CHAPTER FOUR: RESULTS AND FINDINGS
    Participating Stakeholders
    Determination of Assets and Needs
    Results and Findings for Knowledge Causes
        Factual Knowledge
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
        Procedural Knowledge
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
        Conceptual Knowledge
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
    Results and Findings for Motivation Causes
        Self-Efficacy
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
        Utility Value Construct
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
    Results and Findings for Organization Causes
        Cultural Models
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
            Influence 2
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
        Cultural Settings
            Influence 1
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
            Influence 2
                Survey results
                Interview findings
                Observation
                Document analysis
                Summary
    Summary of Validated Influences
        Knowledge
        Motivation
        Organization
CHAPTER FIVE: RESULTS
    Introduction and Overview
    Organizational Context and Mission
    Organizational Performance Goal
    Description of Stakeholder Groups
    Goal of the Stakeholder Group for the Study
    Purpose of the Project and Questions
    Recommendations for Practice to Address KMO Influences
        Introduction
        Procedural knowledge
        Declarative-factual knowledge: Identify common attributes
        Declarative-conceptual knowledge: Identify what is success
        Motivation Recommendations
            Introduction
            Self-efficacy
            Utility value construct
        Organization Recommendations
            Introduction
            Cultural models
            Cultural settings
    Integrated Implementation and Evaluation Plan
        Implementation and Evaluation Framework
        Organizational Purpose, Need, and Expectations
        Level Four: Results and Leading Indicators
        Level Three: Behavior
            Critical behaviors
            Required drivers
            Organizational support
        Level Two: Learning
            Learning goals
            Program
            Evaluation of the components of learning
        Level One: Reaction
        Evaluation Tools
            During and immediately following the program implementation
            Delayed for a period after the program implementation
        Data Analysis and Reporting
        Summary
    Strengths and Weaknesses of the Approach
    Limitations and Delimitations
    Future Research
    Conclusion
References
APPENDIX A Survey Protocol
APPENDIX B Interview Protocol
APPENDIX C Level 1 and Level 2 Evaluation Instrument
APPENDIX D Blended Evaluation Instrument
LIST OF TABLES
Table 1 Stakeholder Groups' Performance Goals
Table 2 Knowledge Influences
Table 3 Motivation Influences
Table 4 Organizational Influences
Table 5 Requested Documents
Table 6 Survey Results for Factual Knowledge of Leader Experts
Table 7 Survey Results for Conceptual Knowledge of Leader Experts
Table 8 Survey Results for Organizational Knowledge of Leader Experts
Table 9 Knowledge Assets or Needs as Determined by the Data
Table 10 Motivation Assets or Needs as Determined by the Data
Table 11 Organization Assets or Needs as Determined by the Data
Table 12 Organizational Mission, Global Goal, and Stakeholder Performance Goals
Table 13 Summary of Knowledge Influences and Recommendations
Table 14 Summary of Motivation Influences and Recommendations
Table 15 Summary of Organization Influences and Recommendations
Table 16 Outcomes, Metrics, and Methods for External and Internal Outcomes
Table 17 Critical Behaviors, Metrics, Methods, and Timing for Evaluation
Table 18 Required Drivers to Support Critical Behaviors
Table 19 Evaluation of the Components of Learning for the Program
Table 20 Components to Measure Reactions to the Program
LIST OF FIGURES
Figure 1 Conceptual Framework: Interaction of Stakeholder Knowledge and Motivation within Organizational Cultural Models and Settings
ABSTRACT
This research study used Clark and Estes’ (2008) gap analysis framework to conduct a needs
assessment of the knowledge, motivation, and organizational influences that support or obstruct
the development of an organization-specific job-performance evaluation method. The purpose
of the study was to identify the desired and undesired attributes of soldiers and what survey items
uncovered those attributes. In addition, this study sought to uncover the organization’s collective
requirements for the implementation and maintenance of this evaluation method. A mixed-
methods approach utilizing a survey combined with an interview was used to collect data from
100% of the 30 designated stakeholders, and an in-depth analysis of the data was conducted.
Findings from this study indicated that the identification of attributes of successful and
unsuccessful soldiers and a specific evaluation method to assess soldiers are both needs. Also, the new
evaluation method addresses the organization’s motivation and organizational influences through
the design and proposed implementation strategy. The evaluation method created by this study
has already been adopted by the organization and is currently being implemented. This study
intends to improve a soldier’s potential and to help identify, assess, counsel, and develop future
generations of soldiers.
CHAPTER ONE: INTRODUCTION
Introduction to the Problem of Practice
Special Operations leaders come from the Army, Navy, Air Force, and Marines, yet their organizations are unable to predict leadership ability during the assessment, selection, and training process. Currently, extensive data has been analyzed from a candidate's application
through selection, assessment, in-depth training, and finally, acceptance to the unit (Picano,
Roland, Williams, & Rollins, 2006). Candidates' psychological fitness must be evaluated because
these “special warriors” (Mountz, 1993) are expected to perform complex skills under unusually
demanding conditions (Picano, Roland, Rollins, & Williams, 2002). A comprehensive
psychological assessment is needed to evaluate whether a candidate is psychologically suitable for high-stress, high-demand military positions (Picano et al., 2006). A candidate's intelligence is directly
related to their probability of successful completion of the assessment and selection process.
Boe (2017) argues that even with the extensive psychological, intelligence, and physical testing,
it is difficult to predict which candidates will be successful. Current assessments are inadequate
for determining if a candidate will be a successful leader once they complete the required
training and are assigned to a unit, and if so, to what degree (Bartone, Roland, James, &
Williams, 2008).
Unit 123 does not have an internal mechanism in place to professionally evaluate soldiers once
they have been accepted into Unit 123. They currently rely on the US military’s job
performance evaluation tool: the Noncommissioned Officer Evaluation Report (NCOER), which
is an overinflated and inaccurate evaluation of military leaders (Johnson, 2012). The NCOER is
the only military-wide evaluation system and is not a consistent method to evaluate a
Noncommissioned Officer’s (NCO) job performance, especially within Special Operations.
Special Operations' lack of an effective evaluation method is a contributing factor to the gap in
knowledge of what attributes are consistent in high-performing soldiers and thus what attributes
must be recruited, selected, and assessed to increase leadership potential.
Organizational Context and Mission
Unit 123, a pseudonym, is a Special Operations unit that provides subject matter
expertise for the United States' priorities and initiatives. Unit 123 is based within the continental United States, positioned with the necessary support to deploy quickly on behalf of the United
States and its allies. The organization recruits from all branches of the armed forces and
internally selects, assesses, trains and develops service members, and creates “Experts,” a
pseudonym, to conduct operations. These Experts are one of the most diligently selected and
trained individuals in government service. Once an Expert has successfully served on an
operational team for eight to 12 years, at least two years as a Team Leader (TL) and identified as
qualified to take the next step in leadership, the Expert is asked to be an instructor in the Training
Section. From Unit 123’s website, the Training Section’s mission is “Recruit, Assess, Select and
Train Unit 123 experts, provide continuity/consistency for institutional training, and subject
matter expertise for Unit 123 priorities and initiatives.”
Organizational Performance Status
The organizational performance problem at the root of this study is the lack of an Expert-
specific evaluation method within Unit 123. Unit 123’s only standardized evaluation method is
the NCOER. The NCOER was implemented to evaluate all 278,000 NCOs in over 190 job
specialties (Shute, 2018). The all-encompassing NCOER is not specialized enough to adequately evaluate the 200-plus NCOs who serve as Experts. Unit 123 cannot achieve its goal until desired
attributes are identified for recruitment, and the attributes can only be identified through an
Expert-specific evaluation method.
Related Literature
Various research has been conducted to identify the psychological, mental, and physical characteristics of members of Special Operations (Bartone et al., 2008; Cline,
2015; Judge, Colbert, & Ilies, 2004; Nowicki, 2017; Picano et al., 2002, 2006). Their
psychological fitness must be evaluated because these “special warriors” (Mountz, 1993) are
expected to perform complex skills under unusually demanding conditions (Picano et al., 2002).
Picano et al. (2002) primarily used the Sentence Completion Test (SCT) to evaluate candidates in their research. A comprehensive psychological assessment is needed to evaluate whether a candidate is psychologically suitable for high-stress, high-demand military positions (Picano et al., 2006).
Bartone et al. (2008) conducted hardiness research and found that a 1-point increase in a
hardiness score correlated to a 3.3% increase in graduation probability for Special Forces.
Bartone et al. (2008) utilized the Dispositional Resilience Scale (DRS), which is extensively used
by the US military to measure hardiness. Intelligence appears to be the most prototypical characteristic that positively relates to leadership ability, based on a Lord, Foti, and De Vader (1984) study that found intelligence was attributed to 10 of 11 leadership categories (Judge, Colbert, & Ilies, 2004). Using a multivariate logistic regression model, Nowicki (2017) found that prospective candidates with at least one semester of college experience have a 20.5% higher success rate in completing the Marines' equivalent training course.
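To make such reported effects concrete, the sketch below shows how a fitted logistic-regression coefficient translates into a change in predicted completion probability. The intercept and slope are hypothetical placeholders for illustration, not the coefficients estimated by Bartone et al. (2008) or Nowicki (2017).

```python
import numpy as np

# Hypothetical coefficients for illustration only; not the published values.
b0, b1 = -0.5, 0.15  # intercept and per-point slope on the log-odds scale

def p_success(score):
    """Predicted completion probability under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * score)))

p30, p31 = p_success(30.0), p_success(31.0)
print(f"P(complete | score=30) = {p30:.3f}")
print(f"P(complete | score=31) = {p31:.3f}")
print(f"Change per point: {p31 - p30:+.3%}")      # the '+3.3%'-style effect
print(f"Odds ratio per point: {np.exp(b1):.2f}")  # multiplicative odds change
```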
Importance of Addressing the Problem
Special Operations’ inability to predict leadership potential is an important problem to
solve for a variety of reasons. Currently, there is no data collection, research, or analysis
conducted on current unit members, and without a suitable internal evaluation method, Unit 123
cannot assess the quality of performance of the Experts that complete the training. Unit 123’s
mission is to deliver soldiers by organically assessing, selecting, and training candidates to the
highest possible job proficiency (Boe, 2017; Picano et al., 2002, 2006). If the right candidate is
identified, not only will the unit more efficiently allocate training resources but also, more
importantly, a better leader will be forward-deployed conducting operations (Nowicki, 2017).
Boe (2017) argues that even with the extensive psychological, intelligence, and physical testing,
it is difficult to predict which candidates will be successful. Gen. Odierno (2015) concluded
from his research that while “we cannot predict the trajectory of warfare… Army leaders of
tomorrow must have highly developed critical and creative thinking skills that enable them to
make informed and effective decisions in the midst of chaos.”
Organizational Performance Goal
Unit 123’s performance goal is by June 2020; it will identify and report KMO attributes
for high-performing Experts and incorporate a process to recruit identified attributes in 100% of
candidates, increasing the ten-year retention rate of Experts by 20%. The performance goal
directly supports Unit 123’s global goal of by June 2022; it will have implemented a new
evaluation method, identified attributes, and adjusted recruitment strategies seeking identified
attributes. During a meeting with the Training Section’s Command Sergeant Major (CSM) and
Commander, Unit 123’s CSM, and the cadre, the researcher’s dissertation subject and the
potential benefit to Unit 123 was discussed, and the goal was established. The ten-year
benchmark was established since that is the traditional amount of time an Expert will be on an
operational team, and a 20% retention rate increase is desired without factoring casualties from
combat and training. Even though it will take ten years to determine the achievement of this goal
completely, the Training Section can set benchmarks upon completion of the training course and
during the completion of each periodic evaluation method suggested in this research.
Description of Stakeholder Groups
Unit 123’s Senior Leaders need to provide command guidance to the recruitment,
assessment, and selection sections so new potential Expert candidates can be identified and
selected for training. Training Section’s Instructors must identify and collect the necessary data
from the training course and influence operational experts to complete surveys on current
experts. Leader Experts must collaborate to identify what KMO attributes are desired of Experts
and facilitate the implementation of a new evaluation method.
Stakeholder Groups' Performance Goals
Table 1
Organizational Mission, Global Goal, and Stakeholder Performance Goals

Organizational Mission: Recruit, Assess, Select and Train Unit 123 experts, a pseudonym, provide continuity/consistency for institutional training, and subject matter expertise for Unit 123 priorities and initiatives.

Organizational Performance Goal: Unit 123's goal is that, by June 2020, it will identify and report KMO attributes for high-performing Experts and incorporate a process to recruit identified attributes in 100% of candidates, increasing the ten-year retention rate of Experts by 20%.

Unit 123's Senior Leaders: By 2019, the leadership will emplace a new profile for experts that will increase retention by 20% over a ten-year period.

Training Section's Instructors: By 2019, the Training Section instructors will develop a profile for experts that will increase retention by 20% over a ten-year period.

Leader Experts: By 2019, Leader Experts will develop evaluation questions designed to identify desired attributes of Experts that are likely to be retained for a ten-year period.
In order to accomplish this organizational goal, all stakeholder groups will play an
important role. However, for practical purposes, the stakeholders of focus for this study will be
Leader Experts. Leader Experts have already completed eight to twelve years on an operational
team and are either currently an instructor or have already completed instructor time and are now
leaders of multiple teams. There are currently only 30 Leader Experts serving within Unit 123.
Leader Experts only make up 1-2% of Unit 123’s soldiers; they understand the selection and
training process and still have a direct impact on the operational environment and are Experts on
teams. Leader Experts are most affected if the organizational goal is achieved.
To develop the stakeholder goal, Leader Experts from Unit 123 must identify what KMO
attributes are desired and what survey questions can identify those attributes. This process will
involve Unit 123 Senior Leaders, psychologists, recruiters, instructors, Leader Experts, and
Experts themselves. Historically, after ten years of an Expert serving in Unit 123, roughly 80%
remain operational. Once this study introduces new recruitment data, Unit 123 can measure the
attrition rate continuously and identify trends well before the ten-year benchmark. Increasing the
retention rate for Unit 123 will increase its operational capacity, which directly affects the
stakeholder’s capability overseas and allows more time at home with their families. If this goal
is not achieved, Unit 123 will continue to operate at 80% of its capacity, placing a higher
operational demand on Experts and their families.
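As a rough illustration of how such continuous monitoring might work, the sketch below compares a cohort's observed retention against a simple year-by-year interpolation of the historical 80% ten-year baseline. The cohort numbers, the linear interpolation, and the reading of "increase by 20%" as a relative gain (0.80 × 1.20 = 0.96) are all assumptions for illustration, not Unit 123's actual method.

```python
# Hypothetical sketch of cohort retention tracking; all values are invented.
HISTORICAL_TEN_YEAR_RETENTION = 0.80
TARGET_TEN_YEAR_RETENTION = 0.80 * 1.20  # one reading of a "20% increase"

def retention_check(cohort_size: int, still_operational: int, year: int) -> str:
    # Linearly interpolate the ten-year baseline as a rough yearly benchmark.
    expected = 1.0 - (1.0 - HISTORICAL_TEN_YEAR_RETENTION) * (year / 10.0)
    observed = still_operational / cohort_size
    status = "on track" if observed >= expected else "below baseline"
    return f"Year {year}: {observed:.0%} retained vs {expected:.0%} expected ({status})"

print(retention_check(cohort_size=25, still_operational=23, year=3))
```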
Purpose of the Project and Questions
The purpose of this study is to improve Unit 123’s evaluation method in order to identify
attributes of top-performing Experts. Special Operations Experts are drawn from the Army, Navy, Air Force, and Marines, but they are assessed with an inadequate evaluation method, which prevents the identification of attributes of top performers. Included will be an in-depth
explanation of the knowledge, motivation, and organizational gap analysis model developed by
Clark and Estes (2008). Once the knowledge, motivation, and organizational influences are
defined, they can be used as a lens to examine Experts’ knowledge, motivation, and
organizational influences. The implementation of a new evaluation method will facilitate Unit
123’s performance goal that by 2022 it will have implemented a new evaluation tool, identified
attributes, and adjusted recruitment strategies seeking identified attributes.
The research questions, which Creswell (2014) sees as major signposts for readers, are:
1. What are the knowledge, motivation, and organizational attributes of Experts
necessary for Unit 123 to understand in order to develop an effective evaluation
method?
2. What knowledge and motivational attributes of top-performing Experts correlate
with Unit 123’s evaluation methods?
3. What are the Expert’s knowledge and motivation influences that interact with
Unit 123’s culture and how can these Expert’s knowledge and motivation
influences help achieve Unit 123’s goals?
Methodological Framework
Clark and Estes’ (2008) framework will be used for a systematic gap analysis of Leader
Expert’s KMO influences with an embedded, mixed methods design to identify an evaluation
method. This study analyzes the gap between actual and preferred performance by identifying
the knowledge, motivational, and organizational influences of Leader Experts. The framework
provides a lens for viewing performance gaps and other phenomena by understanding them through potential influences. Using personal knowledge and related literature, assumed
interfering elements will be identified and validated by using surveys, literature review, and data
analysis. Research-based solutions will be recommended and evaluated comprehensively.
Definitions
Assessment: The process by which Unit 123 Senior Leaders assess the application packets of military personnel from the Army, Navy, Air Force, and Marines who desire to attend Unit 123's selection course.
Selection: Unit 123's course to assess a candidate's physical capability, psychological suitability, self-drive, and intestinal fortitude through an extended land navigation course in a rigorous mountain environment.
Training Course: A six-plus month training course where candidates are intensely
scrutinized and taught the skills necessary to be a junior Expert including but not limited to
rifle/pistol marksmanship, close-quarters battle (CQB), and parachute operations.
Expert: A service member selected from the Army, Navy, Air Force, or Marines who has completed the training course and is currently serving on an operational team that is constantly preparing to deploy and deploying under Unit 123 in the interests of the United States Government.
Team Leader: An Expert who has been on an operational team for eight to twelve years, conducted six to ten combat deployments, and commanded a team of Experts.
Leader Expert: Leader Experts are Experts that have already completed eight to twelve
years on an operational team and are either currently an instructor or have already completed
instructor time and are leaders of multiple Team Leaders, their teams, and supporting elements.
Senior Leader: A Senior Leader has successfully served as a Leader Expert and currently holds a senior leadership position within Unit 123.
Organization of the Project
This dissertation is organized into five chapters. Chapter One introduces the reader to the key concepts and terminology and provides a roadmap of what will be addressed in the
study. The roadmap will lead the reader through the organization’s mission, goals, and
stakeholders as well as the initial concepts of gap analysis. In Chapter Two, a review of the
current literature relevant to the study will be presented. Topics of evaluation methods for job
performance, attributes of successful individuals, and the knowledge, motivational and
organizational factors will be addressed. Chapter Three outlines the assumed interfering
elements and identifies the methodology related to study stakeholders, participants, data
collection, and analysis. Chapter Four reviews, assesses, and conducts an in-depth analysis of
the data, results, and findings. In Chapter Five, possible solutions based on data and literature will be provided, along with recommendations for an implementation and evaluation plan for the solutions.
CHAPTER TWO: REVIEW OF THE LITERATURE
This literature review will examine gaps in the evaluation method and the attributes
desired of top-performing Experts in Unit 123. An overview of the literature regarding the
Army’s job performance evaluation method, the Non-Commissioned Officer Evaluation Report
(NCOER), and some private sector evaluation methods will be presented. In addition, a general overview will be given of the research on the attributes needed to be successful at the top of the Special Operations community and the methods available to assess those attributes. Included in the chapter will be an in-depth explanation of the knowledge, motivation,
and organizational gap analysis model developed by Clark and Estes (2008). Once the
knowledge, motivation, and organizational influences are defined, they can be used as a lens to
examine Experts’ knowledge, motivation and organizational influences on performance.
Evaluation Methods Assess Job Performance
A comprehensive review of both the military’s current evaluation methods and different
methods utilized by civilian organizations will help identify gaps and guide this research to an
effective evaluation model for this Special Operations unit.
The Army’s Evaluation Methods
Currently, the Army utilizes the NCOER, which is an overinflated and inaccurate
evaluation tool to assess military leaders and an even less effective tool within Special
Operations units (Johnson, 2012).
Minaudo (2007) investigated how Management by Objectives (MBO) was widely used
by top management companies and the United States military but faded out in the 1980s and
1990s. Minaudo studied 50 soldiers within the United States Army and NATO throughout 12
locations in four European countries. Ten of the 50 soldiers participated in an in-depth interview
process that focused on three research questions.
The research questions and a synopsis of Minaudo's findings for each are:
1. How do people describe the experience of using the MBO system in the
organization being studied? They had an even mix of positive and negative
experiences, but the soldiers overwhelmingly agreed that their goals are not
correlated to the organization’s mission (Minaudo, 2007).
2. What suggestions do people have for improving the MBO system and how do
these suggestions relate to the strengths and weaknesses of an MBO system as
highlighted in the literature? Minaudo (2007) found the areas of improvement were leadership involvement, training, inaccurate or inflated appraisals, understanding and execution of the MBO system, and the linkage between goals and the MBO system.
3. How does motivation or lack of motivation to use the MBO system translate into
supportive or non-supportive behavior for the MBO system? Minaudo (2007) identified five factors affecting motivation: avoiding punishment; personal pride and achievement; advancement or promotion; duty or obligation to develop subordinates; and meeting job requirements. The findings were inconclusive
because of insufficient data (Minaudo, 2007).
Wilson’s (2012) empirical article focused on 300 United States Army Master Sergeants,
one grade below the highest enlisted rank in the Army while attending a professional
development school necessary to become the highest rank in the Army. Wilson’s four research
questions asked, what is the relationship between the NCO's self-efficacy, self-esteem, locus of
control and their performance, overall performance, and overall potential? Wilson (2012) found that self-esteem and locus of control had a negative relationship with overall performance and potential, while self-efficacy had a positive relationship. Wilson also saw no relationship between self-efficacy, self-esteem, locus of control, and performance (Wilson, 2012). This
research looked at the evaluation of NCOs and how the validity and reliability of the NCOER
have come under great scrutiny since its revision in 1988 (Wilson, 2012). Wilson (2012)
concluded that NCOs do not put much weight on the NCOER as an evaluation tool, and the
Army should revise it. An evaluation tool designed to capture all the tenets of a successful
noncommissioned officer is vital. An attempt to update the NCOER was undertaken in 2015.
Vergun’s (2015) theoretical article discusses how the NCOER is outdated and attempts to
use one generic form to evaluate all NCOs regardless of rank or level of responsibility. Vergun
further explains how the NCOER makes it difficult to tell the difference between a stellar performer and an average performer, and expresses hope that the "new" 2015 NCOER will capture attributes and competencies of leadership and account for their level of responsibility. The NCOER has been updated and is attempting to adapt to the progression of
the Army, yet it is still not an effective evaluation tool for the top one percent of the Army, and
an additional tool must be tested. Since the NCOER is an MBO evaluation method that the
private sector stopped using decades ago, it would be prudent to examine what evaluation
methods have been developed and implemented external to the Army.
Evaluation Methods External to the Army
Job performance evaluations that assess employees' attributes are often conducted by researchers and should be examined to help the Army improve by identifying multiple evaluation methods. In a theoretical article, Brannick, Cadle, and Levine (2012) discuss four methods of
job analysis: critical incident technique (CIT), functional job analysis (FJA), the task inventory,
and DACUM or developing a curriculum. CIT uses critical incidents to identify effective and
ineffective reactions to job requirements in order to evaluate job performance. FJA is used to
determine the purpose of the job by talking to SMEs and evaluating the complexity of what
employees are asked to perform. Task inventory analysis, used by the Air Force, is a four-step process: identifying the required tasks, preparing the questionnaire, obtaining task ratings, and analyzing and interpreting the data. DACUM is widely used in education. The conventional validation strategy compares test/selection scores with the scores related to job performance for the same employees. Brannick et al. (2012) found correlations between tests and job performance,
especially cognitive tests. However, employees’ work duties and task completion are an
important part of job analysis (Brannick et al., 2012).
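To make the four-step task-inventory process concrete, here is a minimal sketch; the task names, the 1-7 rating scale, and aggregation by mean are assumptions for illustration, not the Air Force's actual instrument.

```python
from statistics import mean

# Step 1: identify the required tasks (names invented for illustration).
tasks = ["plan mission", "navigate terrain", "operate radio"]

# Step 2: prepare the questionnaire (one rating list per task).
questionnaire = {task: [] for task in tasks}

# Step 3: obtain task ratings from incumbents (hypothetical 1-7 scale).
responses = [
    {"plan mission": 6, "navigate terrain": 4, "operate radio": 2},
    {"plan mission": 5, "navigate terrain": 5, "operate radio": 3},
]
for response in responses:
    for task, rating in response.items():
        questionnaire[task].append(rating)

# Step 4: analyze and interpret, here by ranking tasks on mean rating.
for task, ratings in sorted(questionnaire.items(), key=lambda kv: -mean(kv[1])):
    print(f"{task}: mean rating {mean(ratings):.1f}")
```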
Miners and Cote's (2006) empirical study investigated two questions: does emotional intelligence positively relate to the job performance of organization members with low cognitive intelligence and, as such, compensate for low cognitive intelligence? Are individuals with high cognitive intelligence expected to exhibit high job performance, hence leaving little room for correction and improvement? The study examined 175 full-time employees of a large
public university (Miners & Cote, 2006). Each participant was tested for both cognitive and
emotional intelligence in a 100-minute session. Supervisors of each participant completed a
questionnaire to evaluate the job performance of the participant. The job performance evaluation consisted of a five-item scale from McCarthy and Goffin to assess task performance and a 16-item scale from Lee and Allen to assess organizational citizenship behavior (Lee & Allen, 2002;
McCarthy & Goffin, 2001). Research outside of the Army that addresses the evaluation of job
performance provides validated and existing evaluation tools that can be adapted for the purpose
of assessing the top one percent of the military. In order to develop an improved evaluation
method, research should be conducted to find what attributes the evaluation tool must identify
and rate.
Attributes of Successful Individuals
Identifying the attributes of individuals who make it to the top percentage of military and private sector organizations has been a consistent research subject and will continue to be in the
future. If researchers can identify the common blend of attributes of the highest level of these
top performers, a level of predictive analytics can be initiated for recruitment purposes.
Attributes of Successful Leaders
In order to identify the attributes of successful leaders, an understanding of how to assess
an individual’s mental, psychological, and physical profile should be determined in both the
military and the private sector. Brannick et al.'s (2012) theoretical article analyzed the
knowledge, skills, abilities, and other characteristics, predictor measures and performance
outcomes in job evaluations. Brannick et al. (2012) identified two schemes to break down attributes: Sensory, Motor, Intellectual, Rewards, and Personality (SMIRP) and the Occupational Information Network (O*NET). SMIRP elements include: sensory (vision, hearing, touch, taste, and smell); motor (requirements necessary for using the body to perform the job); intellectual/cognitive (information processing, including perception, thinking, and memory); rewards (interests, values, and related characteristics of employees during work that are motivating or intrinsically satisfying); and personality (behaviors such as conscientiousness, neuroticism, and extroversion, generally falling within the Big Five). O*NET is the source of
human abilities with six different descriptors: worker requirements; experience requirements;
worker characteristics; occupational requirements; occupation-specific requirements; and
occupation characteristics (Brannick et al., 2012).
Cline's (2017) dissertation focused on whether the Mission Critical Teams Instructor
Cadre Development Program has the ability to increase mission success, survivability, and
sustainability. Although Cline's dissertation is not peer-reviewed, it is important to include this research because it is one of the few studies of attributes in elite military units. Cline's (2017)
primary research question was, would a University Assisted, Mission Critical Team Instructor Cadre Development Program increase the ability of the Mission Critical Teams to achieve Mission Success, Survivability, and Sustainability? For this literature review, findings and
discussion will focus on Cline’s research into the common attributes of the Mission Critical
Teams’ (MCT) members. Cline conducts unstructured interviews, narrative inquiry, white paper
presentations, and MCT Summits, which consisted of 220 participants from over five
consecutive years. Cline (2017) conducted a systematic examination of a numerous number of
subjects for the detection of unwanted attributes MCTs that extrapolated the 20 most common
attributes from the 148 initial attributes. Cline ranked them in order of times of occurrence in
data collection: peer acceptance, adaptability, drive, professional, a bias for action, aptitude,
integrity, toughness, agency communicative mindfulness discerning, discipline, leadership,
accountability, fitness confidence, loyalty, trust, and courage (2017).
Boe (2017) researched using IQ tests, personality tests, interviews, the prognosis of
leadership, and academic potential to predict job performance in military officers. Boe’s
research asked the following questions: which character strengths do experienced military officers consider most important? To what degree is there consistency between the character strengths chosen in the present study and those in previous studies (Boe, 2017)? Boe's population
consisted of twenty-one experienced Norwegian Army Officers during a six-month basic officer
educational course (Boe, 2017). When the participants were asked to rank 24 character strengths, the
top ones consisted of teamwork, integrity, and persistence (Boe, 2017).
The purpose of Kreager's (2010) study was to determine the effectiveness
of the Computerized Special Operations Resiliency Test (C-SORT) in conjunction with both
cognitive ability and physical fitness for predicting successful training performance. The
research’s population consisted of 359 male U.S. Naval recruits from SEAL training (Kreager,
2010). Kreager asked the following questions: Does cognitive ability alone predict training and performance equal to or better than a battery of initiatives? What are the physiological and psychological responses to high-stress influences on performance in a military setting? Kreager utilized two tests: the C-SORT was created to predict training performance, in association with a candidate's physical fitness, while the Test of Performance Strategies (TOPS) was developed to
assess the psychological process that motivates successful athletic performance in both practices
and competition. Kreager’s results were contradictory to what was expected: physical fitness is
not a statistically significant predictor; the C-SORT offered minimal findings since only one
characteristic, cognitive ability, was associated with performance; the participant’s age was also
not statistically significant.
Cognitive capacity. Miners & Cote's (2006) empirical research and findings were
discussed previously under the evaluation methods external to the military, but their research
regarding cognitive capacity is worth highlighting. In the compensatory model, cognitive intelligence (CI) moderates the association between emotional intelligence (EI) and job
performance. Miners and Cote (2006) assert that EI is a type of intelligence and used the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) to measure EI. CI is positively
related to the task performance and organizational citizenship behavior dimensions of job
performance by using the Culture Fair Intelligence Test (Miners & Cote, 2006). Miners and
Cote found that employees with high EI can make up for comparatively low CI.
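The compensatory pattern can be illustrated with a small synthetic regression: performance is modeled on EI, CI, and their product, and a negative interaction term is the signature of EI mattering more when CI is low. The data and coefficients below are simulated for illustration, not Miners and Cote's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ei = rng.normal(0.0, 1.0, n)  # emotional intelligence (standardized, synthetic)
ci = rng.normal(0.0, 1.0, n)  # cognitive intelligence (standardized, synthetic)
# Simulated performance with an assumed negative EI x CI interaction.
perf = 0.4 * ci + 0.3 * ei - 0.25 * ei * ci + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an interaction term.
X = np.column_stack([np.ones(n), ei, ci, ei * ci])
coef, *_ = np.linalg.lstsq(X, perf, rcond=None)
for name, b in zip(["intercept", "EI", "CI", "EI x CI"], coef):
    print(f"{name:9s}: {b:+.3f}")
# A negative EI x CI estimate means EI relates more strongly to performance
# when CI is low -- the compensatory (moderation) pattern described above.
```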
Psychological profile. Picano, Roland, Rollins, & Williams' (2002) empirical study
focused on 190 male U.S. Army soldiers during a psychological suitability assessment for
assignment to an organization with unusually high demands, nonstandard operations in extreme
environments and requiring extended separation from family. Picano et al.'s (2002) undeclared research question was: how does the Sentence Completion Test (SCT) compare to the Minnesota Multiphasic Personality Inventory (MMPI), peer ratings, clinical interviews, and other assessment tools as a personality assessment instrument for high-level military soldiers? The SCT was found to be valid and reliable at identifying the level of defensiveness towards psychological testing (Picano et al., 2002).
Job-specific performance. Charbonnier-Voirin and Roussel's (2012) empirical article
researched the effectiveness of a new measurement tool for an individual’s performance in
organizations. The study consisted of three samples of the population: 111 participants from a
variety of companies, 228 from a telecommunications and service company, and 296 from an
aircraft company (Charbonnier-Voirin & Roussel, 2012). Charbonnier-Voirin & Roussel's
research question was, how does their new adaptive performance measurement tool compare to
Han & Williams, and Griffin & Hesketh, and Pulakos tool is the current standard? Utilizing 36
identified items and using a seven-point Likert scale, Charbonnier-Voirin & Roussel highlighted
handling emergencies and crises, managing work stress, solving problems creatively, dealing
with uncertain and unpredictable work situations, training and learning effort, interpersonal
adaptability, cultural adaptability, and physical adaptability. Charbonnier-Voirin & Roussel
IMPROVING THE EVALUATION METHOD 28
found that their new measurement tool largely corroborated Pulakos’ research except for
Physical Adaptability and Interpersonal and Cultural Adaptability, both reasoned to be due
Charbonnier-Voirin & Roussel’s population lacking a physical component to the job
requirements (2012).
Pulakos, Arad, Donovan, and Plamondon (2000) conducted research involving two studies: the first developed a model of adaptive performance, and the second was an empirical study of the model. Study one investigated 9,462 incidents collected from 21 different jobs within 11 military, other government, and private sector organizations (Pulakos et al., 2000). Pulakos et al. concluded that adaptive performance appears to be a multidimensional construct and that, having identified a taxonomy of work-relevant adaptive behaviors, the next step is to examine the model and how it can be implemented for different jobs. Pulakos et al.'s second study was an empirical study of 175 Army soldiers from nine different jobs as well as other professions where adaptability is necessary for success, such as police investigators, state troopers, attorneys, executive assistants, and air traffic controllers. Pulakos et al.'s (2000) goal was to develop a taxonomy of adaptive performance that can be observed and measured regarding each individual's proficiency or level of contribution. Overall, Pulakos et al. (2000) found that adaptive performance requirements are low to moderate for the majority of jobs and tend to be much higher for higher-level professions or supervisory jobs such as NCOs, Special Forces, and research scientists. Pulakos et al.'s (2000) research identified eight dimensions of adaptive performance that will be considered for this research: handling emergencies and crises; handling work stress; solving problems creatively; dealing with uncertain and unpredictable work situations; learning work tasks, technologies, and procedures; demonstrating interpersonal adaptability; demonstrating cultural adaptability; and demonstrating physically oriented adaptability.
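To make the scoring logic of such instruments concrete, the minimal sketch below shows one way responses to a seven-point Likert instrument, such as Charbonnier-Voirin and Roussel's (2012) 36-item tool, could be rolled up into mean scores on Pulakos et al.'s (2000) eight dimensions. The item identifiers, item-to-dimension mapping, and function name are illustrative assumptions, not the published instrument.

```python
# Minimal sketch: rolling seven-point Likert items up into Pulakos et al.'s
# (2000) eight adaptive-performance dimensions. The item-to-dimension mapping
# below is hypothetical; the published instrument assigns its own 36 items.
from statistics import mean

DIMENSIONS = [
    "handling emergencies and crises",
    "handling work stress",
    "solving problems creatively",
    "dealing with uncertain and unpredictable work situations",
    "learning work tasks, technologies, and procedures",
    "demonstrating interpersonal adaptability",
    "demonstrating cultural adaptability",
    "demonstrating physically oriented adaptability",
]

def score_dimensions(responses, item_map):
    """Average the 1-7 Likert responses belonging to each dimension.

    responses: dict item_id -> rating (1-7)
    item_map:  dict item_id -> dimension name
    """
    buckets = {d: [] for d in DIMENSIONS}
    for item, rating in responses.items():
        if not 1 <= rating <= 7:
            raise ValueError(f"item {item}: rating {rating} outside 1-7 scale")
        buckets[item_map[item]].append(rating)
    # Report only the dimensions that actually had items answered.
    return {d: mean(r) for d, r in buckets.items() if r}

# Hypothetical usage: four items mapped onto two dimensions.
item_map = {"q1": DIMENSIONS[0], "q2": DIMENSIONS[0],
            "q3": DIMENSIONS[1], "q4": DIMENSIONS[1]}
responses = {"q1": 6, "q2": 7, "q3": 4, "q4": 5}
print(score_dimensions(responses, item_map))
# {'handling emergencies and crises': 6.5, 'handling work stress': 4.5}
```

Each dimension's score here is simply the mean of its items; a validated instrument would also examine subscale reliability before interpreting those means.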
High-stress environments. Bartone, Roland, James, & Williams (2008) used the
Dispositional Resilience Scale (DRS), which is a 45-item hardiness measure, on 1,138 male
Special Forces candidates. Hardiness is a psychological category associated with resilience,
health, and performance under high-stress situations (Bartone et al., 2008). Bartone et al.'s
(2008) research questioned if the higher level of hardiness a candidate has, according to the
DRS, corresponds with a higher completion rate of Special Forces training. Bartone et al. (2008)
found that candidates with higher hardiness scores had a significantly higher graduation rate, and
logistic regression confirmed hardiness as a significant predictor of graduation of the Special
Forces training.
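As a rough illustration of the type of analysis Bartone et al. (2008) report, the sketch below fits a logistic regression of graduation status on a hardiness score. All data are synthetic; the sample size, score range, and coefficients are invented for demonstration and do not reproduce the published study.

```python
# Minimal sketch of a Bartone et al. (2008)-style check: does a DRS hardiness
# score predict course graduation? Everything below is synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
hardiness = rng.normal(30, 5, n)            # synthetic 45-item DRS totals
# Simulate a true positive effect of hardiness on graduation probability.
logit_p = -6.0 + 0.2 * hardiness
graduated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(hardiness)              # intercept + hardiness predictor
model = sm.Logit(graduated, X).fit(disp=False)
print(model.summary())                      # coefficient, p-value, CI
print("odds ratio per DRS point:", np.exp(model.params[1]))
```

A significant positive coefficient on the hardiness term, as in the published study, would indicate that each additional DRS point multiplies the odds of graduating by the reported odds ratio.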
In summary, there is an extensive and ever-expanding list of desired attributes for top performers in all career fields, along with focused and deliberate investigations of the top physical, mental, and psychological attributes required of Army soldiers at the highest level. Once attributes have been identified, an in-depth look at the knowledge, motivation, and organizational influences associated with high performers should be conducted to help establish a new evaluation method.
Explanation of Clark and Estes (2008) Framework
Clark and Estes' (2008) gap analysis model provides a systematic framework for improving organizational performance by analyzing the gap between existing and preferred performance. After a gap is identified, this conceptual framework studies the knowledge and motivation of the stakeholder and how these are essential to reaching stakeholder and organizational goals. Clark and Estes (2008) assert that, in addition to understanding stakeholders' knowledge and motivations, organizational influences provide either the needed support or undesirable barriers to the accomplishment of the goal. Krathwohl (2002) categorizes knowledge influences into four knowledge types: factual knowledge is the basic elements that individuals must know; conceptual knowledge is the interrelationships among basic elements, theories, models, and structures; procedural knowledge is knowing how to do something; and metacognitive knowledge is the general awareness and knowledge of one's own cognition.
Motivation gets people moving, drives them to continue moving, and dictates how much time and effort they will invest in tasks (Clark & Estes, 2008). Clark and Estes (2008) emphasize that motivation falls under three motivational indexes: active choice, taking action to pursue the attainment of a goal; persistence, finishing regardless of adversity or distraction; and mental effort, weighing the effort required to achieve the goal and deciding to invest it. Lastly, a solid understanding of the organizational influences and culture is critical to accomplishing the goals and effecting real change (Clark & Estes, 2008).
In order for Unit 123 to address the Leader Experts' goal, Clark and Estes' (2008) gap analysis model will be utilized to explore the knowledge, motivation, and organizational needs required to meet their performance goal. The following section reviews the literature related to the assumed knowledge and skills underlying the Leader Experts' performance goal. Then, the assumed motivational influences of Leader Experts will be addressed, examining the motivations of Experts at each level and their impact on reaching the performance goal. Lastly, the assumed organizational influences and culture of Unit 123 will be addressed, along with how they could positively or negatively affect the achievement of the Leader Experts' goals. Results from interviews, surveys, and a literature review will be analyzed to determine the knowledge, motivation, and organizational influences on the performance goal; these are thoroughly examined in the methodology chapter.
Leader Expert’s Knowledge and Motivation Influences
Knowledge and Skills
A comprehensive review of literature from military, business, and education fields is
necessary for Unit 123 to achieve its organizational goal. By 2019, 100% of Leader Experts will
have taken part in the survey/interview process and the new evaluation methods designed to
identify KMO attributes of Experts. An examination of the particular knowledge and skills of
the Leader Experts must be accomplished in order to achieve this goal. The Leader Experts must
understand the knowledge and skills themselves in order to identify the desired knowledge and
skills in Experts. According to Clark and Estes (2008), it is essential to determine if employees
(Experts) possess the proper knowledge and skills to be successful and productive at their jobs.
Along with determining an Expert's knowledge and skills, an analysis of Unit 123's performance problems and an examination of how knowledge and skills influence the organization can help evaluate an Expert's performance. Unit 123's lack of an effective evaluation method is a contributing factor to Unit 123's possible gap in knowledge of which attributes are consistent in high-performing Experts. Such a gap in knowledge would indicate a need for a gap analysis to identify what steps need to be taken to reach the Training Section's goals (Clark & Estes, 2008).
Krathwohl (2002) categorizes knowledge influences into four knowledge types: factual, conceptual, procedural, and metacognitive. Factual knowledge consists of the basic elements that individuals must know; conceptual knowledge involves the interrelationships among basic elements, theories, models, and structures; procedural knowledge is knowing how to do something; and metacognitive knowledge is the general awareness and knowledge of one's own cognition (Krathwohl, 2002). In the following literature review of knowledge influences, this researcher focused on the factual, conceptual, and procedural knowledge influences.
Factual knowledge influence. Factual knowledge is commonly known as facts, such as terminology or details, which must be understood to solve a problem effectively (Rueda, 2011). An example of factual knowledge within a military organization is understanding the capabilities and limitations of weapon systems, air support platforms, subordinate units, and host nation forces, and the legal restrictions of each when conducting operational planning. Unit 123's factual knowledge influence is the Leader Experts' need to identify attributes of successful and unsuccessful Experts. Cline's (2017) examination of Mission Critical Teams (MCTs), with which Unit 123 is consistent, extrapolated the 20 most common attributes from the 148 initial attributes. Ranked in order of occurrence in the data collection, they are: peer acceptance, adaptability, drive, professionalism, bias for action, aptitude, integrity, toughness, agency, communicative, mindfulness, discerning, discipline, leadership, accountability, fitness, confident, loyalty, trust, and courage (Cline, 2017). These common attributes act as factual knowledge attributes of Experts in Unit 123 and, according to Krathwohl (2002), are the terminology and basic elements that Instructors must understand to solve problems.
In order to assess this knowledge influence, the Leader Experts must research selection packets and counselor notes from successful and unsuccessful Experts and be able to identify key attributes that are consistent prior to, during, and years after completion of training. Identifying these attributes is key for Unit 123 to attain its organizational goal. Currently, Unit 123 uses extensive personality tests, IQ tests, review boards, and professional assessments of candidates before they become Experts. Even with this method, it is difficult to predict which selected candidates will be the most successful in their careers (Boe, 2017). By cross-referencing these attributes prior to and during training with attributes identified through a new evaluation method, a new understanding of consistent attributes can be determined. Unit 123's next obstacle is conceptualizing what makes someone a successful or unsuccessful Expert in Unit 123.
Conceptual knowledge influence. Conceptual knowledge pertains to the interrelationships between the different elements of an organization and how the knowledge of categories, principles, and generalizations functions together (Krathwohl, 2002). An example of conceptual knowledge within a military organization would be knowledge of which duty positions are used to develop "key" leaders and which are not considered developmental positions and are instead used for soldiers not considered future leaders. The Leader Experts' conceptual knowledge influence is their need to know which Experts are considered successful and unsuccessful while serving in Unit 123. As with leaders in all fields, there are high, mid, and low performers. The instructors' knowledge of the organization's structure and of each Expert's hierarchical position enables another method of identifying successful and unsuccessful Experts. In order to assess this knowledge influence, the Leader Experts need to categorize a list of Experts as successful, average, or unsuccessful. This alone will not determine successful Experts, but it can provide an additional data point and possibly demonstrate how one's perception reflects performance. An established list of successful and unsuccessful Experts is necessary to cross-reference selection and training data to find KMO attributes and achieve the organizational goal.
Procedural knowledge influence. Procedural knowledge is knowing how to do something, or the methods, techniques, algorithms, and methodologies required to achieve goals (Rueda, 2011). An example of procedural knowledge within a military organization would be the Army's evaluation tool, the NCOER, specifically the step-by-step process and standards to write, validate, and submit the report through the proper channels. As confirmed by Johnson (2012), the NCOER is overinflated and includes inaccurate evaluations of military leaders, especially in Special Operations units. Unit 123 does not have an internal mechanism in place to professionally evaluate Experts once they are accepted into the organization and currently relies on the U.S. military's NCOER. Unit 123's Leader Experts' procedural knowledge influence is that Unit 123 does not have an effective evaluation tool to assess current Experts. A new internal evaluation method would not only be a more effective method of evaluation but would also establish a new "how to do something," which, according to Krathwohl (2002), is the basis for procedural knowledge.
A specific evaluation method to assess Experts is necessary to attain the Leader Experts' organizational goal. Leader Experts must develop survey and interview questions for Experts' first-line leaders that go beyond the NCOER's assessment methods. Even though these Experts initially underwent an evaluation of military skills, stress resilience, adaptability, mental and social intelligence, and psychological attributes, they have not all been evaluated after becoming an Expert (Picano, Roland, Williams, & Rollins, 2006). The Leader Experts' survey and interview questions will evaluate not only Experts' performance skills but also their knowledge and motivations. Since these questions are asked after years of job performance, Unit 123 can evaluate stamina over long periods of stress, quality of social interactions with fellow Experts, and competence in practical situations (Cline, 2017). These questions will establish a base for a new procedure to evaluate Experts; once the responses to the questions are consolidated, positive and negative attributes can be drawn from the data. These attributes will be evaluated and compared as a factual knowledge influence. Table 2 below provides the organizational mission, organizational goal, and the knowledge influences, knowledge types, and knowledge influence assessments identified in the literature review. Table 2 indicates the procedural, factual, and conceptual influences that help the Training Section achieve its organizational goal.
Table 2

Knowledge Influences, Types, and Assessments for Knowledge Gap Analysis

Organizational Mission: Recruit, assess, select, and train Unit 123 (a pseudonym) Experts; provide continuity/consistency for institutional training and subject matter expertise for Unit 123 priorities and initiatives.

Organizational Global Goal: By June 2022, Unit 123 will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes.

Stakeholder Goal: By 2019, 100% of Leader Experts will have taken part in the survey/interview process and the new evaluation methods designed to identify KMO attributes of Experts.

Knowledge Influence: Leader Experts need an evaluation method to properly assess Experts within Unit 123.
Knowledge Type: Procedural.
Knowledge Influence Assessment: The Leader Experts will be asked to help develop survey and interview questions for Experts' first-line leaders to evaluate Experts.

Knowledge Influence: Leader Experts need to identify common attributes of successful and unsuccessful Experts.
Knowledge Type: Factual.
Knowledge Influence Assessment: Training Section Instructors will be asked to research selection packets and counselor notes from successful and unsuccessful Experts.

Knowledge Influence: Leader Experts need to identify which Experts are considered successful and unsuccessful while serving in Unit 123.
Knowledge Type: Conceptual.
Knowledge Influence Assessment: The Leader Experts will be asked to give five examples of successful and unsuccessful Experts.
Motivation
Motivation gets people moving, drives them to continue moving, and dictates how much time and effort they will spend on tasks (Clark & Estes, 2008). Clark and Estes (2008) assert that motivation falls under three motivational indexes: active choice, taking action to pursue the attainment of a goal; persistence, finishing regardless of adversity or distraction; and mental effort, weighing the effort required to achieve the goal and deciding to invest it. Through expectancy-value motivational theory, Eccles (2009) synthesizes motivation into two fundamental questions: "Do I want to do the task?" and "Can I do the task?" Eccles (2009) viewed the motivation to answer the first question as determined by four constructs: intrinsic interest, or perceived enjoyment while engaging in the task; attainment value, or engagement in the task aligned with one's identity; utility value, or the value of the task for achieving one's immediate and long-range goals; and perceived cost, or one's cost to participate in the activity. With regard to the second question, "Can I do the task?" Eccles (2009) states that answering yes predicts better performance and higher motivation. This confidence and belief in one's capability of attaining goals falls under social cognitive theory as self-efficacy theory (Bandura, 2005; Usher & Pajares, 2006). This researcher will use self-efficacy theory and the utility value construct as lenses to examine the motivations of the Training Section's Instructors towards attaining the organizational goal.
Self-efficacy theory. Self-efficacy is one's own perception of one's capability to complete a task (Bandura, 2005; Pintrich, 2003; Rueda, 2011; Usher & Pajares, 2006). A portion of one's belief in one's capabilities, and one's belief about how others perceive those abilities, derives from personal and social constructs (Pintrich, 2003). Usher and Pajares (2006) theorize that one's self-efficacy is a result of prior knowledge, previous successes and failures, and feedback from others with regard to completing the task. Bandura (2000) asserts that these negative and positive perceptions directly correlate to one's motivation to take action. The higher the self-confidence in one's own capability, the higher the self-efficacy (Bandura, 2000). Clark and Estes (2008) emphasize that high self-efficacy can increase the motivation of individuals to achieve organizational goals. Rueda (2011) contributes to this thought, stating that individuals with low self-efficacy become demotivated and disengage from the goal. Self-efficacy within the boundaries of this research, especially regarding Experts, is worth deeper exploration as an influence.
The motivation of a candidate seeking assignment to high-demand and non-routine military missions is one of the five domains psychologists assess for the suitability of Experts (Picano, Roland, Rollins, & Williams, 2002). Prior to becoming Experts at Unit 123, these soldiers were at the top of their peer groups. Once they completed the training and were hired as Experts, they were most likely no longer the most proficient of their teammates (Cline, 2017). Applying self-efficacy theory, these soldiers had very high self-efficacy and extreme confidence; however, once they were hired by Unit 123, they became average compared to their exceptional teammates, which could impact their self-efficacy. A common expression in Unit 123 is "I've never tried so hard to be average" ("Unit 123 Website," 2018). Usher and Pajares (2006) discuss how mastery experience is proven to predict heightened self-efficacy, and Experts gain a great deal of mastery in the "hard" skills related to the job, including shooting, CQB, mobility, climbing, and parachute operations. Experts maintain the self-efficacy related to those skills; even though their performance is not always the best among their peers, they are still at the top of the profession. Unit 123 places many additional "soft" tasks on Experts beyond traditional training, including but not limited to high-level political and international briefings and engagements, diplomacy, interagency coordination, and other activities reliant on public speaking, articulation, and personal, professional image and relationship-building skills.
Unit 123 has only recently begun to prepare Experts with training and guidance for these "soft" skills; historically, Experts were told to figure it out and perform. Similarly, Unit 123 has only recently started academic training in leadership, and it still does not have an academic or formal instructor training program. Although Leader Experts have been training soldiers for ten to twenty years on operational teams, they have not received any formal education in instructing new candidates. This gap was the basis behind Cline's (2017) dissertation and the development of The University Assisted Mission Critical Team Instructor Cadre Development Program. Most instructors have not had the opportunity to attend any of the leadership courses or Cline's program. This possible gap has the potential to cause a lack of confidence, or reduced self-efficacy, in an Expert's instruction, leadership, or soft-skills capability. Overall, the expected self-efficacy of Experts is high in hard skills. By having Expert leaders and instructors self-assess their confidence in evaluating subordinate job performance, a greater understanding of their self-efficacy in their soft skills can be gained. In order for this research to be effective, these Expert leaders and instructors must believe they are involved and contributing to the study to maintain the motivation to persist (Usher & Pajares, 2006).
Utility value construct. Utility value is one of the four value constructs under expectancy-value theory (Eccles, 2009; Rueda, 2011). Utility value is how much the task aligns with an individual's personal and professional goals and what the cost-benefit analysis of performing the task is (Eccles, 2009). Utility value becomes a motivator when the individual perceives that the benefit of the task outweighs its cost, especially when the benefit grows faster than the cost.

Every Leader Expert and instructor in Unit 123 has undergone the assessment, selection, and training process and understands firsthand how it works. Unit 123 has been very successful in attaining quality individuals, but every Expert in the organization has worked with Experts who, even though they completed the same training, are not as competent or morally stable. These less-than-optimal Experts do not last long, but they are reminders that the process is not perfect. Leader Experts and instructors see the instruction hours, equipment, and financial costs required just to train an Expert and understand the total cost to the organization and the U.S. military. An additional cost of a less-than-optimal Expert on an operational team is the potential for that Expert to not represent Unit 123 or the U.S. government properly. Beyond less-than-optimal Experts on teams, there is also the possibility that the assessment and selection process will miss quality Experts. This is difficult to quantify because Unit 123 does not know when it misses a superior Expert. Unlike a professional sports team, where a missed quality athlete plays for a rival and the missed opportunity is very apparent, if Unit 123 misses a quality soldier, that soldier does not go fight for ISIS. Expert leaders and instructors need to see the value of assessing, selecting, and training the best possible Experts, and that by identifying key attributes of both successful and unsuccessful Experts, Unit 123 leaders can attain a higher percentage of quality Experts to meet its mission. Table 3 indicates the assumed motivational influences that aid Unit 123 in achieving its organizational goal.
Table 3

Motivation Influences

Organizational Mission: Recruit, assess, select, and train Unit 123 (a pseudonym) Experts; provide continuity/consistency for institutional training and subject matter expertise for Unit 123 priorities and initiatives.

Organizational Global Goal: By June 2022, Unit 123 will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes.

Stakeholder Goal: By 2019, 100% of Leader Experts will have taken part in the survey/interview process and the new evaluation methods designed to identify KMO attributes of Experts.

Assumed Motivation Influence (Utility Value): Leader Experts need to see the usefulness of identifying attributes of high-performing Experts to improve the assessment and selection process.
Motivational Influence Assessment (Interview): "What is the value in identifying attributes of high performing experts to improve the assessment and selection process?"

Assumed Motivation Influence (Self-Efficacy): Leader Experts are confident in their ability to accurately evaluate an Expert's job performance under their command.
Motivational Influence Assessment (Interview): "How confident do you feel in your ability to evaluate subordinate Experts' job performance?"
Organization
An organization’s culture is the driving force for change, or it can be the barrier that
prevents positive change (Clark & Estes, 2008). Unit 123 must look beyond the knowledge and
motivation gaps and address the embedded organizational factors that influence Unit 123 from
the accomplishment of its performance goal (Clark & Estes, 2008). This study evaluates how
Unit 123’s need to maintain a culture that encourages innovation, high standards, and
performance improvement will influence the knowledge and motivational factors impacting the
achievement of the performance goal. This section will start with a general organization theory
overview and transition to focus on the organizational models and settings of Unit 123 and how
they affect the Leader Experts' accomplishment of their goal.
General organization theory. In order to examine an organization's culture, an analysis of visible artifacts, espoused beliefs, values, and behavioral norms, along with the underlying interrelationships and intangible assumptions, must be undertaken (Schein, 2017). The visible artifacts are easy to identify, but the reviewer's bias towards their organization must be accounted for, and the underlying assumptions are fundamentally what the organizational culture relies on (Schein, 2017). Culture is often thought of as a group's shared identity and creates stability, and when a leader uses power to enforce new behavior, it shapes the culture and, in turn, changes the group (Schein, 2017). Kezar (2001) asserts that cultural and organizational change is a natural and inevitable part of organizational development. Organizations that constantly adapt and change are successful in the long term (Senge, 1990). An analysis of an organization's cultural models and settings provides a holistic view of the organization's culture (Gallimore & Goldenberg, 2010). The study of Unit 123 will investigate how the cultural models and settings influence the Leader Experts' attainment of their goal.
Cultural models. Cultural models are often invisible beliefs, values, and attitudes that result in cultural practices and shared mental schemas within an organization (Gallimore & Goldenberg, 2010). Unit 123's culture statement is "The Relentless Pursuit of Excellence in Everything that we do" (Unit 123 Website, 2018). This culture statement is viewed as a cultural model, along with a second cultural model: Unit 123 maintains a culture that encourages innovation, high standards, and performance improvement.
Culture statement. For Leader Experts to construct a new evaluation method and identify attributes common amongst top-performing Experts, Unit 123 needs to uphold its culture statement, "The Relentless Pursuit of Excellence in Everything that we do" (Unit 123 Website, 2018). The culture statement acts as Unit 123's collective identity, which, Bolman and Deal (2013) assert, strengthens and bonds the organization's internal as well as external perception. Although Experts are the main effort of Unit 123, they comprise roughly five percent of the organization, and the culture statement encompasses all elements of Unit 123, both operational and supporting. Clark and Estes (2008) discuss how the achievement of performance goals is interconnected with proper support and resources from all organizational elements. All elements of Unit 123, from the front-line Experts to the logistical and culinary staff, are expected to live the culture and constantly pursue excellence, regardless of the task.
Performance orientation. Unit 123 needs to maintain a culture that encourages innovation, high standards, and performance improvement if it wants to enable the Leader Experts. Grove (2005) asserts that high-performing organizations encourage as well as reward innovation, high standards, and performance improvement. Grove (2005) expands on high-performance organizations and describes positive characteristics, such as valuing training, development, competitiveness, materialism, and formal feedback, as necessary for performance improvement. Unit 123 is consistent with each of these characteristics except formal feedback. Unit 123's performance orientation is consistent with high-performing organizations, but if it wants to live up to its culture statement, excellence must extend to everything it does, including feedback.
Cultural settings. Cultural settings are often the visible manifestations of the cultural models, such as rewards, communication, and organizational policies and rules (Gallimore & Goldenberg, 2001). Unit 123's cultural settings are an ineffective method for performance feedback and a culture in which Experts are not always aware of how they stand in the eyes of superiors.
Performance feedback. Unit 123 needs to create an effective method for performance
feedback if it wants to uphold its culture statement and performance orientation. A continuous
feedback loop is vital for professional development and achievement of organizational goals
(Archibald, Coggshall, Croft, & Goe, 2011). Unit 123 provides constructive feedback
consistently during the initial training course but lacks an established evaluation method that
would provide feedback to Experts once they are on a team. Without accurate feedback that
identifies the skills and knowledge that the Expert lacks, improvement cannot be expected
(Anderman & Anderman, 2009). Experts consist of the top one percent of all military service
members, but an evaluation method to differentiate between them does not exist. Proper,
consistent feedback cannot be expected until an evaluation method is developed.
Ranked amongst other Experts. Unit 123 needs to provide feedback to Experts so they know how they rank amongst other Experts. The lack of an effective evaluation process described above prevents leaders from being transparent with Experts about how they are valued in the organization. Rueda (2011) stated that without transparent and systematic processes, organizations are at risk of failure. Clark and Estes (2008) assert that not only do employees benefit from a feedback loop, but organizational performance also increases when leadership stays engaged with employees and continuously assesses their performance. Table 4 below provides examples of interview questions focused on unearthing the assumed cultural models and settings that impact the achievement of the goal, and it indicates the assumed organizational influences that aid Unit 123 in the achievement of its organizational goal.
Table 4

Summary of Organizational Influences

Organizational Mission: Recruit, assess, select, and train Unit 123 (a pseudonym) Experts; provide continuity/consistency for institutional training and subject matter expertise for Unit 123 priorities and initiatives.

Organizational Global Goal: By June 2022, Unit 123 will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes.

Stakeholder Goal: By 2019, 100% of Leader Experts will have taken part in the survey/interview process and the new evaluation methods designed to identify KMO attributes of Experts.

Cultural Model Influence: The organization needs to uphold its culture statement.
Organizational Influence Assessment (Interview): How can Experts uphold Unit 123's culture statement?

Cultural Model Influence: The organization needs a culture that encourages innovation, high standards, and performance improvement.
Organizational Influence Assessment (Interview): How can Unit 123 encourage innovation, high standards, and performance improvement?

Cultural Setting Influence: The organization needs to create an effective method for performance feedback.
Organizational Influence Assessment (Interview): How can Unit 123 create an effective method for performance feedback?

Cultural Setting Influence: The organization needs to provide feedback to Experts so they know how they rank amongst other Experts.
Organizational Influence Assessment (Interview): How can Unit 123 provide feedback to Experts so they know how they rank amongst other Experts?
Conceptual Framework: The Interaction of Knowledge, Motivation, and Organizational
Context
According to Maxwell (2013), the conceptual framework is the explanation of the interactions between the concepts, theories, and variables of a study in order to address discrepancies and possible gaps. Merriam and Tisdell (2016) assert that "conceptual framework" and "theoretical framework" are often interchanged in research and must be understood when conducting a study. This study uses the conceptual framework to generate theory by connecting previous literature and research to the proposed research questions (Maxwell, 2013; Merriam & Tisdell, 2016). The framework provides a lens for viewing performance gaps and other phenomena by understanding them through potential influencers.
Clark and Estes’ (2008) gap analysis break down influencers into three primary
categories; stakeholder knowledge and motivation and organizational influences. Even though
these potential influences are presented independently of each other, they are not mutually
exclusive and do not remain isolated from each other. The knowledge and motivational factors
impact organizational influences and vice-versa which causes a requirement to understand the
relationship between all three influences. The conceptual framework presented studies previous
research from education, business, and government studies and applies it to achieving Unit 123’s
goals. A holistic view of the influences is necessary for Unit 123 and Leader Experts to develop
an effective evaluation method and identify attributes of top-performing Experts. In order for
Unit 123 to achieve its goal by June 2022: it will have to implement a new evaluation method,
identified attributes, and adjusted recruitment strategies seeking identified attributes. Unit 123
must also understand its cultural models and settings.
Schein (2017) discusses the complexity of defining culture but asserts that culture is the way employees think, feel, and behave when dealing with similar problems and is often seen as a group's shared identity, which acts as a source of stability and consistency. This study differentiates the notions of cultural models and cultural settings when discussing culture and context. Cultural models are invisible, automated values, beliefs, and attitudes towards an organization, while cultural settings are the visible manifestations of cultural models (University of Southern California, 2018). Unit 123's culture statement is "The Relentless Pursuit of Excellence in Everything that we do" (Unit 123 Website, 2018). This culture statement is viewed as a cultural model, along with a second cultural model: Unit 123 maintains a culture that encourages innovation, high standards, and performance improvement. Unit 123's unintended manifestations of the cultural models become the cultural settings. The cultural settings are an ineffective method for performance feedback and a culture in which Experts are not always aware of how they stand in the eyes of superiors. Schein (2017) views the visible cultural settings as easy to identify but difficult to analyze due to the underlying assumptions of the organizational context. Unit 123's cultural models help support the stakeholders' goal by providing an environment of innovation and improvement. According to Langley et al. (2009), successful change is more likely when active feedback is in place and there is a shared vision of improvement that is clearly articulated. Along with Unit 123, the Leader Experts, the stakeholders, have an active role in the achievement of this goal.
Leader Experts will have a critical role during this study. Leader Experts will participate in an interactive interview process to develop a consensus of goals and objectives, which will lead to the survey questions used to determine what knowledge, motivation, and organizational attributes are common amongst top-performing Experts and what survey questions will be used to identify them (Creswell, 2014; Garson, 2014). A Leader Expert's understanding of the knowledge and motivational influences of Experts will help in the achievement of this goal.
The conceptual framework focuses on the Leader Experts' procedural, factual, and conceptual knowledge types. Procedural knowledge is knowing how to do something, like the Leader Experts' knowledge of current evaluation methods (Rueda, 2011). To achieve their goal, Leader Experts must understand the current evaluation model in order to help develop a specific method to evaluate a very small subset of soldiers, the Experts. Factual knowledge is commonly known as facts, such as terminology or details like a list of ideal attributes of Experts, that Leader Experts must understand to solve a problem effectively (Rueda, 2011). Leader Experts must consolidate and prioritize this factual knowledge to effectively aid in the achievement of their goal. Conceptual knowledge pertains to the interrelationships between the different elements of an organization and how the knowledge of categories, principles, and generalizations functions together (Krathwohl, 2002). Leader Experts must grasp the conceptual knowledge of which duty positions require different levels of performance from Experts and how they are categorized to achieve their goal. Along with knowledge influences, motivational influences must be understood and evaluated by Leader Experts.
Motivation gets people moving, drives them to continue moving, and dictates how much time and effort they will invest in tasks (Clark & Estes, 2008). Self-efficacy theory and the utility value construct will be used as lenses to examine the motivations of the Leader Experts towards attaining Unit 123's goal. Self-efficacy is one's perception of one's capability to complete a task, derived from personal and social constructs (Bandura, 2005; Pintrich, 2003; Rueda, 2011; Usher & Pajares, 2006). Although Experts have heightened self-efficacy in "hard" skills due to mastery during training, their self-efficacy in skills related to instruction and public speaking is lower due to a lack of training (Usher & Pajares, 2006). To achieve Unit 123's goal, Leader Experts and Expert Instructors must believe they are involved and contributing to the study to maintain the motivation to persist (Usher & Pajares, 2006). Utility value is how much the task aligns with an individual's personal and professional goals and what the cost-benefit analysis of performing the task is (Eccles, 2009). Leader Experts need to see the value of assessing, selecting, and training the best possible Experts; see the instruction hours, equipment, and financial costs required just to train an Expert; and understand the total cost to the organization and the U.S. military.
Leader Expert’s procedural knowledge of existing evaluation models influences their
self-efficacy. Usher & Pajares (2006) theorize that one’s self-efficacy is impacted by feedback
from others and the current evaluation methods do not provide feedback applicable to Unit 123’s
Experts. The more knowledge Leader Experts have of the current evaluation methods and what
they assess and more importantly, what they fail to assess will aid in the development of a new,
effective evaluation method. The procedural knowledge will enable the new method to provide
more accurate feedback, which will increase the self-efficacy of Experts which Bandura (2000),
asserts directly correlates to their motivation to take action. An example would be the new
evaluation method identified that a specific Expert is perceived to have a low strategic
understanding and he attends a strategic studies course to expand knowledge, and in-turn
heightens his self-efficacy. By actively studying how the evaluation methods should be changed
to impact self-efficacy Leader Experts can establish a new how to do something, which
according to Krathwohl, et al. is the bases for procedural knowledge (2002).
The utility value construct of Leader Experts influences their drive to focus on the factual knowledge of Expert attributes. How the Leader Experts' cost-benefit analysis of identifying Expert attributes aligns with their personal and professional goals is the utility value construct (Eccles, 2009). When Leader Experts see the benefit of examining their factual knowledge, the details that must be understood to solve a problem effectively, then utility value is acting as a motivator (Rueda, 2011). An example would be Leader Experts investigating the total mental, physical, and financial cost of developing an Expert and weighing that against their understanding of which attributes are desired, in an effort to make the process more efficient. When Leader Experts see the value of assessing, selecting, and training the best possible Experts and use their knowledge of the key attributes of both successful and unsuccessful Experts, Unit 123 can attain a higher percentage of quality Experts to meet its mission.
Unit 123’s cultural model of maintaining a culture that encourages innovation, high
standards, and performance could influence Leader Expert’s confidence in their ability to
accurately evaluate Experts or self-efficacy. Cultural models and motivation have a very strong
relationship, and organizations can positively and negatively influence underlying motivational
constructs, such as self-efficacy (Hirabayashi, 2018). Unit 123’s high-performance culture
creates an environment that can positively or negatively affect an Expert’s self-efficacy. An
example is an Expert who sees the culture of success and is empowered and driven to perform
because success breeds success. On the other hand, if an Expert starts with a lower self-efficacy
and is intimidated by Unit 123’s culture of excellence, it can have an adverse effect and lower
motivation, these are called motivation killers (Hirabayashi, 2018).
The interactive conceptual framework illustration below presents the relationships between the knowledge, motivation, and organizational influences and shows how they are intertwined and not isolated. Clark and Estes (2008) assert that influences from all three factors must be studied together in order for the organization to accomplish its goal. Figure 1, below, is a visual representation of the interactive conceptual framework, described as a concept map by Maxwell (2013); a description of the interactions follows.
Figure 1. Conceptual Framework: Interaction of Stakeholder Knowledge and Motivation within Unit 123 Cultural Models and Settings.

[Figure 1 depicts three nested elements. Unit 123 influences: cultural models (Unit 123 maintains a culture that encourages innovation, high standards, and performance improvement, and upholds its culture statement, "The relentless pursuit of excellence in everything that we do") and cultural settings (Unit 123 has an ineffective method for performance feedback, and Experts within Unit 123 do not always know where they stand in the eyes of superiors). Leader Experts: knowledge types (procedural, knowledge of current evaluation methods; factual, knowledge of the most common attributes of successful leaders; conceptual, knowledge of Unit 123's leadership positions and what they say about an Expert's career path) and motivation types (self-efficacy, maintaining high self-efficacy once Experts are competing against the top 1% of the military; utility value, motivation to make Unit 123 more effective in selecting, training, and hiring new Experts). Unit 123's global goal: by June 2022, it will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes.]
Unit 123's global goal, that by June 2022 it will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes, is portrayed in the yellow rectangle in Figure 1. The green oval in Figure 1 presents the Leader Experts' procedural, factual, and conceptual knowledge influences and the self-efficacy and utility value motivational influences. Unit 123's organizational culture and context are depicted in the large blue oval. The green oval lies within the blue oval because the organizational culture and context influence the stakeholders' knowledge and motivation influences. The blue and green ovals are not circles because an oval allows for differently shaped objects or ideas; it is more flexible than a circle. The knowledge and motivation influences are enveloped in the organizational oval because their interactions must be seen holistically in order for Unit 123 to achieve its goal. The small arrow channels the research directly toward the accomplishment of the organizational goal. The yellow rectangle is located after the blue and green ovals because it is the mission that the influences are attempting to complete.
Conclusion
The purpose of this study is to improve Unit 123's evaluation method in order to identify the attributes of top-performing Experts. Special Operations Experts consist of military personnel from the Army, Navy, Air Force, and Marines but have an inadequate evaluation method, which prevents the identification of the attributes of top performers. Chapter Two identified and explained the assumed knowledge, motivation, and organizational influences that relate to Unit 123's evaluation problem. First, knowledge influences relating to understanding evaluation methods, pertinent attributes of Experts, and the duty positions of Unit 123 were examined (Krathwohl, 2002; Rueda, 2011). Secondly, motivational influences were examined through the self-efficacy of Experts, specifically how evaluations affect self-confidence, and through the utility-value construct of Experts, specifically how a new evaluation method needs to be seen as increasing the quality of the next generation of Experts (Rueda, 2011; Wigfield, Tonks, & Klauda, 2009). Thirdly, the discussion of the organizational influences focused primarily on how Unit 123's cultural model of high performance and standards influences, and is in turn influenced by, its cultural setting of an ineffective method for performance feedback. The knowledge, motivation, and organizational gaps identified were analyzed using Clark and Estes' (2008) framework. Additionally, Chapter Two provided a general overview of the research on the attributes needed to be successful at the top of the Special Operations community and the methods available to assess those attributes.
In the methodology chapter, Clark and Estes' (2008) framework will be used for a systematic gap analysis of the Leader Experts' KMO influences and how they will impact Unit 123's evaluation methods in order to find common attributes of top-performing Experts. The research methodology will consist of a mixed-methods approach. The mixed-methods design will study Leader Experts by conducting an interview using a combination of survey and interview questions to assess their KMO influences (Garson, 2014). The survey will help guide the evaluation questions, and the interview questions will identify the Leader Experts' KMO gaps. The survey will examine the knowledge, motivation, and organizational influences of Leader Experts.
CHAPTER THREE: METHODS
The purpose of this study was to improve Unit 123’s evaluation method in order to
identify attributes of top-performing Experts. Special Operation Experts consist of military
personnel from the Army, Navy, Air Force, and Marines but have an inadequate evaluation
method, which prevents the identification of attributes of top performers. Included was an in-
depth explanation of the knowledge, motivation, and organizational gap analysis model
developed by Clark and Estes (2008). Once the knowledge, motivation, and organizational
influences were defined, they were used as a lens to examine Experts’ knowledge, motivation,
and organizational influences. The implementation of a new evaluation method will facilitate
Unit 123’s performance goal that by 2022 it will have implemented a new evaluation tool,
identified attributes, and adjusted recruitment strategies seeking identified attributes.
The research questions, which Creswell (2014) sees as major signposts for readers, are:
1. What are the knowledge, motivation, and organizational attributes of Experts
necessary for Unit 123 to understand in order to develop an effective evaluation
method?
2. What are the Experts’ knowledge and motivation influences that interact with
Unit 123’s culture and how can these Experts’ knowledge and motivation
influences help achieve Unit 123’s goals?
3. What are the recommended knowledge, motivation, and organizational solutions
to those needs?
Participating Stakeholders
The stakeholder group of focus for this study was Leader Experts within Unit 123. Unit
123 currently consists of 200 plus Experts that are currently operational and will be ultimately
evaluated by the survey developed by this research. Leader Experts are Experts that have
IMPROVING THE EVALUATION METHOD 54
already completed eight to twelve years on an operational team and are either currently an
instructor or have already completed instructor time and are now a leader of multiple teams.
There are presently 30 Leader Experts serving within Unit 123. A mixed-method approach
utilizing a survey combined with an interview was conducted in this study. This study used a
census consisting of all 30 Leader Experts and was conducted in order to establish what survey
questions will be asked in a new Expert evaluation method. According to Merriam & Tisdell
(2016), good data is generated by asking good questions.
Survey and Interview Sampling Criteria and Rationale
Criterion 1. Must be an active-duty enlisted Leader Expert who has completed his time as a Team Leader. Team Leader time is necessary so the Leader Expert knows what is expected of an Expert and has personally evaluated Experts for multiple years.

Criterion 2. Must currently be serving or have already completed instructor time in the Training Section and must not yet have moved to a senior leadership position. Instructor experience is required so the Leader Expert has a true understanding of what an Expert goes through during the assessment, selection, and training process and knows the attributes that are currently being assessed. Leader Experts have first-hand knowledge and experience of Experts who have completed the training course and did not perform appropriately on an operational team.
Survey Sampling (Recruitment) Strategy and Rationale
The researcher utilized a census strategy. A survey was employed, which, according to Creswell (2014), is beneficial for economy of design and rapid turnaround in data collection. A census strategy is feasible because 100% of Leader Experts work within Unit 123, and even though Experts conduct military deployments and operations routinely, the researcher had the ability to survey all of them. A census survey of the 30 Leader Experts was conducted, and the researcher personally knows each Leader Expert, ensuring participation. Both closed-ended and open-ended questions were asked in the survey of Leader Experts. According to Fink (2013), this combination allows for ease of scoring on the closed-ended questions and an opportunity for participants to provide their own opinions in the open-ended questions. The participants were provided the survey via secure email prior to the interview to allow time for completion and absorption of the material. The researcher had already been granted support by senior leadership, and all Leader Experts participated.

The surveys themselves were conducted within a Unit 123 secure facility. Ensuring the accuracy of the information and the consistency of the survey process is important to the validity of the research, according to Fink (2015). The surveys of Leader Experts were collected prior to or at the time of the interview.
Interview Sampling (Recruitment) Strategy and Rationale
The interviews included the same participants as the survey and, thus, included 100% of the Leader Experts using a census strategy. The interviews were used to ensure the full story was being told, not just answers to standardized questions (Weiss, 1995). Each interview took place upon completion of the survey and was an interactive discussion that used structured interview questions and the completed survey questions to guide the discussion. The researcher was able to introduce the study, discuss it, answer questions, and create buy-in personally with every Leader Expert. In the event of a condensed timeline, the researcher would forward deploy and conduct surveys overseas.
Explanation of Choices
A census strategy was chosen due to the relatively small population of Leader Experts and their centralized location, Unit 123. A census strategy allowed the researcher to study the entire population, eliminating possible bias from sampling. When given the option to research the whole population, sampling is not required. If it had been necessary for the researcher to conduct sampling, a purposeful sampling strategy would have been chosen. According to Maxwell (2006), purposeful sampling deliberately selects individuals who can provide relevant information related to the research questions. The Leader Experts chosen would have been the Experts already identified for future senior leadership positions. The interviews were conducted in person and used a semi-structured format that allowed for open-ended questions and potential follow-on questions (Maxwell, 2006; Merriam & Tisdell, 2016). The researcher already has a high level of rapport with the interviewees, which is key for establishing trust and allows for probing questions (Johnson & Christensen, 2014).
If observations had been a requirement for the research, a census strategy for sampling would have been used. The researcher would have monitored first-line leaders conducting job-performance counseling of the Experts under their command. According to Johnson and Christensen (2014), being unobtrusive is vital so as not to affect the participants. The researcher, with permission, would have viewed and recorded the counseling remotely in order not to influence the results. The observations would have been in a real-world setting, or naturalistic observation, to allow the most genuine results (Johnson & Christensen, 2014).
Data Collection and Instrumentation
The two methods of data collection chosen for this study were surveys and interviews.
This two-phase mixed methods approach consisting of a Delphi method survey and an interview
with Leader Experts. Phase I utilized the Delphi method in which Leader Experts rated a list of
potential evaluation survey questions using a four-point scale to assess effectiveness in
evaluating job success and conclude with open-ended questions. Surveys provided quantitative
analysis to guide additional questions (Creswell, 2014; Fink, 2015). Phase I provided
participants time to complete the survey, reflect on the open-ended questions, and prepare
constructive feedback for the interview. Phase II was the interview conducted by the researcher
and provided an understanding of the Leader Experts' KMO influences towards a new evaluation
method. This section describes how surveys and interviews were used to provide the researcher
with a mixed methods view of the study.
Surveys
The survey was developed and conducted by the researcher in person or over Video-
Telephone-Conference (VTC) if the Leader Expert was on a military deployment. The
survey demonstrated Leader Experts' knowledge of what characteristics the evaluation survey
needs to evaluate. The evaluation questions the Leader Experts rated assessed what KMO
factors are important for success in Unit 123. The survey was provided digitally via secure email
along with instructions. Once the survey was completed, it was returned to the researcher in
person before the interview. During Phase I, a quantitative survey using a Delphi method was
completed by the 30 Leader Experts. This survey included 50 potential Likert scale survey
questions that the Leader Experts rated using a four-point Likert scale to assess how well the
survey questions evaluate an Expert’s job performance under the KMO framework.
Additionally, the first survey asked the Leader Experts a few open-ended questions about what
additional survey questions should be asked and what other KMO attributes the evaluation
survey should try to assess. Appendix A presents the survey protocol.
Interviews
Interviews followed a standardized open-ended interview protocol and used the
completed survey to guide discovery questions. The interview covered the Phase I survey to
assess Leader Experts' knowledge and included six additional open-ended questions: two
focusing on assumed motivation influences and four on organizational influences of Leader
Experts. Each participant completed only one interview, which was scheduled for 45 minutes,
although the researcher was available for 120 minutes in the event the participant wanted to
discuss further. The interviews were conducted in English within a Unit 123 secure facility, with
appropriate privacy, and were informal. Clark and Estes (2008) insist that, in the absence of
demonstration, descriptions of procedures and examples are discovered better through interviews
than surveys. Appendix B presents the interview protocol.
Observation
No observations were conducted in this study.
Documents and Artifacts
Unit 123 has extensive data on all Experts collected during recruitment, assessment,
selection, and the training course. Merriam and Tisdell (2016) insist that historical documents
offer insight and understanding in research studies. This data set is used in a Unit 123 internal
predictive-analytics effort to predict successful completion of the training course. The data for
every Expert from 2010 to the present were compiled for analysis. The researcher analyzed this
data set to understand what characteristics were determined to predict successful completion of
the training course and used it to develop the evaluation survey questions. The psychologist
within Unit 123 maintains the data set, the researcher has access to it, and the psychologist acts
as a buffer if any confidentiality problems arise. No confidentiality issue is foreseen. The
researcher is currently a Training Course Instructor and has access to Training Course data to aid
in evaluation question development. Table 5 lists the documents and sources the researcher was
provided.
Table 5

Requested Documents

Document | Source
Data set of Experts from 2010 to present | Unit 123 Psychologist's Office
Training Course scores of Experts from 2010 to present | Training Course Sergeant Major
Document and artifact analysis, triangulated with the survey and the interviews, offered
added validity to the research (Creswell, 2014).
Data Analysis
Upon the completion of data collection, data analysis was conducted, and the quantitative
and qualitative data derived from the surveys and interviews were inputted into Excel. Both the
survey items and interview questions aligned with the research questions that framed this study.
The survey consisted of ordinal type questions consisting of 50 Likert scale items. Upon the
completion of the data collection, the survey data was reviewed, coded, and a codebook was
created for cross-referencing. The survey was immediately followed by the interview allowing
the results of the survey to facilitated discovery questioning. Upon completion of the interviews,
analytic memos were used to document thoughts, concerns, and initial conclusions. When data
collection subsided, the transcripts were transcribed and coded for analysis in three phases.
Guided by the conceptual framework the first phase, open coding, looked for empirical codes
and applying priori codes. The second phase is where empirical and a priori codes are
aggregated into analytic/axial codes to conduct analysis. The third phase of data analysis
identified pattern codes and themes that surfaced in relation to the research questions and
conceptual framework. The conceptual framework acted as a lens during document analysis.
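To make the quantitative portion of this process concrete, the following is a minimal illustrative sketch of how the tabulation could be scripted. The study itself used Excel; the file name, column names, and the 3.2 cut score applied in Chapter Four are assumptions for the example only, not part of the study's actual toolchain.

```python
# Illustrative sketch only; the study's tabulation was done in Excel.
# Assumes a hypothetical CSV export "survey_responses.csv" with one row per
# Leader Expert (n = 30) and one column per Likert item ("item_01" ... "item_50"),
# each rated 1-4.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")
item_cols = [c for c in responses.columns if c.startswith("item_")]

summary = pd.DataFrame({
    "mean": responses[item_cols].mean(),      # per-item mean across the 30 raters
    "median": responses[item_cols].median(),  # per-item median
})

CUT_SCORE = 3.2  # threshold reported in Chapter Four
summary["need"] = summary["mean"] >= CUT_SCORE

print(summary.sort_values("mean", ascending=False))
print(f"{summary['need'].sum()} of {len(summary)} items meet the cut score")
```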
Credibility and Trustworthiness
According to Clark & Estes (2008), the method of data collection must be credible, and
the investigator’s intentions, trustworthiness, and respect for the participants must be impeccable.
During the entirety of the qualitative phase of this study, the researcher was the instrument of
research. Merriam and Tisdell (2016) caution that there is an inherent bias during the qualitative
phase of research and that recognizing this bias is a critical step. The researcher remained aware
of his role and of this potential inherent bias in order to maintain credibility and trustworthiness.
This section describes the steps the researcher took to increase credibility and trustworthiness by
reducing the inherent bias throughout the entire research process.
The initial step taken for this study was connecting the relevant literature with the context
and goals of the study. These connections led to the research design choices and interview
questions. Second, the confidentiality of data collection is paramount to the researcher, and all
data were stored on a secure laptop and backed up on a secure server within Unit 123's secure
compound. The researcher is aware of his role within the organization and that he can affect and
be affected by the research process; Merriam and Tisdell (2016) refer to this as "reflexivity,"
which is important to the credibility of the study. The researcher is not a supervisor or in a
position above or below the participants but holds a position parallel to them. This aids
credibility: the participants understand that the researcher has no direct effect on their position
and that the study was held in complete confidence. The researcher was transparent during the
survey and interview process about his personal biases, assumptions, and values, and how they
could affect the findings (Maxwell, 2013; Merriam & Tisdell, 2016). Thirdly, triangulation was
used to improve internal validity by comparing and crosschecking the quantitative survey, the
interviews, and the documents and artifacts (Creswell, 2014; Merriam & Tisdell, 2016).
Triangulation enables the convergence of multiple sources of data, which validate one another,
further increasing credibility (McMillen & Stewart, 2009; Merriam & Tisdell, 2016).
Validity and Reliability
Merriam and Tisdell (2016) insist that it is vital to have both valid and reliable findings in
quantitative research, ensuring internal and external validity and reliability. In order to achieve
validity and reliability for the quantitative analysis, the study employed proper instrument
design, implementation, and, if necessary, redesign (Creswell, 2014; Salkind, 2014). Creswell
(2014) views the validity of a research instrument as how well it assesses the constructs it is
intended to assess. The researcher developed the quantitative instrument for this study by
examining validated research and instrumentation designed by Bandura (2000), Cline (2017),
Dembo and Eaton (2000), and Pintrich (2003). The 360-degree leadership assessment surveys
developed by Unit 123 for leadership development courses were also examined for survey
design ("Unit 123 Website," 2018). Even though the survey was designed using previous
research and surveys, it is still a pilot, and steps need to be taken to increase validity further.
The intent of the survey provided to the participants is to validate the survey items themselves
for evaluating the job performance of Experts.
To ensure the survey questions evaluated the proper KMO influences, the participants
were given questions designed to examine each influence independently. Encompassed in the 50
questions were 17 focused on knowledge, 11 focused on motivation, 10 focused on
organizational influences, nine on general job performance, and three open-ended questions.
Including 100% of Leader Experts goes beyond generalizing, since it includes the entire
population, which contributes to greater validity (Johnson & Christensen, 2014).
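As a quick arithmetic check (the tally is mine, taking the counts above at face value), the listed item counts account for the full 50-question instrument:

\[ 17 + 11 + 10 + 9 + 3 = 50. \]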
Whether or not the researcher’s instrument can produce consistent results throughout the
entire study is its reliability (Salkind, 2014). The census sampling strategy used in this study
reduces bias and in turn increases reliability (Krueger & Casey, 2009). Participants were already
aware of the study and anticipated the commencement of the study. The researcher emailed all
30 of the participants with instructions, the survey to complete, and a request to have the survey
completed and returned to the researcher within one week of receipt. In the event of a
non-response, the researcher contacted the participant via secure email, secure Instant Messenger
(IM), or secure phone call. Again, the researcher is a peer of the participants and has worked
with them for eight to fifteen years. In the end, the reliability of the survey relies on the
participants being honest and open and providing critical, constructive feedback. Ultimately, each
participant has the highest level of security clearance, has undergone the most stringent
assessment, selection, and training process in the U.S. government, and has proven his
commitment to Unit 123's priorities over eight to twelve combat deployments.
Ethics
The researcher ensured this mixed-methods study fell within the expected duties of the
participants' positions without compromising their security or that of Unit 123, and that they
were not subjected to harm, invasion of privacy, or deception (Glesne, 2011; Merriam &
Tisdell, 2016). The research utilized a quantitative Delphi method survey followed by a
qualitative interview. All subjects were emailed with an explanation of the study, the survey,
and instructions. The participants were asked to return the survey by email prior to the interview
or physically at the time of the interview. The development of an evaluation survey is a priority
of the command and supported throughout Unit 123, and all participants completed the survey
and interview process. One hundred percent of the 30 Leader Experts participated in the survey
and interview. If a subject had objected to the survey and interview, the researcher would have
advised senior leadership to allow non-participation, but this did not occur. Participants were
not offered incentives because none were needed in the study (Glesne, 2011). The completion of
the survey and interview is part of the duties required by the participants' duty positions. All
internal Unit 123 personal data and associated information were secured on Department of
Defense SECRET-level computers solely within Unit 123's facility.
The researcher is a member of the organization where the study is being conducted, and
thus possible ethical considerations were addressed. The researcher is a mid-level leader,
referred to as a Leader Expert within this research, which is the stakeholder group of focus.
Leadership at all levels is aware of the researcher's study, and no negative feedback or
reluctance has been presented. This is primarily because the lack of a method to evaluate an
Expert's job performance has been a universally agreed-upon gap within the unit for decades. As
a senior active duty NCO with over 12 years in the organization, the researcher understands that
his roles as an organization member and as the researcher must remain separate to ensure
validity and protect human subjects. The intent of the surveys and interviews is to develop an
evaluation survey to be used in Unit 123 in the future. The Delphi method survey was developed
by the researcher as a starting point to generate thought and discussion during the survey and
interview process. Because the researcher developed the initial survey, he had an impact on the
study; however, because the purpose of the Delphi survey and interviews is to adjust that survey,
the researcher's impact was mitigated. The ability to conduct this research is only possible with
the unanimous support of leadership, securing permission to access participants (Creswell,
2014). As the researcher is the primary data collector, his biases and assumptions must be
addressed (Merriam & Tisdell, 2016).
The inherent biases the researcher needed to address when conducting this study include:
- The researcher knows every Leader Expert personally; interactions range from living
with them during military deployments, with constant contact, to occasional conversations
and meetings when schedules align;
- The researcher belongs to the stakeholder group and has a similar rank, seniority, and
experience;
Assumptions made by the researcher include:
- Leader Experts want a new evaluation tool;
- Leader Experts will put the necessary time and effort to make the evaluation questions
appropriate;
- The evaluation tools will be implemented upon the completion of this research.
While the researcher is in a leadership position, all participants are parallel in position and rank,
and no participant is under the researcher's command or will be within the foreseeable future.
Therefore, the research can remain ethically sound with regard to position, and all participants,
leadership, and the researcher have the highest level of security clearance, ensuring
confidentiality.
Limitations and Delimitations
The researcher is aware of the potential limitations and delimitations of this study.
According to Merriam and Tisdell (2016), all studies face limitations, which can result in
inaccurate or unrepresentative data. Some limitations that exist for this study include:
- The survey and interview design were explicitly created for this study and thus have not
been validated in prior research studies.
- Participants might conduct the surveys and interviews in austere environments, i.e.,
combat zones or high-stress events, and thus their commitment to the study could be
hindered.
- The study was dependent on the truthfulness of the participants.
- Participants could unexpectedly and immediately deploy, which can delay the survey and
interview process.
Decisions that the researcher made that may have implications for the study are its
delimitations. Some delimitations that exist for this study include:
- Focusing on a particular stakeholder group of Leader Experts and not other stakeholder
groups.
- Using the KMO influences as the lens of evaluation could preclude significant areas of
job performance from being evaluated.
CHAPTER FOUR: RESULTS AND FINDINGS
This chapter presents the results and findings of the study. Chapter Four is organized into
categories of assumed knowledge, motivation, and organization causes, which were delineated in
Chapter 3 using Clark and Estes' (2008) gap analytic conceptual framework. The study
focused on the factual, conceptual, and procedural knowledge influences needed by Leader
Experts to identify and evaluate the desired attributes of Experts. Self-efficacy and the
utility-value construct were the motivational influences considered for this study, assessing
Leader Experts' confidence in evaluating Experts and the usefulness of knowing Experts'
attributes. Lastly, the organizational influences were studied to assess how cultural models and
cultural settings influence Unit 123's ability to accomplish its goal. A mixed-method approach
to data collection was used in this study: document analysis followed by a two-phase
mixed-method approach consisting of a Delphi method survey and an interview to understand
the knowledge, motivation, and organization challenges facing Leader Experts.
Initially, the study began with the document analysis of the extensive data on all Experts
collected during the recruitment, assessment, selection, and the training course in order to aid in
the development of the evaluation survey items. Next, all Leader Experts were surveyed where
they rated a list of potential evaluation survey questions using a four-point scale to assess
effectiveness in evaluating job success. Immediately following the completion of the survey,
Leader Experts participated in a 30- to 90-minute one-on-one, in-person interview with the
researcher. The recently completed survey contributed to discovery questions, and interview
topics expanded the depth of the qualitative data collection. All 30 Leader Experts within Unit
123 were surveyed and interviewed, and, with consent, their responses were recorded,
transcribed, and analyzed.
Participating Stakeholders
The stakeholder group of focus for this study was Leader Experts within Unit 123.
Leader Experts represent 30 of the 200-plus Experts within Unit 123. Leader Experts are, on
average, 39 years old, with 19 years of military service, 12 years in Unit 123, 11 combat
deployments, and 3.6 years of college; 97% are Caucasian; and all have been identified as future
senior leaders. One hundred percent of the Leader Experts participated in the study, which was
made possible because the researcher traveled to worldwide locations to conduct data collection.
Determination of Assets and Needs
This mixed-method study utilized two data sources: surveys and interviews. These
quantitative and qualitative data sources were used as the criteria for determining the assets and
needs of Unit 123 in accordance with the assumed causes. Chapter 3 provided an in-depth
description of the assumed causes and an explanation of the Likert survey and its items. This
survey provided an asset and needs assessment, categorizing Leader Experts' assumed
knowledge, motivation, and organizational influences from high to low. The criteria used for
assessing gaps and determining need from the survey data, regarding the creation and
implementation of a new Unit 123-specific evaluation tool, were as follows: using the mean
rating of the survey items, any item whose mean fell in the top one-third of the survey was
determined a need; items that fell below the top one-third did not meet the threshold and were
not considered a need. The survey results were used to guide discovery questions during the
interviews. In addition, the interviews were used to investigate questions such as: what is wrong
with the current system; what items need to be altered or adjusted to highlight attributes; do
items need to be added to uncover additional attributes; if the survey is considered a need, how
should it be implemented; and does it conform to Unit 123 culture. The criterion for yes-no
questions during the interview process was 90% agreement: 27 out of 30 Leader Experts had to
agree for something to be considered a need. The criterion for variable-answer questions was
66% agreement, or 20 out of 30 Leader Experts.
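For concreteness, the three decision rules above can be written out as simple checks. This is a minimal illustrative sketch, not part of the study's actual procedure; the function names are hypothetical, and the top-one-third boundary of 3.0 assumes the rule is applied over the four-point response scale.

```python
# Illustrative sketch of the decision rules described above, not the study's
# actual tooling. Function names are hypothetical; the top-one-third boundary
# assumes the rule is applied over the four-point response scale.

SCALE_MIN, SCALE_MAX = 1.0, 4.0
TOP_THIRD_BOUNDARY = SCALE_MAX - (SCALE_MAX - SCALE_MIN) / 3  # 3.0 on a 1-4 scale

def survey_item_is_need(mean_rating: float) -> bool:
    """A survey item is a need if its mean falls in the top one-third of the scale."""
    return mean_rating >= TOP_THIRD_BOUNDARY

def yes_no_item_is_need(agree: int, n: int = 30) -> bool:
    """A yes-no interview question is a need if at least 90% (27 of 30) agree."""
    return agree / n >= 0.90

def variable_answer_meets_threshold(agree: int, n: int = 30) -> bool:
    """A variable-answer question meets the threshold if at least 66% (20 of 30) agree."""
    return agree / n >= 20 / 30

# Worked examples matching the thresholds in the text:
assert yes_no_item_is_need(27) and not yes_no_item_is_need(26)
assert variable_answer_meets_threshold(20) and not variable_answer_meets_threshold(19)
```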
Results and Findings for Knowledge Causes
The assumed knowledge influences of Leader Experts were described in Chapter 3.
These were categorized into three groups and assessed by surveys, interviews, and document
analysis. This section reports on the results and findings of the knowledge assets and assumed
knowledge influences in each knowledge category, using three of Krathwohl's (2002) four
categories of knowledge types: factual knowledge, the basic elements that individuals must
know; conceptual knowledge, the interrelationships among basic elements, theories, models,
and structures; and procedural knowledge, how to do something.
Factual Knowledge
Influence 1. Leader Experts need to identify common attributes of successful and
unsuccessful Experts.
Survey results. Leader Experts were asked to rate potential survey items using a four-
point Likert scale indicating how important each attribute is to success in Unit 123: one, not
relevant to success; two, nice to have but not necessary; three, important but not critical to
success; four, critical to success. This survey acted as a needs assessment, categorizing from
high to low what Leader Experts need within an evaluation tool to assess Experts' attributes and
job performance.
One hundred percent of Leader Experts were surveyed during data collection, and 100% of the
survey items and interview questions were answered. Each survey item expresses the Leader
Experts' perception of what attributes they believe are critical to success in Unit 123, conveying
their factual knowledge. Table 6 below displays the 50 survey items and the mean response of
the 30 participants. On the four-point Likert scale, the means ranged from 2.6 to 4.0, and the
median was 3.0 or above on all items. A mean of 3.2 was determined as the cut score. Leader
Experts rated 48 of the 50 survey items at or above 3.2, which falls in the top one-third of the
responses, considered to be valued high in Likert scale surveys (Narli, 2010). To be determined
a need, an item required a mean of at least 3.2 and a response from 100% of Leader Experts; 48
of the 50 items met this threshold.
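As a worked check of where this cut score sits (the arithmetic is mine, not the study's), the four-point scale has a response range of 4 - 1 = 3, so the top one-third of the range begins at

\[ 4 - \frac{4 - 1}{3} = 3.0, \]

placing the 3.2 cut score within the top one-third of the scale, consistent with Narli's (2010) characterization of high-valued Likert items.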
Table 6

Survey Results for Factual Knowledge of Leader Experts

Factual knowledge item | Mean
1. Competency in CQB tactics, techniques, and methods | 4.0
2. Competency in marksmanship and small unit tactics | 3.8
3. Competency in secondary Team tasks, i.e., mobility, climbing, and water | 3.3
4. Competency in parachute operations | 3.5
5. Ability to retain technical and operational information | 3.7
6. Ability to articulate concepts and ideas to both military and civilian decision-makers using appropriate verbiage and language | 3.3
7. Ability to identify operational gaps and find innovative solutions | 3.4
8. Ability to approach problems from a new direction (think outside the box) | 3.3
9. Ability to rapidly process and apply new information effectively | 3.8
10. Ability to make things happen when given the minimum amount of information | 3.8
11. Ability to successfully complete a task when only asked once (fire and forget) | 3.4
12. Ability to instruct or teach techniques and skills to Experts and partners | 3.3
13. Natural talent (things come easy without much effort) | 2.6
14. Motivation or work ethic with regards to physical fitness and physical readiness (grit) | 3.6
15. Motivation and drive when tasked to undertake an intensive and challenging duty | 3.5
16. Motivation and drive to accomplish a challenging task when the beneficiary is the large group or an external partner | 3.5
17. Motivation or work ethic with regards to maintaining and increasing the hard skills | 3.7
18. Motivation or work ethic when asked to perform a "less than desired" job or duty | 3.6
19. Motivation or work ethic towards professional/self-development | 3.2
20. Ability to effectively communicate with outside agencies and organizations | 3.3
21. Ability to tactfully express alternative viewpoints to peers and superiors | 3.7
22. Cares about those both inside and outside his immediate circle | 3.7
23. Demonstrates integrity | 3.8
24. Ability when working in an ambiguous environment | 3.5
25. Understands Unit 123's internal relationships and personalities and how to navigate them | 3.3
26. Upholds Unit 123's culture statement, "The Relentless Pursuit of Excellence in Everything that we do" | 2.6
27. Embodies Unit 123's core values: trust, adaptability, and commitment | 3.6
28. Strategic understanding of the operational environment | 3.5
29. Work/home balance | 3.6
30. Works well in high-stress environments | 3.7
31. Leadership potential | 3.8
32. Expert is self-aware of how he is perceived by peers and leaders | 3.7
33. Overall work ethic | 3.8
34. Overall, as an Expert | 3.9
35. Ability to apply lessons learned from negative experiences | 3.5
36. Understands how his decisions and actions impact the larger organization | 3.7
37. Demonstrates adaptability | 3.5
38. Ability to handle criticism | 3.8
39. Ability to bounce back from traumatic events | 3.3
40. Works well with others or makes an effort to work well with others | 3.8
41. Potential for lifestyle to negatively affect performance and/or longevity | 3.8
42. Ability to identify and address problems without being told | 3.8
43. Ability to identify risk and mitigate it | 3.9
44. Ability to develop subordinates and pass on knowledge amongst Experts ("mentorship") | 3.8
45. Ability to not let emotions drive decision making | 3.8
46. Places the needs of the team above his own ("selfless") | 4.0
47. Able to convey thoughts effectively through the written word, emails, and papers | 3.7
48. Demonstrates maturity | 4.0
49. Seeks to improve weakness and shortcomings | 4.0
50. Stays current with emerging technology | 3.7
Interview findings. Interviews with all 30 Leader Experts made it clear that identifying
optimal attributes of Experts is necessary to properly evaluate their job performance. Participant
14 explained,
I like a lot of these attributes, like, applies lessons learned from negative experiences,
ability to handle criticism, and demonstrates adaptability. These are important to me, and
we are not looking for these right now; having these questions laid out would allow more
time and focus in counseling.
Participant 25, the most senior of the Leader Experts in Unit 123, stated, "Some questions seem
redundant, but I want to know the answer to every one. These questions can help ID the root
things we want in an Expert and help tell Experts what and how they need to fix it." Participant
1 further explained,
Honestly, if I had these (survey questions) to guide my counseling, I would have done a
better job as a Team Leader. It would have forced me to look at the Expert from different
angles and I think we would have both benefited from it.
With respect to the number of survey items desired to identify attributes on the evaluation tool,
29 of the 30 participants did not want to reduce the number of survey items in any way.
Participant 7 said,
I wouldn’t take any away; there are probably even more questions I would want to know
that are not even on this. There should be a way to submit recommended questions down
the road in the future. So, you know, we can keep it as a living document.
Participant 13 further explained, “Well… I want to say some should be taken away, but… I
want to know the response to every one of these questions, and I think leaders will want to also.”
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that Leader Experts need to identify common
attributes of successful and unsuccessful Experts was determined to be a need. The survey
established that Leader Experts need an evaluation tool containing items like those proposed in
the study's survey to identify an Expert's attributes and job performance. Leader Experts rated
96% of the proposed survey items above the cut score for use on a Unit 123 evaluation tool. In
addition, during the interview process, 97% of Leader Experts did not want the number of
questions reduced. Since these survey items are being used to validate a needs assessment, this
factual knowledge influence is determined to be a need.
Procedural Knowledge
Influence 1. Leader Experts need an evaluation method to properly assess Experts
within Unit 123.
Survey results. The survey did not highlight or assess procedural knowledge influences.
Interview findings. Interviews with Leader Experts highlighted the need for a Unit 123
specific evaluation tool. One hundred percent of the Leader Experts said the NCOER was not
adequate to evaluate Experts. Participant 20 explained, “We (Unit 123) only use the NCOER to
get guys promoted, if someone actually uses it (NCOER) for job performance, I don’t know
about it.” Participant 24 complained,
The NCOER is a joke. Does anyone actually use that (NCOER) to get a look at a guy’s
performance? No, no way. Anything that is even remotely designed for an Expert would
be great. This (survey questions) is at least a way to start the conversation.
Participant 1 revealed,
Honestly, if I had these (survey questions) to guide my counseling, I would have done a
better job as a Team Leader… We (Unit 123) don’t do a good enough job at evaluating a
guy after the training course. Experts deserve better from us (leaders).
When asked if Experts are consistently provided feedback from their leaders, each and every
Leader Expert replied in the negative. Participant 6 said,
Consistently? I would say 50/50 if a Team Leader provides good feedback. We just get
away with it because of the quality of the guys. But when someone really needs it, he’s
got a 50/50 chance depending on his leader. That sounds bad when I say it out loud.
Participant 13 further explained, "Unfortunately, no. Some guys are great at feedback, and some
either don’t have the leadership ability or dislike uncomfortable conversations. We’ve
(leadership) tried to move guys around to be a better leader when able but sometimes it’s too
little too late.”
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess procedural
knowledge influences.
Summary. The assumed influence that Leader Experts need an evaluation method to
properly assess Experts within Unit 123 was determined to be a need. Although the survey did
not assess procedural knowledge, the interviews substantially determined that Leader Experts
desire a new evaluation tool. One hundred percent of Leader Experts agreed that Experts are
currently not consistently provided feedback on performance, that Unit 123 does not have an
appropriate evaluation tool, and that the proposed survey items should be part of a mandatory
evaluation tool. Therefore, this procedural knowledge influence is determined to be a need.
Conceptual Knowledge
Influence 1. Leader Experts need to identify what Experts are considered successful and
unsuccessful while serving in Unit 123.
Survey results. Leader Experts were asked to rate potential survey items using a four-
point Likert scale indicating how important each attribute is to success in Unit 123: one, not
relevant to success; two, nice to have but not necessary; three, important but not critical to
success; four, critical to success. This survey acted as a needs assessment, categorizing from
high to low what Leader Experts need within an evaluation tool to assess Experts' attributes and
job performance. Table 6, above, displayed the 50 survey items and the mean response of the 30
participants. Forty-eight of the 50 items were determined to be a need. Table 7 shows the six
items with a mean of 3.9 or above, representing the attributes Leader Experts expressed as the
most important in determining whether an Expert is successful or unsuccessful. On the
four-point Likert scale, the means ranged from 2.6 to 4.0, and the median was 4.0 for the top 38
items. To further highlight the most critical attributes, a mean of 3.9 was determined as the cut
score, given that over 90% of Leader Experts rated these items as "critical to success."
Table 7

Survey Results for Conceptual Knowledge of Leader Experts

Survey item | Mean
46. Places the needs of the team above his own ("selfless") | 4.0
1. Competency in CQB tactics, techniques, and methods | 4.0
48. Demonstrates maturity | 4.0
49. Seeks to improve weakness and shortcomings | 4.0
34. Overall, as an Expert | 3.9
43. Ability to identify risk and mitigate it | 3.9
Interview findings. The interviews revealed that Leader Experts' knowledge of successful and
unsuccessful Experts is a need. Leader Experts were asked whether Unit 123 should perform data
analysis on the results of the evaluation tool and keep track of successful and unsuccessful
Experts. One hundred percent of the Leader Experts responded that the results should be
analyzed, but responses on how to define successful and unsuccessful Experts varied. Participant
15 explained,
Yes, the data should be analyzed, how else would we use this to benefit recruitment? I
think recruitment is a close second in importance to feedback. I hesitate to give a solid
line on who is and who is not successful. You can be a successful Expert without being a
leader; do we consider that guy unsuccessful because he didn’t become a leader? We
need to look at that more.
Participant 26 further explained,
Let the psychs take care of the data analysis. But guys will always argue about who's
successful. We (leaders) just need to come up with a hard line and stick to it. Like… if
you complete your TL time, success. If you get fired, not successful. It won’t be perfect,
but easy and right 90ish percent of the time.
Participant 29 proposed,
Let's do the data analysis and see what the results say after four or five years. This kind
of survey will hopefully show us what successful and unsuccessful guys look like. We
won’t need to find a leadership position to make a mark of success. The evaluations will
tell us throughout their career.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that Leader Experts need to identify what Experts are
considered successful and unsuccessful while serving in Unit 123 was determined to be a need.
Table 7 displayed the top six items from the survey, with a mean of 3.9 or above, representing
the attributes Leader Experts expressed as the most important in determining whether an Expert
is successful or unsuccessful. One hundred percent of Leader Experts thought that the results
should be analyzed. However, how Unit 123 defines successful and unsuccessful Experts is
beyond the scope of this study and remains to be determined. The survey items established a
priority of what attributes are needed to determine success, and the interviews confirmed that
identifying successful and unsuccessful Experts is a need.
Results and Findings for Motivation Causes
The assumed motivational influences of Leader Experts were described in Chapter 3.
These were categorized into two groups and assessed by surveys, interviews, and document
analysis. This section reports on the results and findings of the motivation assets and assumed
motivation influences through the lenses of self-efficacy theory and the utility value construct.
The results are used to determine if Leader Experts have the assumed motivational assets.
Self-Efficacy
Influence 1. Leader Experts are confident in their ability to accurately evaluate Experts’
job performance under their command.
Survey results. Surveys did not highlight or assess this influence.
Interview findings. The data revealed that Leader Experts' confidence in their ability
to accurately evaluate the job performance of Experts under their command is a need. One
hundred percent of the Leader Experts were confident in their own ability to evaluate
subordinates; however, only 80 percent were confident that Team Leaders under their command
could consistently evaluate the Experts beneath them. Participant 24 said,
I believe I am able to evaluate guys and I don’t mind having an uncomfortable
conversation. But that may not be true for the other Team Leaders I worked with.
Sometimes I had to tell one of his (other Team Leader) guys that they were messed up.
That might not be right, but it’s better than him not knowing.
Participant 12 explained,
I know you are talking to and asking all the Leader Experts these questions and they most
likely all do a good job of evaluating their teammates, but that's the problem. The guy
chosen to be a Leader Expert is the one that actually does the right thing. He is only
one of three Team Leaders up for the position of Leader Expert. I can't say I'm confident
in the ability of the other Team Leaders to assess their Teammates.
Participant 14 stated,
I think we will all be able to do this. The Team Leaders that might suck at counseling
and feedback will be much better if they have questions like you have here. These
questions will give confidence to those that are not confident because they now have a
starting point.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. Leader Experts' confidence in their ability to accurately evaluate the job
performance of Experts under their command was determined to be a need. Although Leader
Experts are confident in their own abilities, only 80% are confident in the leaders under their
command; therefore, this influence is identified as a need.
Utility Value Construct
Influence 1. Leader Experts need to see the usefulness in identifying attributes of high
performing Experts to improve the assessment and selection process.
Survey results. Surveys did not highlight or assess this influence.
Interview findings. The data revealed that Leader Experts see the usefulness in
identifying attributes of high performing Experts to improve the assessment and selection
process; therefore, it is an asset. The results and findings of this study found that 92 percent of
Leader Experts thought identifying attributes of high performing Experts would improve the
assessment and selection process. Participant 25, who has previously run the assessment and
selection process, said,
If we know what to look for, of course, it will be easier to recruit. We have a lot of data
from previous selections, and we can see at least what attributes are common amongst
those who passed selection. It would be even better if we knew what attributes are
common amongst successful Experts.
Participant 28 explained,
I think we (Unit 123) already do this for the most part. We have found who and what
works and look for more of the same. This should give us more things to look for. I just
hope we are not over complicating it.
Participant 3 disagreed,
Like we always say around here…We don’t know why selection works, but we know it
works. If it is not broken, don’t fix it. I think if we make more criteria and only look for
the perfect guy, we might miss potentially great guys that happen not to fit the mold.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that Leader Experts see the usefulness in identifying
attributes of high performing Experts to improve the assessment and selection process was
determined to be an asset. The interview responses showed that 92% of the Leader Experts
already recognized the usefulness of this influence. Therefore, this influence is determined to be
an asset.
Results and Findings for Organization Causes
The assumed organizational influences of Leader Experts were described in Chapter 3.
These were categorized into two groups and assessed by surveys and interviews. This section
reports the results of the survey and interviews and categorizes them under the cultural model
and cultural setting for each assumed organizational influence.
Cultural Models
Influence 1. The organization needs to uphold its culture statement.
Survey results. Leader Experts were asked to rate potential survey items using a four-
point Likert scale indicating how important each attribute is to success in Unit 123: one, not
relevant to success; two, nice to have but not necessary; three, important but not critical to
success; four, critical to success. This survey acted as a needs assessment, categorizing from
high to low what Leader Experts need within an evaluation tool to assess Experts' attributes and
job performance.
Table 6 previously displayed the 50 survey items and the mean response of the 30 participants;
48 of the 50 items were determined to be a need. Table 8, below, shows the eight items that were
intended to assess the importance Leader Experts placed on an Expert's acclimation to Unit
123's culture. This cultural model does not refer to how an Expert acclimates to the culture, but
rather to whether Experts need to conform to the culture statement itself. Table 8 compares the
mean of the organizational items, the overall mean, and the mean of the item specifically
focused on the culture statement. While the mean of the organizational items is slightly higher
than the overall mean, 3.62 to 3.60 respectively, the culture statement item shares the lowest
mean of the entire survey, 2.6. This indicates that while Unit 123's culture is considered
important to uphold, the culture statement does not properly reflect that culture and does not
need to be upheld. Since the culture statement is not appropriate, the organization cannot uphold
it; upholding the current statement is, therefore, not a need.
Table 8

Survey Results for Organizational Knowledge of Leader Experts

Organizational item | Mean
23. Demonstrates integrity | 3.8
24. Ability when working in an ambiguous environment | 3.5
25. Understands Unit 123's internal relationships and personalities and how to navigate them | 3.3
27. Embodies Unit 123's core values: trust, adaptability, and commitment | 3.6
36. Understands how his decisions and actions impact the larger organization | 3.7
37. Demonstrates adaptability | 3.5
43. Ability to identify risk and mitigate it | 3.9
Mean of organizational items | 3.62
Mean of all items | 3.60

Culture statement alone
26. Upholds Unit 123's culture statement, "The Relentless Pursuit of Excellence in Everything that we do" | 2.6
Interview findings. The data revealed that the organization does uphold its culture but
that Leader Experts do not agree with the culture statement itself. When Leader Experts were
asked about Unit 123’s culture statement or brought up the culture statement question on their
own, these were some of the responses. Participant 17 brought it up during the survey:
The culture statement. I hate this. Why do we still have this thing? He (previous leader)
came up with it, posted it everywhere, and nobody likes it. Why haven’t we (Unit 123)
taken it down and acted like it never happened?
During the interview Participant 27 said,
I don’t disagree with the culture statement; it is right. However, it was not collectively
thought up or agreed upon. Did you know it’s the culture statement for a TV news
channel? If we really feel we “need” a culture statement, let's take the time and come up
with one. By the way, I don’t think we need one.
Participant 12 pointed out,
Having a culture statement goes against our culture. It puts us in a box, tells people what
we do, and how we do it. We (Unit 123) constantly adapt and find ways to do the
impossible, because we find a way to accomplish the mission. If one sentence defined us,
we would lose our competitive advantage.
Participant 20 revealed,
I know no Expert likes the culture statement, and I understand, but it’s not for Experts,
it’s for support guys. Understand that 90 plus percent of Unit 123 is only here to support
the Experts and they haven’t gone through the same process to get here and don’t
intuitively uphold Unit 123’s culture. Yes the culture statement could be better, but
Experts will never be happy with it, because they don’t need it.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that the organization needs to uphold its culture
statement is not a need. The culture statement item had a mean of 2.6, tied for the lowest of any
item on a survey whose overall mean was 3.6. The interviews showed that 97% of the
participants disagreed with the culture statement. Through analysis of both the surveys and the
interviews, it was apparent that the culture statement itself is not aligned with Unit 123's culture
and is, therefore, not a need.
Influence 2. The organization needs a culture that encourages innovation, high
standards, and performance improvement.
Survey results. As presented in the previous influence, Table 8 shows the eight items
that were intended to assess the importance Leader Experts placed on an Expert's acclimation to
Unit 123's culture. Table 8 showed that the mean of the organizational items is slightly higher
than the overall mean, 3.62 to 3.60 respectively, indicating that Leader Experts hold Unit 123's
culture in high regard. Innovation, high standards, and performance improvement are critical
components of Unit 123's culture, and the survey results reveal that Unit 123 understands and
upholds them. Therefore, this organizational influence is determined to be an asset.
Interview findings. The data revealed that the organization does encourage innovation,
high standards, and performance improvement as part of its culture. One hundred percent of
Leader Experts agreed that Unit 123’s culture encourages these elements. However, 83 percent
of Leader Experts discussed how some leaders are not cultivating innovation at the same level as
previous generations. Examples of responses to the question of whether Unit 123 encourages a
culture of innovation, high standards, and performance improvement follow. Participant 30, the
oldest of those interviewed, said,
The culture of high standards and performance is not what brings guys here, it’s the
innovation. In Experts’ old units, they are told what to do and how to do it. Innovation is
not only new equipment and weapons, but it’s also the innovation to find new ways to do
things. I think that freedom of movement and action is one of our biggest recruitment
tools and we are losing it. I’ve seen a slow change to a more cautious command where
leaders want to go with the tried and true rather than new things.
Participant 18 responded,
I think we (Unit 123) do a good job, at least better than anywhere else in the Army. I
think high standards and high performance is just expected when you get hired.
Encouraging innovation, that is where we are so different. Encouraging innovation
means, leadership must assume risk, and that is hard outside this building.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that the organization needs a culture that encourages
innovation, high standards, and performance improvement is an asset. The survey results
revealed that innovation, high standards, and performance improvement are critical components
of Unit 123's culture. Even though the interviews exposed that 83% of Leader Experts felt
previous generations cultivated innovation better, 100% of Leader Experts agreed that Unit
123's culture encourages innovation, high standards, and performance improvement. Through
the examination of both surveys and interviews, this influence was determined to be an asset.
Cultural Settings
Influence 1. The organization needs to create an effective method for performance
feedback.
Survey results. Surveys did not highlight or assess this influence.
Interview findings. The data exposed that the organization needs to create an effective
method for performance feedback. Participant 12 said,
The reason I filled out that survey, we are sitting here, and you are doing this research is
to make this happen, right? We need something to replace the NCOER, and actually
have a standard in place to give guys some feedback on their performance.
Participant 9 responded,
Experts get a ton of feedback during the training course, and I think they expect it will
continue when they reach a team. It doesn’t for everyone. Anything that will force guys
to have a constructive conversation about how they are performing is vital. These
questions look like a good approach.
Participant 2 replied,
I think guys get feedback, but not always constructive. I see Team Leaders making fun
of a guy’s faults, and that might be all the feedback provided. Oddly enough, this works
on most guys. But when hints and nudges do not work, something has to be done. A tool
that Team Leaders have to do that is standardized will at least give Experts a common
starting place for evaluations and counseling.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that the organization needs to create an effective
method for performance feedback is a need. One hundred percent of the Leader Experts said the
NCOER was not adequate and that Unit 123 should create its own evaluation method. In
addition, one hundred percent of the participants recommended that the evaluation be mandatory.
This influence was determined to be a need.
Influence 2. The organization needs to provide feedback for Experts, so they know how
they ranked amongst other Experts.
Survey results. Surveys did not highlight or assess this influence.
Interview findings. The data showed that the organization needs to provide feedback
for Experts so they know how they rank amongst other Experts. Participant 3 stated,
These guys crave feedback. They get it all through the training course, and most of the
time, unless they are messed up, they don’t hear anything. No news is good news, that’s
how I was brought up in Unit 123. Guys nowadays need more at-a-boys. I think we need
a routine in place to force both the leaders and Experts to talk, good and bad.
Participant 15 said,
That’s one of the main problems with the NCOER. It ranks Experts against their
equivalent in the regular military. They have volunteered for the hardest path at every
turn, challenged themselves to be one of the best soldiers possible, and the Army ranks
them against the guy just looking for a paycheck and trying not to deploy overseas.
Experts need to be evaluated against other Experts, so they understand where they rank
amongst their actual peers.
Participant 21 responded with passion,
This is huge, I hate this, it’s personal to me. A guy can be on a team for years, never told
they are doing anything wrong, and one day they are told that they are not getting to be a
Team Leader or passed over for something else because of their performance. But no one
ever said to them that they weren’t performing, how can they fix something if they don’t
know they are wrong? If a guy is just average to below average, in Unit 123, this can
happen with poor leadership. Any evaluation tool, survey, forced counseling, anything
would force feedback. We need to force feedback.
Observation. No observations were made for this influence.
Document analysis. Document analysis did not highlight or assess this influence.
Summary. The assumed influence that the organization needs to provide feedback for
Experts, so they know how they rank amongst other Experts, is a need. Every Leader Expert
made comments and suggestions expressing a desire for feedback. With one hundred percent of
the participants desiring mandatory feedback of some sort, it is determined that this influence is
a need.
Summary of Validated Influences
Tables 9, 10, and 11 show the knowledge, motivation and organization influences for this
study and their determination as an asset or a need.
Knowledge
Table 9

Knowledge Assets or Needs as Determined by the Data

Assumed knowledge influence | Asset or need?
Factual: Leader Experts need to identify common attributes of successful and unsuccessful Experts. | Need
Procedural: Leader Experts need an evaluation method to properly assess Experts within Unit 123. | Need
Conceptual: Leader Experts need to identify what Experts are considered successful and unsuccessful while serving in Unit 123. | Need
Motivation
Table 10

Motivation Assets or Needs as Determined by the Data

Assumed motivation influence | Asset or need?
Self-Efficacy: Leader Experts are confident in their ability to accurately evaluate Experts' job performance under their command. | Need
Utility-Value Construct: Leader Experts need to see the usefulness in identifying attributes of high performing Experts to improve the assessment and selection process. | Asset
Organization
Table 11

Organization Assets or Needs as Determined by the Data

Assumed organization influence | Asset or need?
Cultural Models: The organization needs to uphold its culture statement. | Asset
Cultural Models: The organization needs a culture that encourages innovation, high standards, and performance improvement. | Asset
Cultural Settings: The organization needs to create an effective method for performance feedback. | Need
Cultural Settings: The organization needs to provide feedback for Experts, so they know how they rank amongst other Experts. | Need
Chapter Five will provide recommended solutions for each of the six influences
determined to be a need in Chapter Four. These recommendations will be presented with
implementation and evaluation plans based on empirical evidence.
CHAPTER FIVE: RECOMMENDATIONS
Introduction and Overview
The assumed influences were validated and presented in Chapter Four, categorized
under knowledge, motivation, and organization challenges. Below, Chapter Five discusses the
recommendations for the knowledge, motivation, and organizational gaps that were identified in
this study. The New World Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2016) will be the
approach used to integrate the recommendations and evaluate results. Before the results are
presented, a brief reintroduction to the organizational context and mission is provided, followed
by the performance and stakeholder goals.
Organizational Context and Mission
Unit 123, a pseudonym, is a Special Operations unit that provides subject matter
expertise for the United States' priorities and initiatives. Unit 123 is located within the
continental United States, positioned with the necessary support to deploy quickly on behalf of
the United States and its allies. The organization recruits from all branches of the armed forces
and internally selects, assesses, trains, and develops service members, creating "Experts," a
pseudonym, to conduct operations. These Experts are among the most diligently selected and
trained individuals in government service. Once an Expert has successfully served on an
operational team for eight to twelve years, including at least two years as a Team Leader (TL),
and has been identified to take the next step in leadership, the Expert is asked to be an Instructor
in the Training Section. From Unit 123's website, the Training Section's mission is "Recruit,
Assess, Select,
and Train Unit 123 experts, provide continuity/consistency for institutional training, and subject
matter expertise for Unit 123 priorities and initiatives.”
Organizational Performance Goal
Unit 123’s performance goal is by June 2020; it will identify and report KMO attributes
for high-performing Experts and incorporate a process to recruit identified attributes in 100% of
candidates, increasing the ten-year retention rate of Experts by 20%. The performance goal
directly supports Unit 123’s global goal of by June 2022; it will have implemented a new
evaluation method, identified attributes, and adjusted recruitment strategies seeking identified
attributes. During a meeting with the Training Section’s Command Sergeant Major (CSM) and
Commander, Unit 123’s CSM, and the cadre, the researcher’s dissertation subject and the
potential benefit to Unit 123 was discussed, and the goal was established. The ten-year
benchmark was established since that is the traditional amount of time an Expert will be on an
operational team, and a 20% retention rate increase is desired without factoring casualties from
combat and training. Even though it will take ten years to determine the achievement of this goal
completely, the Training Section can set benchmarks upon completion of the training course and
during the completion of each periodical evaluation method suggested in this research.
Description of Stakeholder Groups
Unit 123's Senior Leaders need to provide command guidance to the recruitment, assessment, and selection sections so that new potential Expert candidates can be identified and selected for training. The Training Section's Instructors must identify and collect the necessary data from the training course and influence operational Experts to complete surveys on current Experts. Leader Experts must collaborate to identify what KMO attributes are desired of Experts and facilitate the implementation of a new evaluation method.
Goal of the Stakeholder Group for the Study
Table 12
Organizational Mission, Global Goal, and Stakeholder Performance Goals

Organizational Mission
Recruit, Assess, Select, and Train Unit 123 Experts, a pseudonym; provide continuity/consistency for institutional training and subject matter expertise for Unit 123 priorities and initiatives.

Organizational Performance Goal
Unit 123's goal is that, by June 2020, it will identify and report KMO attributes of high-performing Experts and incorporate a process to recruit for the identified attributes in 100% of candidates, increasing the ten-year retention rate of Experts by 20%.

Unit 123's Senior Leaders | Training Section's Instructors | Leader Experts
By 2019, the leadership will emplace a new profile for Experts that will increase retention by 20% over a ten-year period. | By 2019, the Training Section instructors will develop a profile for Experts that will increase retention by 20% over a ten-year period. | By 2019, Leader Experts will develop evaluation questions designed to identify desired attributes of Experts that are likely to be retained for a ten-year period.
In order to accomplish this organizational goal, all stakeholder groups will play an important role. However, for practical purposes, the stakeholders of focus for this study will be Leader Experts. Leader Experts are Experts that have already completed eight to twelve years on an operational team and are either currently an instructor or have already completed instructor time and are now a leader of multiple teams. There are currently only 30 Leader Experts serving within Unit 123. Although Leader Experts make up only 1-2% of Unit 123's soldiers, they understand the selection and training process and still have a direct impact on the operational environment and on the Experts on teams. Leader Experts are most affected if the organizational goal is achieved.
To develop the stakeholder goal, Leader Experts from Unit 123 must identify what KMO attributes are desired and what survey questions can identify those attributes. This process will involve Unit 123 Senior Leaders, psychologists, recruiters, and instructors, as well as Leader Experts and Experts themselves. Historically, roughly 80% of Experts remain operational after ten years of serving in Unit 123. Once this study introduces new recruitment data, Unit 123 can measure the attrition rate continuously and identify trends well before the ten-year benchmark. Increasing the retention rate will increase Unit 123's operational capacity, which directly affects the stakeholders' capability overseas and allows more time at home with their families. If this goal is not achieved, Unit 123 will continue to operate at 80% of its capacity, placing a higher operational demand on Experts and their families.
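For concreteness, one plausible reading of the 20% target, assuming it is a relative increase over the historical 80% baseline rather than an absolute percentage-point change, works out as:

    r_target = r_baseline × 1.20 = 0.80 × 1.20 = 0.96

That is, roughly 96 of every 100 Experts would remain operational at the ten-year mark; the goal statement itself does not specify which reading is intended.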
Purpose of the Project and Questions
The purpose of this study is to improve Unit 123's evaluation method in order to identify attributes of top-performing Experts. Special Operations Experts consist of military personnel from the Army, Navy, Air Force, and Marines, but they have an inadequate evaluation method, which prevents the identification of attributes of top performers. Included will be an in-depth explanation of the knowledge, motivation, and organizational gap analysis model developed by Clark and Estes (2008). Once the knowledge, motivation, and organizational influences are defined, they can be used as a lens to examine Experts. The implementation of a new evaluation method will facilitate Unit 123's performance goal that, by 2022, it will have implemented a new evaluation tool, identified attributes, and adjusted recruitment strategies seeking identified attributes.
The research questions, which Creswell (2014) sees as major signposts for readers, are:
1. What are the knowledge, motivation, and organizational attributes of Experts
necessary for Unit 123 to understand in order to develop an effective evaluation
method?
2. What knowledge and motivational attributes of top-performing Experts correlate
with Unit 123’s evaluation methods?
3. What are the Experts’ knowledge and motivation influences that interact with
Unit 123’s culture, and how can these Experts’ knowledge and motivation
influences help achieve Unit 123’s goals?
Recommendations for Practice to Address KMO Influences
Introduction. According to Clark and Estes (2008), in order for organizations to achieve their goals, they must address knowledge gaps. Krathwohl (2002) categorizes knowledge influences into four knowledge types: factual, conceptual, procedural, and metacognitive. Table 13 consists of the factual, conceptual, and procedural assumed knowledge influences and their probability of being validated, based on the influences most frequently mentioned during the surveys, informal one-on-one interviews, and the literature review as affecting achievement of the stakeholder's goals. Table 13 indicates that these influences have a high probability of being validated and are important for achieving the stakeholder goal. In addition, Table 13 outlines the related theoretical principles and context-specific recommendations for these high-probability influences resulting from this mixed-methods research.
Table 13
Summary of Knowledge Influences and Recommendations

Assumed Knowledge Influence (D-F = Declarative Factual; D-C = Declarative Conceptual; P = Procedural; M = Metacognitive) | Validated as an Asset or Need | Priority (Y, N) | Principle and Citation | Context-Specific Recommendation
Leader Experts need an evaluation method to properly assess Experts within Unit 123. (P) | Need | Y | Use training when employees need a demonstration, guided practice, and feedback to perfect a new procedure (Clark & Estes, 2008). New employees who are trained and assessed at frequent intervals have a higher likelihood of adapting to the company's culture and achieving a higher rate of success (Grossman & Salas, 2011). | Provide Leader Experts and Team Leaders training and a glossary of terminology related to the evaluation tool, and provide a method of feedback. Schedule periodic assessments of the evaluation tool in order to discuss the positive and negative results and perceptions.
Leader Experts need to identify common attributes of successful and unsuccessful Experts. (D-F) | Need | Y | Acquiring skills for expertise frequently begins with learning declarative knowledge about individual procedural steps (Clark & Estes, 2008). | Define what attributes are common amongst successful and unsuccessful Experts and what questions identify those attributes in the evaluation tool.
Leader Experts need to identify what Experts are considered successful and unsuccessful while serving in Unit 123. (D-C) | Need | N | To develop mastery, individuals must acquire component skills, practice integrating them, and know when to apply what they have learned (Schraw & McCrudden, 2013). | Implement the Leader Expert-developed evaluation tool and use the results to correlate with Experts that were fired from Unit 123 and Experts that reached senior positions in Unit 123.
Procedural knowledge. The data shows that Leader Experts lack procedural knowledge about how to evaluate their subordinates precisely and consistently. During the data collection, Leader Experts expressed their negative opinions of the current evaluation methods and their overwhelming desire for a Unit 123-specific evaluation tool. Leader Experts have been using the existing evaluation methods for their entire careers, and acknowledging that prior knowledge can hinder learning is necessary (APA, 2010). Leader Experts need an evaluation method to accurately assess Experts' job performance within Unit 123. The results and findings of the interview process showed that 100% of Leader Experts believe Experts are not evaluated correctly and that the current method is inadequate. Based on information processing system theory, Leader Experts' prior knowledge can help or hinder learning; this must be understood to close the procedural gap. Clark and Estes (2008) suggest using training when employees need a demonstration, guided practice, and feedback to perfect a new procedure. This suggests that leaders in Unit 123 need training and guidance on new tools and procedures. The recommendation, then, is to provide Leader Experts and Team Leaders training and a glossary of terminology related to the evaluation tool. An example would be providing Leader Experts with a working copy of a potential evaluation tool, with explanations of terminology and a means to provide feedback.
Experts initially underwent an evaluation of military skills, stress resilience, adaptability, mental and social intelligence, and psychological attributes, but they have not all been evaluated after becoming Experts (Picano, Roland, Williams, & Rollins, 2006). Unit 123 does not have an internal mechanism in place to evaluate Experts once they are accepted into the organization and currently relies on the US military's NCOER. Procedural knowledge is knowing how to do something, or the methods, techniques, algorithms, and methodologies required to achieve goals (Rueda, 2011). Currently, Leader Experts recognize the need for a new evaluation method but do not have the knowledge to create or implement one. Therefore, Leader Experts will have to be given a potential evaluation tool and the training to implement it in order to close the procedural and overall organizational gaps.
Declarative-factual knowledge: Identify common attributes. Leader Experts need to identify common attributes of successful and unsuccessful Experts. The results and findings of the interview process indicated that Leader Experts need comprehensive declarative-factual knowledge of the attributes associated with Experts' performance and of how to identify them. The theory of attributions supported the recommendations to close this declarative-factual knowledge gap. Anderman and Anderman (2009) recommend accurate feedback that identifies the skills or knowledge the learner lacks, along with communication that these skills and knowledge can be learned, followed by the teaching of those skills and knowledge. This suggests that if Leader Experts are provided with common attributes and their associated indicators, new identification skills can be developed. The recommendation, then, is to have Unit 123 psychologists define common attributes of successful and unsuccessful Experts and determine what survey questions identify those attributes in the evaluation tool.
Factual knowledge is commonly known as facts, such as terminology or details, which must be understood to solve a problem effectively (Rueda, 2011). Cline's (2017) examination of Mission Critical Teams (MCT), with which Unit 123 is consistent, extrapolated the 20 most common attributes from the 148 initial attributes. Ranked in order of frequency of occurrence in the data collection, they are: peer acceptance, adaptability, drive, professionalism, bias for action, aptitude, integrity, toughness, agency, communicative mindfulness, discernment, discipline, leadership, accountability, fitness, confidence, loyalty, trust, and courage (Cline, 2017). These common attributes act as factual knowledge of the attributes of Experts in Unit 123 and, according to Krathwohl (2002), are the terminology and basic elements that must be understood for Instructors to solve problems.
Declarative-conceptual knowledge: Identify what success is. Leader Experts need to determine which Experts are considered successful and unsuccessful while serving in Unit 123. The results and findings of this study show that the majority of Leader Experts agree that any Expert who becomes a Leader Expert should be considered successful, but that Experts who do not reach the position of Leader Expert would not necessarily be considered unsuccessful. Interviews of Leader Experts also showed that although identifying fired Experts as unsuccessful is not 100% accurate, it is consistent enough for data analysis. Schraw and McCrudden (2013) state that to develop mastery, individuals must acquire component skills, practice integrating them, and know when to apply what they have learned. This would suggest that the position of Leader Expert requires the proper level of mastery and intestinal fortitude to be considered successful in Unit 123, and that when an Expert is fired, they can be considered unsuccessful. The recommendation, then, is to implement the Leader Expert-developed evaluation tool and use the results to correlate with Experts that were fired from Unit 123 and Experts that reached at least the position of Leader Expert in Unit 123.
Conceptual knowledge pertains to the interrelationships between the different elements of an organization and how the knowledge of categories, principles, and generalizations functions together (Krathwohl, 2002). Leader Experts' knowledge of an organization's structure and hierarchical positions enables another method of identifying successful and unsuccessful Experts. An established list of successful and unsuccessful Experts is necessary to cross-reference selection and training data to find KMO attributes and achieve the organizational goal.
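As a purely illustrative sketch of the cross-referencing recommended above, the following Python fragment, with hypothetical column names and invented data, computes a point-biserial correlation between one evaluation item and a binary success label (reached Leader Expert versus fired):

    # Illustrative only: column names and data are assumptions, not Unit 123 records.
    import pandas as pd
    from scipy import stats

    # One row per Expert; "outcome" is 1 if the Expert reached Leader Expert, 0 if fired.
    df = pd.DataFrame({
        "adaptability_score": [4, 5, 3, 2, 5, 4, 1, 5],
        "outcome":            [1, 1, 1, 0, 1, 1, 0, 1],
    })

    # Point-biserial correlation: association between a continuous evaluation
    # item and the binary success/failure label.
    r, p_value = stats.pointbiserialr(df["outcome"], df["adaptability_score"])
    print(f"r = {r:.2f}, p = {p_value:.3f}")

An item whose scores correlate strongly with the success label would be a candidate attribute for the recruitment profile; with only 30 Leader Experts and roughly 200 Experts, any such correlation would need careful significance testing.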
Motivation Recommendations
Introduction. Table 14 presents the list of motivational influences, whether they were validated, and recommendations based on the data collected during the surveys and interviews and supported by the literature review and a review of motivation theory. Clark and Estes' (2008) discussion of motivational theory focuses on three indicators of motivation in task performance: choice, taking action to pursue the attainment of a goal; persistence, finishing regardless of adversity or distraction; and mental effort, weighing the effort required to achieve the goal and deciding to expend it. All the participants hold leadership positions at the highest levels of the military, and it can be assumed they have a high level of motivation concerning choice, persistence, and mental effort. As indicated in Table 14, the motivational influences were validated and are a priority for achieving the stakeholder and organizational goals. Table 14 also shows the recommendations for these influences based on theoretical principles and analysis of the collected data.
Table 14
Summary of Motivation Influences and Recommendations

Assumed Motivation Influence | Validated as an Asset or Need | Priority (Y, N) | Principle and Citation | Context-Specific Recommendation
Leader Experts are confident in their ability to accurately evaluate Experts' job performance under their command. (Self-efficacy) | Need | Y | When employees have high self-efficacy, they believe they can successfully lead others while understanding their own shortcomings and potential areas of growth (Grossman & Salas, 2011; Usher & Pajares, 2006). Feedback, modeling, and positive expectancies increase self-efficacy and enhance motivation (Usher & Pajares, 2006). | Provide a periodic, instructive, open forum to facilitate feedback amongst leaders that conduct evaluations. Promote a discussion of lessons learned and create an environment for candid feedback to improve the tool or the evaluators themselves.
Leader Experts need to see the usefulness in identifying attributes of high-performing Experts to improve the assessment and selection process. (Utility-value construct) | Need | Y | Utility value is how much the task is aligned with an individual's personal and professional goals and what the cost-benefit analysis is of performing the task (Eccles, 2009). | Provide examples of attributes of high performers already identified through the data collection and previous studies conducted by Unit 123 psychologists. Provide examples of Experts who are recognized in Unit 123 as successful Experts and Leaders, along with their qualities.
Self-efficacy. Leader Experts are confident in their ability to accurately evaluate Experts' job performance under their command. The results and findings of this study found that 100% of the Leader Experts were confident in their own ability to evaluate subordinates. However, only 80% of Leader Experts were confident that Team Leaders under their command could consistently evaluate the Experts beneath them. In order to close this gap, this study recommends a principle rooted in expectancy-value theory. When employees have high self-efficacy, they believe they can successfully lead others while understanding their shortcomings and potential areas of growth (Grossman & Salas, 2011; Usher & Pajares, 2006). Usher and Pajares (2006) found that feedback, modeling, and positive expectancies increase self-efficacy and enhance motivation. This would suggest that Leader Experts must provide training, modeling, and feedback to increase their self-efficacy in transferring skills to Team Leaders. The recommendation is to provide a periodic, instructive, open forum to facilitate feedback amongst leaders that conduct evaluations and to create an environment for candid feedback to improve the tool or the evaluators themselves.
Usher and Pajares (2006) theorize that one's self-efficacy is a result of prior knowledge, previous successes and failures, and feedback from others with regard to completing the task. Usher and Pajares (2006) discuss how mastery experience is proven to predict heightened self-efficacy, and Experts gain a great deal of mastery in the "hard" skills related to the job, including shooting, close-quarters combat, mobility, climbing, and skydiving. However, Leader Experts have not received any formal education on evaluation or instruction. Leader Experts rely on "on the job" training, which is often more than adequate for most Experts when they are provided feedback. A portion of one's belief in their capabilities and confidence in how others perceive their abilities is derived from personal and social constructs (Pintrich, 2003).
Utility-value construct. Leader Experts need to see the usefulness of identifying attributes of high-performing Experts to improve the assessment and selection process. The results and findings of this study indicate that 100% of Leader Experts see the need for a new evaluation tool, but only 87% confirmed that it would also help in the assessment and selection process. A recommendation rooted in expectancy-value theory will be used to close this motivation gap. Eccles (2009) states that utility value is how much the task is aligned with an individual's personal and professional goals and what the cost-benefit analysis is of performing the task. If Leader Experts are able to see how evaluation data from existing Experts can be used to increase assessment and selection effectiveness, then they can see the benefit side of the cost-benefit analysis. The recommendation is to provide examples of attributes of high performers already identified through the data collection and previous studies conducted by Unit 123 psychologists.
Clark and Estes (2008) state that motivation gets people moving, drives them to continue moving, and dictates how much time and effort they will invest in tasks. The utility-value construct can be seen through Eccles' (2009) expectancy-value motivational theory, whose first fundamental question is "Do I want to do the task?" and whose second question is "Is it worth my time and energy?" When Leader Experts associate an increased benefit with an increased cost, especially when the benefits increase faster than the costs, they will be motivated. Leader Experts and instructors need to see the value of assessing, selecting, and training the best possible Experts, and that by identifying key attributes of both successful and unsuccessful Experts, Unit 123's leaders can attain a higher percentage of quality Experts to meet its mission.
Organization Recommendations
Introduction. An organization's culture is the driving force for change, or it can be a barrier that prevents positive change (Clark & Estes, 2008). Table 15 consists of two cultural model and two cultural setting influences and their probability of being validated, based on the influences most frequently mentioned during the surveys, informal one-on-one interviews, and the literature review as affecting achievement of the stakeholder's goals. Cultural models are the often invisible beliefs, values, and attitudes that result in cultural practices and shared mental schema within an organization, and cultural settings are often the visible manifestations of the cultural models, such as rewards, communication, and organizational policies and rules (Gallimore & Goldenberg, 2010). Table 15 also shows that neither of the cultural model influences was validated as a gap, and that one of the two is considered a priority. However, the two cultural setting influences were validated as both a gap and a priority for Unit 123. In addition, Table 15 outlines the related theoretical principles and context-specific recommendations resulting from this mixed-methods research.
Table 15
Summary of Organization Influences and Recommendations

Assumed Organization Influence | Validated as an Asset or Need | Priority (Y, N) | Principle and Citation | Context-Specific Recommendation
Cultural Model Influence: The organization needs to uphold its culture statement. | Asset | N | The culture statement acts as a collective identity, which Bolman and Deal (2013) assert strengthens and bonds the organization's internal as well as external perception. | While there exists an abundance of dislike for the organization's culture statement itself, individuals must maintain and promote the existing culture.
Cultural Model Influence: The organization needs a culture that encourages innovation, high standards, and performance improvement. | Asset | Y | Grove (2005) asserts that high-performing organizations encourage as well as reward innovation, high standards, and performance improvement. | During existing leadership courses and off-site senior leadership meetings, continue to ensure innovation, high standards, and performance improvement are still a priority.
Cultural Setting Influence: The organization needs to create an effective method for performance feedback. | Need | Y | A continuous feedback loop is vital for professional development and achievement of organizational goals (Archibald, Coggshall, Croft, & Goe, 2011). Clark and Estes (2008) assert that not only do employees benefit from a feedback loop, but organizational performance also increases when leadership stays involved with employees and continuously assesses their performance. | Design a specific evaluation method purpose-built to assess an Expert's job performance and identify attributes. The evaluation must have input and buy-in from stakeholders.
Cultural Setting Influence: The organization needs to provide feedback for Experts so they know how they ranked amongst other Experts. | Need | Y | A continuous feedback loop is vital for professional development and achievement of organizational goals (Archibald, Coggshall, Croft, & Goe, 2011). Rueda (2011) stated that without transparent and systematic processes, organizations are at risk of failure. | In conjunction with a specific evaluation method purpose-built to assess Experts will be the requirement for the evaluator and the evaluated to discuss the results, acting as a forcing function to stimulate a feedback loop.
Cultural models. The results and findings of this study show that 100% of Leader Experts agree that the organization encourages a culture of innovation, high standards, and performance improvement. However, 97% of Leader Experts do not agree with Unit 123's culture statement. Unit 123's encouragement of its culture is considered an asset. Nevertheless, Unit 123's culture statement itself needs to be changed to reflect the stakeholders' opinions; but since Unit 123's culture is effective, this influence is considered an asset. Culture is often thought of as a group's shared identity and creates stability, and when a leader uses power to enforce new behavior, it shapes the culture and, in turn, changes the group (Schein, 2017). While there exists an abundance of dislike for the organization's culture statement itself, individuals must maintain and promote the existing culture. The recommendation for this influence is in two parts: 1) since the organization's culture is determined to be an asset, continue the status quo; and 2) during regularly scheduled leadership meetings, put the culture statement on the agenda to determine if or how it should be changed, or whether Unit 123 needs a culture statement at all.
The culture statement acts as a collective identity, which Bolman and Deal (2013) assert strengthens and bonds the organization's internal as well as external perception. While empirical research addresses the importance of a culture statement, interviews showed that 91% of the participants believe Unit 123 should not even have a culture statement. In line with the above recommendation, Kouzes and Posner (2007) recommend that discussions be initiated to understand what individual actions contribute to the achievement of the organizational goal. Unit 123 must support organizational change because it is a natural part of organizational development and is necessary for success in the long term (Kezar, 2001; Senge, 1990).
Cultural settings. The results and findings of this study indicate that 100% of Leader Experts agree that Unit 123 needs an organic evaluation method and that it needs to be mandatory to achieve its organizational goals. A recommendation grounded in attribution theory will be used to address these organizational gaps. Clark and Estes (2008) assert that not only do employees benefit from a feedback loop, but organizational performance also increases when leadership stays involved with employees and continuously assesses their performance. This would suggest that providing Experts a consistent feedback loop would benefit not only them but the larger organization. The recommendation is two-fold: 1) design a specific evaluation method purpose-built to assess an Expert's job performance and identify attributes; and 2) require that the evaluator and the evaluated discuss the results, opening a feedback loop. An example would be a Team Leader routinely conducting a job performance evaluation on each of his team members using a survey-style tool similar to the one proposed in this study. After completion, the Team Leader would sit down with the Expert, discuss his strengths and weaknesses, and provide guidance on how to address any problem areas. These recommendations would help Experts understand where they rank amongst their peers.
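As a hedged illustration of the forcing function described above, a minimal sketch of what one evaluation record might look like, with entirely hypothetical field names, follows:

    # Illustrative sketch only: the record structure and field names are
    # assumptions for this example, not Unit 123's actual evaluation tool.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Dict, Optional

    @dataclass
    class ExpertEvaluation:
        expert: str
        team_leader: str
        scores: Dict[str, int] = field(default_factory=dict)  # survey item -> Likert rating
        feedback_discussed: bool = False  # set True only after the sit-down review
        date_completed: Optional[date] = None

    # The Team Leader completes the survey; the mandatory discussion then
    # flips feedback_discussed, opening the feedback loop.
    ev = ExpertEvaluation("Expert A", "Team Leader 1", {"adaptability": 4, "drive": 5})
    ev.feedback_discussed = True
    ev.date_completed = date(2019, 11, 1)

Requiring feedback_discussed to be set before an evaluation counts as complete is one way the discussion requirement could be enforced in software.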
Leader Experts and Team Leaders must provide an environment in which Experts receive constructive and consistent feedback to improve not only themselves but also the organization. When Experts cannot identify the skills and knowledge that they lack, improvement cannot be expected (Anderman & Anderman, 2009). Providing a continuous feedback loop is vital for the professional development of employees and the achievement of organizational goals (Archibald, Coggshall, Croft, & Goe, 2011). Rueda (2011) stated that organizations without transparent and systematic processes are at risk of failure.
Integrated Implementation and Evaluation Plan
Implementation and Evaluation Framework
The New World Kirkpatrick Model will be used as the implementation and evaluation approach for this study (Kirkpatrick & Kirkpatrick, 2016). This most recent version proceeds in the reverse order of the original 1998 model. The model consists of four levels that focus on evaluating training by starting with the organization's goals and working backward to initiate and drive change. Kirkpatrick's four levels, in the recommended order, begin with the end first. Level Four, results, is the degree to which the targeted outcomes occur as a result of the training and support implemented. Level Three, behavior, is the degree to which participants apply the training on the job. Level Two, learning, is the degree to which participants retain, and intend to use, the knowledge and motivation they received from the training. Level One, reaction, is the degree to which participants associate the training with increased job satisfaction and performance. The New World Kirkpatrick Model ties training to performance by determining what the organizational goals are first, then recommending training that is backed by evidence to lead to measurable results. While the model provides a straightforward plan, it also allows flexibility, and the implementer can bypass steps that are deemed unnecessary (Kirkpatrick & Kirkpatrick, 2016).
Organizational Purpose, Need, and Expectations
Unit 123's Training Section's mission is to recruit, assess, select, and train Unit 123 Experts; provide continuity/consistency for institutional training; and provide subject matter expertise for Unit 123 priorities and initiatives. Unit 123's global goal is that, by June 2022, it will have implemented a new evaluation method, identified attributes, and adjusted recruitment strategies seeking the identified attributes. Unit 123 lacks an effective evaluation method, which is a contributing factor to the gap in knowledge of what attributes are consistent in high-performing soldiers and thus what attributes must be recruited, selected, and assessed for to increase leadership potential. Unit 123 needs an evaluation method specifically designed to assess Experts' performance and attributes and to provide constructive feedback.
The stakeholder goal is that, by December 2019, 100% of Leader Experts will have taken part in the survey/interview process, and a new evaluation method designed to identify KMO attributes of Experts will be implemented. During the discovery and research process, it became apparent that without an evaluation method to identify successful and unsuccessful Experts, it would be impossible to investigate their attributes. Since Unit 123 used the NCOER for performance evaluation, which did not accurately assess or highlight the characteristics of Experts, a specially designed evaluation method for Experts could close multiple gaps. The stakeholder goal was identified as the first and vital step in understanding the knowledge, motivation, and organizational influences and barriers associated with developing a new evaluation method. Understanding from the beginning that the goal is to increase the effectiveness of recruitment and retention of Experts allowed the design of the evaluation tool to support the goal.
In order to achieve the organizational goal, the stakeholder's goal must be accomplished first. As Chapter Five is being written, the stakeholder goal has already been attained. Unit 123's leadership decided to implement an evaluation method recommended by this research, and it is currently being used on the first group of Experts. This meets the stakeholder's short-term recommendations, but the data derived from the new evaluation method must still be applied to the recruitment, assessment, selection, and training process to meet the organizational mission and goal. The expectations of the evaluation tool can be assessed in the near term by listening to feedback. Unit 123 must be patient with expectations regarding the effect on the recruitment, assessment, selection, and training process, since it will be years before the data can be utilized effectively and a correlation can be made with the evaluation method.
Level 4: Results and Leading Indicators
Table 16 below displays the proposed Level Four results and leading indicators in the form of outcomes, metrics, and methods for both external and internal outcomes for Unit 123's Leader Experts. The internal outcomes include 100% completion of the interviews, the evaluation being conducted on time for every Expert, and the data being collected and analyzed for predictive analytics. If these internal outcomes are achieved as a result of training, a new evaluation method, and employee engagement, then the external outcomes presented below should also be realized.
Table 16
Outcomes, Metrics, and Methods for External and Internal Outcomes

Outcome | Metric(s) | Method(s)
External Outcomes
Increase in the success rate of the selection process | Percentage of candidates that complete the selection course increases | Assessment and Selection sections' bi-annual course data analysis
Increase in the success rate of the Training Course | Percentage of candidates that complete the Training Course increases | The Training Section's bi-annual course data analysis
Increase in the retention rate of Experts during their career | Percentage of Experts that leave Unit 123 for performance or integrity issues | Unit 123's annual manning process analysis
Internal Outcomes
100% of Leader Experts completed the survey and interview process | All 30 Leader Experts have participated | Confirm participation
Experts are evaluated more frequently | Time elapsed between performance evaluations is less than the current one year | Monitor the implementation of the new evaluation method's eight-month process
100% of Experts are provided performance feedback from Team Leaders | Percentage of Experts that have completed the evaluation and feedback process | Leader Expert confirms that the Team Leader has evaluated every Expert under his command by checking that the evaluation reports are signed
Unit 123 psychologists collect and analyze data from the new evaluation method to identify Experts' attributes | Evaluation data is being entered in the existing data set for predictive analytics | Bi-annual report to Unit 123 Senior Leaders
Level 3: Behavior
Critical behaviors. Table 17 below displays the proposed Level Three behaviors, the degree to which participants apply training while on the job, which has the most significant impact on the results if the implementation plan is performed appropriately (Kirkpatrick & Kirkpatrick, 2016). The crucial element of the critical behavior step is that it requires stakeholders, supporting parties, and leaders to hold one another accountable to ensure the achievement of the goal. Four critical behaviors have been identified that key groups will have to perform consistently to reach the common goal. One, Team Leaders conduct an evaluation for each member of their team and provide feedback. Two, Leader Experts meet with Team Leaders to review evaluations. Three, Unit 123 psychologists analyze data from evaluations to identify trends and propose improvements. Four, Unit 123 psychologists and Senior Leaders use data from multiple evaluations to adjust recruitment strategies. Table 17 presents the metrics, methods, and timing for each of these critical behaviors.
Table 17
Critical Behaviors, Metrics, Methods, and Timing for Evaluation

Critical Behavior | Metric(s) | Method(s) | Timing
1. Team Leaders conduct an evaluation for each member on their team and provide feedback. | The teams must submit a completed evaluation signed by both Team Leader and Expert. | Evaluation submitted to Leader Expert | At least every eight months, or when Leadership requests an impromptu evaluation
2. Leader Experts will meet with Team Leaders to review evaluations. | Leader Expert signs evaluation | Leader Expert releases evaluation to command and psychologists | At least every eight months, or when Leadership requests an impromptu evaluation
3. Unit 123 psychologists analyze data from evaluations to identify trends and propose improvements. | Psychologists submit results to Senior Leadership | Senior Leadership disseminates results to appropriate Leader Experts | At least every four months, or when Leadership requests an impromptu evaluation
4. Unit 123 psychologists and Senior Leaders use data from multiple evaluations to adjust recruitment strategies. | Unit 123 psychologists and Senior Leaders submit results to Recruitment Section leadership | Recruitment Section adjusts the recruitment, assessment, and selection process, seeking newly identified attributes of successful Experts | Once data collection and analysis are sufficient to implement
Required drivers. Level Three is critical for the achievement of the goals; therefore, it highlights the required drivers that support, and hold accountable, the critical behaviors outlined above in Table 17 (Kirkpatrick & Kirkpatrick, 2016). These drivers reinforce, encourage, reward, and monitor the individuals and groups that play a role in contributing to the behaviors and, in turn, the achievement of the goals. Support from groups at all levels of Unit 123 is required to reach the outcome behaviors; these groups include, but are not limited to, Experts, Team Leaders, Leader Experts, Unit 123 psychologists, Senior Leaders, and command teams. Table 18 below outlines the recommended drivers needed to support the critical behaviors presented above.
Table 18
Required Drivers to Support Critical Behaviors

Method(s) | Timing | Critical Behaviors Supported
Reinforcing
Researcher and psychologists outline ongoing development and implementation procedures in an open-forum briefing | Annual | 3, 4
Leader Experts conduct one-on-one meetings with underperforming Experts following evaluations | Every eight months | 1, 2
Team Leaders are encouraged to provide feedback on the evaluation tool at the end of the survey or in person with psychologists | Ongoing | 3, 4
Unit 123 psychologists adapt the evaluation method to ensure questions are relevant to Experts, based on pre-programmed feedback mechanisms | Ongoing | 1, 2, 3, 4
Leaders throughout Unit 123 conduct off-site meetings to promote and assess the evaluation method | Quarterly | 2, 3, 4
Encouraging
Team Leaders and Leader Experts meet to discuss the pros and cons of the evaluation tool and share experiences | Monthly | 1, 2
Psychologists provide feedback on results and how they are being used to improve recruitment | Bi-annual | 4
Rewarding
Evaluation results of poor-performing Experts provide the necessary justification for removal from Unit 123 | Ongoing | 1, 3
Monitoring
Senior Leaders track data collected from evaluations and the impact it has on the current command climate, recruitment, selection, and training of new Experts | Ongoing | 1, 3, 4
Organizational support. Kirkpatrick and Kirkpatrick (2016) reason that without accountability, even the most motivated individuals can go back to the status quo. Unit 123 must support the change process not only during implementation but especially afterward, by monitoring these drivers of the critical behaviors. First and foremost, Unit 123 leadership needs to maintain its "open-door" policy and be open to criticism and potential changes to any portion of the evaluation process. Second, during the quarterly command "off-site" meetings, Senior Leaders must continue to ensure the evaluation tool is being implemented properly and that the data collected is being appropriately analyzed to positively affect the recruitment, selection, and training process.
Level 2: Learning
Learning goals. Following the completion of the recommended solutions previously discussed, Experts in all leadership positions should be able to:
1. Apply the steps necessary to properly evaluate and provide constructive feedback to
Experts. (Procedural)
2. Describe what attributes are desirable and undesirable for Experts to possess. (Factual)
3. Compare evaluation results from Experts of varying success to identify trends.
(Conceptual)
4. Be confident in their ability to accurately evaluate an Expert’s job performance under
their command. (Self-efficacy)
5. Understand the usefulness in identifying attributes of Experts to improve the assessment
and selection process. (Utility-Value)
Program. The learning goals listed in the previous section will be achieved by a series of initial informational briefings that provide leaders an outline of not only the new evaluation method but also the journey that was undertaken to develop such an approach. While a large part of Unit 123's leadership was part of the data collection and development processes, it is still necessary that everyone understand how and why Unit 123 is making a new evaluation method mandatory. During these initial conversations, Unit 123 psychologists will present the user interface of the evaluation tool and discuss how it will continue to be a living document, designed to change with Unit 123. Following the briefings, the researcher and psychologists will conduct four training sessions, one for each section, outlining the steps and rationale behind each step of the new evaluation method. After completing these initial briefings, Team Leaders, Leader Experts, and Senior Leaders should be confident in their ability to administer the new method and understand the importance of providing feedback to the Experts themselves.
Immediately following the first Leader Expert's execution of the evaluation method by his Team Leaders, the researcher and psychologists will sit down with the Leader Expert and his Team Leaders to discuss the results. Using an Expert's just-completed evaluation, previous psychological assessments, and previous counseling, the psychologist can provide leaders the following: which positive and negative attributes were uncovered by which survey item, how correlating the results will identify trends, and how this will be a foundation for the recruitment, selection, and training process. The researcher's and psychologists' explanation of the how and the why behind the evaluation tool will help leaders understand its usefulness and increase their self-confidence in administering the evaluation method. Since the evaluations are recommended to be conducted every eight months, it will be a minimum of eight months before a subsequent evaluation can take place. During this entire process, Unit 123's leadership must monitor and, when necessary, pressure leaders that are reluctant to participate. Unit 123's psychologists must monitor the evaluation feedback and adapt the evaluation method to ensure survey items remain relevant to Experts. The Recruitment and Selection command team must, once they have been provided the data analysis, adjust recruitment strategies and conduct testing to validate their effectiveness.
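As an illustrative sketch of how the eight-month cadence and reluctant-participant monitoring might be tracked, using hypothetical names and dates:

    # Illustrative sketch only: names, dates, and the 240-day approximation of
    # "every eight months" are assumptions, not Unit 123's actual system.
    from datetime import date, timedelta

    EVAL_INTERVAL = timedelta(days=240)  # roughly eight months

    # Hypothetical log of each Expert's most recent signed evaluation.
    last_evaluation = {
        "Expert A": date(2019, 1, 15),
        "Expert B": date(2019, 8, 2),
    }

    def overdue(today):
        """Return the Experts whose next evaluation is past due."""
        return [name for name, last in last_evaluation.items()
                if today - last > EVAL_INTERVAL]

    print(overdue(date(2019, 10, 1)))  # ['Expert A']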
Evaluation of the components of learning. It is important to evaluate whether learning was effective, comprehensive, and focused on reaching the performance goals; otherwise, it can be a waste of resources. Kirkpatrick and Kirkpatrick's (2016) Level 2 looks at the degree to which individuals have acquired the knowledge, skills, attitude, confidence, and commitment based on their effort during training. Participants' confidence in their ability to apply what they learned during training is vital to closing the gap between learning and behavior (Kirkpatrick & Kirkpatrick, 2016). Table 19 below lists evaluation methods as well as timing for these components of learning.
Table 19
Evaluation of the Components of Learning for the Program

Methods or Activities | Timing
Declarative Knowledge "I know it."
Knowledge checks during discussions and small-group activities at Unit 123 professional development training | Post-training, at a scheduled professional development course facilitated specifically for Unit 123 Experts
Whole-group discussions | During and post-training
Conduct mock evaluation and counseling in pairs and share lessons learned with the group | During the training and at professional development training
Procedural Skills "I can do it right now."
Demonstration of evaluating an Expert using hypothetical scenarios in groups to show the proper process | During the training and at professional development training
Participants present the process of evaluating, counseling, submitting to leadership, and receiving data analysis from psychologists | During the training, one-on-one senior leadership counseling, and at professional development training
Attitude "I believe this is worthwhile."
Discussions of the value of evaluations, feedback, and self-awareness | During the training, at professional development training, and at Unit 123 off-site meetings
Share success stories and/or lessons learned from administering the evaluation method | During the training and at professional development training
Likert-scale survey with a narrative option | Conducted at the completion of each evaluation
Confidence "I think I can do it on the job."
Discussions addressing concerns, challenges, potential barriers, and questions | During the training, at professional development training, and at Unit 123 off-site meetings
Likert-scale survey with a narrative option | Conducted at the completion of each evaluation
Commitment "I will do it on the job."
Discussions following practice and feedback | During training
Develop milestones for data analysis and predictive analytics for psychologists that are reliant on maximum participation from Experts | At the end of training and monitored annually
Level 1: Reaction
In Level One, Kirkpatrick and Kirkpatrick (2016) recommend measuring the degree to which participants find the training satisfying, engaging, and relevant to their jobs. Effectively evaluating the Level One components is vital because a high level of each directly correlates with more favorable results. Table 20 highlights the Level One components and the methods to measure reactions to the training.
Table 20
Components to Measure Reactions to the Program

Method(s) or Tool(s) | Timing
Engagement
Attendance | During training and discussions
Completion of evaluations on time | At a predetermined interval, approximately every eight months
Participation during off-site and professional development meetings | During off-site and professional development meetings
Participation by leaders in providing feedback to the psychologists to improve the evaluation method | Conducted at the completion of each evaluation
Relevance
Brief pulse check via survey during evaluations | Surveys conducted at the completion of each evaluation
Brief pulse check via discussions | Discussions during the training, one-on-one senior leadership counseling, at professional development training, and through the command team's "open-door" policy
Customer Satisfaction
Brief pulse check via survey during evaluations | Surveys conducted at the completion of each evaluation
Command Climate survey | Annual Unit 123 survey completed by all Experts, addressing all issues Unit 123 is confronting
Evaluation Tools
In order to maximize the transfer of learning to behavior and show the value of training toward the achievement of organizational goals, evaluation of the program is necessary (Kirkpatrick & Kirkpatrick, 2016). The evaluation tools used during, immediately after, and delayed for a period after the program implementation are outlined in the sections below.
During and immediately following the program implementation. During each of the informational briefings, Unit 123 leaders at every level will take part in an open-forum discussion facilitated by the researcher. Discussion topics will aim to assess the degree to which the leaders perceive the following: how relevant they feel the training is, how committed they are to the process, how confident they are in the accomplishment of the goals, and how satisfied they are with the presented evaluation method. The next opportunity to collect feedback from participants will be during the four training sessions, one for each section, conducted by the researcher and Unit 123 psychologists. During the training sessions, participants will partake in mock evaluations and counseling and share lessons learned with the group. The researcher and psychologists will lead discussions with the same goals as previously described, as well as administer a survey to assess the learners' Level One and Level Two components (see Appendix C).
Delayed for a period after the program implementation. Approximately two weeks after the first section's training session will be the first application of the evaluation method on operational Experts. At the end of the evaluation, the survey provides an opportunity for the evaluator to give feedback on the evaluation method itself. Also, the researcher and Unit 123 psychologists will conduct a meeting with Team Leaders and Leader Experts to assess the evaluation method through open discussion and a Level One and Level Two survey (see Appendix D). The four sections will conduct the initial evaluation of Experts every four months. The above discussions and surveys will be conducted in each section and will take approximately one year to complete across all four sections. Throughout and after the planned evaluations, Unit 123 has scheduled discussions regarding the evaluation process during professional development training, Senior Leader off-site meetings, and the annual all-hands briefings.
Data Analysis and Reporting
The Level Four goals for this implementation plan are split into three groups, separated by time and outcome. First, the internal outcomes associated with the implementation and execution of the new evaluation method will be analyzed and reported. The evaluation survey is conducted entirely on Unit 123's internal network, and participation can be monitored. Unit 123's leadership, supported by the psychologists, can quickly discover the participation rate and see who, if anyone, has not followed command guidance. This data can be displayed on the internal network on demand. As data from evaluations are collected, Unit 123 psychologists will begin data analysis and give Senior Leaders updates on attribute identification derived from the data at monthly commander update meetings. This data will be available to Leader Experts and above.
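As a hedged illustration of the participation monitoring and monthly attribute updates described above, assuming hypothetical column names and figures:

    # Illustrative sketch only: column names and figures are hypothetical, not Unit 123 data.
    import pandas as pd

    # Hypothetical evaluation records pulled from the internal network.
    evals = pd.DataFrame({
        "expert_id":    [1, 2, 3, 4],
        "adaptability": [4, 5, 3, 4],
        "drive":        [5, 4, 4, 3],
    })

    TOTAL_EXPERTS = 200  # assumed population size
    participation_rate = evals["expert_id"].nunique() / TOTAL_EXPERTS
    print(f"Participation: {participation_rate:.1%}")

    # Per-attribute means for the monthly commander update.
    print(evals[["adaptability", "drive"]].mean())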
Second, the external outcomes associated with the implementation and execution of the new evaluation method will be analyzed and reported. These reports will require a minimum of two years before results can be expected: psychologists must first analyze enough data to guide recruitment strategies toward selecting candidates more effectively, and only then can data from selection and, subsequently, the Training Course be analyzed. Two selection and Training Courses are held each year, and Unit 123 will track candidates that were identified by the psychologists' data and compare results. In addition, the overall success rates of both selection and the Training Course will be monitored, compared to previous years, and reported. Lastly, external outcomes associated with the implementation and execution of the new evaluation method will be analyzed and reported with regard to the retention rates of Experts. The evaluation tool's impact on the retention rate comes both from the potentially higher quality of new Experts and from the additional feedback and counseling provided to existing Experts. The Unit 123 retention officer maintains retention rates; however, the Selection and Training command team oversees and is responsible for monitoring retention.
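For the external outcomes, a minimal sketch of the year-over-year cohort comparison, with invented counts, might look like:

    # Illustrative sketch only: cohort labels and counts are hypothetical.
    # Compares Training Course completion rates for candidates recruited
    # under the old criteria versus the newly identified attributes.
    completed = {"old_criteria": 18, "new_criteria": 24}
    entered   = {"old_criteria": 40, "new_criteria": 40}

    for cohort in completed:
        rate = completed[cohort] / entered[cohort]
        print(f"{cohort}: {rate:.0%} completion")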
Summary
Kirkpatrick and Kirkpatrick's (2016) New World Model guided the design,
implementation, and evaluation recommendations for Unit 123’s stakeholder and organizational
goals. By using the framework and starting with the results, the expectation was to identify gaps
in the evaluation process. Also, by conducting implementation and evaluation concurrently, the
researcher can make changes during the implementation process. The New World Kirkpatrick
Model also aligns with Unit 123’s culture of continuously adapting to increase efficiency, and its
leadership development courses facilitate an ongoing forum to conduct evaluations of the
program.
Strengths and Weaknesses of the Approach
The Clark and Estes (2008) Gap Analysis model was used to provide the framework for
this improvement study. As with any methodological framework, it has its inherent strengths
and weaknesses. By starting with understanding the problem, and transitioning to the
organizational and stakeholder goals, the framework creates a solid foundation for analysis.
With the goals and the problem understood, the framework utilized the knowledge, motivation,
and organizational influences to guide the literature review, methodology, and gap analysis. The
influences provided a lens to view the data during this mixed-methods study. The strength of the
gap analysis is the identification of the root causes of the problem and the potential solutions.
The potential weakness of this framework was rectified by incorporating the Kirkpatrick and Kirkpatrick (2016) New World Model, which complemented the gap analysis and provided a systematic method to evaluate the study's results.
Limitations and Delimitations
Merriam and Tisdell (2016) assert that limitations exist in all studies, and producing imprecise or misrepresented data is always a risk. Only one limitation identified in Chapter Three was relevant during the data collection: the survey and interview design were explicitly created for this study and thus have not been validated in prior research. Although both were supported by the literature and corroborated by Unit 123 psychologists, they have not been field-tested. In addition, this study had two delimitations. One, even though one hundred percent of the stakeholder group participated in the data collection, they consist of only 30 of the more than 200 Experts. Although the stakeholder group was specially selected, it is unknown what the Senior Leaders and Team Leaders could have contributed to the study. Two, Unit 123 is such a unique organization that the product designed by this study is not transferable to other organizations.
Future Research
Unit 123’s inability to predict leadership potential is still a problem that must be
addressed, and although this study focused on the most significant gap, additional steps should
be taken to solve the problem. This study focused specifically on the lack of an internal method
to professionally evaluate Experts. Even though the evaluation method resulting from this study
has been embraced by Unit 123, there is a need for future research.
The first recommendation is that Unit 123’s psychologists collect the resulting data from
the evaluation method and identify attributes associated with successful and unsuccessful
Experts. Senior Unit 123 leaders will provide the psychologists with the criteria for successful
and unsuccessful Experts, and the psychologists will then analyze the data to identify correlating
attributes. This research will take years to conduct appropriately, and while Unit 123
psychologists are capable, it is recommended that a Leader Expert or Senior Leader conduct the
research.
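To illustrate the type of analysis this recommendation entails, the sketch below correlates each evaluation item's scores with a leader-assigned success label using a point-biserial correlation. It is a minimal illustration only; the file name, column names, and success labels are hypothetical placeholders rather than Unit 123 data.

# Hypothetical sketch: correlate evaluation-item scores with a success label.
# All file and column names are illustrative placeholders, not Unit 123 data.
import pandas as pd
from scipy import stats

# Each row is one Expert: 7-point item scores plus a leader-assigned
# label (1 = successful, 0 = unsuccessful).
df = pd.read_csv("expert_evaluations.csv")  # hypothetical file
item_columns = [c for c in df.columns if c.startswith("item_")]

results = []
for item in item_columns:
    # Point-biserial correlation between the binary label and each item score.
    r, p = stats.pointbiserialr(df["successful"], df[item])
    results.append({"item": item, "r": r, "p": p})

# Rank items by strength of association with success.
ranked = pd.DataFrame(results).sort_values("r", ascending=False)
print(ranked.head(10))

In practice, Unit 123 psychologists would also need to account for the small sample sizes and rater effects in such data before treating any correlation as a recruitable attribute.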
The second recommendation is for Unit 123’s Assessment, Selection, and Training
Section to begin utilizing the Unit 123 psychologists’ results from the first recommendation. It is
recommended that recruitment be conducted using both the old and the newly designed criteria
for data collection and analysis purposes. Upon completion of the training course, Experts
identified by the new criteria will be compared with those identified by the existing method. The
final stage of this recommendation is to compare the results of these Experts' first two job
performance evaluations to determine whether they display the very attributes for which they were recruited.
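As a rough illustration of the comparison this recommendation describes, the sketch below tests whether training-course pass rates differ between the two recruiting cohorts. The counts are invented placeholders; Fisher's exact test is suggested only because the cohorts a single unit produces are likely to be small.

# Hypothetical sketch: compare pass rates of recruits screened under the old
# versus the new criteria. The counts below are invented placeholders.
from scipy.stats import fisher_exact

old_criteria = [18, 12]  # [passed, failed] under the existing criteria
new_criteria = [24, 6]   # [passed, failed] under the new criteria

# Fisher's exact test suits small cohort sizes.
odds_ratio, p_value = fisher_exact([old_criteria, new_criteria])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")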
The final recommendation for future research is to disseminate the results from these
studies to other military units. All Special Operations units face job performance evaluation
problems, and a study that uses Unit 123 as a promising practice and implements a similar
evaluation method can not only improve other units but further validate this research.
Conclusion
A gap analysis was used to determine the knowledge, motivation, and organizational
barriers existing in Unit 123 that led to its inability to predict leadership ability during the
assessment, selection, and training process. During the gap analysis, it was assessed that the
fundamental barrier was Unit 123’s lack of an internal job performance evaluation method
designed for Experts. Unit 123’s lack of an effective evaluation method is a contributing factor
to the gap in knowledge of which attributes are consistent in high-performing soldiers and thus
which attributes must be recruited, selected, and assessed to increase leadership
potential. Leader Experts were determined to be the stakeholder group for the purposes of this study.
One hundred percent of the 30 Leader Experts were chosen because they have eight to twelve
years of operational time on a team, they understand the selection and training process, and they
still have a direct impact on the operational environment and on the Experts on teams.
Clark and Estes’ (2008) framework was used for a systematic gap analysis of Leader
Experts’ KMO influences with an embedded mixed-methods design to identify a Unit 123-
specific evaluation method. The framework provides a lens for viewing performance gaps and
other phenomena by understanding them through potential influences. The study achieved the
goal of one hundred percent participation, which required the researcher to travel to seven
different countries and allowed the survey and interviews to be administered in person and
securely by the researcher.
The results from the survey showed that 96% of the proposed evaluation items are
applicable, that 97% of participants did not want a reduction in survey items, and that successful
and unsuccessful Experts need to be identified. The results from the interviews revealed that 100% of Leader
Experts agreed that Experts are not currently provided consistent feedback on performance, that
Unit 123 does not have an appropriate evaluation tool, that the proposed survey items should be
part of a mandatory evaluation method, and that the data should be analyzed to improve
recruitment. The interviews also showed that 100% of Leader Experts believe Unit 123 has a
good culture, but only 3% of Leader Experts agree with Unit 123’s culture statement, which
indicates that the culture statement should be abandoned or changed.
The solution to Unit 123’s lack of an Expert-specific evaluation method is the survey
evaluated during this study. After a training session conducted by the researcher, the evaluation
tool will be implemented into the regular administrative system in Unit 123. All Leader Experts
and Senior Leaders in Unit 123 have agreed to implement the method for 16 months, or one
training cycle, to assess its validity. The evaluation tool will have an immediate impact by
creating a feedback loop for the roughly 50% of Experts who are not currently given consistent
feedback. This feedback will provide them guidance so they can adjust their actions and
behaviors to improve job performance and perception. Leaders in Unit 123 will begin to have
quantitative data on their Experts to recognize and adapt to trends for individuals and groups.
This same data will be used by psychologists to identify attributes of top and bottom
performance and adjust recruitment to find the next generation of Experts. Overall, this study
intends to improve the potential of current Experts and to help identify, assess, counsel, and
develop the future generations of soldiers.
References
Anderman, L., & Anderman, M. (2009). Oriented towards mastery: Promoting positive
motivational goals for students. Handbook of Positive Psychology in Schools.
APA. (2010). Harvard referencing guide. The University Library, University of Sheffield, 1996–
1999. https://doi.org/10.1036/0071393722
Archibald, S., Coggshall, J. G., Croft, A., & Goe, L. (2011). High-quality professional
development for all teachers: Effectively allocating resources. National Comprehensive
Center for Teacher Quality, 32. Retrieved from
http://www.tqsource.org/publications/HighQualityProfessionalDevelopment.pdf
Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in
Psychological Science, 9(3), 75–78. https://doi.org/10.1111/1467-8721.00064
Bandura, A. (2005). The evolution of social cognitive theory. Great Minds in Management, 9–
35. https://doi.org/10.5465/amr.2007.23467624
Bartone, P. T., Roland, R. R., James, J., & Williams, T. J. (2008). Psychological hardiness
predicts success in US Army Special Forces candidates. International Journal of Selection
and Assessment, 16(1), 78–81. https://doi.org/10.1111/j.1468-2389.2008.00412.x
Boe, B. O. (2017). The Big 12: The most important character strengths for military officers.
Athens Journal of Social Sciences, (April), 161–174.
Bolman, L. G., & Deal, T. E. (2013). Reframing organizations: Artistry, choice, and leadership.
San Francisco, CA: Jossey-Bass.
Clark, R. E., & Estes, F. (2008). Turning research into results. A guide to selecting the right
performance solution. Atlanta, Ga: CEP Press.
Cline, P. B. (2017). Mission critical teams: Towards the creation of a university assisted, mission
critical team instructor cadre development program. The University of Pennsylvania,
Philadelphia, PA.
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods
approaches. Thousand Oaks, CA: SAGE.
Dembo, M. H., & Eaton, M. J. (2000). Self-regulation of academic learning in middle-level
schools. The Elementary School Journal, 100(5), 473–490. https://doi.org/10.1086/499651
Eccles, J. (2009). Expectancy value theory. Handbook of Motivation at School, 56–75.
Gallimore, R., & Goldenberg, C. (2010). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist, 36(1),
45–56. https://doi.org/10.1207/S15326985EP3601_5
Garson, G. D. (2014). The Delphi Method in Quantitative Research. Asheboro, NC: Statistical
Associates Publishers.
Grossman, R., & Salas, E. (2011). The transfer of training: What really matters. International
Journal of Training and Development, 15(2), 103–120. https://doi.org/10.1111/j.1468-2419.2011.00373.x
Grove, C. N. (2005). Worldwide differences in business values and practices: Overview of
GLOBE research findings. Grovewell LLC, 1–12. Retrieved from www.grovewell.com/pub-
GLOBE-dimensions.html
Hirabayashi, K. (2018). How do we identify organizational influences on performance?
Retrieved March 26, 2018, from
https://2sc.rossieronline.usc.edu/mod/page/view.php?id=109259
Johnson, S. R. (2012). US Army evaluations: A study of inaccurate and inflated reporting.
Marine Corps University.
Judge, T. A., Colbert, A. E., & Ilies, R. (2004). Intelligence and leadership: A quantitative
review and test of theoretical propositions. Journal of Applied Psychology, 89(3), 542–552.
https://doi.org/10.1037/0021-9010.89.3.542
Kezar, A. J. (2001). Understanding and facilitating organizational change in the 21st century.
ASHE-ERIC Higher Education Report (Vol. 28). https://doi.org/10.1002/aehe.2804
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Evaluation blunders and missteps to avoid.
TD: Talent Development, 70(11), 36–40.
Kirkpatrick, J., & Kirkpatrick, W. (2016). Four levels of training evaluation. Alexandria, VA:
ATD Press.
Kouzes, J., & Posner, B. (2007). The leadership challenge. San Francisco, CA: Wiley & Sons.
Krathwohl, D. R., Anderson, L. W., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich,
P. R., … Wittrock, M. C. (2002). A taxonomy for learning, teaching, and assessing: A
revision of Bloom’s taxonomy of educational objectives. New York, NY: Longman.
Krueger, R. A., & Casey, M. A. (2000). Focus groups: A practical guide for applied research
(3rd ed.). Thousand Oaks, CA: Sage Publications. https://doi.org/10.4135/9781412991841
Langley, G., Moen, R., Nolan, K., Nolan, T., Norman, C., & Provost, L. (2009). The
Improvement Guide (2nd ed., pp. 15–47). Hoboken, NJ: Jossey-Bass.
Maxwell, J. (2013). Qualitative research design (3rd ed.). Thousand Oaks, CA: SAGE.
Mcmillen, M., & Stewart, E. (2009). Lead the way. Family Practice Management, 16(4), 15–18.
https://doi.org/10.1097/00006247-199911000-00017
Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation
(4th ed.). San Francisco, CA: Jossey-Bass.
Mountz, T. (1993). Special warriors: Special families and special concerns. New York: Springer.
Narli, S. (2010). An alternative evaluation method for Likert type attitude scales: Rough set data
analysis. Scientific Research and Essays, 5(6), 519–528.
Nowicki, A. (2017). United States Marine Corps basic reconnaissance course: Predictors of
success. Naval Postgraduate School, Monterey, CA.
Picano, J. J., Roland, R. R., Rollins, K. D., & Williams, T. J. (2002). Development and validation
of a sentence completion test measure of defensive responding in military personnel
assessed for nonroutine missions. Military Psychology, 14(4), 279–298.
Picano, J. J., Roland, R. R., Williams, T. J., & Rollins, K. D. (2006). Sentence completion test
verbal defensiveness as a predictor of success in military personnel selection. Military
Psychology, 18(3), 207–218.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in
learning and teaching contexts. Journal of Educational Psychology, 95(4), 667–686.
https://doi.org/10.1037/0022-0663.95.4.667
Rueda, R. (2011). The 3 dimensions of improving student performance. New York, NY: Teachers College Press.
Salkind, N. J. (2014). Statistics for people who (think they) hate statistics (5th ed.). Los Angeles,
CA: SAGE.
Schein, E. H. (2017). Organizational culture and leadership (5th ed.). Hoboken, NJ: John Wiley
& Sons.
Schraw, G., & McCrudden, M. (2013). Information processing theory. In Psychology of
classroom learning: An encyclopedia (pp. 63–75).
Senge, P. (1990). The leader’s new work: Building learning organizations. Sloan Management
Review, 31(1), 7.
Total military personnel of the U.S. Army 2016 and 2018, by rank. (2018). Retrieved from
https://www.statista.com/statistics/239383/total-military-personnel-of-the-us-army-by-
grade/
Unit 123 Website. (2018).
Usher, E. L., & Pajares, F. (2006). Inviting confidence in school: Invitations as a critical source
of the academic self-efficacy beliefs of entering middle school students. Journal of
Invitational Theory and Practice, 12, 7–16. https://doi.org/10.1016/j.cedpsych.2005.03.002
Wigfield, A., Tonks, S., & Klauda, S. L. (2009). Expectancy value theory. Handbook of
Motivation at School, 56–75.
APPENDIX A
Survey Protocol
Thank you for taking the time to answer these questions. You have provided insight that
I will incorporate into the survey. Please take the time to look through the survey below and
provide feedback. I will be here if you have any questions. At the end are a few open-response
questions in case you feel we missed something today.
Potential Survey Questions
Instructions: Using the four-point scale, please rate the following survey questions. Write the
number that best represents how effectively the question evaluates the job performance of an
Expert. The second set of instructions contains the potential instructions for the survey itself,
along with the seven-point scale.
1 = Not relevant to success
2 = Nice to have but not necessary
3 = Important but not critical to success
4 = Critical to success
Instructions: Rate the Expert on how he compares to the other Experts within Unit 123. Write
the number that best represents his rating. (Multiple team members can be given the same score)
1 = Leave Unit 123
2 = Among the worst in the platoon
3 = Worst on team
4 = Average
5 = Best on team
6 = Among the best in the platoon
7 = Among the best in Unit 123
Knowledge / Competency
1. Knowledge of CQB tactics, techniques, and methods:
2. Knowledge of marksmanship and small unit tactics:
3. Knowledge and competency of secondary Team tasks, i.e., mobility, climbing, and water:
4. Knowledge and competency of Military Free-fall operations:
5. Ability to retain technical and operational information:
6. Ability to identify operational gaps and find innovative solutions:
7. Ability to identify risk and how to mitigate it:
8. Ability to approach problems from a new direction (think outside the box):
9. Ability to rapidly process and apply new information effectively:
10. Ability to make things happen when given the minimum amount of information (fire and
forget):
11. Ability to tactfully express alternative viewpoints to peers and superiors:
12. Ability to instruct or teach techniques and skills to Experts and partners:
13. Natural athletic and physical ability (Talent):
14. Ability to identify and address problems without being told:
15. Ability to identify risk and mitigate it:
16. Ability to develop subordinates and pass on knowledge amongst Experts “mentorship”:
17. Ability to not let emotions drive decision making:
Motivation / Work Ethic
18. Motivation or work ethic with regards to physical fitness and physical readiness (Grit):
19. Motivation and drive when tasked to undertake an intensive and challenging duty:
20. Motivation and drive to accomplish a challenging task when the beneficiary is the large
group or an external partner:
21. Motivation or work ethic with regards to maintaining and increasing the above job hard
skills:
22. Motivation or work ethic when asked to perform a “less than desired” job or duty:
23. Motivation or work ethic towards professional development, i.e., Unit 123 courses or
civilian education:
24. Stays current with emerging technology:
25. Seeks to improve weakness and shortcomings:
Communication
26. Ability to effectively communicate with outside agencies and organizations:
27. Ability to tactfully express alternative viewpoints to peers and superiors:
28. Ability to articulate concepts and ideas to both military and civilian decision-makers
using appropriate verbiage and language:
29. Able to convey thoughts effectively through the written word, email, and paper:
Organization / Culture
30. Cares about those both inside and outside his immediate circle:
31. Does what is right when no one is looking:
32. Fits in a team environment, easy to live with on a slow deployment (compatible):
33. Ability to work in an ambiguous environment:
34. Understands Unit 123’s internal relationships and personalities and how to navigate
them:
35. Upholds Unit 123’s culture statement, “The Relentless Pursuit of Excellence in Everything
that we do”:
36. Embodies Unit 123’s core values: trust, adaptability, and commitment:
37. Strategic understanding of the operational environment:
38. Ability to apply lessons learned from negative experiences:
39. Understands how his decisions and actions impact the larger organization:
40. Ability to handle criticism:
41. Ability to bounce back from traumatic events:
42. Works well with others or makes an effort to work well with others:
43. Potential for lifestyle to negatively affect performance and/or longevity:
44. Places the needs of the team above his own, “selfless”:
45. Work/home balance:
Overall
46. Leadership potential:
47. Demonstrates adaptability:
48. Demonstrates maturity:
49. Overall work ethic:
50. Overall as an Expert:
51. Provide brief feedback in the box below:
a. Where they stand
b. What they need to fix
c. How to fix it
52. If you marked anyone a 1 or 7, please explain:
Instructions: Answer the following questions if you feel the above job performance survey was
missing an attribute or not addressing one appropriately.
1. What questions were missing from this survey?
2. What job performance characteristics were not highlighted during this survey and are
important to success in Unit 123?
3. Do you have any additional comments that you would like to convey to the researcher to
make this survey more effective?
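Should a reader wish to see how ratings on the four-point relevance scale in this appendix could be tallied, the sketch below computes the share of raters marking each item important or critical, the kind of count behind the applicability percentages reported in this study. The response matrix is an invented placeholder, not collected data.

# Hypothetical sketch: summarize four-point relevance ratings per survey item.
# An item counts as "applicable" when rated 3 (important) or 4 (critical).
import numpy as np

# rows = raters, columns = survey items; values are 1-4 ratings (placeholders).
ratings = np.array([
    [4, 3, 2, 4],
    [3, 4, 3, 4],
    [4, 4, 1, 3],
])

applicable = (ratings >= 3).mean(axis=0)  # share of raters per item
for i, share in enumerate(applicable, start=1):
    print(f"item {i}: {share:.0%} rated important or critical")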
APPENDIX B
Interview Protocol
Thank you for taking the time to help me introduce a Unit 123-specific evaluation tool
designed to assess current Experts and, in turn, identify attributes of top performers. This study
will use pseudonyms for the organization, occupations, and command structure; all data will
be confidential; and every participant can withdraw from the study if desired. As a Leader
Expert, you are the primary stakeholder in this study and, as such, should influence the evaluation
survey. The intent of this interview is to better understand which attributes and job performance
areas are of value and could be considered for inclusion in a recruitment evaluation tool. I will
provide a list of survey questions to act as a starting point, but your opinion of which questions
are valid, along with any new questions, is my desired end point.
Knowledge / Competency
1. In your opinion, how important is an Expert’s knowledge of CQB tactics, techniques, and
methods?
2. How important is an Expert’s knowledge beyond the traditional Expert skills?
3. How important is an Expert’s ability to retain technical and operational information
compared to his peers and past performance?
4. How important is an Expert’s ability to articulate concepts and ideas to both military and
civilian decision-makers using appropriate verbiage and language?
5. How important is an Expert’s knowledge of the relationships between supporting
elements within Unit 123?
6. How important is an Expert’s understanding of Unit 123’s role in the joint military
environment?
Motivation / Work Ethic
7. How important are an Expert’s actions when he is asked to undertake an intensive and
challenging task?
8. How important is an Expert’s perception of his job performance, and how important is his
peers’ perception of his job performance?
9. How does an Expert’s drive to accomplish a challenging task differ if he is the sole
beneficiary versus when the goal is to support the large group or an external partner?
Organization / Culture
10. How important is an Expert’s ability to uphold Unit 123’s culture statement “The
Relentless Pursuit of Excellence in Everything that we do?”
11. How important is an Expert’s history of being evaluated for job performance?
NCOER? Counseling? Constant constructive criticism? Another method?
12. How important are an Expert’s actions when he sees something he does not agree with;
how does he confront the problem?
APPENDIX C
Level 1 and Level 2 Evaluation Instrument
Survey Items (four-point Likert scale from strongly disagree to strongly agree)
Level One: Engagement
1. I am interested in the results of this evaluation method.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. The intent of the training met my expectations.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level One: Relevance
1. The evaluation method will provide data that will improve counseling and feedback.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. The training provided was effective and prepared me adequately.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level One: Satisfaction
1. The evaluated Expert received the evaluation positively.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. I felt the evaluation method met all my needs and expectations.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Declarative
1. The survey items and terminology can be used to discover an Expert’s attributes.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. The training addressed any knowledge gaps I previously had.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Procedural
1. The process outlined to evaluate an Expert is easy to follow.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. The training provided me an understanding of the who and why behind the process.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Attitude
1. I see the value in providing feedback consistently to Experts.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. I am looking forward to seeing how the data from the evaluations will improve recruitment.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Confidence
1. I am confident in my ability to assess an Expert using the evaluation method.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. I have confidence in Unit 123 to use data collected from the evaluation method to improve the
quality of Experts.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Commitment
1. I will continue to conduct evaluations at the prescribed times to the best of my ability.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
2. I will provide constructive feedback to improve the evaluation method at any opportunity.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
APPENDIX D
Blended Evaluation Instrument
Survey Items (four-point Likert scale from strongly disagree to strongly agree)
Level One: Reactions
Engagement
1. The training provided me the drive to conduct the evaluations.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Relevance
2. The training provided was effective and prepared me adequately.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Satisfaction
3. I felt the evaluation method met all my needs and expectations.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Two: Learning
Declarative
4. The training addressed any knowledge gaps I previously had.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Attitude
5. This training will improve Experts overall.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Confidence
6. I am confident in my ability to assess an Expert using the evaluation method.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Commitment
7. I will continue to conduct evaluations at the prescribed times to the best of my ability.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Three: Behavior
Reinforcing
8. Unit 123 leadership provided me the resources needed to conduct evaluations
effectively.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Encouraging
9. Unit 123 psychologists presented feedback from evaluation results effectively.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Rewarding
10. I have seen positive feedback during and after the new performance counseling.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Monitoring
11. Unit 123 leadership has overseen the new evaluation method and made adjustments
when needed.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Level Four: Results
External Outcomes
12. I have seen an increase in the success rate in the training course.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
Internal Outcomes
13. Every Expert has been evaluated and counseled using the new evaluation method.
Strongly Disagree Disagree Agree Strongly Agree
1 2 3 4
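A completed blended instrument can be summarized by averaging responses within each Kirkpatrick level. The sketch below shows one way to do so; the item-to-level mapping follows this appendix, while the responses themselves are invented placeholders rather than collected data.

# Hypothetical sketch: average blended-instrument responses by Kirkpatrick level.
# The item-to-level mapping mirrors Appendix D; the responses are placeholders.
from statistics import mean

# One completed instrument: item number -> 4-point Likert response.
responses = {1: 4, 2: 3, 3: 4, 4: 3, 5: 4, 6: 4, 7: 4,
             8: 3, 9: 3, 10: 4, 11: 3, 12: 3, 13: 4}

levels = {
    "Level One: Reaction": [1, 2, 3],
    "Level Two: Learning": [4, 5, 6, 7],
    "Level Three: Behavior": [8, 9, 10, 11],
    "Level Four: Results": [12, 13],
}

for level, items in levels.items():
    scores = [responses[i] for i in items]
    print(f"{level}: mean = {mean(scores):.2f}")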