Running head: OFFICER PERFORMANCE APPRAISAL PROGRAM MANAGEMENT 1
Officer Performance Appraisal Program Management: An Evaluative Case Study
by
Michael J. Colarusso
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2019
Copyright 2019 Michael J. Colarusso
Abstract
This evaluative case study employs a disciplined inquiry framework to conduct a modified gap
analysis (Clark & Estes, 2008) of factors influencing a military HR staff and its ability to
manage, evaluate, and improve a forced ranking performance appraisal program. Using a
qualitative methods design, the data include 11 interviews with mid- to senior-level HR staff
members, as well as 6 publicly available officer survey summary reports and several additional
official organizational regulations and historical reports. These were triangulated with a review
of the appropriate literature. The findings highlight areas to address in officer appraisal program
management, particularly in the areas of declarative and procedural knowledge, efficacy and goal
orientation, and organizational support to the HR staff. Based upon the findings, the study
provides a series of recommended solutions and an integrated implementation and evaluation
plan that should assist the organization in the further evaluation and improvement of its officer
appraisal practices.
Keywords: military, commissioned officers, performance appraisals, performance
management, forced distribution, forced ranking, forced curve, stacked ranking, ordinal ranking,
human resources
Dedication
This research is dedicated to the men and women, both military and civilian, of the
Armed Forces of the United States.
Acknowledgements
While a dissertation has just one name on the title page, disciplined inquiry is a team
sport. In my case, I owe a tremendous debt to the staff and faculty of the Rossier School of
Education, a group of accomplished scholars and genuinely caring people. The professors,
academic advisors, support staff and program administrators – all are consummate professionals
dedicated to student learning and success. I also want to give special thanks to my dissertation
chair, Dr. Kimberly Hirabayashi, for her wisdom and guidance. Dr. Hirabayashi embodies the
perfect blend of a no-nonsense approach, kind good humor, and an empathetic heart. Simply
put, she made me better. Special thanks as well to Dr. Kimberly Ferrario, my first professor in
the program. She was critical to setting me on the right path and taking the anxiety out of the
experience from day one. It was wonderful to have her on my committee so that we started and
finished this effort together.
While the faculty may have provided me with the knowledge to complete this journey,
my fellow travelers in OCL Cohort 7 were just as crucial to this study. They say that shared
sacrifice builds enduring bonds, and this experience has proven that it’s more than just a
careworn phrase (don’t worry, I just checked APA – there’s no prohibition on using contractions
in your acknowledgments). It was an absolute pleasure spending time with this special group of
people. I’ll miss them dearly, but there’s solace in knowing I’ve made enduring friends – we’ll
meet again. I’m especially grateful to Regina Morlino, my classmate, battle buddy, and friend,
who pushed and pulled me across the finish line and always made me laugh along the way.
Like many graduate students these days, I had to pursue the intellectually demanding and time-consuming goal of a doctoral degree while working full time. It would have been impossible to
do this without the understanding and forbearance of my work colleagues, who gave me the extra
time and support needed to get this done – I appreciate them going easy on me for a while.
Special thanks to both Dave Lyle, for encouraging me to pursue this goal and for serving on my
committee, and to Carl Wojtaszek for helping me to see it through.
At times I think scholars can lose sight of the fact that their accomplishments depend
upon the most important constituency to any research effort – its participants. While I’d like to
thank all of them by name, of course I can’t do that. If you took part in this study and you’re
reading this, however, please know that I deeply appreciate and respect your willingness to share
your thoughts with me in the spirit of learning and improving your organization. The work you
do is vitally important, and I hope the findings of this study are useful to you and accepted in the
spirit intended.
Most importantly, I’d like to thank my beautiful wife Lily for her unwavering support,
encouragement, and love. “Dissertation again this weekend?” “Yes.” “Coffee?” “Yes, please!” I
simply couldn’t have done this without you. Thanks for putting up with the long nights and lost
weekends. I’d be remiss if I didn’t thank my mom as well, who always tells me how proud she
is of me. No matter how old I get, that’s always going to mean the world to me – I’m proud of
you too, mom. And finally, to my three wonderful children, Michael, Christopher, and Emilia,
who I love with all my heart – remember that what we accomplish in life, we accomplish for
each other.
TABLE OF CONTENTS
Abstract ....................................................................................................................................... 2
Dedication ................................................................................................................................... 3
Acknowledgements ..................................................................................................................... 4
List of Tables .............................................................................................................................. 8
List of Figures ............................................................................................................................. 9
Introduction to the Problem of Practice .................................................................................... 10
Organizational Context and Mission ........................................................................................ 10
Importance of Addressing the Problem .................................................................................... 11
Purpose of the Project and Questions ....................................................................................... 12
Organizational Performance Goal ............................................................................................. 13
Stakeholder Group of Focus and Stakeholder Goal .................................................................. 14
Review of the Literature ........................................................................................................... 15
Forced Ranking Performance Appraisals in the United States ................................................. 15
Origins and Use.................................................................................................................... 16
Structure, Methodology, and Purported Benefits ................................................................ 16
Typical Use .......................................................................................................................... 17
Potential Shortcomings of Forced Ranking Appraisals ............................................................ 17
Heuristics and Cognitive Biases .......................................................................................... 18
Undesirable Workforce Behaviors ....................................................................................... 18
Reduced Employee Trust ..................................................................................................... 19
Reduced Workforce Productivity ........................................................................................ 19
Assumed Knowledge, Motivation, and Organizational Influences .......................................... 20
Knowledge Influences ......................................................................................................... 20
Motivation Influences .......................................................................................................... 22
Organizational Influences .................................................................................................... 24
Interactive Conceptual Framework ........................................................................................... 29
Qualitative Data Collection and Instrumentation ..................................................................... 30
Documents and Artifacts...................................................................................................... 31
Interviews ............................................................................................................................. 32
Data Analysis ............................................................................................................................ 35
Findings..................................................................................................................................... 35
Participating Stakeholders ................................................................................................... 36
Knowledge Findings ............................................................................................................ 37
Motivation Findings ............................................................................................................. 50
Organizational Findings ....................................................................................................... 58
Synthesis of Findings ........................................................................................................... 68
Solutions and Recommendations .............................................................................................. 70
Knowledge Recommendations ............................................................................................ 70
Motivation Recommendations ............................................................................................. 75
Organizational Recommendations ....................................................................................... 79
Limitations and Delimitations ................................................................................................... 85
Conclusion ................................................................................................................................ 86
References ................................................................................................................................. 88
Appendix A: Participating Stakeholders and Interview Sampling Criteria .............................. 95
Appendix B: Interview Protocol ............................................................................................... 98
Appendix C: Interview Questions ........................................................................................... 100
Appendix D: Information Sheet for Interview Participants ................................................... 103
Appendix E: Document Review Protocol ............................................................................... 104
Appendix F: Credibility and Trustworthiness......................................................................... 105
Appendix G: Ethics ................................................................................................................. 107
Appendix H: Implementation and Evaluation Plan ................................................................ 109
Appendix I: Initial Blended Evaluation Instrument (Levels 1-2) ........................................... 122
Appendix J: Subsequent Blended Evaluation Instrument (Levels 1-4) .................................. 123
LIST OF TABLES
Table 1. Summary of Knowledge, Motivation, and Organizational Influences ........................ 27
Table 2. Demographic Summary of HR Professionals Interviewed ......................................... 37
Table 3. HR Staff Knowledge of Forced Ranking Shortcomings .............................................. 47
Table 4. Summary of Knowledge Influences and Recommendations ....................................... 71
Table 5. Summary of Motivational Influences and Recommendations ..................................... 77
Table 6. Summary of Organizational Influences and Recommendations ................................. 81
Table H1. Implementation Outcomes, Metrics, and Methods ................................................ 111
Table H2. HR Staff Critical Behaviors, Metrics, Methods, and Timing for Evaluation ......... 112
Table H3. Required Drivers to Support HR Staff Critical Behaviors..................................... 113
Table H4. Evaluation of Components of Learning for Program ............................................ 117
Table H5. Components to Measure HR Staff Reactions to the Program................................ 118
LIST OF FIGURES
Figure 1. Interactive conceptual framework ............................................................................. 30
Figure 2. Summary of key career satisfaction findings, 2011-2016......................................... 42
Figure 3. Espoused (A) vs. actual (B) appraisal policy work process ..................................... 62
Figure 4. HR staff median job tenure (in years) ....................................................................... 66
Figure H1. Sample visual dashboard data presentation approach .......................................... 120
Officer Performance Appraisal Program Management: An Evaluative Case Study
Introduction to the Problem of Practice
Forced ranking performance appraisals are widely used in the United States. Rather than
measuring individual performance against an objective standard, these appraisals ordinally rank
employees against one another, creating a bell curve performance distribution. The goal of this
approach is to identify employees with the highest performance and potential so that
organizations can make personnel management decisions which enhance mission
accomplishment (Klores, 1966). These types of appraisals are employed in a third or more of all
U.S. corporations and are also common in non-profit, public sector, higher education, and
military workforce settings (Cappelli & Tavis, 2016; Hazels & Sasse, 2008). While some studies
find that forced ranking can improve workforce potential in specific settings (Scullen, Bergey &
Aiman-Smith, 2005), or generate better outcomes than no appraisals at all (Cappelli & Conyon,
2016; Grote, 2005), a large body of empirical research concludes that forced ranking appraisals
are subject to multiple biases which significantly reduce their reliability (Hoffman, Lance,
Bynum & Gentry, 2010; Scullen, Mount & Goff, 2000). As a result, the use of this type of
performance appraisal may actually detract from organizational mission accomplishment
(Blader, Gartenberg & Prat, 2016).
Organizational Context and Mission
Service X (a pseudonym) is a branch of the U.S. Armed Forces with over 1.25 million
active and reserve servicemembers and civilian employees, making it one of the largest
employers in the United States. Like the other military services, Service X is subject to
congressional oversight. Title 10 of the United States Code (Title 10, USC), which regulates the
Armed Forces, requires Service X to organize, train and equip itself primarily for combat
operations. Consistent with this directive, the service characterizes its mission as providing
dominance across the full range of military operations and potential conflict types to ensure
victory against any U.S. opponent.
Service X is led by commissioned officers in the rank of ensign / second lieutenant or
above who are appointed by the President of the United States, possess a bachelor’s degree from
an accredited university, and have successfully met the requirements of a commissioning source
(a service academy, the Reserve Officer Training Corps, or an officer candidate program). By
statute, the Service’s 90,000 actively serving commissioned officers command or serve
on staff in military units around the globe. These officers are periodically evaluated with Officer
Performance Appraisals (“OPAs,” a pseudonym), an ordinal ranking appraisal instrument. OPAs
have been used in one form or another since the mid-1920s, adding centrally mandated or
“forced” ranking percentages for a brief period in the 1960s. While this initial forced ranking
effort was abandoned after 18 months, it was reintroduced in the 1990s to combat rater leniency
bias. It remains a feature of Service X’s current officer appraisal program.
Importance of Addressing the Problem
It is important to evaluate Service X’s ability to effectively assess and manage its officer
appraisal program for a variety of reasons. First, numerous studies have suggested that forced
ranking appraisal programs may: have high levels of rater bias and low evaluative reliability
(Hoffman et al., 2010; Scullen et al., 2000); fail to identify employee talent and potential (Blume,
Baldwin & Rubin, 2009); engender undesirable workforce behavior (Berger, Harbring & Sliwka,
2013; Hazels & Sasse, 2008); reduce employee retention, productivity, and mission
accomplishment (Cappelli & Tavis, 2016); and increase employee turnover rates and
overall labor costs (Buckingham & Goodall, 2015). Second, although Congress has asked the
Armed Services to modernize their officer management practices (U.S. Senate, 2018), Service X
does not yet have a plan to evaluate or improve the officer appraisal program, despite its
centrality to most officer management actions, particularly promotions, selections, and
assignments. It is unclear whether this is because the Service is fully satisfied with the current
appraisal program or is confronted by significant barriers to change. Regardless of the cause, if
Service X cannot continuously evaluate and improve its officer appraisal program, over time it
may become unable to make its required contribution to the national defense. Lastly, the lack of
peer-reviewed research regarding this appraisal approach within military workforces suggests that
this study can make a meaningful contribution to our understanding of public sector forced
ranking appraisal programs and the ways in which they are managed.
Purpose of the Project and Questions
Evaluative research studies help leaders to make decisions via disciplined inquiry
(McEwan & McEwan, 2003). More specifically, case studies provide an in-depth analysis of a
specific case, program, event, activity, process, or individuals, drawing upon a variety of data
collection methods to gather detailed information (Creswell, 2014). The purpose of this case
study was to evaluate the knowledge and skills, motivation, and organizational resources needed
to achieve the goal of improving Service X’s officer appraisal program by June 2022 in response
to both internal assessments and congressional oversight. While a complete evaluation would
have focused upon all officer appraisal program stakeholders, for practical purposes the
stakeholder of focus for this study was Service X’s Global HR staff. The HR staff is responsible
for the planning, execution, assessment, and continuous improvement of Service X’s officer
appraisal program. The following questions guided this study:
1. To what extent is Service X meeting the goal of being 100% compliant with congressional
guidance to modernize officer management and appraisal practices by June 2022?
2. What knowledge and motivation related to the management, evaluation, and improvement of
the officer appraisal program is resident within the Service’s Global HR staff?
3. What is the interaction between Service X culture and context and HR staff knowledge and
motivation as it pertains to officer appraisal program management, evaluation, and
improvement?
4. What are the recommendations for Service X officer appraisal program management
practices in the areas of knowledge, motivation, and organizational structure and resources?
Organizational Performance Goal
Service X explicitly links its officer performance appraisal program to its organizational
mission in written regulations. One regulation states that officer appraisals are the Service’s
main source of officer management information and are also meant to guide each officer’s
performance and professional development. A second regulation identifies the same functions
and states that accomplishing them will increase Service X personnel readiness and mission
accomplishment. Additional qualifying language in these regulations calls for objectivity,
accuracy, completeness, honesty, timeliness, and fairness in all officer performance appraisals.
While these regulations link the officer performance appraisal program to Service X’s
mission, providing a purpose, vision, and guiding principles, they do not provide specific,
measurable, overarching goals with time targets for realization, nor do they provide a framework
for continual evaluation and improvement of the appraisal program. This suggests that
programmatic evaluations of the officer appraisal program are perhaps not taking place. This
may in part explain why Congress is exercising oversight activity in this area, most recently by
providing the military services with new authorities to revise officer management practices in the
2019 National Defense Authorization Act (U.S. Congress, 2018). Given that effective goals are
time-bound, an aspirational organizational performance goal for Service X would be to
implement an officer appraisal program modernization plan by June of 2022 in response to
congressional oversight. The researcher suggests this goal, which would
provide Service X with a year to conduct a summative evaluation of the officer appraisal
program and an additional two years to design a modernization plan.
Stakeholder Group of Focus and Stakeholder Goal
This study focused upon Service X’s Global HR staff (hereafter the “HR staff,” a
pseudonym), which is led by a three-star officer, the Chief of Human Resources (or “CHRO,” a
pseudonym). The HR staff is located in the Pentagon and consists of several hundred military
and civilian employees. Its mission is to develop, manage, and execute all human resource
plans, programs, and policies within Service X. Given the scope of its mission, the HR staff is
the most critical stakeholder in Service X’s officer performance appraisal program.
As there is no clear, time-bound organizational goal from which to cascade officer
appraisal program stakeholder goals, none have been explicitly identified by the HR staff. While
there are implicit stakeholder goals, such as achieving 100% timeliness and administrative
accuracy for all appraisal reports, these call for process data measurements only. They do not
provide the input, outcome, or participant satisfaction data necessary to conduct a comprehensive
program evaluation. This suggests that knowledge and skill, motivation, or organizational
(KMO) influences may be preventing the HR staff from conducting planned, systematic officer
appraisal program evaluations with a proven disciplined inquiry framework. Cascading from the
aspirational organizational goal identified by the researcher, an appropriately time-bound
stakeholder goal is for the HR staff to be 100% satisfied with officer appraisal program planning,
evaluation, and management by June 2020. This would be two years prior to the implementation
of any substantive appraisal program improvements by Service X, allowing the HR staff a year
to identify and close its own KMO gaps before leading any appraisal program evaluation and
improvement effort.
Review of the Literature
This section begins by presenting the origins and history of forced ranking appraisals in
the United States, including the theories undergirding their use and widespread adoption. This
is followed by a synthesis of recent literature identifying the potential shortcomings of forced
ranking appraisals, as well as the iatrogenic effects they can have upon workforce productivity.
Finally, the section introduces the Clark and Estes Gap Analysis Framework (Clark & Estes,
2008) to identify potential knowledge, motivation, and organizational influences on the HR
staff’s program management abilities to evaluate and moderate such effects.
Forced Ranking Performance Appraisals in the United States
Forced ranking performance appraisals, which the literature occasionally describes as
forced distribution, forced curve, ordinal ranking, or stacked ranking appraisals, became
increasingly widespread in the post-World War II United States labor market. Today they
remain common practice in the public and private sectors, as well as in non-profit and education
workforce settings. While ostensibly created to differentiate employees by performance level,
organizations often use them for a variety of other purposes. Regardless of purpose, however,
some evidence indicates that these types of appraisals incorporate features which reduce their
reliability and efficacy as performance management tools.
Origins and Use
Forced ranking performance appraisals originated in the 1920s, and the U.S. Armed
Forces were among the first to employ them. Due to the military’s success in raising vast and
effective formations to win World War II, many of its human resource practices were regarded as
the “gold standard” for large, complex industrial age organizations, and the private sector
benchmarked heavily from military personnel management (Cappelli & Tavis, 2016). As
wartime veterans carried these seemingly effective techniques back into a variety of civilian
workforce sectors, forced ranking appraisals became increasingly widespread and remain
commonplace today (Buckingham & Goodall, 2015). General Electric, Cisco, Hewlett-Packard,
Intel, Microsoft, and many other Fortune 500 companies have all at one time or another made
forced ranking appraisals a fixture of their performance management systems (Scullen et al.,
2005). Before analyzing their impact, however, it is first necessary to understand the structure,
methodology, and purported benefits of forced ranking appraisal systems.
Structure, Methodology, and Purported Benefits
In an effort to differentiate by both performance and potential, forced ranking systems
evaluate employees against one another, placing a fixed percentage of each into differing
performance tiers along a bell curve. Those at the top are often rewarded, while those at the
bottom may receive remedial coaching, probation, or even termination (Hazels & Sasse, 2008).
Proponents of this approach argue that it has several benefits (Klores, 1966). These include:
improving mean worker performance by repeatedly culling the tail end of the performance
distribution (Scullen et al., 2005); engendering an atmosphere of frankness and honesty between
supervisors and employees; creating a meritocracy; and encouraging weaker employees to
voluntarily seek new work rather than waiting to be terminated (Blume et al., 2009). While
some organizations use forced ranking appraisals in pursuit of these ends, others employ them
for different reasons entirely.
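The tiering mechanics described above can be illustrated with a brief sketch. This is a hypothetical example only: the tier labels and the 20/70/10 percentages are assumptions chosen for demonstration and do not reflect Service X policy or any specific system described in the literature.

```python
# Hypothetical sketch of forced ("stacked") ranking: employees are ordinally
# ranked by a rating score, then assigned to fixed-percentage tiers along
# the distribution. The 20/70/10 split and tier names are illustrative only.
TIERS = [("top", 0.20), ("middle", 0.70), ("bottom", 0.10)]

def force_rank(scores):
    """Rank employees by score (descending) and assign fixed-percentage tiers."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    assignments, start = {}, 0
    for tier, pct in TIERS:
        count = round(pct * len(ranked))
        for name in ranked[start:start + count]:
            assignments[name] = tier
        start += count
    for name in ranked[start:]:  # any rounding remainder falls to the last tier
        assignments[name] = TIERS[-1][0]
    return assignments

ratings = {"A": 91, "B": 85, "C": 78, "D": 64, "E": 88,
           "F": 72, "G": 69, "H": 95, "I": 81, "J": 57}
tiers = force_rank(ratings)  # e.g., the two highest scorers land in the top tier
```

Repeated application of this assignment, coupled with remediation or termination of the bottom tier, is the "rank and yank" cycle that proponents argue raises mean workforce performance over time (Scullen et al., 2005).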
Typical Use
Many employers use forced ranking appraisals for administrative rather than performance
management purposes. This is indicative of a highly bureaucratic accountability system, quite
common in public-sector organizations such as the government and military (Romzek &
Dubnick, 1987). Companies using forced ranking tend to view employee appraisals as auditable
accounts, indicative of a technical managerial approach (Biesta, 2004). For example, some rely
upon them to make promotion or compensation decisions (Hazels & Sasse, 2008). Others use
them for personnel record keeping (Schleicher, Bull & Green, 2009). Still others use them to
hold workers accountable and to provide the basis for legal protection should anyone challenge
the organization’s performance-related management decisions (Cappelli & Tavis, 2016). While
forced ranking appraisals may therefore be administratively and legally useful, a closer
examination of their potential shortcomings is critical to fully understanding their impact upon
organizations and employees.
Potential Shortcomings of Forced Ranking Appraisals
Any administrative benefits derived from forced ranking appraisal systems may be offset
by well documented biases which create high variability in ratings and reduce their efficacy and
accuracy as evaluative instruments. This may in turn engender undesirable workforce behaviors
and reduce employee trust, potentially creating an iatrogenic effect upon workforce engagement,
satisfaction, and productivity.
Heuristics and Cognitive Biases
Research suggests that heuristics and cognitive biases feature prominently in forced
ranking performance appraisals and have unintended effects upon the fairness and accuracy of
results. Leniency bias, for example, is the tendency by raters to inflate performance appraisals
out of concern for employees, particularly in organizations that actively cull the lower portion of
the performance distribution (Blume et al., 2009). This results in a lack of differentiated
performance, with a higher number of appraisals concentrated in the middle of the performance
distribution. Another bias is institutional or systemic – a firm’s culturally driven focus upon a
narrow set of talents rather than the full range of abilities actually demanded in the workplace
(Hazels & Sasse, 2008). When this bias is in operation, workers whose perceived strengths fall
outside of the desired talent range tend to be penalized in performance rankings, even if they
possess other valuable skills. Perhaps most concerning, however, is idiosyncratic rater bias.
Scullen et al. (2000) concluded that as much as 62% of the variability in employee performance
appraisals stems from rater bias. Follow-on research drew similar conclusions (Hoffman et al.,
2010). While these biases may therefore reduce appraisal validity, they can also produce other
detrimental effects which merit discussion.
Undesirable Workforce Behaviors
One such bias-engendered effect is undesirable workforce behavior. This includes
performing downward to meet lowered supervisor expectations and engaging in intense cutthroat
competition, a maladaptive behavior that is antithetical to workplace harmony (Hazels & Sasse, 2008). Such
negative behavior may even go as far as the deliberate sabotaging of a colleague’s work to gain
additional compensation or promotion (Berger et al., 2013). To fully understand this
phenomenon, it helps to consider how negatively both employees and supervisors feel about such
evaluations.
Reduced Employee Trust
Because they are often perceived as biased rather than objective, forced ranking
performance appraisals tend to reduce employee trust in an organization. As Klores (1966)
noted, employees sometimes conclude that the only workers who will rise in their organizations
are those who most satisfy raters’ biases. Nor are workers the only ones to feel this way. Some
research indicates that supervisors also tend to view forced ranking appraisals as unfair, lowering
their overall confidence in the use of such instruments (Schleicher et al., 2009). Organizations
with a heavily team-focused culture seem especially vulnerable to these effects (Blader & Prat,
2016; Hazels & Sasse, 2008), which over time may reduce employee engagement and increase
talent flight (Scullen et al., 2005). Compounding the ill effects of performing downward,
cutthroat competition, and a lack of employee trust, forced ranking performance appraisals may
fail to accurately inform worker preferences, with implications for workforce productivity.
Reduced Workforce Productivity
When workers receive scant or inaccurate performance feedback, potentially higher
levels of employee-job mismatch can result, lowering employee engagement and satisfaction. In
part, this is because many forced ranking evaluations are annual in nature, providing little-to-no
dynamic performance feedback to employees during the year (Buckingham & Goodall, 2015).
Additionally, these types of appraisals tend to focus upon a select range of employee
characteristics or talents (Hazels & Sasse, 2008) rather than upon the full range of capabilities
possessed by each worker. As a result, many of an individual’s strengths remain hidden or
undervalued. Without the identification and affirmation of their strengths, employees cannot
make well-informed career decisions to better align their talents against work requirements. This
lack of performance feedback and preference shaping may cause retention issues and talent
shortages (Cappelli & Tavis, 2016). This shortcoming, combined with those discussed
previously, suggests potential reductions in workforce productivity and mission accomplishment.
It may render American organizations less effective in a rapidly changing, more competitive
global environment while stifling the development and engagement of employees. A
military workforce setting, with its heavy emphasis upon trust and teamwork, may be
particularly vulnerable to the potential iatrogenic effects of forced ranking performance
appraisals.
Assumed Knowledge, Motivation and Organizational Influences
As described earlier, Service X’s HR staff is charged with developing, managing, and
executing all of Service X’s personnel plans, programs and policies. As a result, the HR staff is
the critical stakeholder for the officer performance appraisal program. This section of the study
explains the assumed knowledge, motivation, and organizational influences upon the HR staff’s
ability to continuously manage, evaluate, and improve the officer appraisal program.
Knowledge Influences
Examining the knowledge and skills either resident or lacking within an organization can
help identify gaps that are preventing the realization of desired outcomes (Clark & Estes, 2008).
Krathwohl (2002) and Rueda (2011) each identified four knowledge types: factual, conceptual,
procedural, and metacognitive. Factual knowledge consists of discrete, basic content elements
such as data, terminology or specific facts within a discipline (Krathwohl, 2002; Rueda, 2011).
Conceptual knowledge refers to more complex, organized forms of knowledge, such as theories
and models that help integrate basic content elements into functional constructs (Krathwohl,
2002; Rueda, 2011). Procedural knowledge refers to “how to” – the knowledge of subject-
specific skills, techniques, methods, and procedures, as well as of when and how to appropriately
employ each (Krathwohl, 2002; Rueda, 2011). Lastly, metacognitive knowledge refers to
knowledge of cognition generally, as well as an understanding and awareness of one’s own
cognition (Krathwohl, 2002; Rueda, 2011). Each knowledge type has the potential to influence
the HR staff’s ability to effectively manage the officer appraisal program.
Factual knowledge – program stakeholder attitudes. The HR staff requires factual
knowledge of the environment in which the current officer appraisal program operates,
specifically the views of other key stakeholders, who are defined as those who have a stake in a
program’s process and outcomes (Lewis, 2011). Accordingly, the HR staff must clearly
understand both congressional guidance (U.S. Congress, 2018; U.S. Senate, 2018) and officer
workforce attitudes regarding the appraisal program. Without this knowledge, Service X cannot
be sure that the appraisal program is complying with oversight requirements or generating
workforce outcomes that support its organizational goals.
Conceptual knowledge – forced ranking appraisal strengths and weaknesses. The
HR staff must understand the strengths and weaknesses of forced ranking appraisals. As the
literature review indicates, these appraisals can be subject to multiple biases which reduce their
rating validity and reliability (Blume et al., 2009; Hazels & Sasse, 2008; Hoffman et al., 2010;
Scullen et al., 2000). Forced ranking appraisals may also cause some employees to engage in
undesirable and maladaptive workforce behaviors (Berger et al., 2013; Hazels & Sasse, 2008).
Awareness of these potential shortcomings provides a necessary foundation to officer appraisal
program management. Without it, the HR staff cannot construct appropriate performance
measures to identify or correct shortcomings should they occur within the officer appraisal
program.
Procedural knowledge – program evaluation and disciplined inquiry. The HR staff
must know how and when to employ a disciplined inquiry framework to conduct periodic officer
performance appraisal program evaluations. This is because organizations that use such
frameworks to conduct program evaluations enhance their chances of reaching the program’s
stated goals (Marsh & Farrell, 2015). Such inquiry frameworks are complex, layering procedural
knowledge on top of factual and conceptual knowledge. For example, Data-Driven Decision
Making (DDDM) consists of: (1) collecting data; (2) organizing, filtering, and analyzing it to
create information; (3) synthesizing information and expertise to create actionable knowledge;
(4) applying that knowledge to devise or adjust processes to achieve desired outcomes; and (5)
assessing those outcomes (Marsh & Farrell, 2015). Without this knowledge, Service X cannot
be sure that the appraisal program is generating desired outcomes in support of its organizational
goals.
Motivation Influences
As evidence suggests that motivation accounts for up to 50% of all performance success
(Rueda, 2011), motivation influences are particularly important in any performance gap analysis.
Because motivation is an internal state that initiates and maintains goal-directed behavior, it also
plays a key role in learning (Mayer, 2011), which suggests that motivational deficits can
contribute to knowledge shortfalls as well. Moreover, the components of motivation – active
choice, persistence, and mental effort – can be measured (Rueda, 2011). Assessing motivational
influences is therefore critical in any organization striving to realize specific goals. While a
variety of motivation and learning theories exist, two seem particularly germane to this case
study of the HR staff: collective efficacy and goal orientation theory.
Collective efficacy. The HR staff must feel confident in their shared ability to manage
the appraisal program. Bandura (2000) identified such confidence as collective efficacy, a key
focal mechanism of human agency, and defined it as a group’s belief in its ability to organize and
implement required action. Collective efficacy is an emergent, group-level quality, greater than
the sum of its parts (Bandura, 2000). In other words, some teams or organizations possess a
group efficacy that magnifies the efficacy of their individual members. Such organizations
demonstrate higher levels of strategic thinking, innovation, and optimism (Bandura, 2000).
Collective efficacy was an assumed motivational influence in this evaluative study because a
lack of appraisal program data collection and analysis may be indicative of low collective
efficacy within the HR staff. This in turn may be related to a lack of appropriate knowledge and
expertise.
Mastery goal orientation. Goal orientation theory examines “why” people engage in
work, characterizes goals as either mastery- or performance-oriented, and further divides these
into approach or avoid goals (Kaplan & Maehr, 2007; Pintrich, 2003). Mastery-oriented people
are interested chiefly in self-improvement and tend to compare their current performance with
previous performance, while performance-oriented people are interested in out-performing others
by demonstrating greater task or skill competency (Kaplan & Maehr, 2007). A mastery-avoid
goal orientation means that an individual merely wants to avoid misunderstanding a task, perhaps
because it will lower their self-efficacy, whereas someone with a performance-avoid orientation
is concerned that if they attempt something and fail, they will be deemed incompetent, thereby
losing a competitive advantage (Pintrich, 2003). Within the context of this study, a mastery goal
orientation was assumed for the HR staff because it is less maladaptive, more improvement-
focused, and more likely to engender active goal pursuit, persistence, and the development of
new knowledge (Rueda, 2011). Additionally, studies have found that the benefits of creating a
mastery goal orientation are greatest when a task is complex and requires problem solving
(Kaplan & Maehr, 2007). In the case of the HR staff, therefore, these theoretical perspectives
suggest that the presence of a mastery goal orientation will contribute to improved appraisal
program management and evaluation practices.
Organizational Influences
As with knowledge and motivation, an improved understanding of organizational
performance gaps can be gained through an examination of institutional work processes, policies,
and material resources (Clark & Estes, 2008). Work processes refers to the structured,
systematic, and coordinated interaction of knowledge, skills, and motivation required among
employees to produce desired outcomes (Clark & Estes, 2008). If these processes are not equal
to and properly aligned with organizational goals, even sufficiently motivated and
knowledgeable employees will fail to achieve them (Clark & Estes, 2008; Rueda, 2011).
At times, a misalignment of work processes and goals may stem from an organization’s
culture, the accumulated shared learning of its members (Schein, 2004). Such learning is stable,
deep, and often automated (Rueda, 2011), representing commonly held beliefs that pervade all
aspects of an organization’s functioning and which members may not readily surrender (Schein,
2004). Culture can be described as a collection of group norms, shared mental maps and habits
of thinking, philosophies, values, and rituals which go unquestioned because they have at one
time or another helped an organization make sense of or succeed in its environment (Schein,
2004).
From a gap analysis standpoint, the concept of organizational culture becomes more
granular when further distilled into the interrelated concepts of cultural models and cultural
settings. Cultural models are those collective features generally attributed to organizational
culture – somewhat abstract, assumed, invisible, and hard to change, whereas cultural settings
describe the real and concrete social context within which day-to-day organizational work
processes and practices occur (Rueda, 2011). If practices in a cultural setting are sustained long
enough, they can eventually penetrate to the bedrock of an organization’s culture, becoming an
assumed and invisible feature and thereby altering its cultural model (Gallimore & Goldenberg,
2001). This demonstrates what Rueda (2011) describes as the dynamic and interrelated nature of
cultural models and settings. Such interrelationships are evident in the three assumed
organizational influences upon the HR staff as it administers Service X’s officer performance
appraisal program. One is drawn from Service X’s overarching cultural model, whereas the
other two are drawn from the cultural settings in which HR staff work processes take place.
Policy decision rights. To enable sound appraisal program management, Service X must
value the input of the HR staff, to include granting the CHRO and HR staff appraisal policy
decision rights. In Service X culture, however, most programmatic decisions are made at the
very top based upon a leader’s experience. This belief in the primacy of experience over subject-
matter expertise (Morrison & Milliken, 2000) could discourage the HR staff from acquiring the
knowledge required to effectively manage the performance appraisal program. Alternatively, it
is possible that the HR staff could become dissatisfied with current appraisal practices but
believe that seeking change would be viewed as boat-rocking (Alper, Tjosvold & Law, 2000),
entailing career risk that is not worth the effort, a phenomenon Morrison and Milliken (2000)
refer to as organizational silence. Lastly, the top-down decision-making culture of Service X could
create low collective efficacy on the part of the HR staff, which Bandura (2000) describes as a
lack of belief in a group’s shared ability to improve outcomes or achieve objectives.
Professional development. Service X must provide the HR staff with sufficient
education and training in appraisal program management. Several of Service X’s recent
CHROs, however, have been non-HR specialists. Additionally, congressional reports have
concluded that military officers are assigned to service staffs without adequate educational
preparation, particularly in the area of critical thinking and strategic-level policy, and many
officers report that their professional military education provides inadequate preparation for
these assignments (U.S. Congress, 2010). It is therefore possible that the HR staff require
additional professional development, particularly in program evaluation practices. The
management and workforce education literature identify several different program evaluation
types, to include management-oriented, expertise-oriented, and participant-oriented approaches
(Hogan, 2007). For each evaluation type, critical stakeholders define both the approach and its
evaluation parameters, and their choices can provide insight into the culture and values of their
organization.
Assignment tenure. Service X must provide senior HR staff leaders with sufficient job
tenure. This is because longer assignments provide strategic program and policy leaders a
sufficient time span of discretion – the time needed to make strategic decisions and receive
feedback on their impact (Jaques, Gibson & Isaac, 1978). Without this, staff officers could tend
to be risk-averse, their work priorities skewing towards optimizing status quo practices rather
than conducting comprehensive evaluations of existing HR programs and policies.
Collectively, any gaps identified in the assumed knowledge, motivation, and
organizational influences described here could engender a homeostasis in which steady-state
organizational practices persist (Schneider & Guzzo, 1996), regardless of internal assessments or
external oversight demands. It could also contribute to a larger organizational learning problem,
specifically an inability to acquire and transfer knowledge or to modify behavior based upon
knowledge creation or acquisition (Garvin, Edmondson & Gino, 2008). Table 1, below,
summarizes the assumed knowledge and skill, motivation, and organizational influences upon
the HR staff’s ability to achieve its goal of 100% satisfaction with appraisal program planning,
evaluation, and management practices. Representative assessment methods from the study’s
interview and documents protocols are also included. Proposed interview participants are
identified in Appendix A, and the complete interview and document review protocols are in
Appendices B and E, respectively.
Table 1

Summary of Knowledge, Motivation, and Organizational Influences

Organizational Mission: Provide dominance across the full range of military operations and potential conflict types to ensure victory against any U.S. opponent.

Organizational Global Goal: By June 2022, Service X will implement an officer performance appraisal program modernization plan to comply with guidance from Congress, which exercises oversight of the Armed Forces.

Stakeholder (Global HR Staff) Goal: By June 2020, Service X’s Global HR staff will be 100% satisfied with the planning, evaluation, and management of the officer performance appraisal program.

Assumed Knowledge Influences and Assessments

Factual: HR staff members must understand the current operating environment of the appraisal program, specifically the attitudes and requirements of key program stakeholders.
Assessment (interview questions): How do officers feel about the current appraisal program? What guidance has Congress provided regarding the current appraisal program?

Conceptual: The HR staff must understand the strengths and weaknesses of forced ranking appraisals.
Assessment (interview questions): What are the biggest strengths of the current appraisal program? What are its biggest weaknesses?

Procedural: The HR staff must understand how, why, and when to employ a disciplined inquiry framework to conduct periodic appraisal program evaluations.
Assessment (interview questions): When was the last full evaluation of the appraisal program conducted? What specific measurements of program performance are used?

Assumed Motivation Influences and Assessments

Collective Efficacy: The HR staff must feel confident in their collective ability to manage the appraisal program.
Assessment (interview question): What expertise do your coworkers possess in appraisal program management?

Goal Orientation: The HR staff leadership must demonstrate a mastery orientation to appraisal program administration and management.
Assessment (interview question): Suppose you had the power to change the appraisal program - what would you make different?

Assumed Organizational Influences and Assessments

CM Influence – Policy Decision Rights: Service X must value the input of the HR senior staff, to include granting them appraisal program policy decision rights.
Assessment (interview questions): Overall, how satisfied are you with the planning and management of the appraisal program? To your knowledge, which leader or organization has the authority to direct any appraisal program changes?

CS Influence 1 – Professional Development: Service X should provide the HR staff with education and training in program management, to include program evaluation practices.
Assessment (interview question): Describe any specialized education, training, or experience that Service X has provided to help you meet your appraisal program management responsibilities.

CS Influence 2 – Assignment Tenure: Service X should provide sufficient assignment tenure to senior HR staff officers responsible for long-term officer appraisal program policy, management, and improvement.
Assessment (interview question): Describe any specialized education, training, or experience that Service X has provided to help you meet your appraisal program management responsibilities.
Interactive Conceptual Framework
The conceptual framework for this study relied upon the Clark and Estes (2008) Knowledge,
Motivation, and Organizational (KMO) framework, which was used to conduct a
modified gap analysis. A conceptual framework presents a tentative theory regarding a subject
of inquiry (Maxwell, 2013). In this study, the conceptual framework was based upon existing
theory and research, experiential knowledge, and thought experiments, as well as exploratory
research. As Merriam and Tisdell suggest (2016), the disciplinary orientations predominating in
a study’s research literature can help identify problems meriting examination. In the case of
forced ranking appraisals, the research literature is heavily concentrated in the public
administration, economics, organizational management, performance management, business
psychology, human resource psychology, industrial and organizational psychology, and applied
psychology disciplines, all largely outside the military HR management field.
The conceptual theory which guided this study was that Service X’s HR staff might be
less than completely satisfied with current officer appraisal program management practices. This
theory was arrived at by integrating both personal experience and thought experiments with the
study’s literature review. Maxwell (2013) suggests that the purpose of a conceptual theory is to
inform the design and execution of research necessary to completing a process of disciplined
inquiry. To that end, the Gap Analysis Framework (Clark & Estes, 2008) employed in this
evaluative study called for the identification of essential KMO influences which, while
independently articulated, may powerfully affect one another. Those assumed influences and
their interrelationships are graphically depicted in Figure 1, below. They were identified based
upon a comprehensive literature review, 30-plus years of experiential knowledge of military
organizational settings and culture, and a decade of experience with military HR organizations,
programs, and practices. This experience, while beneficial, also required the exercise of
disciplined subjectivity throughout the study. For a full description of the strategies employed to
ensure credibility and trustworthiness, see Appendix F.
Figure 1. Interactive conceptual framework.
Qualitative Data Collection and Instrumentation
During this study, the researcher served as the primary instrument of data collection and
analysis. The study employed multiple data collection methods, a practice known as
triangulation, which is used to improve the depth, extent, and quality of the research (McEwan &
McEwan, 2003). In addition to an extensive literature review identifying the purported strengths
and weaknesses of forced ranking appraisals, the study also employed interviews and document
analysis. Because the study has a phenomenological framework, which focuses upon how
individuals perceive and experience their environment (Merriam & Tisdell, 2016), interviews
provided the coherence, depth, and clarity of information needed to address research questions 1
and 2, which were concerned with assumed knowledge and motivation influences. Documents
were also useful to this study in that they were a natural part of the research setting and provided
stability, i.e., analyzing documents does not alter a study setting in any way (Merriam & Tisdell,
2016). Document analysis focused primarily upon the assumed organizational influences
addressed by research question 3, providing useful context for the development of the interview
protocol. Document analysis therefore preceded interviews. During both data collection and all
other phases of this study, ethical best practices were employed. These practices are detailed in
Appendix G.
Documents and Artifacts
Document analysis was conducted over a two-month period in the late summer of 2018.
Four categories of documents and artifacts provided supplementary research data to this study:
official publications, websites, and training aids; historical research studies; publicly available
summary reports of longitudinal surveys; and congressional records. Analyzed documents
included two official Service X HR websites, one Service X analytical agency website, two
specific regulations governing officer performance appraisals, two specific HR staff training
presentations on appraisal program management, and six annual summary reports of officer
career satisfaction surveys. Based upon initial analysis, two Service X quantitative research
studies published in the early 1980s were later analyzed to lend additional context and meaning
to official documents. These provided a means by which to track changes in Service X’s officer
appraisal program, as well as a way to corroborate some interview evidence, both central uses of
documentary material (Bowen, 2009). Reviewing the two studies provided the researcher with
insights into the origins of the current officer appraisal program, which owes much of its
instrumental and philosophical basis to the appraisal program implemented in 1947.
Collectively, the documents chosen helped answer research question 3 by differentiating
between what Lewis (2011) describes as espoused theories and theories in use, in this case the
official purpose and practices of the officer appraisal program versus how it actually functions
and is administered. The documents also provided useful information that helped answer
research question 2. All references to physical or online documents were anonymized to prevent
back-tracing and to protect the identity of the study site. The document review protocol is in
Appendix E.
Interviews
This study employed a two-phased interview protocol, and the researcher personally
conducted all interviews over a three-month period in the autumn of 2018. The first phase
consisted of informal conversational interviews with three key informants in support of a
purposive sampling approach. Guided by the overall goal of the study, these initial interviews
helped build rapport and secured access to the study site, identified candidates for Phase 2
interviews, and spontaneously generated questions in a more natural conversational context. A
careful review of Phase 1 interview responses helped to finalize the sampling strategy and refine
Phase 2. This overlap of sampling design and data collection planning is both common and
useful in qualitative research (Weiss, 1994).
Three populations of interest were identified by the researcher while interviewing key
informants. The first was the Officer Section or OS (pseudonym), a 25-person, Pentagon-based
HR staff element responsible for all commissioned officer personnel management plans,
programs, and policies. The second was the Appraisals Management Office or AMO
(pseudonym), a 165-person HR support element located at Camp Smith, Illinois (pseudonym),
with day-to-day responsibility for officer appraisal program management. The last segment was
the Service’s HR leader team: the CHRO, a three-star military officer, and the Assistant
CHRO, a Senior Executive Service (SES) career civil servant. Both work at the Pentagon.
population of approximately 190 people, purposive sampling (Johnson & Christensen, 2015)
resulted in 11 individuals being invited to participate in the study, all of whom accepted. A complete
discussion of interview participant sampling criteria is in Appendix A.
Standardized, open-ended interviews were used in Phase 2 because this format fully specified
the interview questions and made them potentially available for participant review in advance,
made efficient use of limited time, and communicated a dispassionate and rigorous approach that
helped establish credibility and trust with the research participants. The types of questions that
were asked included background/demographic, opinion/value, knowledge, feeling, experience,
“Devil’s advocate,” and hypothetical types, with a mix of specific past, present, and future
timeframes. These questions were aimed largely at the assumed knowledge and motivation
influences informing the conceptual framework, which theorized that the HR staff was perhaps
somewhat dissatisfied with current officer appraisal program management practices. Some of
these questions, however, also connected HR staff knowledge and motivation to Service X
culture and context (addressed by research question 3). Interviews took place after the bulk of
the document review and analysis had already provided useful organizational context for the
development of interview questions. Phase 1 interviews were conducted with three potential
respondents and took 20 to 60 minutes each. Phase 2 interviews with 11 participants lasted 34 to
91 minutes each. The average interview length was one hour, and the median length was 55
minutes. No follow-up interviews were conducted. The total interview time across all
participants in both phases was 12.3 hours.
Phase 1 informal interviews were conducted by phone, with each participant in his or her
office and at a time convenient to each. Only the researcher and participant were present during
each informal interview, and each participant consented to being recorded. Except for one
interview conducted via Skype, all standardized, open-ended interviews conducted in Phase 2
took place in person, either in the participant’s office or in an alternative private setting in their
workplace. This ensured comfort and privacy, prevented interruption, and permitted the digital
recording of interviews without background interference. Recordings were made with little-to-
no concurrent notetaking. Recordings captured emotion, qualifiers, and speech patterns, and also
allowed the researcher to focus more upon each participant. Backup recording instruments and
notetaking materials were on hand for each interview. Study information sheets were reviewed
with each participant prior to commencing each interview, and copies of the interview protocol
were also present.
To help make the interviews interesting and conversational, questions were sequenced in
a purposeful route, leading off with a small number of background questions, followed by
introductory and key questions, and closing with a summary question followed by demographic
questions. Back-to-back interviews were not conducted, as doing so would have inhibited
post-interview analysis and reflective memoing. During post-interview analysis, the
researcher also identified ways to continuously refine and improve follow-on interviews,
capturing improvements in a separate process memo log. This resulted in the researcher sending
three follow-up questions to study participants via email to further examine and clarify
motivation findings emerging during analysis. The interview protocol is in Appendix B, and all
interview questions, to include the emailed follow-up questions, are in Appendix C.
Data Analysis
During this qualitative case study, the researcher began analysis during data collection by
preparing reflective memos after each interview or document review to capture initial thoughts
and conclusions about the data in relation to the conceptual framework and research questions.
These memos proved invaluable to initial data coding efforts and provided critical scaffolding
for the comprehensive analytic memos prepared after leaving the field and completing interview
transcription. In the first phase of analysis, documents were analyzed for evidence consistent
with the concepts in the conceptual framework. Analytic tools were then used to become
familiar with the interview data and to uncover aspects of it which might have been missed upon
first reading. This was followed by open coding, which consisted of inductively deriving in vivo
codes from participant transcripts and deductively deriving a priori codes from the conceptual
framework. The researcher then conducted a second phase of analysis, aggregating in vivo and a
priori codes into analytic/axial codes. In the third phase of data analysis, the researcher
identified pattern codes and themes from all data sources, documenting both reasoning and
choices as the analysis continued. As a final step, all pattern codes and themes were defined to
aid in their consistent application and to increase internal reliability.
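The three-phase coding sequence described above (open codes aggregated into analytic/axial codes, then into pattern codes and themes) can be illustrated with a brief sketch. The code names below are hypothetical examples for illustration only, not the study's actual codebook:

```python
from collections import Counter

# Hypothetical open codes tagged on interview excerpts:
# in vivo codes (participants' own words) and a priori codes (from the framework).
open_codes = [
    "heartbeat of the Service",   # in vivo
    "touchstone",                 # in vivo
    "factual knowledge",          # a priori (KMO framework)
    "collective efficacy",        # a priori (KMO framework)
    "factual knowledge",
]

# Phase 2: aggregate open codes into analytic/axial codes.
axial_map = {
    "heartbeat of the Service": "program centrality",
    "touchstone": "program centrality",
    "factual knowledge": "knowledge influence",
    "collective efficacy": "motivation influence",
}
axial_counts = Counter(axial_map[c] for c in open_codes)

# Phase 3: group axial codes under broader pattern themes.
pattern_map = {
    "program centrality": "organization",
    "knowledge influence": "knowledge",
    "motivation influence": "motivation",
}
pattern_counts = Counter()
for axial, n in axial_counts.items():
    pattern_counts[pattern_map[axial]] += n

print(axial_counts)    # frequency of each analytic/axial code
print(pattern_counts)  # frequency of each pattern theme
```

Making each mapping explicit, as the final analytic step did by defining all pattern codes and themes, is what supports their consistent application and increases internal reliability.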
Findings
The literature review preceding this case study resulted in the creation of a conceptual
framework incorporating assumed knowledge and skills, motivation, and organizational
influences upon the HR staff’s ability to manage, evaluate, and continuously improve the officer
performance appraisal program. Accordingly, research question 1 asked, “To what extent is
Service X meeting the goal of being 100% compliant with congressional guidance to modernize
officer management and appraisal practices by June 2022?” The study found that the Service is
not currently attempting to meet this goal due to several knowledge, motivation, and
organizational gaps. To support that finding, this section first introduces the study’s
participating stakeholders, uses the KMO framework to organize and integrate all findings, and
concludes with a short synthesis section.
Participating Stakeholders
From an HR staff population of interest numbering 190 people, purposive sampling
(Johnson & Christensen, 2015) resulted in 11 being invited to participate in the study, all of
whom accepted. These senior HR professionals were chosen due to their extensive officer
appraisal program policy or management responsibilities. Table 2 summarizes participant
demographic data. Most were white (9 of 11) and male (10 of 11).¹ The youngest participant
was 37, the oldest 64, and the median age was 54 years old. There was a balanced mix of
employee types: five military and six civilians. Work location was also balanced, with five
participants working at the Pentagon and six at Camp Smith. Job tenure, defined as work time in
the current duty position, was significantly greater for civilian HR participants, with a median
tenure of just under eight years, compared to one-and-a-half years for military staff members. Of
the six civilian participants, five were military veterans. In terms of seniority, three of the five
military participants were of flag rank (admiral/general-level), one was an “O6” (a senior
officer), and one was an “O4” (mid-career officer). Other than the Assistant CHRO,
¹ Because only one participant was female, the findings section of this study uses gender-neutral pseudonyms and
random personal pronoun assignment to ensure equal protection of individual confidentiality.
participating civilian employees were mid-to-senior-level government service (“GS”) employees
with day-to-day appraisal program policy or management responsibilities.
Participant higher education levels were somewhat bifurcated by employee type. Five of
six military officers possessed a civilian graduate degree, whereas only one of five civilians
possessed a master’s degree, three possessed bachelor’s degrees, and one had completed high
school. While none of the 11 participants possessed a college degree in Human Resource
Management, several had received military human resource education or training as members of
Service X’s HR specialist corps.
Table 2
Demographic Summary of HR Professionals Interviewed
Knowledge Findings
Examining the knowledge and skills either resident or lacking within an organization can
help identify gaps that prevent the realization of desired outcomes (Clark & Estes, 2008).
Accordingly, research question 2 asked, “What knowledge and motivation related to the planning
and management of the officer appraisal program is resident within the Service’s Global HR
staff?” In response to the knowledge component of that question, the study found that the HR
staff requires additional factual, conceptual, and procedural knowledge to effectively manage,
evaluate, and improve the officer appraisal program.
Factual knowledge. This study assumed that the HR staff requires factual knowledge of
the environment in which the current officer appraisal program operates, specifically the
attitudes of Congress (Service X’s oversight organization) and of Service X’s officer workforce,
both of which have a stake in the appraisal program’s process and outcomes. Without this
stakeholder knowledge, Service X cannot be sure that the appraisal program is complying with
oversight requests or generating desired workforce outcomes. The study found that the HR staff
requires an increase in this type of factual knowledge.
The evidence supporting this finding was developed by comparing document analysis
with interview responses, a comparison that identified points of discrepancy between the two. For example,
a review of congressional records revealed that Congress has explicitly asked Service X to
modernize its full range of officer personnel management practices. During a Senate Armed
Services Committee (SASC) personnel subcommittee hearing conducted in January 2018 on the need for
this modernization, the hearing chair stated that:
Officer personnel management is a combination of statute, regulation, culture, and
tradition that determines how military leaders are recruited, trained, retained, promoted,
assigned, and compensated…. I hope today our witnesses will provide us with some
clearly defined outcomes that an updated personnel system should seek to achieve (U.S.
Senate, 2018, pp. 2–3).
While the chair did not specify appraisals in his opening statement, testimony provided later in
the hearing affirmed the connection between performance appraisals and other officer
management functions, as the former provides the instrumentality upon which all of the latter
rely. Two Service X regulations reviewed for this study corroborated that the primary function
of officer performance appraisals is to provide information to the Service to make management
decisions regarding an officer’s promotion, selection, assignments, education, training, retention,
and compensation, the same practices Congress directed the military services to modernize.
Interview evidence also corroborated the officer appraisal program’s centrality to all
officer management actions. All 11 participants acknowledged that appraisal reports are integral
to the promotion, selection, assignment, and developmental decisions regarding all officers. For
example, Blair, when describing the centrality of officer appraisal reports, offered, “You’re
talking about the Officer Performance Appraisal, the culture and the heartbeat of the Service.”
Kris said, “It's kind of our touchstone for how we do things.” Lee, describing the program’s role
in officer management, used similar language, saying “What we provide is the soul or the
gateway for the Service, as far as assignments, promotions and so forth, because, not to sound
biased, but the appraisal report is probably the single most important document in an officer’s
file…” Sam’s response confirmed this view, noting that, “…the core of what is considered for
promotion in the Service is written on the appraisals.” Alex’s reply was similar as well:
We monitor the system, obviously, as part of our responsibility to ensure that the health
of the system is not – it's not hindering the Service, and it's still meeting the Service's
desired end state, which is primarily to use – that we're receiving the information so that
we can use it for selections, promotions, separations…
If appraisals are the centerpiece component of officer management practices, and if Congress is
requesting these practices be modernized, it is reasonable to conclude that a Service X officer
appraisal program evaluation and improvement effort is implicit in that request.
Given this evidence, the researcher used a prompt when interviewing the HR staff senior
leader team to assess their knowledge of whether Congress was requesting a review or
modernization of the officer performance appraisal program. One responded, “No. I don't think
there's ever been .... I have never had an inquiry from OSD [Office of the Secretary of Defense]
or in my hundreds of visits to the Hill [Congress], a negative inquiry about the OPA [Officer
Performance Appraisal].” The other responded, “The only time Congress has gotten involved
was when they wanted to have ‘SAAP’ [‘Sexual Assault Awareness and Prevention’ program] in
the OPA. And we did that…. So that's the only time I've ever seen Congress weigh in…” This
indicates that the HR staff leader team does not believe that a comprehensive appraisal program
evaluation is required to comply with Congress’ mandate to modernize officer management
practices, as it was not explicitly requested.
The Service X officer workforce is another stakeholder group that the HR staff must
consider when managing the appraisal program, as they are its key participants. During this
study, the researcher reviewed publicly available longitudinal survey summary reports
containing detailed analysis of military workforce attitudes towards the appraisal program. Since
2005, these surveys of officers and non-commissioned officers (NCOs) have been conducted by
a non-HR, Service X analytical agency. The agency’s stated purpose for conducting these
surveys is to provide valuable information to assist senior leaders in policy decision making.
The last six publicly available summary reports were reviewed (2011-2016), with as few as
8,000 or as many as 27,000 annual respondents. Survey administrators employed stratified
random sampling and consistently reported overall sampling errors in the +/-0.7 range. Survey
items employed a 3-point Likert scale: disagree/strongly disagree; neither agree nor disagree; or
agree/strongly agree. In addition to Likert scale items, the surveys also featured open-ended
response items. Survey questions were repeated over several years to increase longitudinal
analytical value and to permit the monitoring of changes in workforce attitudes over time.
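The reported sampling errors are broadly consistent with samples of this size. As a rough plausibility check (assuming simple random sampling, a 95% confidence level, and the worst-case proportion p = 0.5, since the stratified design details were not published in the summary reports), the margin of error for the reported respondent counts can be approximated:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    proportion estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Reported annual respondent counts ranged from roughly 8,000 to 27,000.
for n in (8_000, 27_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.2f} percentage points")
```

The resulting bounds of roughly ±1.1 points at n = 8,000 and ±0.6 points at n = 27,000 bracket the reported ±0.7 figure, which is plausible for a stratified design whose effective error falls between these extremes.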
The surveys included questions about the personnel management system, performance
appraisals, and performance counseling. Questions about performance appraisals were dropped
in 2015, as were those about the personnel system in 2016, although questions addressing
performance counseling remained. Without access to the source files, the selective presentation
of data in the summary reports did not permit the researcher to compare year-over-year responses
for every survey item, although comparisons were possible in some instances. Another research
limitation of the reports was that they often aggregated officer and NCO responses into a
workforce category referred to as “leader.” Whenever the reports disaggregated these workforce
segments, however, response trends were fairly consistent for both, which increased the
researcher’s confidence in their value to this study.
A quad chart summarizing key findings from active duty officers and NCOs is presented in
Figure 2. These findings are useful because they address the two functions of the officer
appraisal program as stipulated by program regulations: to enable sound personnel management
decisions, and to develop officers. The top two quadrants address performance appraisal
accuracy and fairness outcomes. The findings show that from 2011-2014, just over 50% of
respondents found performance appraisals accurate, and from 2012-2014, less than 40% believed
that they resulted in the most capable leaders being promoted.
The bottom two quadrants address developmental outcomes connected to the officer
appraisal program. The lower left quadrant shows that over a five-year period (2012-2016),
belief that the appraisal program and its resulting promotion and assignment decisions effectively
supported respondents’ developmental needs was below 50% in all but one year. Additionally,
the lower right quadrant indicates that counseling from one’s rater (supervisor), a central
component of the appraisal program, was deemed ineffectual by a third or more of the workforce
during the same time span. The relative consistency of these findings over a five-to-six-year
period suggests a reasonably stable cultural setting, further increasing the researcher’s
confidence in the reports’ usefulness to this study.
Figure 2. Summary of key career satisfaction findings, 2011-2016.
The summary reports also identified prominent themes which emerged from open-ended
response items. These too were reasonably stable over time. Survey administrators reported that
senior officer respondents believe the officer appraisal program is heavily influenced by
subjective factors, which has a large impact upon its ability to differentiate officer performance
and potential; that service leaders do not hold overwhelmingly positive views about the fairness and
accuracy of personnel management actions (described as performance appraisals, promotion
decisions, and assignments); and that there continues to be an unmet need with regard to the
frequency and quality of performance counseling.
Notably, both closed- and open-ended response findings were consistent with historical
trends reported in two Service X quantitative research studies published in the early 1980s. The
studies were prepared by students at military post-graduate colleges, were preceded by a robust
background and literature review, and were approved by a two-person committee of PhDs. One
study reported three key findings from appraisal program reviews conducted between 1947 and
1982: first, officers lacked confidence in the value and usefulness of the program; second, more
education and training were needed to support the program; and third, there was a strong
requirement for performance counseling.
The document analysis findings prompted the researcher to ask participants how they
thought most officers felt about the appraisal program. Nine of 11 participants expressed their
belief that most officers were generally satisfied with the program, as these representative
responses indicate. Max, for example, said, “I think the majority of people think it's fair.” Lee
held a similar view, stating, “I think people are happy with the system.” Reese said that, “…I
would say 60…65…maybe 70% are satisfied because they do get their fair share [of strong
appraisals].” Blair offered, “There's an understanding of the system, there's an appreciation of
the system.” Kris said, “We get pretty good feedback. I think the officers feel pretty comfortable
with it.” Alex answered, “I would say the majority – positive.”
To better understand these responses, the researcher used a follow-up prompt to
determine whether any of the participants were relying upon the longitudinal career satisfaction
surveys or other systematically acquired data to formulate their responses. None were. Each
indicated that their understanding of officer satisfaction levels was either anecdotal, based upon
promotion board feedback, or a supposition on their part because the workforce complied with
the program. Riley, for example, offered that, “I haven't had any complaints in particular about
the program.” Fran replied similarly. “I don't think they take exception with it too much.
They're busy and focused on the mission…” Kris responded similarly, saying, “I don't know that
there's a whole lot [of feedback] that just spontaneously comes out of the operational force
because they're working on other things…”
These responses make clear that the HR staff lacks knowledge of readily available
stakeholder information that is necessary to appraisal program management, evaluation, and
improvement. This may in part be due to a motivational gap, as motivated organizations and
individuals expend greater effort to figure out what is happening around them (Mayer, 2011). It
may also be due in part to a lack of procedural knowledge regarding program evaluation
practices, which if adhered to could drive the systematic collection of factual knowledge.
Conceptual knowledge. The study assumed that HR staff members would require
knowledge of performance appraisal concepts, theories, and philosophies. This would include an
understanding of the biases and heuristics which can reduce forced ranking performance
appraisal validity and reliability (Blume et al., 2009; Hazels & Sasse, 2008; Hoffman et al.,
2010; Scullen et al., 2000), as well as of the related counterproductive and maladaptive
workforce behaviors which can result from this appraisal practice (Berger et al., 2013; Hazels &
Sasse, 2008). The study found that the HR staff requires an increase in this type of conceptual
knowledge.
The Model of Domain Learning (MDL) describes the acquisition of subject-matter
knowledge as foundational to moving a person from acclimation, to competency, and eventually,
to expertise (Alexander, 2003). While such knowledge can be gained through experience,
education, or training, a combination of all three can create a transcendent ability to engage in
problem finding and knowledge creation, what Senge (1990) characterizes as generative or
value-producing learning. The data gathered in this study, however, revealed that all 11
interview participants, from most junior to most senior, were prepared for their duties via on-the-
job training or experience within the military HR field only, which is discussed in greater detail
in the professional development portion of the organization findings section.
Despite this finding, all staff members have experientially gained some knowledge
regarding the potential shortcomings of forced ranking appraisals. As Table 3 indicates, all but
two interview participants accurately described and reported one or more of the shortcomings of
forced ranking documented in the research literature. Reese, for instance, explained that forced
ranking was introduced in response to leniency bias, which he referred to as inflation. “The
natural tendency for a senior rater is always to inflate. ‘I want to be loved. I want to be liked,’
and if you don't have any control mechanism, they will inflate.” Pat made note of this trend as
well:
It got to a point where everybody was great and wonderful, and at that point you can't
make a distinctive differentiation that we believe we need for quality as people move
through the program.
Pat also described another shortcoming of forced ranking present in the research literature,
known as institutional or systemic bias (Hazels & Sasse, 2008), an outsized organizational focus
upon a particular set of skills, even though other skills are required in its workforce:
Everything that drives this [program] is command. In some places, that's not the right
measurement. How do we rate and reward a person for being the very best in the world
about whatever their talent is? I don't think that our program actually rewards that right
now.
Blair clearly explained another challenge stemming from forced ranking’s bimodal
performance distribution, the difficulty of differentiating officers by performance:
It does a phenomenal job identifying our very best performers, the top 20% and the
bottom 20%. Where we struggle is the 60% in the middle. There's no middle ground. It's
a binary – you're a yes or you're a no. There's nothing in between.
Fran shared Blair’s assessment:
Well, I think it's pretty clear, and I bet a lot of people have the same one, it’s that the
arbitrary division and the categorization – half are 50 percent, 49 percent – that way of
dividing things forces us mentally into the administrative, check-block management that
we do today.
These responses make clear that the HR staff possesses some awareness of current appraisal
program challenges, particularly those with which they have first-hand experience. The staff’s
familiarity with the conceptual shortcomings presented in the research literature, however, was
confined largely to leniency bias, followed by idiosyncratic and systemic bias. After that,
conceptual knowledge appeared to decline markedly, particularly regarding potential iatrogenic
effects upon workforce attitudes and behavior.
Meanwhile, the staff identified several other challenges within the current appraisal
program, the bulk of which are more factual than conceptual in nature. Three key concerns
emerged: perceived low effort on the part of rating officials in terms of the time and energy
devoted to writing performance appraisals; misinformation or a lack of workforce understanding
of the appraisal program; and concern that when high-performing officers are concentrated under
one rating official, forced ranking will cull the tail of that non-standard rating population, even
though these officers may be better performers than the bulk of their peers across the Service.
These findings helped corroborate the conceptual theory underlying this study – that Service X’s
HR staff is less than completely satisfied with current officer appraisal program management
practices.
Table 3
HR Staff Knowledge of Forced Ranking Shortcomings
While it is possible that the HR staff did not identify more shortcomings only because
those shortcomings have not manifested within the appraisal program, the researcher found this
unlikely. Documentary evidence – the longitudinal surveys of workforce attitudes – confirmed
that several of these shortcomings are present in the appraisal program, yet the staff indicated
that they are unaware of them. In sum, HR staff knowledge of forced ranking shortcomings is
heavily weighted towards limited and experientially derived factual rather than conceptual
knowledge. This presents a barrier to the staff’s ability to gather and interpret data that would
help them to evaluate the performance of the appraisal program in a more comprehensive way.
The HR staff’s familiarity with some of this conceptual knowledge, however, provides a schema
that can later be elaborated upon to organize new conceptual learning (Mayer, 2011).
Procedural knowledge. This study assumed that the HR staff must know how, why, and
when to employ a disciplined inquiry framework to conduct periodic evaluations of appraisal
program efficacy. Procedural knowledge refers to “how to” – the knowledge of subject-specific
skills, techniques, methods, and procedures, as well as of when and how to appropriately employ
each (Krathwohl, 2002; Rueda, 2011). The study found that the HR staff requires an increase in
the procedural knowledge required to plan and conduct systematic, comprehensive evaluations of
appraisal program efficacy with a proven inquiry framework.
The evidence supporting this finding is that although Service X regulations say that the
appraisal program is routinely reviewed for effective operation, none of the study’s 11
participants could clearly identify when the last comprehensive program evaluation had
occurred, what approach was used (e.g., objectives-, management-, consumer-, or expertise-oriented),
when the next one was scheduled, or what the full range of program success measures
were. Some participants described program reviews that were not as comprehensive as a program
evaluation, whereas others expressed unfamiliarity with the term itself. In those instances, the
researcher explained that a program evaluation is a formal and systematic effort to assess the
merit or worth of a program or strategy in a specific context, one that relies upon valid, reliable,
and credible data tied to clear performance objectives (Hogan, 2007).
Asked when the last program evaluation had occurred, Riley replied, “That I don't know.
I know we've changed the form. So, it's been at least, probably, maybe five years or so. Maybe
four or five years – we just changed the form.” Kris answered, “There's been papers written on it
at the Senior Service College. I'm not sure when the last complete…it hasn't happened recently.”
Fran answered the question by saying, “Honestly, I don't know. I couldn't tell you with
reliability. I think, I'm trying to stretch my memory back, my suspicion is it hasn't been done in
a systematic way for 30 years.” In response to the same question, Pat stated, “There isn't a
systematic approach that I know of. There may be one, but I don't know about it if there is. I
don't think there's a systematic approach to that.” These responses confirm that planned,
systematic, formal evaluations of the appraisal program are not occurring.
Additionally, when asked, “What measurements are used for program evaluation?”, none
of the 11 participants described the systematic use of information aligned to clear program goals.
For example, Blair replied, “I am unaware of any such metrics.” Max was unsure as well:
You probably have to have some really smart operations research or systems analysts [to]
figure that out… you just have to have a systems kind of person figure out – how do you
run that through a calculator and spit out what's effective or not…?
Other responses indicate a measurement process knowledge gap as well. For example, the
comments of three of four participants from the Appraisals Management Office (AMO), where
appraisal program data analysis occurs, suggested they believe program evaluation measurement
is synonymous with what they called “data mining.” Data mining focuses on using specific
machine learning and statistical models to predict the future and discover patterns among very
large data sets (Cleve & Lämmel, 2016). When the researcher prompted participants to explain
the use of data mining in appraisal program evaluation, their responses clarified that they were
using the term as a proxy for simple information processing, descriptive statistics, or data
analysis. Additionally, they were not describing data use in support of program evaluation. One
AMO team member, for example, offered that data mining helped her team respond to external
queries because, “…out of the blue sometimes we get some very unique requests for data.”
Another answered the question in the future tense, describing the introduction of a new academic
performance appraisal form which would collect individual talent information:
The current form is 50 years old. The new form is going to capture exactly what you were
doing in school. ‘What Master's degree? What was the subject expertise? Did you get a
Doctorate or a Master’s? What was the thesis on?’, so we can actually start data mining
that stuff.
These responses demonstrate a need for greater staff procedural knowledge in how to conduct a
program evaluation, how to support it with data, and how to employ a disciplined inquiry
framework to identify and analyze program success measures.
Motivation Findings
As evidence suggests that motivation accounts for up to 50% of all performance success
(Rueda, 2011), motivation influences are particularly important in any performance gap analysis.
Accordingly, research question 2 asked, “What knowledge and motivation related to the planning
and management of the officer appraisal program is resident within the Service’s Global HR
staff?” In response to the motivation component of that question, the study found the HR staff
could benefit from increased collective efficacy, as well as the development of a mastery goal
orientation. As mentioned earlier in this study, the motivation findings relied upon both
interview responses and follow-up email question responses. The follow-up questions helped
clarify findings which emerged during interview analysis.
Collective efficacy. The study assumed that the HR staff requires a shared belief in its
ability to organize and implement required appraisal program management actions to achieve
desired outcomes. Collective efficacy is an emergent, group-level quality, greater than the sum
of the parts (Bandura, 2000). Without collective efficacy, individuals and teams are less likely to
demonstrate the strategic thinking, innovation, and optimism needed to manage a program or
improve outcomes (Bandura, 2000). The study found evidence that the HR staff needs to
increase its collective efficacy.
Seven of 11 study participants expressed a lack of confidence in the HR staff’s ability to
produce desired appraisal program outcomes. Three low confidence areas emerged: planning for
the future, implementing change, and employee competency. When asked, for example, how
confident she was in the HR staff’s ability to manage the officer appraisal program, Jamie
replied:
Depending on the definition of ‘manage,’ I am less confident that the program can adapt
to changes in a timely manner, so I am hesitant to say that it [the HR staff] can ‘lead
through change,’ but ‘manage’ I think the organization does well.
Replying to the same question, Fran stated that while he was confident the HR staff was able to
manage the current appraisal program, he was unsure of its internal ability to create a strategy,
establish goals, or examine the program’s underlying assumptions:
Quite confident, if provided a guiding strategy for the use of the officer performance
appraisal (OPA), and the support of senior leaders to employ tenets established within
that strategic framework. [I’m] not sure we've ever identified what our goals are for the
officer appraisal program…. Many things about personnel management are just assumed
to take care of themselves, that the precepts of our current process at a very fundamental
level are right, without ever examining them closely to see if that proposition is true
against long term goals.
The document analysis for this study supports Fran’s belief that formal, measurable, time-bound
goals for the officer appraisal program have not been articulated. Blair also indicated a lack of
confidence in the HR staff’s collective ability to lead the appraisal program into the future:
As a human resource community, what are we doing to think into the next realm of the
21st century? Like, what's the next turn of the wheel look like? Why are we waiting for
somebody to tell us what to do?
Additional staff members expressed low confidence in the HR staff’s collective ability to
lead change or improve the appraisal program. Riley for example, describing one of his
recommendations to decrease administrative processing errors, said:
I've mentioned [automatic] spell checking and things of that nature. I know I'm not the
first one, but I know that's not going to change, like ‘Okay, we say it, and tomorrow
there's a spell-checking algorithm that's going to be in there.’ But I think it would be
something...
Max expressed his view that more comprehensive changes would be even tougher due to the size
of the appraisal program. “It's very difficult on the scale that you're talking about with the
Service, to try to continually evolve a tool because of the time, resources.” Pat expressed similar
misgivings, saying, “Well, I don't believe that the staff could do it all. Let's put it that way. I
think that the volume [of work] is so oppressive that they could not do it and do it well.”
Other participants expressed a lack of confidence in their coworkers’ collective abilities
to do their jobs. One said, for example, that she wanted to “push off” some of her employees
because they are unable to adapt to modern work demands:
They were hired for an industrial-based Service – every task was individual. They would
catch microfiches – that’s what they were hired to do – catch a sheet of plastic out of a
machine and put it in an envelope. But now we're asking that same person 20 years later
to read and process rules for multiple Service components that are situationally
dependent…. All of the ‘if-then, and-ors,’ etc. have complicated the process, so that the
quality of employee must be higher…
Pat also expressed a lack of confidence in some HR staff members, suggesting their performance
was less expertise-driven and more idiosyncratic:
I'd be okay with getting some of the variability as long as I had some degree of, I don't
know what I'd call it, some degree of assurance – that's a good word – some degree of
assurance that the people performing that task actually understood what they were talking
about.
These responses, while not indicative of a debilitating need, suggest that an increase in
HR staff collective efficacy would be beneficial. Without it, the staff’s belief that planning and
implementing appraisal program changes is difficult could impact their overall ability to manage,
evaluate, and improve the program. This is because low collective efficacy makes individuals
less likely to choose difficult tasks (Pajares, 2006), resulting in a negative efficacy spiral that reduces the
active choice, persistence, and mental effort – i.e. motivation – needed to achieve desired
outcomes.
Goal orientation. This study assumed that the HR staff requires a mastery goal
orientation to successfully accomplish all appraisal program management actions. A mastery
orientation is less maladaptive, more improvement-focused, and more likely to engender active
goal pursuit, persistence, and the development of new knowledge (Rueda, 2011). Additionally,
studies have found that the benefits of creating a mastery goal orientation are greatest when a
task is complex and requires problem solving (Kaplan & Maehr, 2006). While the evidence is
unclear as to an approach or avoidance orientation, it is clear that a performance orientation is present
and that a mastery orientation is desired.
Perhaps the most persuasive evidence of a performance orientation is that the majority of
appraisal program changes described by the HR staff had an external referent, and extrinsic
factors are consistent with a performance orientation (Rueda, 2011). Three participants, for
example, offered that the last significant changes to the appraisal program took place several
years ago in response to a highly-publicized crisis, the details of which cannot be presented here
without compromising the identity of the study setting. Alex recalled the event, saying it caused
the senior leadership to direct the HR staff to action. “If I recall, we received a directive from
the Service Secretary that we had to look at the appraisal program holistically and attempt to
bring more accountability into the program.” Sam had a similar recollection from his first days
on the HR staff, saying, “The first thing the Service did to me when I showed up was, I got a
tasker from the Service Secretary and…I had to answer eight questions about what was wrong
with the appraisal program…”
Pat said that this was the way appraisal program changes normally occurred. “I think that
my opinion…my opinion is that those types of things are triggered – they aren't catastrophic –
but by serious events. It forces you to reflect about what you have and where you're going.”
When asked whether she minded this type of top-down decision-making approach given her
team’s policy responsibilities, Reese replied, “No, that ain't a challenge. I look at that as, okay,
that’s just the way we do business.” These responses detail an extrinsically motivated program,
with the HR staff working for the Service Secretary’s approval, characteristic of a performance
goal orientation.
As for when the next internally directed appraisal program changes might occur, Kris
said, “...if you look at how many OPAs (Officer Performance Appraisals) we've had, it's not that
often. You don't want to change it every time a good idea comes up.” Pat answered similarly:
I think the wrong way to do it would be to say, ‘Hey, every 10 years we're going to have
a new one…. I go back to things like the Constitution of the United States. When it was
written, it was pretty damned good. Because of the way it was done, it has lasted 200
years. There are certain things that when you do them right, they last a long time. I don't
think that there should be some set timing on it, but I do think that we need to coalesce on
some, what I would call bad indicators.
Pat’s answer, while incorporating a desire to engage in evidence-based management, has the
characteristics of what Senge (1990) calls adaptive learning or coping, whereas a mastery goal
orientation is more likely to yield generative learning, innovation, and improvement relying upon
self-referenced standards (Pintrich, 2003), i.e. actively considering how much better something
could be tomorrow than it is today, even if it is already good.
Additional insights into the HR staff’s goal orientation were gained by examining the
way in which participants described their work roles. While interrogating the response data with
analytic tools (Corbin & Strauss, 2008), the researcher noted that nine of the 11 participants
claimed no responsibility for appraisal program policy conceptualization. For example, one
participant said, “I’m the overall caretaker of the system.” Another described her job as
responsible for “…receipt and processing of all appraisals.” A third was responsible for “…the
adjudication of [selection] boards, derogatory information.” A fourth said his job was
“…maintaining and overseeing the policy that's written within Service regulations concerning
the officer appraisal program.” Still another said, “We utilize the officer appraisal report as a
measure of performance for every officer…” As a last example, one participant described
conducting “suitability screening.” These verbs – “receive,” “process,” “adjudicate,”
“maintain,” “oversee,” “utilize,” and “screen” – describe transactional practices. Verbs such as
“plan,” “create,” “evaluate,” “improve,” “review,” “analyze,” or “assess,” which typify both a
mastery approach and the conceptual thinking required for program management, were not used.
As for the two participants who asserted a policy conceptualization role when asked
about their work responsibilities, both work in the HR staff headquarters in the Pentagon. One
stated that they were in charge of officer promotion and career policy, but not appraisal program
policy. The other responded that, “…we’re responsible for…military personnel policy across the
Department that deals with a vast majority, but not all, human resource policies…. appraisal
policy actually is [at] Camp Smith.” The HR staff participants in the AMO at Camp Smith,
meanwhile, acknowledged an appraisal policy role, but explained that it was largely
administrative, responding to direction from the Pentagon. As an appraisal policy section
employee put it, “We develop policy based on their [Service X headquarters] desires and intent.”
Active goal pursuit for the purpose of progress or improvement is another feature of a
mastery approach (Rueda, 2011). Participants were asked what they would change about the
appraisal program if they had the authority to do so. The question was asked near the end of
each interview, by which time each staff member had suggested several improvement ideas.
Referring back to Table 3 in the knowledge findings, recall that every participant demonstrated
sound knowledge of at least one program shortcoming they wanted to see addressed from among
18 problem areas. Despite possessing this knowledge, when asked a question with an active goal
pursuit component, eight of 11 staff members responded with proxies for little-to-no action or
the status quo (“nothing,” “tweaks,” “I don’t know,” etc.). Some responses were brief, with little
elaboration. “Just some minor things,” said Riley. When prompted as to whether he had shared
any improvement ideas with his superiors, he continued, “I haven't received say, a request for my
input as far as anything that may have been identified in the program, as far as changing the
program itself.” Lee replied, “From a policy standpoint, as far as appraisal is concerned, I have
to be honest with you, I really don’t have much to add.” Two others responded with what they
would not change. Jamie, for example, said, “So, I have to go back to the same – the form itself,
very, very little change.” Max answered, “Maybe you don't change the appraisal program itself,
what you do is you change the training required…” These responses show little evidence of an
intention to drive improvement, a key mastery orientation component (Yough & Anderman,
2006). When asked to identify improvement areas, the HR staff were able to do so. When asked what
improvements they would actively pursue if given the authority to do so, these senior HR
professionals became more noncommittal.
Again, while the evidence is unclear as to approach or avoidance orientations, it is clear that a
mastery orientation is lacking and needed. Most appraisal program changes described by the HR
staff had an external referent and were executed to gain approval from outside the staff. There
was no evidence of internally goal directed behavior for the purpose of learning, innovation, and
improvement, features of a mastery goal orientation (Pintrich, 2003; Rueda, 2011), whereas there
was some evidence that the staff was waiting for events, direction, or indicators to “trigger”
future appraisal program refinements, which is inconsistent with a mastery goal orientation.
These findings are perhaps unsurprising given the HR staff’s organizational context.
First, it must be remembered that 10 of the 11 interview participants were currently serving or
veteran career military personnel. As a result, they have spent most of their adult lives working
in an organization that fosters a performance goal orientation by ordinally ranking
servicemembers against their peers and teammates. Second, the geographic dispersal and
bifurcation of the HR staff is perhaps a contributing factor as well. It may have created a
dynamic in which each component – either the headquarters HR staff at the Pentagon or the
AMO support staff at Camp Smith – views the other as the change agent for the appraisal
program. This suggests an accountability and policy work process gap, which is further
examined in the organization findings section which follows.
Organizational Findings
As with knowledge and motivation, an improved understanding of organizational
performance gaps can be gained through an examination of institutional work processes, policies,
and material resources (Clark & Estes, 2008). Accordingly, research question 3 asked, “What is
the interaction between Service X culture and context and HR staff knowledge and motivation as
it pertains to officer appraisal program management, evaluation, and improvement?” In response
to this question, the study found that the Service X’s hierarchical policy decision-making model,
as well as its professional development and personnel assignment practices, have collectively
created barriers to effective appraisal program management, evaluation, and improvement.
Policy decision rights. This study assumed that Service X must value the input of the
HR staff, to include granting it appropriate authorities and decision rights for the appraisal
program. Without this type of program accountability in place, the HR staff is held responsible
for outcomes over which it has insufficient influence. The study found that Service X officer
appraisal program policy relies upon a hierarchical decision-making model, with most appraisal
program policy changes originating with the Service Secretary (the Secretary) and Chief of
Service Operations (CSO, a pseudonym), Service X’s senior military officer.
While the HR staff’s mission statement claims responsibility for all Service X HR
policies, interviews revealed that in the context of officer appraisal program policy,
“responsible” means policy application, implementation, or enforcement, not policy origination
or conceptualization. While the staff has decision rights when it comes to some administrative
processing decisions, changes in the philosophy or instrumentation of the appraisal program
must be approved by the Secretary. Evidence for this finding was found in responses to the
question, “To your knowledge, which leader or organization has the authority to direct any
officer appraisal program changes?” Eight staff members immediately replied, “the Secretary,”
while three replied, “the CHRO.” When prompted to describe the policy decision process,
however, those three quickly amended their answer to the Secretary or CSO. As Pat said:
It’s the HR person that's actually running those types of things, the CHRO would be who
is empowered to make the decision. However, if you want to survive for very long, you
will check with the head of the profession [the CSO] and perhaps even the Secretary
because of the political fallout that comes with congressionals and all sorts of other things
which ultimately follow these kinds of changes…
Riley agreed:
Really, the CSO and the Secretary, informing them that something has been
identified and how he [the CHRO] would like to address it. And of course, you know,
they may have some additional information they may want to provide to, say, ‘let's look
and see how to include maybe a different aspect into that particular form.’
Max suggested that the Secretary and CSO would be consulted even for simple changes to an
appraisal form:
I think even something as innocuous as changing the block check [language] from ‘retain
as O6’ to some other term, even that small of a change would not be done in a
vacuum, that would be done in consultation with Service senior leadership.
Reese described an actual instance of the above. After developing an appraisal form with new
forced distribution percentages, the HR staff presented it to the CSO for concurrence:
I had it broken down to 49% [for the] O3 report, O4 [was] 49%, O5 [was] 33%,
O6 [was] 33%. We brief it. He [the CSO] goes, ‘No, let's do O5 below 49%,’
and then he grabbed the O6 report and he started breaking the boxes.
These responses indicate that the Service Secretary and CSO are often involved in the minutiae
of appraisal program policy changes. They also suggest that HR expertise may at times not be
valued by the Service’s senior leaders. This has potential implications for HR staff motivation,
as a belief in the primacy of experience over subject-matter expertise can discourage employees
from acquiring the knowledge required to effectively perform their work roles (Morrison &
Milliken, 2000). This approach to decision-making is also indicative of a bureaucratic rather
than professional accountability system. These tend to focus upon decision efficiency rather than
decision quality (Burke, 2004).
Additional evidence showed that the Service’s two senior leaders not only approve all
appraisal policy changes but are most often the source of them as well. When asked which
appraisal policy decisions had originated with the HR staff, only one participant was able to
recall an example, a small change to instrumentation. Sam explained that, “Sometimes we get
the ideas from the Secretary down. He'll say, ‘Hey, I want you guys to look at this.’” Reese, one
of the longer tenured participants interviewed, said this was the norm and that change requests
usually came from the top:
It usually comes down from the Service leadership saying, we want to add this to the
program…. They come to us, and I'll give you an example. A previous CSO looked at it
[the program] and said, ‘We're going to change the appraisal form. What do we need to
do? How do we need to change? One, we think it's outdated,’ this and that...
This centerpiece role of the Service Secretariat in the full range of appraisal policy or process
decisions is an example of what Lewis (2011) describes as espoused theories versus theories in
use. The HR staff is theoretically, but not actually, responsible for the appraisal policy work
process, while the top-down policy work process, driven by the Secretary and the CSO, has
become part of the Service’s cultural model – assumed, invisible, and hard to change.
Figure 3 depicts the espoused (A) versus actual (B) appraisal policy work processes as
described by the Global HR staff mission statement and interview participants, respectively.
Panel B depicts the impact of two decisions: the withholding of policy decision rights by the
Service Secretariat, which appears to function as the appraisal program thought leader; and the
delegation of appraisal policy management to the day-to-day program managers at Camp Smith.
These decisions effectively removed the global HR staff headquarters from the policy work
process without relieving it of responsibility for policy outcomes. This is problematic, because
when work processes are not properly aligned with organizational goals, even sufficiently
motivated and knowledgeable employees will fail to achieve desired outcomes (Clark & Estes,
2008; Rueda, 2011).
Figure 3. Espoused (A) vs. actual (B) appraisal policy work process.
Research also indicates that when employees are not intellectually respected, emotionally
connected, actively involved, and meaningfully empowered, they become less engaged and
motivated, and therefore less likely to realize critical organizational goals (Berbarry &
Malinchak, 2011). The policy work process depicted in Panel B fails to sufficiently empower or
involve the HR workforce, which may be contributing to the HR staff motivation findings
presented earlier in this study. Equally important, the actual policy process disincentivizes the
structured, systematic, and coordinated interaction of knowledge, skills, and motivation needed
to manage, evaluate, and improve the appraisal program. This approach, with its reliance upon
the top leadership’s experience, can instead lead to routine and habitual actions that become
increasingly impulsive and automatic (Rodgers, 2002).
Professional development. This study assumed that Service X must provide the HR staff
with education and training in program management, to include program evaluation practices.
Without this education and training, the HR staff will be unable to effectively manage, evaluate,
and improve the officer appraisal program. The study found that Service X must increase the
professional development of the HR staff, not only to close the program evaluation procedural
knowledge gap, but also to close several declarative knowledge gaps identified during this study.
Interviews confirmed that beyond general military HR training, Service X has not
prepared the HR staff for its specific appraisal program management responsibilities. All 11
participants confirmed that they received no targeted professional development to perform their
appraisal program duties. When asked, for example, what specialized education or training she
had received to gain the knowledge needed to develop appraisal program policies and processes,
Alex replied, “As far as geared specifically towards policy development and policy writing, a lot
of that has been self-taught and OJT.” Jamie’s response was similar. “Zero. All of it was on-the-
job training here.” Blair said, “I have received no specific, or formalized training…. I’ve worked
in and around the officer performance appraisal report and its three different forms over my
career.” Sam described his reservations about becoming a part of the appraisal program team
when a colleague first encouraged him to apply for a position despite his lack of an HR
background or specialized education: “I said, ‘I don't know appraisals. I mean I've got them, I've
written them, but I really don't know the guts.’” He has received no additional specialized HR
education or training since his hiring several years ago.
Some staff members demonstrated concern that relying exclusively upon experiential
knowledge or military HR training to manage or improve the appraisal program might be
contributing to knowledge shortfalls. For example, when asked about the knowledge of his
fellow appraisal program managers, Pat said, “[They’re] not a bunch of IO psychologists, which
is the school piece of that kind of thing. I think we go a little bit wrong when we don't tap into
that kind of expertise or do any other development.” Fran described the military HR training he
had received in the performance appraisal program as perhaps insufficient:
Administratively it shows you the technical aspects of the program, and how to check the
block, and what block means this, and all that. So, you know the forms pretty well, but
you don't know the psychology behind appraising people and interacting with people,
which is probably one of the largest shortcomings of the current program that we have.
People that are asked to document this really don't know how to document. They know
exactly where, and what types of information goes in which block, but how do I make
that a meaningful appraisal that someone could use…?
Jamie offered that even the Service’s HR-trained officers did not appear especially expert in the
appraisal program:
I wouldn't say they'd come in with any type of edge directly. They may indirectly
understand how the process works and have a better understanding of the appraisals and
the history behind them, but no distinct edge. Everything is on-the-job training…
Max assessed the usefulness of his HR military training similarly:
Nothing…if you're talking about beyond what you would get in the HR course or
something like that…there's no formal training that tells you, ‘Hey, I'm an HR
professional, here's the training I've received outside of experience that tells me the
appraisal program is or is not effective.’
Combined with the knowledge and motivation findings of this study, these participant responses
suggest that Service X needs to increase its HR staff development efforts, ensuring employees
possess the factual, conceptual, and procedural knowledge and skills required to effectively
manage, evaluate, and improve the officer appraisal program.
Assignment tenure. This study assumed that Service X should provide sufficient
assignment tenure to HR senior military staff officers responsible for long-term officer appraisal
program policy, management, and improvement. Without a strategic timespan of discretion – the
time needed to make strategic decisions and receive feedback on their impact (Jaques, Gibson &
Isaac, 1978) – HR staff officers are likely to exercise little program decision-making authority,
even if their charter expressly grants it. Additionally, work incentives may skew towards
optimizing status quo practices rather than conducting comprehensive evaluations of existing HR
programs and policies. The study found that Service X should increase the assignment tenure of
senior HR staff military officers responsible for long-term officer appraisal program policy,
management, and improvement.
According to a publicly available study by a military analytical agency, the CHRO
typically leads the HR staff for only two years, and the HR senior military staff experiences 80%
turnover every two years as well. The HR staff’s job tenure figures support this finding. As
Figure 4 shows, these 11 professionals, the most senior appraisal program policy and
management leaders on the global HR staff, have a median time in their current duty positions of
three years. When disaggregated by employee type, however, it becomes clear that the civilian
staff, all but one of whom are day-to-day program managers, are long-tenured, whereas the
military staff are short-tenured. It is the military staff, from the CHRO downward, who exercise
a policy oversight role and who are held accountable for the appraisal program’s strategic
outcomes by Service X’s senior leaders.
Figure 4. HR staff median job tenure (in years).
To assess job tenure impact upon appraisal program management, participant responses
to the following question were considered: “If someone were to say that the rapid rotation of key
leaders into and out of critical HR positions makes it hard to manage major programs such as
officer appraisals – what would your reaction be?” Four participants thought senior HR staff
leader job tenure had a significant impact upon appraisal program management, three from the
HR staff headquarters and one at Camp Smith – two were military and two were civilians.
Conversely, three disagreed, all at Camp Smith – one was military and two were civilians. Of
the remaining participants, three were not asked the question due to participant time constraints,
and one did not respond directly to the question. Among those agreeing, Fran was most
emphatic:
I completely agree. The time needed – I mean, even though the higher you get in the
organization the more you know – if you're paying attention at all and engage with your
institution, you know more, you know enough to be effective beyond the scope of your
narrow focus and whatnot. But in order to apply the knowledge you gain across a broad
spectrum in a specific area, you’ve got to have some time to conclude what works and
what doesn't work, and if you never have that time, you'll never get above a certain level
of competence or effectiveness in your job. So, if it's important for us as an institution to
have a good functioning program like this in Human Resources, then the Human
Resources professionals that are making decisions about that program should have some
kind of tenure, enough of a span so that they understand it completely.
Jamie disagreed, saying, “Yeah, my initial thought is no. I mean I'd be interested, I've not heard
that comment.” Max initially and vehemently disagreed as well:
“…there's the bigger indictment of course of swapping guys out… ‘Hey, all these people,
they're not freaking experts at anything. They're moving so quickly. By the time they
move, they just barely even figured out what they were supposed to be doing.’…. There's
that recurring indictment on what we're doing, but I think on something like the appraisal
program, no, I don't think so. I think the program is probably the exception to that
because all of us…. We all deal with the appraisal program, if not every day, [then] every
other day or every few days because we're all in the program as raters….”
As he thought about it, however, and without further prompting, Max amended his answer:
Now, whether you're rotating, and you just didn't have the energy as the CHRO to change
the appraisal program or something, that could be true. ‘Hey, I was going to, but by the
time I started it, I was 18 months out, then the Chief [of Service Operations] swapped
out, and then we just couldn't get [through] the bureaucracy of trying to do something
like that.’ Yeah, there's probably some truth to that.
Blair also agreed with the premise of the question, saying that making major improvements to
the appraisal program would require the CHRO to have sufficient time in position:
I would agree. At the bare minimum, it's probably a three-year undertaking…. And that’s
when optimal – you and I come up with a great idea today, and three years from now,
we’ve got all the electronic permissions, and the forms uploaded, and ready to execute,
and…we rotate [HR staff leaders] a lot quicker than that.
These responses initially caused the researcher to question whether this influence was
present, as no overwhelming consensus had emerged among the HR staff. Upon closer
examination, however, it became clear that responses generally sorted by employee type, work
role, tenure, and responsibilities. Most senior military HR leaders, those subject to rapid job
rotation, agreed that it negatively impacts program management, as did two civilians in the
global HR staff headquarters, those charged with strategic management of the appraisal program.
Meanwhile, the long-tenured civilians at Camp Smith, with day-to-day program management
responsibilities, did not feel affected by senior HR leader churn. This suggests that those most
responsible for appraisal program strategic management and policy oversight agree that
increasing their job tenure would help them to improve appraisal program management.
Synthesis of Findings
The conceptual theory which guided this study was that Service X’s HR staff might be
less than completely satisfied with current officer appraisal program management practices.
That theory was corroborated, as the staff identified several shortcomings in the current appraisal
program that they would like to see addressed. Findings also provided answers to the study’s
research questions. Research question 1 asked, “To what extent is Service X meeting the goal of
being 100% compliant with congressional guidance to modernize officer management and
appraisal practices by June 2022?” The study found that the Service is not currently attempting
to meet this goal due to several knowledge, motivation, and organizational gaps.
Research question 2 asked, “What knowledge and motivation related to the management,
evaluation, and improvement of the officer appraisal program is resident within the Service’s
Global HR staff?” The study found that increases are needed in staff declarative and procedural
knowledge, as these gaps are preventing the conduct of periodic appraisal program evaluations.
This lack of knowledge may also be interacting with HR staff motivation, which would benefit
from efforts to increase collective efficacy and engender a mastery goal orientation.
Lastly, research question 3 asked, “What is the interaction between Service X culture and
context and HR staff knowledge and motivation as it pertains to officer appraisal program
management, evaluation, and improvement?” The study suggests that organizational influences
may be contributing to both knowledge and motivation gaps: Service X’s hierarchical
decision-making could be placing downward pressure on both HR staff knowledge acquisition and
motivation, while insufficient professional development and the rapid duty rotation of key HR
leaders deny staff members the time and knowledge needed to effectively manage, evaluate,
and improve the appraisal program.
Collectively, these KMO influences appear to have engendered a homeostasis in which
steady-state appraisal program management practices tend to persist, regardless of internal
assessments or external oversight requirements. Since 1979, the only substantive changes to the
officer appraisal program have been gradual adjustments to its forms and instrumentation, along
with a shift to centrally driven rather than rating-official-driven forced ranking. As one study
participant stated, “…realistically, the heart and content of it, when we're looking at the program,
the forms may have changed, but the program itself – the ideology and philosophy of what the
program is intended to do – hasn't changed.” Whether or not the appraisal program needs to
change is outside the scope of this study. The modified gap analysis does indicate, however, that
the Service has an incomplete understanding of how the appraisal program is currently
functioning, as a comprehensive program evaluation has not been conducted in many years. For
example, the study found that a large share of the officer workforce is dissatisfied with the
current appraisal program, particularly its accuracy, fairness, and developmental utility.
Additionally, the HR staff expressed dissatisfaction with the efforts and abilities of rating
officials who write appraisal reports, with some suggesting that officer careers are sometimes
harmed as a result.
Solutions and Recommendations
Knowledge Recommendations
Introduction. Per Krathwohl’s framework (2002), the assumed knowledge influences
among the HR staff in this case study were declarative (factual and conceptual) and procedural.
As Table 4 indicates, these were identified as gaps bearing upon the staff’s ability to effectively
manage and continuously improve the officer performance appraisal program. Of the 11
interview participants in this study, for example, none were aware of comprehensive survey data
exploring officer workforce attitudes about the appraisal program, factual evidence which could
improve program management. Most possessed a limited knowledge of performance appraisal
concepts, to include the biases and heuristics which can reduce forced ranking performance
appraisal validity and reliability. Lastly, none were familiar with the full range of undesirable
and counterproductive workforce behaviors which may result from this appraisal practice.
Without this factual and conceptual knowledge, the HR staff cannot construct appropriate
performance measures to identify or correct such undesirable effects within the program should
they occur (Clark & Estes, 2008).
Procedural knowledge in program evaluation techniques grounded in disciplined inquiry
was also lacking across study participants. Without this knowledge, the HR staff cannot conduct
periodic, data driven officer appraisal program evaluations, making it less likely that the program
will achieve its implicit and explicit goals (Marsh & Farrell, 2015). Consistent with the above,
none of the study participants could identify exactly when the last comprehensive evaluation of
the officer performance appraisal program had occurred, what approach was used (e.g.
objectives-, management-, consumer-, or expertise-oriented), when the next one was scheduled,
or what the critical program success measures should be. This suggests a need for increased
procedural knowledge (Rueda, 2011). The lack of procedural knowledge has perhaps the
greatest impact upon appraisal program management, as closing this knowledge gap first would
likely help the HR staff to identify and close declarative knowledge gaps as well. Table 4,
below, contains context-specific recommendations to close these knowledge gaps based upon
theoretical principles.
Table 4

Summary of Knowledge Influences and Recommendations

Each entry lists the assumed knowledge influence ((F) factual; (C) conceptual; (P) procedural), the supporting principle(s) and citation, and the context-specific recommendation.

(F) The HR staff requires knowledge of the attitudes of other key program stakeholders towards the officer appraisal program.
Principle(s): Organizations require knowledge of specific details to be acquainted with and solve problems in a discipline (Krathwohl, 2002). Designing and creating materials that are relevant and useful to learners and connected to real-world tasks can increase expectancies for success (Pintrich, 2003).
Recommendation: Have Service X’s behavioral research agency administer annual personnel management satisfaction surveys to the officer workforce, providing analysis to the HR staff annually.

(C) The HR staff must be familiar with the strengths and weaknesses of forced ranking performance appraisals.
Principle(s): Organizations require knowledge of classifications, categories, concepts, theories, principles, models, and structures in order to integrate discrete content elements to make sense or derive meaning from information (Krathwohl, 2002). New information that is meaningfully connected with prior knowledge is more quickly stored and accurately remembered because it is elaborated with prior learning (Schraw & McCrudden, 2006).
Recommendation: Provide the HR staff with continuing educational and training opportunities to develop requisite subject matter expertise in the theory and practice of performance appraisal program design. Also identify such knowledge as a desired competency among any new senior hires.

(P) The HR staff must understand how, why, and when to employ a disciplined inquiry framework to conduct periodic evaluations of appraisal program efficacy.
Principle(s): Organizations that use disciplined inquiry frameworks to conduct program evaluations enhance their chances of reaching the program’s stated goals (Marsh & Farrell, 2015). Observational learning, or modeling, can increase learning, performance, and self-efficacy (Denler, Wolters, & Benzon, 2009).
Recommendation: Train the HR staff in disciplined inquiry via case studies and modeling so that they can organize, rehearse, and practice what they have learned about program evaluation procedures from exemplar programs.
Factual knowledge – Initiate new appraisal-specific surveys of the officer workforce.
The data showed that the study participants were unaware of comprehensive survey data
exploring officer workforce attitudes about the appraisal program, the details of which could
improve program management. Expectancy value theory, which examines methods for
increasing expectations of success, can help devise a recommendation to close this gap.
According to Pintrich (2003), one way to increase such expectations is to design and create
materials that are relevant and useful to learners and connected to real-world tasks. Because the
existing longitudinal surveys of officer career attitudes were created by another analytical
organization for purposes other than the appraisal program, the HR staff were unaware of them,
and the surveys have limited utility for appraisal program management. The
recommendation therefore is to have the HR staff’s behavioral research agency re-initiate its own
longitudinal surveys of officer careers, with survey instruments focused explicitly upon officer
attitudes towards personnel management and the appraisal program.
The goal of providing this new survey information should be to summarize survey results
and present them annually to the HR staff, with specific factual details aligned to specific
appraisal program goals. This approach will ensure the staff can successfully use the
information, which in turn can help reduce uncertainty about how to meet a work goal (Clark &
Estes, 2008). It will also provide discrete information that can be conceptually organized in
support of a more complex task, such as a comprehensive program evaluation.
Conceptual knowledge – Increase HR staff conceptual knowledge about the
strengths and weaknesses of forced ranking performance appraisals. The data showed that
HR staff study participants lacked comprehensive knowledge about a forced distribution
performance appraisal program, to include its components, theory, design philosophy, purported
benefits, and potential disadvantages. Information processing theory, which describes the role of
cognition, thinking, and memory in learning, can help devise a recommendation to close this gap.
According to Schraw and McCrudden (2006), new information that is meaningfully connected
with prior knowledge is more quickly stored and accurately remembered. Because Service X
implemented forced ranking to reduce rater leniency bias, something the HR staff recognized and
understood, the concept of biases provides a schema that can be elaborated upon to organize new
learning about the potential strengths and weaknesses of forced distribution appraisals. The
recommendation therefore is to provide the HR staff with continuing educational and training
opportunities to develop requisite subject matter expertise in the theory and practice of
performance appraisal program design, making such professional knowledge a
desired competency among new hires as well.
The goal of the described education and training should be to give Service X’s HR
professionals more than an experiential basis for the management of the officer appraisal
program, as experience can be mis-educative and is not primarily cognitive (Rodgers, 2002).
The Model of Domain Learning (MDL) describes the acquisition of subject-matter knowledge as
foundational to moving a person from acclimation to competency and, eventually, to expertise
(Alexander, 2003). MDL identifies domain knowledge as breadth within a general field, whereas
topic knowledge represents an individual’s knowledge about specific domain topics. The model
argues that as depth and breadth of subject-matter knowledge is acquired, familiarity with the
problems and methodologies of the domain create a transcendent ability to engage in problem
finding and knowledge creation, what Senge (1990) might characterize as generative or value-
producing learning. By increasing the HR staff’s domain knowledge of performance appraisal
systems and topic knowledge of forced rankings, the Model of Domain Learning suggests that
HR staff members will not only move towards expertise but will also derive greater benefit from
the procedural knowledge solution described below.
Procedural knowledge – Increase HR staff procedural knowledge of program
evaluative practices incorporating a disciplined inquiry framework. The data showed that
most HR staff study participants lacked procedural knowledge in program evaluation techniques
grounded in a disciplined inquiry framework, be it KMO (Clark & Estes, 2008), DDDM (Marsh
et al., 2006), Creative Problem Solving (Davis, 1998), or any other well-researched and proven
approach. Unlike the factual and conceptual knowledge gap solutions recommended above,
social cognitive theory underpins the recommendation to close this gap. Observational learning,
or modeling, can increase learning, performance, and self-efficacy (Denler, Wolters, & Benzon,
2009). Service X is an organization of over 1.25 million people with hundreds of enterprise
programs, several of which routinely conduct comprehensive program evaluations that could
serve as a model for the officer appraisal program. The recommendation therefore is to provide
the HR staff with training in disciplined inquiry via case studies and modeling so that they can
organize, rehearse, and practice what they have learned about program evaluation from exemplar
programs and managers.
The goals of the described training should be to enhance the HR staff’s ability to engage
in disciplined inquiry, which demands contextual and problem-solving thinking (Rueda, 2011),
and to improve the staff’s ability to establish appraisal program goals that are congruent with
Service X’s professed values, culture, strategic goals, and dynamically changing operating
environment, as organizations that use disciplined inquiry to conduct program evaluations
enhance their chances of success (Marsh & Farrell, 2015). Rousseau (2006) refers to this
approach as evidence-based management (EBMgt), which leads to valid learning and continuous
improvement, an implicit goal in any program.
Motivation Recommendations
Introduction. Per Rueda’s framework (2011), the assumed motivation influences among
the HR staff in this case study were the need for collective efficacy and a mastery goal
orientation. As Table 5 indicates, the study found that the HR staff could benefit from increased
collective efficacy, as well as the development of a mastery goal orientation. The collective
efficacy finding, for example, was supported by the fact that seven of 11 study participants
expressed a lack of confidence in the HR staff’s ability to produce desired appraisal program
outcomes. Three low confidence areas emerged: planning for the future, implementing change,
and employee competency. Without collective efficacy, the HR staff may exhibit lower levels of
strategic thinking, innovation, optimism, and performance (Bandura, 2000).
While the goal orientation findings were less conclusive, the study found some evidence
of a performance orientation, as well as a need for the HR staff to develop a mastery orientation
in its place. Evidence included that most appraisal program changes described by the HR staff
had an external referent and were executed to gain approval from outside the staff, consistent
with a performance orientation (Rueda, 2011). Additionally, there was no evidence of internally
goal directed behavior for the purpose of learning, innovation, and improvement, features of a
mastery goal orientation (Pintrich, 2003), whereas there was some evidence that the staff was
waiting for events, direction, or indicators to “trigger” future appraisal program refinements,
inconsistent with a mastery goal orientation. Without a mastery goal orientation, the HR staff is
less likely to be a learning organization capable of developing new skills focused upon
continuous appraisal program improvement (Pintrich, 2003).
Because collective efficacy and a mastery goal orientation are closely interrelated, it is
difficult to give either a higher priority, and recommendations to engender either have a high
degree of overlap – improvements to one benefit the other. However, as high collective efficacy
is needed to start a positive efficacy spiral, wherein an organization chooses an activity more,
persists more, learns more, and tries harder, increasing HR staff collective efficacy should be
prioritized. Table 5, below, contains context-specific recommendations to close these motivation
gaps based upon theoretical principles.
Table 5

Summary of Motivation Influences and Recommendations

Each entry lists the assumed motivation influence, the supporting principle(s) and citation, and the context-specific recommendation.

Collective Efficacy: The HR staff must feel confident in their collective ability to manage, evaluate, and improve the appraisal program.
Principle(s): Organizations in which individuals possess a shared belief in their collective ability to organize and implement required action demonstrate higher levels of strategic thinking, innovation, optimism, investment, staying power, and performance (Bandura, 2000). Providing opportunities to observe a credible, similar model engaging in behavior that has functional value can increase self- and collective efficacy (Pajares, 2006).
Recommendation: Via site visits and professional exchanges, and using the concept of SMART goalsetting as scaffolding, model enterprise program evaluation and management practices for the HR staff, to include opportunities for goal-directed practice and private feedback.

Goal Orientation – Mastery: The HR staff must demonstrate a mastery approach to officer appraisal program management.
Principle(s): Mastery goals are less maladaptive, orient workers toward learning and understanding, develop new skills, and focus people on self-improvement using self-referenced rather than norm-referenced standards (Pintrich, 2003). Learning tasks that are novel, varied, diverse, interesting, and challenging can help promote a mastery-approach orientation (Yough & Anderman, 2006).
Recommendation: During site visits and professional exchanges designed to increase collective efficacy, use challenging, scenario-driven exercises to ensure knowledge transfer and gradually reduce the need for scaffolding. Follow practical exercises with self-led, self-referenced assessments of group performance.
Increase HR staff collective efficacy via site visits, professional exchanges, and
SMART goalsetting. The data suggested that the HR staff requires a greater shared belief in its
ability to successfully and effectively manage, evaluate, and improve the officer appraisal
program. In this instance, self-efficacy theory can help devise a recommendation to close this
gap. According to Pajares (2006), providing opportunities to observe a credible, similar model
engaging in behavior that has functional value can increase self- and collective efficacy.
Because Service X is a large organization containing several analytical organizations expert in
program evaluation and management, one can readily be identified as an appropriate model. The
recommendation therefore is to use site visits and professional exchanges with one such
organization to model enterprise program evaluation and management practices for the HR staff.
Modeling should incorporate the concept of SMART goalsetting as scaffolding, and it should
also provide opportunities for goal-directed practice and private feedback.
The goal of the described modeling should be to create an emergent, group-level quality,
greater than the sum of the parts, that magnifies the efficacy of the HR staff’s individual
members and increases its ability to gather and assess critical program performance data, as
motivated organizations and individuals expend greater effort to figure out what is happening
around them (Mayer, 2011). Organizations with high collective efficacy also demonstrate higher
levels of strategic thinking, innovation, and optimism, and are therefore more likely to
accomplish team and organizational goals (Bandura, 2000). According to Kaplan and Maehr
(2006), studies show a linkage between low collective efficacy and a performance-avoid goal
orientation, suggesting that actions taken to improve HR staff efficacy can also help foster the
mastery goal orientation recommended below.
Foster a mastery goal orientation in the HR staff with challenging, scenario-driven
exercises. The data showed that the HR staff need to develop a mastery goal orientation to
accomplish all appraisal program management actions, as a performance goal orientation is more
likely to result in homeostasis, that is, the persistence of steady-state practices that fail to meet
organizational goals (Schneider & Guzzo, 1996). Goal orientation theory can help devise a
recommendation to close this gap. According to Yough and Anderman (2006), learning tasks that
are novel, varied, diverse, interesting, and challenging can help promote a mastery goal
orientation. The recommendation therefore is to leverage the same site visits recommended to
build collective efficacy by incorporating the use of challenging, scenario-driven exercises to
ensure knowledge transfer and gradually reduce the need for scaffolding. Exercises should be
followed with self-led, self-referenced assessments of group performance.
The goal of the described learning model should be to create a mastery goal orientation in
place of a performance orientation, as a mastery orientation is less maladaptive, more
improvement-focused, and more likely to engender active goal pursuit, persistence, and the
development of new knowledge (Rueda, 2011). Additionally, studies have found that the
benefits of creating a mastery goal orientation are greatest when a task is complex and requires
problem solving (Kaplan & Maehr, 2006). In the case of the HR staff, therefore, these
theoretical perspectives suggest that the development of a mastery goal orientation will
contribute to improved appraisal program management and evaluation practices.
Organizational Recommendations
Introduction. Per Rueda’s framework (2011), the assumed organizational influences
upon the HR staff in this case study were attributable to aspects of Service X culture: a policy
decision-making model, as well as a work setting in which key HR staff leaders must be afforded
sufficient professional development or assignment tenure. As Table 6 indicates, these were
corroborated as gaps bearing upon the staff’s ability to effectively manage and continuously
improve the officer performance appraisal program. For example, all 11 interview participants
in this study confirmed that they do not possess decision rights for any substantive appraisal
program policy changes, which are either approved by or originate with the Service Secretary
and CSO. Some evidence also suggests that the Service’s most senior leaders periodically
substitute their own judgement for the analysis or recommendations of the HR staff regarding
appraisal program policy matters. Without sufficient appraisal program decision rights and
empowerment, employees are less likely to collaborate and acquire new knowledge, more likely
to suffer from motivation shortfalls, and therefore more likely to engage in automated rather than
value-producing activity (Berbarry & Malinchak, 2011; Rueda, 2011).
Additionally, evidence indicated that the HR staff requires more professional
development to close procedural knowledge gaps, especially the ability to conduct a
comprehensive appraisal program evaluation, as organizations need robust measurement systems
to engage in evidence-based management (Rousseau, 2006). For example, interviews confirmed
that beyond general military HR training, Service X has not prepared the HR staff for its specific
appraisal program management responsibilities, nor do any of them possess civilian higher
education or certifications in HR management. Lastly, evidence revealed that the military HR
staff is very short-tenured relative to its strategic role in appraisal program management.
Without a sufficient time span of discretion (Jaques, Gibson, & Isaac, 1978), senior HR staff are
unlikely to exercise program decision-making authority, even if their charter expressly grants it.
Of these three organizational influences, extending senior military officer tenure in key
HR staff leadership positions should be the most immediate priority, as it is cost-neutral and
relatively simple to implement in the study setting. The next priority would be specialized
education and training in enterprise program management, something that could be implemented
with moderate costs. While moving from hierarchical decision-making to a more pluralistic and
empowering model would produce perhaps the greatest net benefits across all of the KMO
influences confirmed by this study, the researcher’s familiarity with the culture of the
organization suggests that this will require significant time and effort to achieve and should
initially be a lower priority as a result. Table 6, below, contains context-specific
recommendations to close these organizational gaps based upon theoretical principles.
Table 6

Summary of Organizational Influences and Recommendations

Each entry lists the assumed organizational influence, the supporting principle(s) and citation, and the context-specific recommendation.

CM Influence – Policy Decision Making: The organization must value the input of the HR senior staff and employees regarding the appraisal program.
Principle(s): Meaningfully empowered and intellectually respected employees are more emotionally connected, actively involved, and engaged (Berbarry & Malinchak, 2011). Organizational effectiveness increases when leaders trust their teams and grant them accountable autonomy; trust, compassion, stability, and hope are the leader characteristics most valued by followers (Rath & Conchie, 2008).
Recommendation: Grant the HR staff full appraisal program decision rights. Do not intervene unless someone has demonstrated that she or he is not dependable. Set clear organizational goals, then trust the HR staff to manage the appraisal program in support of those goals.

CS Influence 1 – Professional Development: The organization should provide the HR staff with education and training in enterprise program management, particularly program evaluation practices.
Principle(s): Evaluation is one of the most cost-effective activities in performance improvement, because it is the one activity that, if applied correctly, can ensure success (Clark & Estes, 2008). Organizational effectiveness increases when leaders monitor and evaluate the effectiveness of all aspects of their organization and engage in evidence-based decision-making (Rousseau, 2006).
Recommendation: Direct Service X’s service academy to develop a curriculum in program evaluation tailored to the developmental needs of the HR staff. After validating the program, migrate it to the Service’s HR professional military education center so that HR specialists can receive this procedural training as part of their routine professional development.

CS Influence 2 – Assignment Policy: The organization should provide sufficient assignment tenure to HR military staff officers responsible for long-term officer appraisal program policy, management, and improvement.
Principle(s): Strategic leaders require a sufficient time span of discretion, the time needed to make strategic decisions and receive feedback on their impact (Jaques, Gibson, & Isaac, 1978). Organizational effectiveness increases when leaders ensure that employees have the resources needed to achieve the organization’s goals; ensuring that a staff’s resource needs are being met is correlated with increased outcomes (Waters, Marzano, & McNulty, 2003).
Recommendation: Increase the job tenure of senior HR leaders responsible for appraisal program policy and management so that they can fully evaluate, experiment, innovate, and improve the appraisal program. Provide them with the most critical resource: time to make a difference.
Policy decision-making – grant the HR staff full appraisal program decision rights.
The data suggested that the organization must place greater value on the input of the HR senior
staff and employees regarding the appraisal program, as meaningfully empowered and
intellectually respected employees are more emotionally connected, actively involved, and
engaged (Berbarry & Malinchak, 2011). Leadership theory can help devise a recommendation
to close this gap, as organizational effectiveness increases when leaders trust their team and grant
them accountable autonomy. Rath and Conchie (2008) note that trust, as well as compassion,
stability, and hope, are the leader characteristics most valued by followers. The recommendation
therefore is for the Service Secretary and CSO to grant the HR staff full appraisal program
decision rights, intervening only when someone has demonstrated that she or he is not
dependable. The Service’s most senior leaders should set clear organizational goals, then trust
the HR staff to manage the appraisal program in support of those goals.
The goal of granting these policy decision rights should be to reinsert the global HR staff
headquarters into the appraisal policy work process to restore its accountability for program
outcomes. This will realign the actual policy work process with the espoused one, increasing the
likelihood that desired outcomes will be achieved (Clark & Estes, 2008). Additionally,
pluralistic decision-making presents other advantages over the current hierarchical model. First,
it combats the development of routine and habitual action that becomes common and automatic
when leaders rely more heavily upon their own experience than the expertise and
recommendations of their teams (Rodgers, 2002). Second, it encourages two-way
communication, preventing organizational silence – an avoidance of offering improvements
because employees anticipate that their ideas will be rejected or resented (Morrison & Milliken,
2000). Third, it helps integrate thinking and acting at all levels (Senge, 1990). In the case of the
HR staff, therefore, these theoretical perspectives suggest that restoring the global HR staff’s
decision rights over officer appraisal program policy will allow them to better manage, evaluate,
and improve the program.
Professional development – provide the HR staff with education and training in
enterprise program evaluation practices. The data suggested that the organization should
provide the HR staff with education and training in program evaluation, a cost-effective
performance-improvement activity that can ensure success if applied correctly (Clark & Estes,
2008). Leadership theory can help devise a recommendation to close this gap. According to
Rousseau (2006), organizational effectiveness increases when leaders monitor and evaluate the
effectiveness of all aspects of their organization and engage in evidence-based decision-making.
Because Service X has its own service academy, a four-year degree granting undergraduate
university, it has significant intellectual capital available to provide targeted education and
training to the HR staff in program evaluation. The recommendation therefore is to have Service
X’s service academy develop a curriculum in program evaluation tailored to the developmental
needs of the HR staff. After validating the curriculum, teach the course and evaluate results.
When results warrant, migrate the course to the Service’s HR professional military education
center so that HR specialists can receive this procedural training as part of their routine
professional development.
The goal of the course is to provide HR professionals with the ability to master
disciplined inquiry and develop the procedural knowledge and data literacy necessary to sound
program evaluation. An additional benefit is that sound data can help the HR staff to move
Service X away from hierarchical decision-making and towards data-driven decision-making.
According to Marsh and Farrell (2015), organizations that collect and use data to adjust work
processes are more likely to achieve desired outcomes. Rousseau (2006) argues that an additional
benefit of translating evidence into organizational practices is that program managers become
more expert and professional in their decision-making. In the case of the HR staff, therefore, an
increase in this procedural knowledge appears likely to improve appraisal program management,
particularly if coupled with the next recommendation, which is increased assignment tenure for
key HR staff leaders.
Assignment policy – provide sufficient tenure to HR military staff officers
responsible for the officer appraisal program. The data showed that the organization should
provide sufficient assignment tenure to HR staff officers responsible for long-term officer
appraisal program policy, management, and improvement. Leadership theory can help devise a
recommendation to close this gap. According to Waters, Marzano, and McNulty (2003),
organizational effectiveness increases when leaders ensure that employees have the resources
needed to achieve the organization’s goals. Ensuring that a staff’s resource needs are being met is
correlated with increased outcomes, and time is an extremely critical resource. The
recommendation therefore is to increase the job tenure of senior HR leaders responsible for
appraisal program policy and management so that they can fully evaluate, experiment, innovate,
and improve the appraisal program.
The goal of the policy is to provide senior HR military staff leaders with a sufficient time
span of discretion – the time needed to make strategic decisions and receive feedback on their
impact (Jaques, Gibson & Isaac, 1978). Several positive developments can accrue from this
change. According to Argyris (1977), this additional time for leader introspection improves
learning and productivity. Time allows leaders to experiment, fail, and experiment again. It
makes them less risk averse and more willing to question the underlying assumptions of current
appraisal program practices. Senge (1990) describes this willingness to look beyond the status
quo as generative learning, an impulse to dramatically expand organizational capability rather
than merely optimize existing practices. In the context of the officer appraisal program, which
has not fundamentally changed for at least 40 years, these theories suggest that increased tenure
for key HR staff leaders might increase their capacity to lead program change as circumstances
warrant.
Limitations and Delimitations
The limitations of this study included: a highly compressed time schedule; geographic
dispersion of interview participants; the truthfulness, knowledge, and experience of participants;
the non-generalizability of the study due to its research design; and the potential for other
interpretations of the qualitative findings. Delimitations included: a phenomenological research
design relying upon a literature review, documents, and a small number of interview
participants; a single round of Phase 2 interviews; the decision to conduct an evaluative study;
and the employment of the Clark and Estes KMO inquiry framework.
Conclusion
The purpose of this case study was to provide an in-depth evaluation of the factors
influencing Service X’s global HR staff and its ability to manage, evaluate, and improve a forced
ranking performance appraisal program in response to both internal assessments and external
oversight. Using a disciplined inquiry framework to conduct a modified gap analysis (Clark &
Estes, 2008), the study found that increases are needed in staff factual, conceptual, and
procedural knowledge, as these gaps are preventing the conduct of periodic appraisal program
evaluations, without which the HR staff cannot make sound program management decisions.
This lack of knowledge may be reducing HR staff motivation levels, which are perhaps affected by
organizational influences as well, such as hierarchical decision making, insufficient professional
development, and the rapid duty rotation of key HR military leaders. The interaction of these
KMO influences may in turn be responsible for a homeostasis in which steady-state appraisal
program management practices seem to persist, whether change is needed or not.
These findings do not suggest an intractable problem, however. The HR staff
participants in this study were professional, intelligent, capable, and committed, possessing
knowledge that can be elaborated upon with continuing education, training, and modeling. Staff
collective efficacy and a mastery goal orientation can be cultivated via novel learning
experiences, professional exchanges, goal-directed practice, and private feedback. This should
be accompanied by improved organizational support, to include increased policy decision rights,
higher education and training, job tenure, and the creation of an appraisal program balanced
scorecard to enable evidence-based program management.
If the recommended solutions in this study to empower, develop, and appropriately
resource the HR staff are thoughtfully implemented, a virtuous cycle of knowledge, skill, and
motivation growth can be initiated. This will close the HR staff’s performance gaps and increase
their ability to effectively manage the officer appraisal program, particularly in the areas of
sound program evaluation and evidence-based management. It is then that Service X will be
able to fully evaluate the efficacy of its officer appraisal program and decide what, if anything,
needs to change to better support both officers and organizational goals. As Clark and Estes
(2008) argue, evaluation is one of the most cost-effective activities in performance improvement
because it is the one activity that, if applied correctly, can ensure success. Lastly, the Service’s
HR staff should be able to transfer some of the lessons and recommendations from this study to
other program areas within its portfolio, and the study may prove instructive for other military
HR departments and public-sector organizations as well, which the literature indicates often
share similar challenges.
References
Alexander, P. (2003). The development of expertise: The journey from acclimation to
proficiency. Educational Researcher, 32(8), 10–14. doi:10.3102/0013189X032008010
Alper, S., Tjosvold, D., & Law, K. (2000). Conflict management, efficacy, and performance in
organizational teams. Personnel Psychology, 53(3), 625–642.
Argyris, C. (1977). Double loop learning in organizations. Harvard Business Review, 55, 115–
125.
Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in
Psychological Science, 9(3), 75–78.
Barankay, I. (2012). Rank incentives: Evidence from a randomized workplace experiment. Paper
presented at USC FBE Applied Economics Workshop, Los Angeles, CA.
Berbarry, D., & Malinchak, A. (2011). Connected and engaged: The value of government
learning. The Public Manager, Fall, 55–59.
Berger, J., Harbring, C., & Sliwka, D. (2013). Performance appraisals and the impact of forced
distribution - an experimental investigation. Management Science, 59(1), 54–68.
Biesta, G. J. (2004). Education, accountability, and the ethical demand: Can the democratic
potential of accountability be regained? Educational Theory, 54(3), 233–250.
Blader, S., Gartenberg, C. M., & Prat, A. (2016). The contingent effect of management practices.
Columbia Business School Research Paper No. 15-48.
Blume, B.D., Baldwin, T.T., & Rubin, R.S. (2009). Reactions to different types of forced
distribution performance evaluation systems. Journal of Business Psychology, 24(1), 77–
91.
Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education: An introduction to
theories and methods (5th ed.). Boston, MA: Allyn and Bacon.
Bowen, G.A. (2009). Document analysis as a qualitative research method. Qualitative Research
Journal 9(2), 27–40. doi:10.3316/QRJ0902027
Buckingham, M. & Goodall, A. (2015). Reinventing performance management. Harvard
Business Review, 44–50. Retrieved from https://hbr.org/2015/04/reinventing-
performance-management.
Burke, J. C. (2004). Achieving accountability in higher education: Balancing public, academic,
and market demands. In J. C. Burke (Ed.), The many faces of accountability (pp. 1–24).
San Francisco: Jossey-Bass.
Cappelli, P. & Conyon, M. (2016). What do performance appraisals do? National Bureau of
Economic Research. Working Paper. Retrieved from
http://www.nber.org/papers/w22400.
Cappelli, P., & Tavis, A. (2016). The performance management revolution. Harvard Business
Review, 94(10), 58–67.
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Charlotte, NC: Information Age Publishing, Inc.
Cleve, J., & Lämmel, U. (2016). Data Mining. Berlin: De Gruyter Oldenbourg.
doi:10.1515/9783110456776
Corbin, J., & Strauss, A. (2008). Techniques and procedures for developing grounded theory
(3rd ed.). Los Angeles: SAGE.
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods
approaches. Thousand Oaks, CA: SAGE.
Denler, H., Wolters, C., & Benzon, M. (2006). Social cognitive theory. Retrieved from
http://www.education.com/reference/article/social-cognitive-theory/.
Elmore, R. F. (2002). Bridging the gap between standards and achievement. Washington, DC:
Albert Shanker Institute. Retrieved from
http://www.shankerinstitute.org/resource/bridging-gap-between-standards-and-
achievement.
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist,
36(1), 45–56.
Garvin, D. A., Edmondson, A. C., & Gino, F. (2008). Is yours a learning organization? Harvard
Business Review, 86(3), 109–116.
Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Boston, MA:
Pearson.
Grote, D. (2005). Forced ranking: Making performance management work. Harvard Business
School Press.
Hazels, B. & Sasse, C.M. (2008). Forced ranking: A review. S.A.M. Advanced Management
Journal, 73(2), 35–39.
Hentschke, G. C., & Wohlstetter, P. (2004). Cracking the code of accountability. University of
Southern California Urban Education, Spring/Summer, 17–19.
Hoffman, B., Lance, C. E., Bynum, B., & Gentry, W. A. (2010). Rater source effects are alive
and well after all. Personnel Psychology, 63(1), 119–151.
Hogan, R.L. (2007). The historical development of program evaluation: Exploring the past and
present. Online Journal of Workforce Education and Development, 2(4), 1–14.
Jaques, E., Gibson, R. O., & Isaac, D. J. (1978). Levels of abstraction in logic and human action:
A theory of discontinuity in the structure of mathematical logic, psychological behaviour,
and social organization. London: Heinemann.
Johnson, R. B., & Christensen, L. B. (2015). Educational research: Quantitative,
qualitative, and mixed approaches (5th ed.). Thousand Oaks: SAGE.
Kaplan, A., & Maehr, M. L. (2007). The contributions and prospects of goal orientation theory.
Educational Psychology Review, 19(2), 141–184.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training
evaluation. Alexandria, VA: ATD Press.
Kirschner, P. A. (2002). Cognitive Load Theory: Implications of Cognitive Load Theory on the
design of learning. Learning and Instruction, 12(1), 1–10.
Klores, M. S. (1966). Rater bias in forced-distribution performance ratings. Personnel
Psychology, 19(4), 411–421.
Krathwohl, D. R. (2002). A revision of Bloom’s Taxonomy: An overview. Theory into Practice,
41(4), 212–218.
Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied
research (4th ed.). Thousand Oaks, CA: SAGE.
Lewis, L. K. (2011). Organizational change: Creating change through strategic communication
(Vol. 4). New York, NY: John Wiley & Sons.
Marsh, J. A., & Farrell, C. C. (2015). How leaders can support teachers with data-driven decision
making: A framework for understanding capacity building. Educational Management
Administration Leadership, 43(2), 269–289. doi:10.1177/1741143214537229
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making
in education (RAND Education occasional paper). Santa Monica, CA: RAND
Corporation. Retrieved from http://www.rand.org/pubs/occasional_papers/OP170.html.
Maxwell, J. A. (2013). Qualitative research design: An interactive approach. (3rd ed.).
Thousand Oaks: SAGE.
Mayer, R. E. (2011). Applying the science of learning. Boston, MA: Pearson Education.
McEwan, E. K., & McEwan, P. J. (2003). Making sense of research. Thousand Oaks,
CA: SAGE.
Merriam, S. B., & Tisdell, E. (2016). Qualitative research: A guide to design and
implementation (4th ed.). San Francisco: Jossey-Bass.
Morrison, E., & Milliken, F. (2000). Organizational silence: A barrier to change and
development in a pluralistic world. Academy of Management Review, 25(4), 706–725.
Office of the Deputy Assistant Secretary of Defense - Military and Family Policy (2015).
Demographics profile of the military community.
Pajares, F. (2006). Self-efficacy theory. Retrieved from
http://www.education.com/reference/article/self-efficacy-theory/.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA:
SAGE Publications.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in
learning and teaching contexts. Journal of Educational Psychology, 95, 667–686.
doi:10.1037/0022-0663.95.4.667
Rath, T., & Conchie, B. (2008). Strengths based leadership: Great leaders, teams, and why
people follow. New York, NY: Gallup Press.
Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective thinking.
Teachers College Record, 104(4), 842–866.
Romzek, B. S., & Dubnick, M. J. (1987). Accountability in the public sector: Lessons from the
Challenger tragedy. Public Administration Review, 227–238.
Rousseau, D. M. (2006). Is there such a thing as “evidence-based management”? The Academy
of Management Review, 31(2), 256–269.
Rubin, H. J., & Rubin, I. S. (2012). Qualitative interviewing (3rd ed.). Thousand Oaks, CA:
SAGE.
Rueda, R. (2011). The 3 dimensions of improving student performance. New York: Teachers
College Press.
Salkind, N. J. (2014). Statistics for people who (think they) hate statistics (5th ed.). Los Angeles:
SAGE.
Schein, E. H. (2004). The concept of organizational culture: Why bother? In E. H. Schein, (Ed.),
Organizational culture and leadership. San Francisco, CA: Jossey Bass.
Schleicher, D.J., Bull, R.A., & Green, S.G. (2009). Rater reactions to forced ranking rating
systems. Journal of Management 35(4), 899–927.
Schneider, B., Brief, A., & Guzzo, R. (1996). Creating a climate and culture for sustainable
organizational change. Organizational Dynamics, Spring, 7–19.
Schraw, G., & McCrudden, M. (2006). Information processing theory. Retrieved from
http://www.education.com/reference/article/information-processing-theory/.
Scullen, S. E., Bergey, P. K., & Aiman-Smith, L. (2005). Forced ranking rating systems and
the improvement of workforce potential: A baseline simulation. Personnel Psychology,
58(1), 1–32.
Scullen, S. E., Mount, M. K., & Goff, M. (2000). Understanding the latent structure of job
performance ratings. The Journal of Applied Psychology, 85(6), 956–970.
Senge, P. (1990). The leader’s new work: Building learning organizations. Sloan Management
Review, 32(1), 7–23.
U.S. Congress, House of Representatives, Committee on Armed Services, Subcommittee on
Oversight & Investigations (2010). Another crossroads? Professional military education
two decades after the Goldwater-Nichols Act and the Skelton Panel. Retrieved from
http://www.dtic.mil/dtic/tr/fulltext/u2/a520452.pdf.
U.S. Congress, H.R.5515 (2018). John S. McCain National Defense Authorization Act for Fiscal
Year 2019. Retrieved from https://www.congress.gov/bill/115th-congress/house-
bill/5515/text.
U.S. Senate, 115th Cong. (2018). Officer personnel management and the Defense Officer
Personnel Management Act of 1980: Hearing before the Subcommittee on Personnel of
the Committee on Armed Services. Retrieved from https://www.armed-
services.senate.gov/hearings/18-01-24-officer-personnel-management-and-the-defense-
officer-personnel-management-act-of-1980.
Waters, T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years of
research tells us about the effect of leadership on student achievement. A Working Paper.
S.l.: Distributed by ERIC Clearinghouse.
Weiss, R. S. (1994). Learning from strangers: The art and method of qualitative interview
studies. New York, NY: The Free Press.
Yough, M., & Anderman, E. (2006). Goal orientation theory. Retrieved from http://www.
education.com/reference/article/goal-orientation-theory/.
Appendix A: Participating Stakeholders and Interview Sampling Criteria
Participating Stakeholders
The stakeholder of focus for this qualitative study was Service X’s Global HR staff of
several hundred military and civilian employees located at the Pentagon. More specifically, the
population of interest for this study had three segments. The first segment was a section of the
HR staff referred to as the “Officer Section” or “OS” (a pseudonym). This section of
approximately 25 people is responsible for all commissioned officer human resource plans and
policies, to include performance appraisals, promotions, and selections for competitive
assignments or developmental opportunities. The second segment in the population of interest
was the “Appraisals Management Office” or “AMO” (a pseudonym) located at “Camp Smith,
Illinois” (a pseudonym), consisting of approximately 165 military and civilian employees. The
last segment in the population of interest was the most senior leadership of the HR staff: the
CHRO - a military officer, and the Assistant CHRO - a career civil servant. Both are located at
the Pentagon.
While the Pentagon-based OS is responsible for officer appraisal program policy and
plans, day-to-day management of the program has been delegated to the AMO at Camp Smith.
As a result, sampling from both segments helped create what Merriam and Tisdell (2016)
describe as a more information-rich case. For example, the bifurcation of officer appraisal program
responsibilities between the OS and AMO led to significantly different interview responses from
these sample segments in some areas, while simultaneously driving findings more quickly
towards a point of saturation or redundancy in others (Merriam & Tisdell, 2016).
Interview Participant Sampling Criteria and Rationale
Criterion 1. Members of the OS or AMO with policy or program management
responsibilities and at least 1 year in position. Research question 2 asked: What knowledge
and motivation related to the planning and management of the officer appraisal program is
resident within the Service’s HR staff? Administrative support personnel (for example,
secretarial, budgetary, or IT employees) cannot address assumed knowledge and motivation
influences upon appraisal program evaluation, whereas employees with work roles directly
related to officer appraisal program management can (for example, management and program
analysts).
Criterion 2. Members of the OS or AMO with policy or program leadership
responsibilities and at least 1 year in position. Research question 3 asked: What is the
interaction between Service X culture and context and HR staff knowledge and motivation as it
pertains to officer appraisal program management, evaluation, and improvement? Those leaders
responsible for officer appraisal policy formulation or program outcomes are best postured to
observe the ways in which Service X’s hierarchical decision-making, staff education and
training, and rapid leadership churn impact HR staff satisfaction with the officer performance
appraisal program.
Criterion 3. Current leader team of the Global HR staff, or former leader team
members with recent and relevant experience (within the last two years) and at least 1 year
in position. Research question 1 asked: To what extent is Service X meeting the goal of being
100% compliant with congressional guidance to modernize officer management and appraisal
practices by June 2022? Only HR staff leaders with policy or program authority can address
how or whether Service X intends to comply with congressional guidance and are best postured
to identify any KMO barriers to compliance.
Interview Participant Sampling (Recruitment) Strategy and Rationale
As the sampling criteria and rationale above suggest, the interview phase of this study
employed non-random, purposive sampling (Johnson & Christensen, 2015), with standardized,
open-ended interviews of participants possessing characteristics I believed were most likely to
meet the study’s research purpose. Interviews employed two-tier sampling (Merriam & Tisdell,
2016), in that I first bounded the system being studied to those within the HR staff with
immediate and formal responsibilities for officer performance appraisal policy formulation and
program management (Tier 1). As for the interview sample size within this bounded system
(Tier 2), I identified 11 participants within the three segments of the focus population. This
number was appropriate in that it was manageable within time and resource constraints, provided
reasonable coverage of the problem of practice given the purposes of this study, and included
individuals whose unique experiences and knowledge were more likely to answer the research
questions. A point of saturation/redundancy was reached with this participant pool, allowing me
to terminate data collection (Merriam & Tisdell, 2016).
Appendix B: Interview Protocol
I really appreciate your taking the time to speak with me today about the officer
performance appraisal program. I know how busy you are and how important your work is for
the organization, so thanks for making time for me. This study seeks to better understand not
only how Service X conducts officer appraisals, but how it improves the appraisal program in
response to both internal and external feedback. The aim of the research is to evaluate whether
or not Service X’s HR department is 100% satisfied with current appraisal program planning and
management. Our interview today will last approximately 60-90 minutes. First, please review
this hard copy of the information sheet I provided previously by email, especially the part about
recording this interview [provide information sheet, wait while respondent reads]. The purpose
of recording is simply to ensure that I accurately capture your responses and that I don’t
accidentally misrepresent or misinterpret anything that you share with me. I will safeguard this
recording at all times and share it with no one. I will also remove it from this device later today
and transfer it to a password protected thumb drive maintained in a locked file cabinet in a
locked office when not in use. Lastly, both during and after this study I will at all times protect
your confidentiality and anonymity, and I will not use your name or any personally identifying
information (PII) in my data analysis or dissertation. If you don’t consent to being recorded, I
will take notes during the interview instead. Do I have your consent to record our interview
today? [Wait for response]. Understood. Before we begin, do you have any other questions?
[Wait for response]. Okay, let’s get started [Begin interview].
[Conclude interview as follows] Thanks again for taking the time to meet with me today.
Your answers have provided some really useful information that will greatly improve the quality
of the research I’m conducting. Do you have any questions? [Wait for response]. Remember,
my phone number and email address are on the study information sheet, so please feel free to
contact me if you have any questions later.
Appendix C: Interview Questions
1. How long have you been in your current job? Background/demographic question.
2. Describe your specific responsibilities in connection to the officer appraisal program (M, O).
Background/demographic question.
3. Now describe any specialized education, training, or experience that Service X has provided
to help you meet those responsibilities (K, M, O). Background/demographic question.
4. What expertise do your coworkers possess in performance appraisal management (K, M, O)?
Background/demographic question.
5. To your knowledge, which leader or organization has the authority to direct any officer
appraisal program changes (K, O)? Knowledge question.
6. Overall, what is your general opinion of the officer appraisal program (O)? Opinion/value
question.
7. In your opinion, what’s the primary goal of the officer appraisal program? (K)
Opinion/value question.
NOTE: Should the participant respond with an answer different from Service X’s two stated
appraisal program goals (gather officer management information and serve as a centerpiece
professional development tool), use this follow-up prompt/question: 7.a. “How did you come to
that understanding of the program’s goals?”
8. What’s your assessment of the program’s ability to meet those goals (K, M)? Opinion/value
question.
NOTE: If the participant was asked question 7.a., preface this question with the following
statement: “In multiple regulations, the Service says that the purpose of the appraisal program is
to gather officer management information and to provide critical developmental feedback to each
officer…”
9. Overall, how satisfied are you with the planning and management of the officer appraisal
program? (O) Feeling question.
10. When was the last time Service X conducted a full evaluation of the officer appraisal
program (K)? Knowledge question.
11. What specific measurements of appraisal program performance are used to evaluate the
program (K)? Knowledge question.
12. Using a recent example, describe your role in an appraisal program change decision (K, M,
O). Experience question.
13. To the best of your knowledge, how do most officers feel about the appraisal program (K,
M)? Feeling question.
14. If someone were to say that those officers who are dissatisfied with the appraisal program are
just the low performers – what would your reaction be? Devil’s advocate question.
15. What are the biggest strengths of the current appraisal program (K, O)? Opinion/value
question.
16. What are the biggest weaknesses of the current appraisal program (K, O)? Opinion/value
question.
17. If someone were to say that the rapid rotation of key leaders into and out of critical HR
positions makes it hard to manage major programs such as officer appraisals – what would
your reaction be (O)? Devil’s advocate question.
18. Suppose you had the power to change the appraisal program - what would you make
different (K, M)? Hypothetical question.
19. I know this is an important program with a lot of complex parts, and I want to make sure that
we’ve covered those things which you feel are most important – is there anything else about
the program that you’d like to share with me (K, M, O)?
20. Lastly, I have a few quick background questions: Background/demographic questions.
a. What is your highest level of education?
b. How old are you?
c. Are you a veteran (for civilians only)?
d. What is your duty title?
e. What is your current grade?
NOTE: Three follow-up interview questions were sent to all interview participants two months
after the initial interviews were conducted. The questions were as follows:
21. How confident are you in the ability of the HR staff to manage the Officer Appraisal
Program (M)? Opinion/value question.
22. What contributes to (or detracts from) the confidence level you described above (M)?
Opinion/value question.
23. When you identify a problem or shortcoming in the Officer Appraisal Program, what is your
approach to addressing it (M)? Experience question.
Appendix D: Information Sheet for Study Participants
Officer Performance Appraisal Program Management
You are invited to participate in a research study conducted by Michael Colarusso at the
University of Southern California. Please read through this form and ask any questions you might
have before deciding whether or not you want to participate.
PURPOSE OF THE STUDY
This research study aims to better understand how your military service manages its officer
performance appraisal program.
PARTICIPANT INVOLVEMENT
If you agree to take part in this study, you will participate in one 60-90-minute recorded interview
at a time and private location of your choice. You do not have to answer any questions that you
don’t want to. If you do not want to be recorded, handwritten notes will be taken. You may also
withdraw from study participation at any time with no adverse consequences.
PAYMENT/COMPENSATION FOR PARTICIPATION
You will not be compensated for participating in this study.
CONFIDENTIALITY
Any identifiable information obtained in connection with this study will remain confidential. After
audio recording transcription, your responses will be coded with a false name (pseudonym) and
maintained separately. At the completion of the study, all audio recordings and direct identifiers
will be destroyed, and the de-identified data may be used for future research studies. If you do not
want your data used in future studies, you should not participate. The members of the research
team and the University of Southern California’s Human Subjects Protection Program (HSPP)
may access the data. The HSPP reviews and monitors research studies to protect the rights and
welfare of research subjects. If the results of the research are published or discussed in
conferences, no identifiable information will be used.
INVESTIGATOR CONTACT INFORMATION
If you have any questions or concerns about the research, please feel free to contact Michael
Colarusso at mcolarus@usc.edu or call xxx-xxx-xxxx.
IRB CONTACT INFORMATION
University of Southern California Institutional Review Board, 1640 Marengo Street, Suite 700,
Los Angeles, CA 90033-9269. Phone (323) 442-0114 or email irb@usc.edu.
Appendix E: Document Review Protocol
In support of this study’s research questions and conceptual framework, the following
public documents were reviewed:
Two official Service X HR websites with associated official bulletins, policy memoranda
and communications to the officer workforce;
Two specific regulations governing officer performance appraisals;
Two documents from the congressional record;
Two Service X historical research reports on the officer appraisal program;
Two specific HR staff training presentations on appraisal program management; and
Six years of summary reports of longitudinal officer surveys.
As the primary instrument of data collection, the researcher analyzed documents for evidence
consistent with this study’s research questions and conceptual framework. The document review
was aligned primarily, although not exclusively, with the organizational influences being
explored by research question 3.
Appendix F: Credibility and Trustworthiness
Applied research is undertaken to improve practice (Merriam & Tisdell, 2016). This can
only happen, however, if policymakers find the research both credible and trustworthy. In
particular, researcher bias can invalidate or skew research findings. An effective way to combat
bias during research is through rigorous thinking and disciplined subjectivity, which can help
maintain the integrity of methods, analysis, and conclusions (Merriam & Tisdell, 2016).
In this study, my previous work experiences with the U.S. Armed Forces made me
potentially susceptible to bias. Additionally, the conceptual theory guiding this study was
developed by integrating those experiences with thought experiments and a comprehensive
literature review. As Bogdan and Biklen (2007) point out, speculation is not bias unless the
researcher fails to adjust his or her conceptual framework and research questions as needed in
response to new information. I remained mindful of this as my initial findings came in, and I
made conceptual adjustments as required during data analysis.
While disclosing reflexivity and practicing disciplined subjectivity are important, I also
incorporated concrete steps in all phases of the study to increase credibility and trustworthiness.
The research design, for example, triangulated data between the literature review, document
analysis, and interviews to improve research quality (McEwan & McEwan, 2003). Additionally,
during data collection I presented emergent findings to some interview participants for
respondent validation. This was not only to rule out any misinterpretations of their comments,
but also to screen the findings for bias or misunderstanding on my part (Merriam & Tisdell,
2016). I also had two PhD economists, experts in disciplined inquiry, review my early findings
for plausibility. Perhaps most importantly, however, I documented my research choices and
conclusions by creating a running “audit trail” or process memo log (Merriam & Tisdell, 2016).
The log drew upon all observational, reflective, and analytic memos and allowed me to
constantly check my practices and conclusions for bias. Lastly, I employed the sound practices
described in the ethics section of this study, as ethical behavior was central to ensuring
credibility and trustworthiness (Merriam & Tisdell, 2016).
Appendix G: Ethics
As I conducted this study, I maintained the highest standards of ethical practice, which
also helped increase the study’s validity and reliability (Merriam & Tisdell, 2016). I first
submitted my study proposal to the University of Southern California Institutional Review Board
(IRB). I met all IRB requirements for safeguarding the rights and welfare of human subjects
during my research, to include completing CITI human subjects training and complying with all
IRB standards and practices required by both USC and the organization of study. Upon
receiving IRB approval, I began recruiting study participants and conducting interviews. As
I did so, several key ethical principles guided my conduct (Glesne, 2011), and I applied them as
follows. First, my sampling and recruitment plan called for volunteer participants only, and I did
not offer material rewards to coerce or entice potential participants. Second, all participants were
provided with study information sheets prior to the conduct of any interviews, a best practice in
human subjects research (Glesne, 2011). These assured participants of confidentiality and
anonymity, and also emphasized their right to halt their study participation at any time with no
penalty (Krueger & Casey, 2009). Third, I allowed participants to make their own choices by
providing them with honest explanations of my research purpose, methods, and the intended
audience for my findings. As I did so, none elected to withdraw from the study, but there would
have been no penalties for any who had chosen to do so. Fourth, prior to recording interviews, I
sought and received verbal permission to do so from every participant. I safeguarded these
recordings at all times and shared them with no one. Lastly, I ensured all information collected
during my research was securely stored to protect the confidentiality of all findings. Recordings
were immediately transferred to a password protected thumb drive and maintained in a locked
file cabinet in a locked office when not in use. Researcher notes were secured in the same
location. Upon the completion of interviews, I thanked all participants by phone or in person, as
well as in writing and via email.
I had no supervisory relationship to any of the study participants, although I was known
to some of them due to past work I had conducted for their organization. I continuously
clarified for participants that this study was completely independent of any other agency or
purpose. Given my external positionality and these assurances, I believe participants did not feel
pressured to participate in this study. By being empathetic, honest, and by honoring all promises
throughout the process (Rubin & Rubin, 2012), I hope (and believe) that participants found the
experience both personally and professionally rewarding.
Appendix H: Integrated Implementation and Evaluation Plan
Implementation and Evaluation Framework
The New World Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2016) provides the
framework for this implementation and evaluation plan. This four-level training model has its
origins in work first published by Don Kirkpatrick in 1954 (Hogan, 2007). A particular strength
of the model is its focus upon behavioral outcomes among participants, particularly the
demonstrated application of new knowledge in the workplace that makes organizations more
effective and productive (Kirkpatrick & Kirkpatrick, 2016). The Kirkpatrick Model’s four levels
help measure the two major goals of any training intervention: its quality and effectiveness. The
levels are: 1 – reaction to the training; 2 – learning, e.g., knowledge and skill acquisition; 3 – behavioral changes in the workplace; and 4 – results, as in improved work outcomes. The
“New World” variant of the Kirkpatrick Model inverts the levels, however. Using a backwards
planning approach, it helps organizations and trainers to devise effective training interventions
by first focusing upon Level 4, the desired end state of the training. If properly executed, the
New World Model is a flexible and effective training evaluation approach, and the separate but
mutually supporting levels make for ease of implementation (Kirkpatrick & Kirkpatrick, 2016).
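The backward-planning sequence described above can be made concrete with a short sketch. The four level names come from the Kirkpatrick model; the code structure itself is purely illustrative, not part of the model.

```python
# Minimal sketch of the New World Kirkpatrick Model's backward-planning order.
# Level names follow the model; everything else is illustrative.
KIRKPATRICK_LEVELS = {
    1: "Reaction",   # engagement, relevance, satisfaction
    2: "Learning",   # knowledge and skill acquisition
    3: "Behavior",   # application of learning on the job
    4: "Results",    # improved organizational outcomes
}

def backward_planning_order():
    """Return the levels in New World planning order: start from the
    desired Level 4 results and work back to Level 1 reactions."""
    return [KIRKPATRICK_LEVELS[n] for n in sorted(KIRKPATRICK_LEVELS, reverse=True)]

print(backward_planning_order())  # ['Results', 'Behavior', 'Learning', 'Reaction']
```

Planning in this reversed order forces each training element to be justified by the result it ultimately serves.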
Organizational Purpose, Needs, and Expectations
Service X’s mission is to provide dominance across the full range of military operations
and potential conflict types to ensure victory against any U.S. opponent. To do so, it must be led
by commissioned officers who can create the most ready and effective military force possible.
According to the Service, this requires an officer management system that makes sound
promotion and assignment decisions while simultaneously supporting the professional
development of each officer. Because it is concerned that those outcomes are not being met, in
2018 Congress asked Service X to review and modernize its officer management practices. As
forced ranking officer performance appraisals are the centerpiece program upon which all other
officer management actions rely, reviewing and modernizing the appraisal program is implicit in
Congress’ mandate. Due to the size and scope of the officer appraisal program, an aspirational
organizational goal for Service X is to implement a modernization plan by June of 2022. While
a complete organizational effort is required to accomplish this, this implementation and
evaluation plan focuses upon Service X’s Global HR staff, the critical stakeholder for the
appraisal program. To support the organizational goal, an appropriately time-bound stakeholder
goal is for the HR staff to be 100% satisfied with its own officer appraisal program planning,
evaluation, and management by June 2020. This milestone would posture the HR staff to
effectively lead any follow-on appraisal program review and modernization effort.
Given the above, this case study employed a modified gap analysis to identify any
knowledge, motivation, or organizational barriers to attaining the stakeholder goal. Based upon
the study’s findings, a key recommendation is to create a professional development (PD)
program that: increases HR staff knowledge in appraisal theory and practice, teaches SMART
goalsetting, models program evaluation processes, and employs challenging, scenario-driven
exercises for goal-directed practice and self-led assessment purposes. Additional
recommendations are to: increase the assignment tenure of senior HR military staff leaders so
that they can manage the appraisal program more strategically; create and administer new annual
officer workforce satisfaction surveys to gather critical program stakeholder feedback; and
realign appraisal policy work processes to increase the HR staff’s autonomy and decision rights
for the appraisal program.
Level 4: Results and Leading Indicators
Table H1, below, identifies the Level 4 internal and external outcomes expected if the study recommendations to improve officer appraisal program management practices are implemented. Metrics and associated methods of accomplishment are included for each outcome.
Table H1

Implementation Outcomes, Metrics, and Methods

External Outcomes

1. The Service X officer workforce is more satisfied with the officer appraisal program.
Metrics: Annual increases in the percentage of officers agreeing that the appraisal program is accurate, fair, and developmentally useful.
Methods: Service X behavioral research agency develops and administers annual officer career satisfaction surveys including items on appraisals. The HR staff uses this stakeholder data to inform appraisal program improvement actions.

2. Rating officials submit more timely and complete appraisal reports.
Metrics: Reduction in the percentage of late appraisal reports / reports returned for administrative errors during processing.
Methods: Service X increases the scope, frequency, and efficacy of rating official training in officer appraisal report preparation.

Internal Outcomes

1. The HR staff employs evidence-based appraisal program management practices.
Metrics: The number of annual changes in program policy/processing with an evidentiary foundation derived from measurable outcome goals.
Methods: The HR staff prepares an annual appraisal program evaluation report with quantifiable goals, supporting metrics, analysis, and change recommendations.

2. Appraisal program managers on the HR staff show increases in required declarative and procedural knowledge.
Metrics: Staff assessments by trainers administering a professional development curriculum targeted to appraisal program management.
Methods: Service X's service academy faculty develop and administer a professional development (PD) program targeting the HR staff's declarative and procedural knowledge gaps in appraisal program management.

3. Senior military HR staff members gain additional job tenure to engage in strategic appraisal program management.
Metrics: Measurable increases in median job tenure for senior military HR staff.
Methods: Service X changes reassignment policies for senior military HR staff members. The Service Secretary reaffirms the CHRO's officer appraisal program policy authorities.

4. The HR staff is more satisfied with its own appraisal program management.
Metrics: Year-over-year decreases in the number of program management concerns expressed by the HR staff.
Methods: Service X behavioral research agency administers internal survey to appraisal program managers in HR staff.
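As an illustration of how the first external outcome metric (annual increases in the percentage of officers agreeing with survey items) might be computed, consider the sketch below. The response values are invented for the example; the 4-point scale follows the blended evaluation instrument in Appendix I, where 1 = Strongly Agree and 4 = Strongly Disagree.

```python
def percent_agreeing(responses):
    """Share of responses that agree (as a percentage).
    Scale per Appendix I: 1 = Strongly Agree, 2 = Agree,
    3 = Disagree, 4 = Strongly Disagree."""
    agree = sum(1 for r in responses if r in (1, 2))
    return 100.0 * agree / len(responses)

# Hypothetical two years of responses to an item such as
# "the appraisal program is fair" (values invented).
year_1 = [1, 2, 2, 3, 4, 2, 3, 2, 1, 3]
year_2 = [1, 2, 2, 2, 3, 2, 2, 2, 1, 3]

change = percent_agreeing(year_2) - percent_agreeing(year_1)
print(f"Year-over-year change: {change:+.1f} percentage points")
```

A positive change on items like this, tracked annually, would be one piece of evidence that the external outcome is being met.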
Level 3: Behavior
Critical behaviors. Level 3 measures the degree to which participants are applying the
knowledge and skills acquired during training to their daily work. To realize desired Level 4
results, critical workplace behaviors must be identified, supported, and monitored (Kirkpatrick &
Kirkpatrick, 2016). In this case, critical HR staff behaviors include an ongoing commitment to
closing required knowledge and motivation gaps, the routine practice of evidence-based
decision-making, and a commitment to sustaining new knowledge and skill gains through
professional forums and workshops. The supporting metrics, methods, and timing for each
critical behavior are presented in Table H2, below.
Table H2

HR Staff Critical Behaviors, Metrics, Methods, and Timing for Evaluation

1. HR staff leaders commit to closing knowledge and skill gaps acting as barriers to sound officer appraisal program management.
Metrics: The number of staff members who are made available and then certified in appraisal program management practices by a targeted professional development program.
Methods: Service X's service academy faculty develop and administer a professional development (PD) program targeting the HR staff's declarative and procedural knowledge gaps in appraisal program management.
Timing: Upon implementation launch, then annually or upon the assignment of new personnel.

2. The HR staff regularly makes evidence-based program management decisions.
Metrics: The number of appraisal program input, output, process, and participant satisfaction indicators updated routinely by the HR staff.
Methods: The HR staff creates and maintains an officer appraisal program evaluation balanced scorecard.
Timing: Quarterly to annually, depending upon metric.

3. The HR staff shows a commitment to sustaining increases in required knowledge and motivation.
Metrics: The number of appraisal program workshops held, attendance levels at each, and the number of evidence-based program recommendations generated per workshop.
Methods: The appraisal program policy and management teams hold joint program management workshops to share information, make recommendations, and increase domain knowledge.
Timing: Quarterly.
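Critical behavior 2 calls for the HR staff to maintain a program evaluation balanced scorecard. A minimal sketch of what such a scorecard might look like as a data structure follows; all indicator names, targets, and observed values are hypothetical, and a real scorecard would cover input, output, process, and participant satisfaction indicators.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """One scorecard indicator with a target and the latest observed value."""
    name: str
    category: str          # input, output, process, or participant satisfaction
    target: float
    latest: Optional[float] = None

    def on_track(self) -> Optional[bool]:
        """True/False against target; None until data is recorded."""
        if self.latest is None:
            return None
        return self.latest >= self.target

# Hypothetical scorecard entries.
scorecard = [
    Indicator("Reports submitted on time (%)", "process", target=95.0),
    Indicator("Officers rating program as fair (%)", "participant satisfaction", target=70.0),
]

# Quarterly update with invented observations.
scorecard[0].latest = 91.5
scorecard[1].latest = 73.0

for ind in scorecard:
    status = {True: "on track", False: "behind", None: "no data"}[ind.on_track()]
    print(f"{ind.name}: {status}")
```

Routinely updating indicators like these, and acting on the ones flagged as behind, is what would make the staff's decisions evidence-based rather than anecdotal.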
Required drivers. More than just observing the application of critical behaviors,
required drivers are processes and systems that reinforce, monitor, encourage, and reward
performance of critical behaviors on the job (Kirkpatrick & Kirkpatrick, 2016). In this case, the
HR staff members responsible for managing, evaluating, and improving the officer appraisal
program will need support from the CHRO. Additionally, Service X’s senior leadership must
provide organizational support to the CHRO and HR staff as they work to close knowledge and
motivation gaps and improve appraisal program management practices. Table H3, below,
outlines required driver methods, timing, and the HR staff critical behaviors which they support.
Table H3

Required Drivers to Support HR Staff Critical Behaviors

Reinforcing
R1. Follow-up professional development instructional modules for the HR staff.
Timing: Annually. Critical behavior(s) supported: 3.
R2. Disciplined inquiry / program evaluation work review checklists / job-aids / toolkits, given to HR staff as part of a program management "toolkit" during certification training.
Timing: Ongoing. Critical behavior(s) supported: 1, 2, 3.
R3. Modeling by program evaluation experts during site visits to Service X analytical agency.
Timing: Within 90 days of implementation, then bi-annually. Critical behavior(s) supported: 1, 3.

Encouraging
E1. Appraisal program decision rights and program management autonomy granted to the CHRO and HR staff.
Timing: Upon CHRO request, then ongoing. Critical behavior(s) supported: 2.
E2. Site visits by the CHRO to meet with appraisal program managers and solicit their recommendations for program management improvements.
Timing: Quarterly. Critical behavior(s) supported: 2, 3.

Rewarding
R4. Site visits by the CHRO to meet with the appraisal program management team and recognize the employee with the most innovative, conceptually sound, and evidence-based appraisal program recommendation.
Timing: Quarterly. Critical behavior(s) supported: 2, 3.

Monitoring
M1. Observation by training professional during HR staff's internal appraisal program workshops and professional forums.
Timing: Quarterly. Critical behavior(s) supported: 1, 3.
M2. The appraisal program balanced scorecard with dashboard is used to inform the Service Secretary of appraisal program trends and planned improvements.
Timing: Annually. Critical behavior(s) supported: 1, 2.
Organizational support. To increase the likelihood of the HR staff achieving the stated
goal, Service X must support and resource all aspects of this implementation plan, to include
directing and funding the creation of a professional development program and workshops,
providing the HR staff with full appraisal policy decision rights, and extending the job tenure of
senior HR staff members so that they can evaluate, experiment, innovate, and improve the
appraisal program. This support will yield a significant return on investment, as improvements
in officer appraisal program management will enable Service X to comply with congressional
oversight requests and increase workforce satisfaction levels. This should in turn contribute to
increased personnel readiness and mission accomplishment.
Level 2: Learning
Learning goals. Upon completion of a professional development program targeting
declarative, procedural, and conceptual knowledge gaps, all HR staff with officer appraisal
program policy or management responsibilities will be able to:
1. Identify and recall all data sources pertinent to appraisal program evaluation (factual
knowledge).
2. Understand performance appraisal theories, to include the relative strengths and
shortcomings of each (conceptual knowledge).
3. Understand program management theories (conceptual knowledge), as well as how to
apply them in a specific setting (procedural knowledge).
4. Apply SMART goalsetting in program management (mastery approach).
5. Work collaboratively to manage, evaluate, and improve the officer performance
appraisal program (collective-efficacy).
6. Lead and direct the appraisal program policy and management process (cultural
model).
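Learning goal 4 above concerns SMART goal-setting. The time-bound stakeholder goal stated earlier (100% HR staff satisfaction with its own appraisal program management by June 2020, supporting the June 2022 modernization milestone) can be used to illustrate the mnemonic's components; the container below is purely an illustrative sketch, and the field wording is paraphrased from this study rather than any official Service X document.

```python
from dataclasses import dataclass
import datetime

@dataclass
class SmartGoal:
    """Illustrative container whose fields follow the SMART mnemonic."""
    specific: str              # what exactly will be accomplished
    measurable: str            # the metric that shows progress
    achievable: str            # why the goal is realistic
    relevant: str              # link to the organizational goal
    time_bound: datetime.date  # deadline

# The stakeholder goal from this study, expressed in SMART terms.
goal = SmartGoal(
    specific="HR staff satisfied with its own appraisal program management",
    measurable="100% of staff report satisfaction on an internal survey",
    achievable="The PD program closes identified knowledge and motivation gaps",
    relevant="Postures the staff to lead the 2022 modernization effort",
    time_bound=datetime.date(2020, 6, 30),
)
print(goal.time_bound.isoformat())  # 2020-06-30
```

Writing goals in this explicit form makes it easy to check, during the quarterly workshops, whether each element is still being met.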
Program. The learning goals listed above can be incorporated into a professional
development program taught by credentialed higher education professionals. Service X has its
own service academy, a four-year degree granting undergraduate university, the academic
departments of which possess expertise in performance appraisal theories and concepts, as well
as program evaluation practices. At the direction of Service X, the school will develop a
curriculum in program evaluation tailored to the developmental needs of the senior HR staff,
those with appraisal program policy and management duties. The learning goals described above
will generally align to one of two overarching learning objectives taught over a two-week, in-
residence program. The first week will provide HR staff professionals with deep declarative
knowledge regarding performance management theories, as well as the relative strengths and
weaknesses of different appraisal techniques. The second week will focus more upon procedural
knowledge, specifically the process of program inquiry and the necessary supporting data
literacy skills.
Because secondary goals of the program will be to increase HR staff collective efficacy
and to engender a mastery goal orientation, instruction in SMART goalsetting will be provided
and the instructional pedagogy will make extensive use of modeling, scenario-driven teamwork
exercises, goal directed practice, and private feedback. The program will also provide each
student with a library of relevant professional readings, as well as a take-away “toolkit”
consisting of job-aids designed to help them apply what they have learned when they return to
work. Once the program faculty have validated the program, the course package will migrate to
Service X’s HR education center so that it can be offered on a routine basis to senior HR
professionals prior to assuming appraisal program management duties.
Evaluation of the components of learning. In order to develop new work behaviors
that increase job performance and yield desired results, it will be critical to use formative and
summative methods to evaluate the degree to which the HR staff professional development
program participants have learned. There are five components of learning which can be
evaluated: knowledge, skills, attitude, confidence, and commitment (Kirkpatrick & Kirkpatrick,
2016). Table H4, below, identifies the methods and activities which will be used to evaluate
each.
Table H4

Evaluation of the Components of Learning for the Program

Declarative Knowledge "I know it."
Quizzes at the conclusion of each instructional unit (formative). Timing: During initial professional development (PD) program.
Group discussions at post-instructional appraisal program workshops (summative). Timing: Quarterly.

Procedural Skills "I can do it right now."
Scenario-driven exercises in components of program evaluation (formative). Timing: During initial PD program.
Team-developed appraisal program evaluation action plan (summative). Timing: Presented at conclusion of initial PD program.
Teach-backs at post-instructional appraisal program workshops (summative). Timing: Quarterly.

Attitude "I believe this is worthwhile."
Faculty observations (formative). Timing: During initial PD program.
Program surveys asking participants to assess the value of the instruction to their work (formative and summative). Timing: Pre- and immediately post-PD program, with a follow-up administered 90 days after instruction.

Confidence "I think I can do it on the job."
Facilitated group discussions following modeling and exercises (formative). Timing: During initial PD program.
Self-reflection journaling (summative). Timing: Completed during PD program.

Commitment "I will do it on the job."
Individual appraisal program action plan (summative). Timing: Presented to PD program faculty member in one-on-one feedback session at conclusion of initial PD program.
Staff member feedback, supervisor evaluation (summative). Timing: Quarterly, during performance reviews.
Level 1: Reaction
According to Kirkpatrick and Kirkpatrick (2016), Level 1 measures participants’
engagement and customer satisfaction, as well as how relevant they find the instruction to their
actual work. Because Level 1 is often not evaluated completely or effectively, Table H5 outlines the methods and tools which will be used to measure participant reactions to the program.
Table H5

Components to Measure HR Staff Reactions to the Program

Engagement
Dedicated observer. Timing: During all units of initial PD program.
Students ask meaningful questions. Timing: During all units of initial PD program.
Student academic performance evaluations. Timing: During all units of initial PD program.
Survey. Timing: At PD program conclusion.

Relevance
Pulse-checking. Timing: Continuously during each PD program unit of instruction.
Survey. Timing: At PD program conclusion.

Participant Satisfaction
Pulse-checking. Timing: Continuously during each PD program unit of instruction.
Survey. Timing: At PD program conclusion.
Evaluation Instruments
Per the New World evaluation model (Kirkpatrick & Kirkpatrick, 2016), instructional
outcomes will be evaluated with multiple instruments administered at different times. The initial
instrument will be administered at the conclusion of the HR staff professional development
program. A second instrument will be administered between 60 and 90 days following
instruction. The sections below summarize the purpose and approach of each evaluative
instrument.
Immediately following the program implementation. The HR staff professional
development program will conclude with an initial blended evaluation survey instrument focused
upon evaluating Level 1 and Level 2 outcomes – reactions and learning. The survey will employ
a mix of four-point Likert scale and open response items to evaluate participant engagement,
relevance, and customer satisfaction, as well as gains in declarative knowledge, procedural skills,
attitude, confidence, and commitment. Learner-centered items will be employed to capture the
perspectives of participants rather than instructors (Kirkpatrick & Kirkpatrick, 2016). The full
survey instrument is in Appendix I.
Delayed for a period after the program implementation. Between 60 and 90 days following instruction, a subsequent blended evaluation survey instrument will be administered to PD program graduates. This delay allows sufficient time for critical behaviors and required drivers to take effect in the workplace (Kirkpatrick & Kirkpatrick, 2016), so that participants can apply new knowledge and skills and assess the results.
and 2 outcomes but will focus more heavily upon Levels 3 (behaviors) and 4 (outcomes). As in
the initial post-program survey, the subsequent survey will also employ learner-centered items.
The full survey instrument is in Appendix J.
Data Analysis and Reporting
The sequencing of key evaluative milestones in this plan is designed so that after 90 days,
the CHRO will be presented with an initial post-PD program report that can be followed with
quarterly updates. That report would be presented at the initial quarterly appraisal program
workshop conducted by the HR staff, during which the staff will discuss how its program
management procedures are being adjusted to incorporate knowledge and skill gained during the
PD program to achieve desired outcomes. This forum will allow the CHRO to demonstrate his
commitment to sustaining and employing that new knowledge and skills, to review data from the
appraisal program’s newly created balanced scorecard, and to encourage and reward employees
for their efforts to improve the appraisal program.
The post-PD report will incorporate findings from the PD program’s instructors, who will
share data evaluating the components of learning from the program, such as student performance
statistics, workgroup products created, and observations. Findings from the two post-PD
program blended evaluation instruments will also be integrated into the report, presenting the
CHRO with a clear, evidence-supported assessment of the impact of training upon his
workforce’s newly enhanced capabilities. Findings will be presented in visual dashboard style.
Figure H1, below, provides a sample of the data presentation approach the visual dashboard will
employ.
Figure H1. Sample visual dashboard data presentation approach.
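To suggest what feeding such a dashboard might involve, the sketch below aggregates blended evaluation survey items into per-item summaries (mean score and percent agreement). Item text and response values are invented for the example; the scale follows the instruments in Appendices I and J, where 1 = Strongly Agree, so a lower mean is better.

```python
from statistics import mean

# Hypothetical Likert responses (1 = Strongly Agree ... 4 = Strongly Disagree)
# keyed by survey item, as they might arrive from the evaluation instrument.
responses = {
    "Applies PD learning at work": [1, 2, 2, 1, 3],
    "Has needed workplace support": [2, 2, 3, 3, 2],
}

def dashboard_rows(responses):
    """Summarize each item as (item, mean score, % agreeing) for display.
    Lower mean is better on this reversed scale."""
    rows = []
    for item, scores in responses.items():
        pct_agree = 100.0 * sum(s in (1, 2) for s in scores) / len(scores)
        rows.append((item, round(mean(scores), 2), pct_agree))
    return rows

for item, avg, pct in dashboard_rows(responses):
    print(f"{item}: mean {avg}, {pct:.0f}% agree")
```

Summaries like these, refreshed quarterly, would give the CHRO a compact view of whether post-training behaviors are taking hold.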
Summary
This plan employs the Kirkpatrick New World Model to implement and evaluate a professional development program intended to address the knowledge, motivation, and organizational barriers to effective appraisal program management identified during this study. The New World
Model is advantageous because it is results-focused, and the integration of implementation and
evaluation increases the odds of success. An additional strength of the model is its focus upon
behavioral outcomes among participants, particularly the demonstrated application of new
knowledge in the workplace that makes organizations more effective and productive (Kirkpatrick
& Kirkpatrick, 2016). The plan clearly articulates desired outcomes, as well as the critical
behaviors and required drivers needed to achieve those outcomes. Critical organizational
support is also identified, as are learning goals, program elements, and evaluation components
for all four evaluation levels. If properly executed, this implementation and evaluation plan will
help close the HR staff's performance gaps and increase its ability to effectively manage the
officer appraisal program, particularly in the areas of sound program evaluation and evidence-
based management.
Appendix I: Initial Blended Evaluation Instrument (Levels 1-2)
Please indicate your level of agreement with each statement regarding the HR Staff Professional Development (PD) Program (1 = Strongly Agree, 2 = Agree, 3 = Disagree, 4 = Strongly Disagree):

1. The PD program was interesting.
2. I felt encouraged by my instructor.
3. I will do my job better because of what I've learned in the PD program.
4. I better understand my work role thanks to the PD program.
5. I better understand the theories of performance management because of the PD program.
6. I better understand how to employ sound appraisal program evaluation practices because of the PD program.
7. I believe it will be worthwhile for me to apply what I learned in the PD program.
8. I am committed to applying what I learned in the PD program.
9. I feel confident applying what I learned in the PD program.
10. I would recommend the PD program to my work colleagues. (Yes / No)
11. Which parts of the PD program did you find most valuable? Why?
12. Which parts of the PD program did you find least valuable? Why?
13. How could the PD program be improved?
14. Additional comments:
Appendix J: Subsequent Blended Evaluation Instrument (Levels 1-4)
Please indicate your level of agreement with each of the following (1 = Strongly Agree, 2 = Agree, 3 = Disagree, 4 = Strongly Disagree):

1. I am doing a better job at work because of what I learned in the PD program.
2. At work, I routinely apply what I learned in the PD program.
3. The professional readings and job-aids from the PD program are helping me to apply what I learned.
4. At work, I have the support needed to apply what I learned in the PD program.
5. Since the PD program, my colleagues and I are more confident in performing our appraisal program management duties.
6. Because of the PD program, I am already seeing improvements in our appraisal program management processes.
7. I have shared my new knowledge with work colleagues who did not attend the PD program.
8. I remain committed to applying what I learned in the PD program.
9. The PD program was worth my time.
10. Describe any continuing professional development you and your colleagues have undertaken since completing the PD program.
11. Describe a positive work outcome attributable to what you learned during the PD program:
12. Additional comments:
Asset Metadata
Creator: Colarusso, Michael J. (author)
Core Title: Officer performance appraisal program management: an evaluative case study
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Organizational Change and Leadership (On Line)
Publication Date: 03/15/2019
Defense Date: 03/11/2019
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: commissioned officers, forced curve, forced distribution, forced ranking, human resources, Military, OAI-PMH Harvest, ordinal ranking, performance appraisals, performance management, stacked ranking
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Hirabayashi, Kimberly (committee chair), Ferrario, Kimberly (committee member), Lyle, David (committee member)
Creator Email: mcolarus@usc.edu, mjcolarusso@optonline.net
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-134498
Unique identifier: UC11675783
Identifier: etd-ColarussoM-7167.pdf (filename), usctheses-c89-134498 (legacy record id)
Legacy Identifier: etd-ColarussoM-7167.pdf
Dmrecord: 134498
Document Type: Dissertation
Rights: Colarusso, Michael J.
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA