Multi-Source Feedback in the U.S. Army:
An Improved Assessment
by
Anthony Francois Cerella
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
December 2020
Copyright 2020 Anthony Francois Cerella
Dedication
To my family: Fran, Joanne, Elise, Tony, Mike, and Gretchen, thank you for shaping me
into the man I am today. Your love and support are immeasurable.
To our children, Sophie and Tony, I love you both. I look forward to helping you pursue
whatever path makes you content in life. It brings me immeasurable happiness that you both love
adventure, travel, and books more than I do.
To my wife Bethany, thank you for your advice, patience, love, and support during this
journey. I am a better person, husband, and father when I stumble through life with you at my
side.
Acknowledgments
My sincere thanks to my dissertation committee, Dr. Murphy, Dr. Stowe, and my chair
Dr. Malloy. I am grateful for your insights that improved my work. A special thanks to my chair
Dr. Malloy for keeping me on schedule and mentoring me through an unfamiliar and, at times,
challenging journey. I enjoyed our discussions, and I am grateful for your openness and patience.
To the OCL ninjas: I was humbled to join Rossier’s OCL program, and I am indebted to
each professor. I am impressed by the diversity and range of experiences within our cohort.
Thank you for your kindness and support. I have another bottle of wine ready for each of you
when we next share a table.
To my brothers and sisters in arms in the U.S. Army, I hope this research, if even in the
smallest way possible, helps improve the quality of those entrusted to lead you in war and peace.
Table of Contents
DEDICATION ........................................................................................................................... ii
ACKNOWLEDGMENTS ......................................................................................................... iii
LIST OF TABLES..................................................................................................................... vi
LIST OF FIGURES .................................................................................................................. vii
ABSTRACT ............................................................................................................................ viii
CHAPTER ONE: INTRODUCTION.......................................................................................... 1
INTRODUCTION OF THE PROBLEM OF PRACTICE ........................................................................ 1
BACKGROUND OF THE PROBLEM .............................................................................................. 1
RELATED LITERATURE ............................................................................................................. 3
STATEMENT OF THE PROBLEM OF PRACTICE ............................................................................. 4
ORGANIZATIONAL CONTEXT AND MISSION............................................................................... 4
ORGANIZATIONAL PERFORMANCE GOAL .................................................................................. 7
IMPORTANCE ........................................................................................................................... 8
DESCRIPTION OF STAKEHOLDER GROUPS ................................................................................. 9
STAKEHOLDER GROUP OF FOCUS ............................................................................................. 9
PURPOSE OF THIS STUDY AND RESEARCH QUESTIONS ............................................................. 10
METHODOLOGICAL FRAMEWORK ........................................................................................... 11
DEFINITIONS AND CLARIFICATIONS ........................................................................................ 11
ORGANIZATION OF THIS STUDY .............................................................................................. 13
CHAPTER TWO: REVIEW OF THE LITERATURE .............................................................. 14
INTRODUCTION ...................................................................................................................... 14
MSF and the Army............................................................................................................ 23
Clark and Estes (2008) Knowledge, Motivation, and Organizational Influences Framework ..... 28
CONCLUSION ......................................................................................................................... 37
CHAPTER THREE: METHODOLOGY .................................................................................. 39
INTRODUCTION TO THE METHODOLOGY ................................................................................. 39
Methodological Approach and Rationale ........................................................................... 39
Sampling and Recruitment ................................................................................................ 40
Interview Sampling Strategy, Criteria, and Rationale ........................................................ 41
QUALITATIVE DATA COLLECTION AND INSTRUMENTATION .................................................... 42
Interviews ......................................................................................................................... 42
CREDIBILITY AND TRUSTWORTHINESS .................................................................................... 43
ETHICS .................................................................................................................................. 44
LIMITATIONS AND DELIMITATIONS ......................................................................................... 45
CHAPTER FOUR: RESULTS AND FINDINGS ...................................................................... 47
Participating Stakeholders ................................................................................................. 47
Purpose of This Study and Research Questions ................................................................. 49
Determination of Threshold Criteria .................................................................................. 50
Results and Findings for Research Question One (K and M Influences) ............................ 50
Results and Findings for Research Question Two (Interaction of K and M and O) ............ 56
Results and Findings for Research Question Three (KMO to Improve the MSAF Assessment) ... 68
Summary .......................................................................................................................... 76
CHAPTER FIVE: RECOMMENDATIONS ................................................................................ 77
Review of Research Questions and KMO Influences......................................................... 77
Recommendations to Improve the MSAF Assessment....................................................... 79
Recommendation Summary .............................................................................................. 84
Implementation ................................................................................................................. 84
Strengths and Weaknesses of the Approach....................................................................... 87
Evaluation and Reporting .................................................................................................. 88
Suggestions for Future Research ....................................................................................... 88
Impact on the Profession ................................................................................................... 89
Conclusion ........................................................................................................................ 90
APPENDIX A: INTERVIEW PROTOCOL ...................................................................................... 92
REFERENCES.......................................................................................................................... 94
List of Tables
Table 1: Assumed Knowledge Influences and Knowledge Types…………………………………...31
Table 2: Assumed Motivation Influences……………………………………………………………….33
Table 3: Assumed Organizational Influences………………………………………………………….36
Table 4: Participant Pseudonyms, Background, and Experience…………………………………..48
Table 5: Validated Knowledge and Motivation Influences…………………………………………..56
Table 6: Issues Identified by Participants……………………………………………………………...60
Table 7: Participant Structural Recommendations…………………………………………………...71
Table 8: Participant Utilization Recommendations…………………………………………………..73
Table 9: Participant Post-Assessment Recommendations……………………………………………76
List of Figures
Figure 1: Comparison of Feedback Systems (Adapted from Ewan & Edwards, 2001)............... 18
Figure 2: Conceptual Framework ............................................................................................ 37
Figure 3: Implementation Schedule .......................................................................................... 85
Abstract
Multi-Source Feedback (MSF) encompasses a range of tools that solicit feedback on
performance. The feedback aims to evaluate the strengths and weaknesses of leaders, identify
their impact on others, and provide meaningful feedback to inform future development. Offering
ineffective MSF assessments can limit opportunities for leaders to receive meaningful feedback.
The U.S. Army currently offers the web-based Multi-Source Assessment and Feedback (MSAF)
tool. The MSAF assessment is an optional developmental instrument used within the Army to
solicit and employ MSF for the entire Army community. Within the Army’s feedback and
performance system, the MSAF assessment is a unique opportunity for leaders to increase their
self-awareness through feedback from their subordinates, peers, and supervisors. Interviews with Army lieutenant
colonels identified knowledge, motivation, and organizational influences that highlighted issues
with the current MSAF assessment. This study recommends changes to the current MSAF
assessment to improve its utility for Army leaders.
Keywords: Army, Multi-Source Feedback, MSF, MSAF, 360.
Chapter One: Introduction
Introduction of the Problem of Practice
This study focused on the U.S. Army improving a Multi-Source Feedback (MSF)
assessment for the self-development of its leaders. MSF encompasses a range of tools that solicit
feedback on performance. One common MSF instrument available to organizations is the
360-degree assessment. MSF assessments aim to evaluate the strengths and weaknesses of leaders,
identify their impact on others, and provide meaningful feedback to inform future development
(Bracken & Rose, 2009; Humphrey, Vogele-Welch, Jarvis, & Wallis, 2017). MSF assessments
generally solicit observations and input on leader actions and behaviors from subordinates, peers,
and supervisors (Fleenor, 2008; Nowack & Mashihi, 2012). Offering ineffective MSF
assessments can limit opportunities for leaders to receive meaningful feedback on their strengths
to sustain and weaknesses to improve (Bracken & Rose, 2009; Humphrey, Vogele-Welch,
Jarvis, & Wallis, 2017). The Army currently offers an internal MSF assessment to all personnel;
however, this study identified that the current assessment needs to be improved.
Background of the Problem
Since 2008, the U.S. Army has offered the web-based Multi-Source Assessment and
Feedback (MSAF) assessment to all personnel. The MSAF assessment is an instrument used
within the Army to solicit and employ MSF for the entire Army community. The MSAF is a self-
development tool in which Army personnel select assessors to provide Likert scale and free-form
feedback on a leader's strengths and weaknesses. The optional and self-initiated assessment
gathers input from three source categories: five peers, five subordinates, and three supervisors.
The MSAF website is only accessible to Department of Defense (DoD) employees. After
completing a self-assessment, the rated individual selects individual subordinates, peers, and
supervisors to receive the assessment. Raters assess the rated soldier on a scale-based set of
questions that align with the Army’s expectations for leader behavior and traits. The assessment
concludes with an individual feedback report that is only available to the assessed soldier. The
report contains average ratings from each source category and free-form comments that describe
the rated officer’s strengths and developmental needs. After the assessment is complete, the
assessed individual is not required to take any action and the results are not shared.
The MSAF assessment supports leaders within the Army by providing feedback from
supervisors, peers, and subordinates (U.S. Department of the Army, 2017). It is a unique tool
since Army leaders generally only receive performance feedback from their immediate
supervisors. There are limited opportunities to receive performance feedback from subordinates,
peers, and supervisors from outside a soldier’s evaluation hierarchy. The MSAF aims to deliver
anonymous feedback to increase leaders’ self-awareness of their leadership abilities and to support
self-development (U.S. Department of the Army, 2017). However, during the life of the MSAF
program, reports indicated that the assessment suffered from several issues that may have limited
its utility to Army users (Cavanaugh, Fallesen, Jones, & Riley, 2016; Hardison et al., 2015;
McAninch, 2016). Recent reports on Army leadership, commissioned by the Center for the Army
Profession and Leadership (CAPL) and other sources, suggest that users did not view the
MSAF’s effectiveness and utility favorably (Cavanaugh, Fallesen, Jones, & Riley, 2016;
Hardison et al., 2015; McAninch, 2016). For example, only 49%
of participants stated that MSAF assessments helped their development (Cavanaugh, Fallesen,
Jones, & Riley, 2016). The reports also highlighted that the MSAF assessment suffered from low
rates of user compliance, limited review of post-assessment reports by users, and limited use of
post-assessment mentorship opportunities.
The challenge of creating an effective MSF assessment is not unique to the Army.
Research highlights that organizations often dedicate significant resources to develop and
implement MSF assessments but that the MSF programs are often improperly implemented and,
more broadly, may not yield positive results for those assessed (Bracken & Rose, 2009; Das &
Panda, 2017; Humphrey, Vogele-Welch, Jarvis, & Wallis, 2017; Smither, London, & Reilly,
2005). Improving the current MSAF assessment will support the development of Army leaders
by enhancing their self-awareness and increasing their understanding of how they are perceived
by those they impact.
Related Literature
While terminology related to MSF is not standardized, a wide range of literature
assesses its impact, often in corporate settings (Fleenor, Taylor, & Chappelow, 2008;
Hardison et al., 2015; McCarthy & Garavan, 2007). MSF
assessments include a diverse mix of tools that provide feedback from employees, peers, and
supervisors. Three hundred and sixty-degree assessments are a type of MSF assessment that uses
feedback from subordinates, peers, supervisors, and other stakeholders. They are widely
accepted in North and South America, Europe, and Australia. They are gaining favor in Asia
since they can be used for various sub-groups and individual employees, over time, for self-
development (Fleenor, 2008).
Importantly, MSF assessments should be tailored to each organization, and possibly to an
internal section within an organization to ensure the assessment is of value to the individual and
supports organizational norms and values (Dalton & Hollenbeck, 2001). Failure to do so can
create new organizational conflicts, psychological stress on those assessed, and an erosion of
individual and collective performance (Nowack & Mashihi, 2012). The literature suggests that
MSF assessments, when properly implemented, are a useful self-development tool for
individuals and can also help organizations in promoting productive leadership traits (Hardison
et al., 2015). In sum, leadership development and efficacy can be improved by a well-designed
and executed MSF assessment, and a poorly designed assessment can have a negative impact.
Statement of the Problem of Practice
As highlighted in the U.S. Army Leader Development strategy (2013), the Army must
develop leaders internally as part of a comprehensive leadership development strategy. One of
the primary focuses of Army leader development programs is to identify individual strengths to
sustain and weaknesses to reduce or eliminate (Department of the Army Leader Development
Strategy, 2013). The Army's evaluation system that generally only includes formal feedback on
performance from supervisors, most often the immediate supervisor and the next higher
supervisor. The MSAF assessment is a unique self-development opportunity for Army leaders
receive feedback from a larger group of stakeholders in addition to the formal performance
evaluation system. Leveraging MSF, with an improved MSAF assessment, will allow Army
leaders to understand their impact within their units and inform their self-development.
Organizational Context and Mission
The U.S. Army's mission is to "fight and win our nation's wars by providing prompt,
sustained land dominance across the full range of military operations and spectrum of conflict"
(U.S. Army Organization: Who We Are, n.d.). Since its creation as the Continental Army during
the American Revolutionary War, the Army has provided the majority of the land forces for
peacekeeping, occupation, and combat operations for the United States (About the Army, 2020).
The Army exists to protect and defend American interests from enemies, foreign and domestic.
The Army has 154 permanent installations in the United States and numerous forward
operating locations abroad in Asia, Europe, and the Middle East (Army Posture Statement, 2016;
Post Locations, n.d.). The Army is currently supporting combat and deterrence operations in
Afghanistan, Iraq, South Korea, and other countries (Bialik, 2017).
The Army includes three distinct components: the regular Army or active component and
the two reserve components, the Reserve and National Guard (U.S. Army Organization: Who
We Are, n.d.). In 2019, the Army's end strength was 1,305,500 soldiers: 487,500 Regular Army,
343,500 National Guard, and 199,500 Reserve (Government Accountability Office, 2019).
Within all three Army components, there are three personnel categories: officer, warrant officer,
and enlisted. Officers command and lead Army units, warrant officers provide technical
expertise, and enlisted personnel complete missions and tasks directed by officers. Within the
regular Army, the author's focus, the personnel distribution is 17% officers, 3% warrant officers,
and 80% enlisted (Active Duty Military Personnel by Rank/Grade, 2020).
Army enlisted personnel include a mix of soldiers with limited individual responsibility
and Non-Commissioned Officers (NCOs). NCOs start their careers as soldiers and are then
promoted to ranks with additional authority and responsibility. They support the orders of
officers by directly supervising soldiers during training and operations. NCOs are critical to the
success of the Army as the immediate supervisors of soldiers and their professional role models.
Additionally, NCOs serve as mentors for young officers, especially early in officers' careers.
As officers are promoted and gain increased positional authority, NCOs are assigned alongside
them in command teams. The supporting NCOs provide counsel and advice to officers,
especially those selected as unit commanders,
from the company level to the senior levels of Army command.
While officers constitute only 17% of the regular Army, they command Army units and,
when not commanding, serve as staff officers or organizational managers. Officers incur
increasing authority and responsibility as they advance in time in service and rank.
Officers are trained to lead complex, large organizations and are responsible for all
organizational failures and successes during peace and combat operations. The most important
and impactful position for officers is that of unit commander. From the company level to the senior
levels of the Army, officers serving as commanders have significant military and judicial
authority over their assigned personnel. When in command, officers can significantly, both
negatively and positively, influence the morale and success of an Army unit or organization.
Army officers are divided into three groups, junior to senior, respectively: company
grade, field grade, and flag officers. All officers progress through each of these groups as they
attain higher ranks with the corresponding increased authority and responsibility. Company
grade officers, lieutenants and captains, are the most junior group; this period of
minimal authority and responsibility is often a formative learning experience. Company grade
officers are generally given the latitude to learn from minor mistakes and align their behaviors
with institutional norms during the initial ten years of service. This early developmental stage for
officers is widely viewed as critical to future development and success. It allows new company
grade officers to observe senior officers and learn from their NCOs to refine their leadership
styles.
Field grade officers, i.e., majors, lieutenant colonels, and colonels, are the Army’s mid-
level leaders who have generally served in the Army for 10 to 25 years. The best performers
within this group are selected to command battalion to brigade-sized units with 500-2000
personnel. The units perform a range of missions, from logistics, intelligence, and other support
functions to traditional combat functions such as artillery, armor, and infantry.
Flag officers or general officers, all of whom started their careers as company grade
officers and progressed through the field grades, have often served in the Army for more than 25
years. They lead division and higher level units with approximately 25,000 personnel and other
senior Army organizations, units, and agencies, some with hundreds of thousands of employees.
General officers are selected based on excelling in command of battalion- and brigade-level units
and similar organizations. As of May 2020, the Army had only 299 general officers, equating to
roughly 0.3% of the officer corps (Active Duty Military Personnel by Rank/Grade, 2020). Flag officers
are named as such because flags bearing the number of stars corresponding to their rank are
displayed in their headquarters.
As previously mentioned, all Army officers, with rare exceptions, are mandated to
progress through the company and field grade ranks during their careers. One example of an
exception is civilians who directly commission as mid-grade officers due to highly specialized
skills, e.g., doctors and lawyers. Thus, it is essential to note that the Army, unlike other
comparable civilian organizations, does not recruit and assign mid-grade and senior-level leaders
from outside the organization. Therefore, internal leader development within the Army is
essential to long-term organizational success and should, ideally, be supported by mutually
supportive programs and tools. Army leaders, especially those selected as unit commanders,
receive formal leader training and are expected to use self-development tools.
Organizational Performance Goal
This dissertation is focused on the organizational goal that the Army Chief of Staff will
offer an improved MSAF assessment for all Army personnel. The Army Chief of Staff
established this goal after receiving feedback that a more effective MSF tool is required to
replace the MSAF assessment. Research outside the Army highlights the positive value of MSF
to support leader development (Hardison et al., 2015). Leveraging MSF, with an improved
MSAF assessment, will support leadership development within the Army.
Importance
This problem is important to address because without effective tools to inform and
improve self-development, the Army will have fewer opportunities to develop its leaders. As
mentioned, the current Army performance evaluation usually requires feedback from the
immediate supervisor on performance and the next higher supervisor on potential. This top-down
system only incorporates the input and perceptions of two Army leaders. Army performance
evaluations have a substantial impact on future promotions and selection for selective
assignments. An ineffective MSF assessment for the Army community may limit opportunities
for Army leaders to identify and learn from both their strengths and weaknesses. Additionally,
not having an assessment that is perceived as useful within the Army community may result in
lower-quality Army leaders at all levels.
Having effective Army leaders is essential when one considers the potential negative
impacts of harmful leaders. Harmful leaders produce enduring and pervasive adverse effects on
individual and organizational effectiveness. Examples of the well-documented negative
consequences of harmful leaders include low morale, reduced productivity from disengaged
employees, and higher turnover (Burns, 2017; Tavanti, 2011). Additionally, organizations with
centralized power structures, such as military organizations, have divested power at low levels,
fewer institutional checks and balances on power, and a high price for failure. Thus, harmful
leadership within the Army can result in costly, tragic consequences, such as the unnecessary
death and suffering of soldiers and non-combatants (Aubrey, 2012). Given the Army’s structure
and the potential impact of harmful leaders within the Army, MSF is a unique and valuable
opportunity to inform leader self-development.
The MSAF assessment is one of three MSF programs currently in the Army. The MSAF
assessment is available to the entire Army community, including civilians and soldiers of all
ranks. A second assessment provides MSF to the small number of lieutenant colonels and
colonels selected for command and key billets. The third program provides feedback on an
organization’s performance to its leaders with a broad focus beyond, but not limited to, the
performance of the leader. The use of each programs will be highlighted in the literature review.
Description of Stakeholder Groups
Three key stakeholder groups are relevant for the accomplishment of the goal. The first is
the Center for the Army Profession and Leadership (CAPL), the internal Army organization that
studies the Army profession, leadership, and leadership development. A section within CAPL is
responsible for administering the MSAF assessment. CAPL manages the MSAF and also
provides internal mentors for post-assessment coaching upon the request of the assessed
individual. The second group includes the more than 487,500 soldiers serving in the regular
Army that could use an improved MSAF assessment if developed (Government Accountability
Office, 2019). The final relevant stakeholder group, and the group of focus, is Army
lieutenant colonels.
Stakeholder Group of Focus
The stakeholders that will contribute most to the accomplishment of the goal are Army
lieutenant colonels. As of May 2020, the stakeholder group consisted of 9,040 lieutenant
colonels (Active Duty Military Personnel by Rank/Grade, 2020). For context, the totals of the
other two field grade ranks, major and colonel, are 16,022 and 4,084, respectively
(Active Duty Military Personnel by Rank/Grade, 2020).
Lieutenant colonels, who serve as the commanders of battalion-level units, have a
substantial impact on the well-being and organizational morale of soldiers and assigned
personnel. They hold significant positional authority and are essential to the success of the
battalion-level units that they lead. Second, they have generally served in the Army for 16 to 28
years and are experienced enough to identify and model positive and negative leadership styles
that align with the Army’s behavioral norms. Additionally, they are the Army's mid-level leaders
present in nearly every internal Army professional specialty, and almost all units and agencies.
Finally, since lieutenant colonels are in positions of current and future influence, their feedback
can be used to inform an improved MSAF assessment.
Purpose of This Study and Research Questions
The purpose of this study was to analyze and examine the knowledge, motivation, and
organizational influences for the Army to offer an improved MSAF assessment. The analysis
generated a list of assumed influences that were systematically examined. While a complete
analysis would focus on all stakeholders, this analysis only focused on the perspectives of Army
lieutenant colonels. This study examined the following questions:
1. What are the knowledge and motivation influences related to having an improved
MSF assessment?
2. What is the interaction between knowledge and motivation and organizational
culture?
3. What are the recommended knowledge and skills, motivation, and organizational
solutions for having an improved MSF assessment?
Methodological Framework
Achieving organizational missions and associated goals requires an awareness of gaps
between goals and performance (Clark & Estes, 2008; Rueda, 2011). This dissertation drew upon
the framework provided by Clark and Estes (2008), which focuses on the knowledge, motivation,
and organizational influences that contribute to accomplishing goals. Knowledge consists of the
skills required for effective performance. Motivation drives individuals to start and complete
tasks and is based on an individual’s belief in their abilities (Clark & Estes, 2008; Krathwohl,
2002; Mayer, 2011; Rueda, 2011). Organizational influences, such as structure, procedures, and
available resources, have an impact on the ability of stakeholders to accomplish goals.
This research used a qualitative approach to answer the research questions, using semi-
structured interviews with 14 participants via video or phone. These interviews are common for
qualitative data collection and help provide a rich context for a respondent’s perceptions of the
focus area (Johnson & Christensen, 2015).
Definitions and Clarifications
This list clarifies the use of several terms that have a specific context within the Army
and in this research:
Army: Unless otherwise specified, the use of Army means the United States Army.
Army officers: Soldiers who command units and, when not commanding, serve as staff officers
or organizational managers. Officers incur increasing levels of authority and responsibility as
they advance in time in service and rank. Officers are trained to lead complex,
large organizations and are responsible for all organizational failures and successes during peace
and combat operations.
Company grade officers: The most junior group of Army officers consists of first and second
lieutenants and captains. They have authority and responsibility for small Army units, such as
platoons and companies, generally with fewer than 250 soldiers.
Field grade officers: Mid-level Army officers with the rank of major, lieutenant colonel, and
colonel who have generally served in the Army for 10 to 25 years. They command battalion to
brigade-sized units with approximately 500-2000 personnel that perform a range of missions,
including logistics, intelligence, and traditional combat functions such as artillery, armor, and
infantry.
Flag officers: The most senior Army officers, i.e., general officers, who have previously served
as company and field grade officers. They lead division and higher level units with
approximately 25,000 personnel and nearly all other senior Army organizations, units, and
agencies, some with hundreds of thousands of soldiers and civilians.
Officer Evaluation Report (OER): The performance evaluation report for Army officers. The
report includes an assessment of performance by the immediate supervisor and an assessment of
potential by the next higher supervisor. OERs have a significant impact on future promotion and
the selection for competitive assignments.
Personnel: Since the Department of the Army includes over a million civilian employees,
personnel means all soldiers and civilians that work for the Army.
Soldier: The term soldier applies to all uniformed personnel in the Army regardless of rank. As
previously mentioned, there are three stratifications of Army soldiers: enlisted, warrant officers,
and commissioned officers. The author will specify these groups when required.
Multi-Source Assessment and Feedback (MSAF): The Army’s internally constructed and
administered web-based 360-degree assessment. The optional instrument collects feedback from
subordinates, peers, and supervisors on the strengths and weaknesses of assessed soldiers. The
assessment became available to all Army personnel in 2008 and remains active.
Multi-Source Feedback (MSF): The term used to describe the diverse assessments that provide
feedback from a broader group of stakeholders than traditional evaluation systems. This
feedback can be used for development, to inform administrative performance systems, or both.
Unit Identification Code (UIC): A six-digit alphanumeric code that uniquely identifies each
Department of Defense organization and unit.
360-Degree Assessment: A type of MSF assessment that uses feedback from subordinates, peers,
supervisors, and other stakeholders. The term 360-degree assessments or 360s comes from the
idea that the feedback comes from a full circle of stakeholders relating to the assessed employee.
Organization of this Study
This study is organized into five chapters. This chapter provided the key concepts and
terminology related to the Army’s MSAF assessment. The organization’s mission and
stakeholders, along with background context, were introduced. Chapter two provides a review of
the current literature surrounding the scope of this study. The definitions of MSF and its history,
evolution, advantages, and disadvantages will be addressed. The second chapter reviews the use
and potential issues of the MSAF assessment. Chapter three details the methodology, including
sample selection, data collection, and analysis. Chapter four discusses the findings and results.
This study concludes with chapter five that provides recommendations to improve the MSAF
assessment.
Chapter Two: Review of the Literature
Introduction
This research focuses on the Army using an improved MSAF assessment to support
leader self-development. MSF solicits observations and input on leader actions and behaviors
from followers, peers, and supervisors (Fleenor, Taylor, & Chappelow, 2008; Nowack &
Mashihi, 2012). Effectively using MSF can increase leaders' self-awareness and further
positively develop effective leaders. Without well-developed MSF assessments, leaders have
fewer opportunities to receive meaningful feedback from those they impact, feedback that
highlights strengths to sustain and weaknesses to improve. As highlighted in Army
doctrine, the Army must develop leaders internally since they are essential to long-term
organizational successes and should, ideally, be supported by mutually supportive programs and
tools. One of the primary focuses of Army development programs is to identify individual
strengths to sustain and weaknesses to reduce or eliminate (Army Leadership and the Profession,
2019). Additionally, the Army expects its leaders, at all levels, to embrace the opportunity to
understand and internalize feedback from multiple feedback systems, including MSF
assessments (Army Leadership and the Profession, 2019). The MSAF assessment is a self-
development opportunity to help Army leaders receive feedback from those outside their
performance evaluation hierarchy.
This research seeks to improve the understanding of the value of an effective MSF
assessment within the Army. This problem is important to address since the Army is essential to
national defense and international security. Additionally, Army soldiers are part of the broader
American and international community, and their service should prepare them to contribute
positively to that community as effective, self-aware leaders.
This chapter first reviews the literature on the broad category of MSF, then the historical
evolution of MSF, and the contemporary use of MSF. Next, the utility, best practices,
advantages, and disadvantages of the assessments will be highlighted before discussing how
MSF assessments are used in the U.S. Armed Forces and the U.S. Army. Finally, the review will
conclude by exploring the limited research related to the MSAF assessment and a short
discussion of identified research gaps. An overview of the Clark and Estes gap analysis model
(2008) is also provided, along with the assumed knowledge, motivation, and organizational
influences for this dissertation study. The chapter concludes with the conceptual framework that
guides this study.
Multi-Source Feedback
Context, Definition, and History
Feedback is information given to an employee to strengthen desired behaviors and to
solicit changes in undesired behaviors (Fleenor, Taylor, & Chappelow, 2008; Fletcher, 2014).
Feedback is provided to employees in multiple forms; examples include formal written
performance evaluations, informal, spontaneous feedback, and periodic performance updates.
Although organizations can vary in their approach, traditional administrative performance
systems typically involve feedback from supervisors. Additionally, traditional systems tend to
utilize feedback from one or two leaders, often the direct supervisors of the rated employee
(Fleenor, Taylor, & Chappelow, 2008). This type of top-down performance feedback is generally
criticized for offering a narrow or incomplete perspective on individual behavior and
performance (Fletcher, 2014). MSF is designed to facilitate a fuller, more complete picture of
performance by providing feedback from a broader group of stakeholders than traditional
evaluation systems.
MSF slowly evolved during the 20th century as employers and researchers recognized the
value of collecting and reviewing performance and leadership data from additional sources.
Small scale, early variations started with employers seeking to better train employees and
supervisors. Both World Wars encouraged and facilitated government and corporate efforts to
examine personnel research and broaden rating systems (Hedge, Borman, & Birkeland, 2001).
For example, the total mobilization of public and private organizations during World War II
offered unparalleled opportunities to study predictions of job performance, the classification of
personnel, and motivational factors (Hedge, Borman, & Birkeland, 2001). The years after
World War II saw a substantial increase in private organizations and formal research, focusing
on comparing peer ratings to supervisors and identifying criteria to measure performance.
In the 1950s, the effort to shift toward an employee-focused workplace in multiple
industries supported the development of early MSF assessments (Atwater & Yammarino, 2001;
Hedge, Borman, & Birkeland, 2001). These efforts were driven by private and public institutions
seeking to find better ways to hire and train employees and to measure job performance. Initial
MSF efforts began as a development tool for managers and rapidly expanded beyond managers
in the 1990s and 2000s with the advent of the internet (Fleenor, Taylor, & Chappelow, 2008).
The current scope of MSF assessments is based on numerous theories, including leader
development, learning theory, adult development theory, and social learning theory (Eggert,
2016). In sum, as highlighted by the 3D Group (2020) and Fletcher (2014), MSF assessments are
now a common tool to improve individual performance by sharing the perceptions of a leader
from the perspectives of the people they impact.
MSF and 360-degree Assessments
This study uses MSF as an umbrella term to encompass the range of tools that solicit
feedback in addition to traditional methods. Other terms in the literature are also used, such as
multi-rater assessment, multi-rater feedback, multisource assessment, and 360-degree assessment
(Bracken, Timmreck, & Church, 2001). MSF assessments can include off-the-shelf and
proprietary assessments. They typically include input from the rated individual, supervisors,
peers, and followers, with some MSF assessments also including input from external stakeholders. For
example, a 360-degree assessment, also known as a 360, is an MSF tool that includes feedback
from supervisors, peers, and followers. The name of the 360-degree assessment comes from the
idea that the feedback comes from a full circle of stakeholders. For this study, MSF applies to all
tools and programs that solicit feedback from a wider group of stakeholders than traditional
performance evaluations. See Figure 1 for a visual representation of how MSF varies from
traditional performance feedback systems.
Figure 1
Comparison of Feedback Systems (Adapted from Ewan & Edwards, 2001).
Contemporary MSF Assessment Use and Design
Experts contend that MSF assessments are now widely accepted internationally and are a
commonly used tool to aid individual development by soliciting feedback from a larger group of
stakeholders when compared to traditional evaluations (Fleenor, Taylor, & Chappelow, 2008;
Smither, Brett, & Atwater, 2008; 3D Group, 2016). Additionally, Fleenor, Taylor, and
Chappelow (2008) highlight that since MSF assessments can be used for various employees and
can potentially provide assessments of performance over time, they continue to be perceived as
useful development tools.
Implementing MSF Programs. Before implementing an MSF program, there are several
considerations. First, experts maintain that the MSF assessment should be part of a process, not
just a single event (Bracken, Timmreck, & Church, 2001). The results are decision-making tools
for both the individual and the organization that are rooted in measurements. The assessments, in
concert with an MSF program, aim to facilitate positive behavioral changes (Bracken & Rose,
2011; Bracken, Timmreck, & Church, 2001). Thus, the design and execution require careful
planning to support organizational goals and accompanying change efforts (London, 2001).
Dalton and Hollenbeck (2001) recommend that MSF assessments should be tailored to the
organization, and sometimes an internal section or branch, especially when considering the size
and goals of the program. Finally, organizations are also encouraged to explore and fully
understand the advantages, disadvantages, and limitations of MSF assessments and programs.
Variations in Use. MSF assessments are primarily a developmental tool, though some
organizations also use them to inform administrative activities (3D Group, 2016; 3D Group, 2020;
Lepsinger & Lucia, 2001; London, 2001). When used for self-development, the results are
generally not shared beyond the assessed employee, except for a possible post-assessment
review. The exclusive use of MSF as a developmental tool does promote confidentiality and
associated psychological safety. Conversely, using MSF to inform administrative actions, such as
performance evaluations, can improve the quality of information available to decision-makers.
Using MSF assessments to inform administrative activities does present the potential for
respondent bias and social, ethical, and legal risks that should be identified and mitigated (3D
Group, 2016; Greguras et al., 2003; London, 2001).
Typical Design. Even within the differences amongst MSF assessments and programs,
several norms have developed. The assessments are often embedded into existing talent
management programs. They are perceived as complete for the assessed individual after a
post-assessment review and counseling, ideally resulting in the creation of an Individual Development Plan
(IDP) (3D Group, 2020; Smither, London, & Reilly, 2005). MSF assessments also often
include a point-based rating scheme with at least one open-ended question to allow for both scale
based and less structured feedback (Hedge, Borman, & Birkeland, 2001).
Best Practices. International use of MSF assessments, across a wide range of industries,
has identified several best practices. First, organizations seeking to use an MSF should determine
and communicate if the program focus is a developmental tool, impacts performance
management, or a mix of both (3D Group, 2016; 3D Group, 2020). This program focus is
important to identify and communicate to the organization since it will impact the design and
implementation of the MSF. Second, MSF assessments should be part of a long-term employee
development program that is fiscally sustainable and supported by organizational leaders
(Bracken & Timmreck, 2001). MSF should be part of an iterative process for employee
development and not be perceived as a single development event. Research supports that goals
established as part of long-term development programs were more successful in positively
influencing behavioral change (Atwater, Waldman, Atwater, & Cartier, 2000;
Smither, London, & Reilly, 2005). Additionally, an MSF program should consider cultural,
regional, or organizational differences, especially when considering the use of an off-the-shelf
MSF assessment.
Additional research highlights the value of private and specific feedback to support
learning. In a review of the utility of MSF assessments, which often include anonymity as a
program feature, Waldman and Atwater (2001) illustrated that feedback is of greater value when
it does not easily identify who provided it. A later review of Australian university leaders who
used a 360-degree assessment highlighted the importance of quality feedback (Drew, 2009);
specifically, feedback must be detailed enough to give the assessed individual clear
opportunities to make behavioral changes.
Utility of MSF Assessments
Research into the utility of MSF assessments has produced results that generally
encourage usage while acknowledging distinct advantages and disadvantages. Some researchers
identified small and inconsistent improvements and difficulty in linking individual improvements
directly to the MSF program or tools (Allen, 2008; Brett & Atwater, 2008; Hezlett, 2008).
Alternatively, researchers have highlighted the value of MSF when they are deliberately
implemented and delivered within organizations with post-assessment coaching (Drew, 2009;
Fletcher, 2014). Similar to other organizational efforts that support employee development, MSF
assessments have inherent advantages and disadvantages.
Advantages and Disadvantages of MSF Assessments
When properly planned and resourced, MSF programs can inform and improve individual
development and organizational success (Fleenor, Taylor, & Chappelow, 2008; Hezlett, 2008;
Smither, London, & Reilly, 2005). For assessed individuals, the assessments can review an
individual's strengths and developmental needs and provide a reality check of self-perception
(Brown et al., 2014; Fleenor, Taylor, & Chappelow, 2008; Smither, London, & Reilly, 2005).
Additionally, research highlights that MSF also has additional benefits, such as improving
employee communication, discussion, and knowledge sharing (Druskat & Wolff, 1999; Gibson
et al., 2007).
Research also highlights that organizations that use MSF improve organizational
performance. For example, a study of 253 organizations by Kim et al. (2016) found that MSF
positively impacted organizational financial performance due to an increase in employees'
abilities and information sharing. Having an organizational MSF program can also demonstrate
organizational commitment to long-term employee development. Additionally, using MSF can
identify role models and mentors within an organization (Bracken & Timmreck, 2001; Brown et
al., 2014). MSF assessments can also reinforce team-oriented behaviors while aligning
organizational and individual direction and efforts (Bracken & Timmreck, 2001).
Conversely, MSF assessments can have disadvantages. First, creating and sustaining an
MSF program requires allocating scarce resources. Additionally, creating and using an MSF
assessment requires an understanding of the change process, and experts recommend giving 5-10
times as much effort and resources to the post-assessment portion of an MSF program as to the
assessment itself (Bracken & Rose, 2009; Ewan & Edwards, 2001; Dalton &
Hollenbeck, 2001). The post-assessment focus is critical since it allows the rated employee to
review the results, with coaches if possible, and then create an IDP with programmed future
touchpoints. Focusing on the post-assessment action plan acknowledges that behavioral change
is a long-term effort best supported by ongoing evaluation of the process and the outcomes
(Dalton & Hollenbeck, 2001; Ewan & Edwards, 2001).
Poorly developed and delivered MSFs can negatively impact employees, supervisors,
relationships, and organizational success (Dalton & Hollenbeck, 2001; Nowack & Mashihi,
2012). As highlighted by Smither, London, and Reilly (2005), organizations that use MSFs
should understand and communicate that, as with any other development and training program,
not all recipients may benefit equally. Individual employee variance in multiple areas such as
employee feedback orientation, the perceived need for change, and personality may impact
individual benefits (Smither, London, & Reilly, 2005). MSF program managers should
communicate the potential benefit imbalance to aid in reducing employee cynicism and setting
realistic expectations for programs (Waldman & Atwater, 2001). Additionally, the psychology
research by Nowack and Mashihi (2012) illustrates that lower than expected feedback can be
discouraging to the rated individual, and the interpretation of negative feedback can increase
stress and negatively impact employees’ short- and long-term goals.
In sum, MSF assessments can be of value to employees and organizations when
adequately sourced and implemented. Failure to provide adequate resources and a deliberately
designed program that supports organizational goals and individual post-assessment coaching
may negatively impact employees and the organization.
MSF and the Army
In addition to existing performance evaluation systems and feedback mechanisms, all
military services within the U.S. Department of Defense currently use an MSF assessment
(Hardison et al., 2015). The current assessments are maintained and tailored to each military
service, with a different target audience for use.
Within the Air Force and Army, MSF assessments are available to all personnel. In the
Navy and Marine Corps, the assessments are open to a subset of leaders selected to command
units. All U.S. military MSF assessments are used exclusively as developmental tools due to
ethical and legal concerns and congressional restrictions related to officer promotions (Hardison
et al., 2015). The Army's MSAF assessment is the longest-running and largest MSF program
within the Department of Defense.
Army MSAF
Structure and Use. The MSAF assessment is a web-based self-development tool in
which soldiers select assessors to provide scale-based and free-form feedback on a leader's
strengths and weaknesses. The web-based, self-initiated assessment gathers input from three
source categories and requires a minimum of five peers, five subordinates, and three supervisors.
After completing a self-assessment, the rated soldier selects supervisors, peers, and subordinates
to receive the assessment. Potential raters then receive an email with a link to the MSAF website
that requests their input for a soldier's assessment. If those selected to provide feedback on a
leader choose to access the MSAF site and complete the requested assessment, they do so by
answering a scale-based set of questions and free-form comments. The assessment results in an
individual feedback report that contains average ratings from each rater group (peers,
subordinates, and superiors) and two free-form comments sections. The comments sections
solicit input on the rated soldier’s strengths and developmental needs.
As of fall 2020, three programs offer MSF within the Army. In addition to the MSAF, a
significantly smaller program serves exclusively lieutenant colonels and colonels selected for
competitive assignments such as battalion and brigade command. These MSF
assessments are part of two new initiatives that seek to better review the officers tentatively
selected for key assignments under the Battalion Commander Assessment Program (BCAP) and
the Colonel’s Command Assessment Program (CCAP). The BCAP and CCAP are multi-day
events in which officers who were tentatively selected for key assignments undergo in-person
cognitive, non-cognitive, and physical assessments, in addition to a panel interview with senior
Army officers (Talent Management, n.d.). The MSF assessments that inform the BCAP and
CCAP are different from the MSAF in several ways. First, the assessed individual does not select
the participants who provide feedback. Second, the assessment takes less time to complete and
has more direct questions. Third, the assessment results are reviewed with the assessed officer in
one-on-one sessions with an assigned mentor. Fourth, and most importantly, the results of these
MSF assessments can have an impact, the details of which are not yet publicly available, on who
is selected for key assignments and, thus, who will become future leaders in the Army.
Finally, the Army also solicits indirect MSF using the Command Climate Survey (CCS).
This periodic survey, only sent to members of a specific unit or organization, solicits feedback
on the morale and effectiveness of an organization. A further CCS is initiated based on the
arrival and departure of a unit commander. The CCS is not designed to solicit feedback on a
specific leader; however, feedback on a unit leader can sometimes be indirectly gathered. This
research did not focus on the CCS since it is not a formal MSF assessment.
This research effort only focuses on the MSAF assessment, instead of the BCAP and
CCAP MSF assessments, for three reasons. First, since the MSAF assessment has been in use
since 2008, the stakeholder group of focus was likely to have experience providing and receiving MSF
via the MSAF assessment. Second, the MSF assessments within the BCAP and CCAP started in
2020, and there are no published data on their impact or perceived utility. Third, there are
existing data on the perceived utility and effectiveness of the MSAF assessment that will be
highlighted in the next section.
Sources on the MSAF Assessment. Beyond the annual reports from the CAPL on
leadership in the Army, there is limited publicly available research on the MSAF assessment.
The 2015 CAPL leadership report mentions three unpublished internal program evaluation
reports, in 2011, 2014, and 2015, that were not available for this study. Additional sources
focusing on the MSAF available to the researcher included articles, dissertations, and research
projects. Beyond the CAPL leadership reports from 2014-2016, the most substantial source on
the MSAF assessment is a 2015 Research and Development (RAND) Corporation study. This
study highlighted the history of MSF use within the U.S. military, with a primary objective to
determine if existing MSF assessments should influence performance evaluations (Hardison et
al., 2015).
MSAF Utility and Perceptions. Internal Army leadership surveys, commissioned by the
CAPL, illustrate that the perceived effectiveness and utility of the MSAF were routinely low
(Cavanaugh et al., 2016; Fallesen, 2015; Fallesen, 2016). For example, only 53% of users rated
the MSAF program as effective. Seventy percent of assessed leaders did not create a post-
assessment IDP to follow up on the assessment results. Only 10% of assessed leaders used the
Virtual Improvement Center (VIC) for post-assessment mentoring. Most strikingly, the 2015
CAPL report highlighted that only 57% of those who initiated and completed an MSAF
assessment downloaded and viewed their post-MSAF report (Cavanaugh et al., 2016). This final
statistic communicates that a sizable portion of the MSAF user community was not motivated or
did not see the value in their MSAF assessment report.
Comparing the MSAF to existing research on effective MSF assessments revealed
several possible deficiencies that may have contributed to the perception of the limited utility of
the MSAF assessment as a leadership development tool (3D Group, 2016; Bracken, Timmreck,
& Church, 2001; Hardison et al., 2015; McAninch, 2016). The four highlighted potential issues
for the MSAF are not exhaustive; however, they are supported by the CAPL reports and other
limited sources on the MSAF assessment.
First, the assessment used the same questions and structure for company and field grade
officers, ranging from newly commissioned junior lieutenants to senior colonels with 25 years of
service. This lack of vertical development for different personnel grades does not acknowledge
the expected evolution of leadership and management skills for officers who may have a 25-year
experience difference and the significant responsibility disparities between junior and senior
officers. While there are uniformly published professional expectations and ethical standards for
all Army officers, using the same measure for a newly commissioned lieutenant and a colonel
with 25 years of Army experience is not optimal for individual self-development. Additionally,
the absence of vertical development may not prepare officers for future development by
assessing the skills required to succeed at the next higher rank.
Second, only the rated soldier selected the subordinates, peers, and supervisors to provide
feedback, resulting in potential selection bias. As MSF scholars highlight, the selection methods
used by the MSAF assessment likely negatively impacted the accuracy and fairness of ratings
(Waldman & Atwater, 2001; Gilliland & Langdon, 1998). Users requesting an MSAF
assessment are likely influenced by their conscious and unconscious biases when selecting
potential raters who can provide MSF. Farr and Newman (2001) highlight that a rater assessment
system should seek to provide a balanced perspective, with both disagreement and agreement,
from various sources not wholly controlled by the rated employee.
Third, from 2011 to 2018, communicating the date of a completed MSAF assessment
was mandatory for all officers when completing their annual performance evaluation, known as
the Officer Evaluation Report (OER) (MILPER Message Number 11-282, 2011; MILPER
Message Number 18-181, 2018). The completed assessment was required to have occurred
within the previous three years.
However, there was no accountability mechanism to confirm that the assessed officer
had started, completed, or reviewed the results of an assessment. Officers were required to add
the date of a completed MSAF assessment to the documents they prepared for their evaluations,
specifically the support form that lists their achievements to inform their annual
evaluation. The requirement to include the date of a completed MSAF was rescinded in 2018
(MILPER Message Number 18-181, 2018).
Since the MSAF assessment was always communicated as a self-development tool, with
no impact on evaluations, this requirement was inconsistent. The artificial connection between
the MSAF assessment and performance evaluations reduced the credibility of the assessment by
indicating that it was potentially not fully supported by the Army’s leadership and thus was not a
useful tool to help the self-development of leaders.
Finally, the MSAF post-assessment report was only available to the assessed individual
and did not facilitate an action plan to address identified behaviors to improve or maintain
(Hardison et al., 2015; McAninch, 2016). Once the deadline for an MSAF assessment expires,
the rated officer receives an individual report with average ratings from each assessor group for
desired behaviors for Army leaders from peers, subordinates, and supervisors (Hardison et al.,
2015). However, the contents of the assessment are not shared beyond the user, and there is no
requirement for rated officers to use their current chain of command, CAPL external resources,
or other sources to fully understand and incorporate feedback (Hardison et al., 2015; McAninch,
2016). Therefore, the assessment is perceived as a single development event without the
sustainable path for individual behavior change or accountability encouraged by MSF research
and scholars (Dalton & Hollenbeck, 2001; London, 2001).
Clark and Estes (2008) Knowledge, Motivation, and Organizational Influences Framework
Achieving organizational goals requires an awareness of gaps between goals and
performance (Clark & Estes, 2008; Rueda, 2011). The Clark and Estes (2008) framework,
focusing on knowledge, motivation, and organizational influences, offers a tool to
analyze that gap. Knowledge is categorized into four types required to evaluate a knowledge gap:
factual, conceptual, procedural, and metacognitive (Krathwohl, 2002; Rueda, 2011). Motivation
drives individuals to start and complete tasks and is based on an individual's belief in their
abilities (Clark & Estes, 2008; Mayer, 2011). Research has illustrated that an absence of
motivation can hinder individual and organizational performance (Clark & Estes, 2008; Rueda,
2011). Finally, organizational influences, such as structure, procedures, and available resources,
also help inform the gap between performance and goals.
All three elements of the Clark and Estes (2008) framework informed this effort to
explore how the stakeholder group viewed the importance of an MSF assessment within the
Army. The first section will focus on the knowledge influences on the stakeholder performance
goal, the second will consider motivation influences, and the final section will review
organizational influences. All three assumed influences will be examined in Chapter 3.
Stakeholder Knowledge and Motivation Influences
This review highlighted the assumed knowledge and motivation influences regarding
how Army lieutenant colonels perceive the utility of an improved MSF assessment.
Knowledge and Skills. People are commonly perceived as the most important
organizational asset, and they have different levels of personal knowledge and skills to support
the organization. Research has demonstrated that the ability of employees to gain and use
knowledge and skills improves employee engagement and productivity (Alexander, Schallert, &
Reynolds, 2009; Grossman & Salas, 2011). Additionally, achieving organizational missions and
associated goals requires an awareness of gaps in employee knowledge and skills (Clark & Estes,
2008; Rueda, 2011).
Knowledge is categorized into four types required to evaluate a knowledge gap: factual,
conceptual, procedural, and metacognitive (Krathwohl, 2002; Rueda, 2011). Factual knowledge
is considered easily accessible and basic information, such as terminology, definitions, and
details (Krathwohl, 2002). Understanding the definition of an MSF assessment is a related
example of factual knowledge. Conceptual knowledge is the interrelationship between basic
knowledge elements to allow them to function together (Krathwohl, 2002). An example of
conceptual knowledge is the awareness of how MSF assessments incorporate the components of
positive leadership. The third type is procedural, i.e., how to do something. A relevant example
is the knowledge of how to complete an OER. Lastly, the final knowledge type is metacognitive.
Metacognitive knowledge describes an individual's ability to reflect on their learning and
requires self-awareness (Krathwohl, 2002; Rueda, 2011). Understanding the four knowledge
types helps inform the methodology needed to assess knowledge gaps. Each knowledge type
contributes to the understanding of employee engagement toward an organizational goal, with
factual, procedural, and metacognitive knowledge each playing a role in this effort.
Three knowledge influences were identified for lieutenant colonels. The first influence
was knowing how to reflect on performance. This metacognitive influence is prioritized first
amongst the three influences since it is necessary for the stakeholder group to address prior to the
remaining influences. The additional influences were factual and procedural: knowing how to
receive feedback and how to incorporate feedback.
Metacognitive: Reflecting on Performance. The use of MSF assessments requires the
knowledge of how to reflect on performance. Army lieutenant colonels must have the skills to
reflect on their performance based on the information identified in the MSAF assessment. This
knowledge also includes measures of self-awareness related to understanding that some past
behaviors may not be aligned with organizational norms or expectations.
Factual: Receiving Feedback. The ability to receive feedback is foundational
knowledge for MSF assessments. Army lieutenant colonels must possess the knowledge to
receive meaningful feedback when using an MSF assessment.
Procedural: Incorporating Feedback. Lieutenant colonels must possess the knowledge
of how to incorporate feedback provided by others using an MSF assessment. Incorporating
feedback includes the ability to understand how observed behaviors were perceived as either
negative or positive and then adjusting future actions based on MSF. Table 1 summarizes the
three assumed MSAF knowledge influences discussed in the preceding paragraphs.
Table 1
Assumed Knowledge Influences and Knowledge Types
Knowledge Influence Knowledge Type
Lieutenant colonels must know how to reflect on their performance Metacognitive
Lieutenant colonels must know how to receive feedback Factual
Lieutenant colonels must know how to incorporate feedback Procedural
Motivation. The utility of MSF assessments at both the individual and organizational
levels is influenced by motivation. Motivation drives individuals to start and complete tasks based
on their belief in their abilities (Clark & Estes, 2008; Mayer, 2011). Research has illustrated that
an absence of motivation can hinder individual and organizational performance (Clark & Estes,
2008; Rueda, 2011). Two motivational influences, expectancy-value and goal orientation, were
discussed since they are relevant to this stakeholder goal.
Expectancy Value (Utility). The Expectancy-Value Theory (EVT) links motivation to an
individual's expectancy for success and value for the task (Eccles, 2006; Rueda, 2011). More
simply: can I do the task, and do I want to do the task? Eccles (2006) also highlights that if
people view a task as important and of value, they are likely more motivated to start and
complete the task. While EVT includes several value constructs, this effort will focus on utility.
Per Eccles (2006), the perceived utility value of an MSF assessment will likely be higher
within the stakeholder group if it aligns with their individual goals. As previously mentioned, the
annual CAPL reports highlight that approximately half of all previous MSAF users perceived the
tool as a useful self-development tool (Cavanaugh et al., 2016; Fallesen, 2015; Fallesen, 2016).
Additionally, the low rates of users reviewing their feedback reports or utilizing post-MSAF
mentorship tools illustrate a motivational issue. Interviews assessed whether a utility
motivational gap was present and whether lieutenant colonels understood the value of MSF feedback.
Goal Orientation. According to the goal orientation theory, goals fit into two areas:
mastery and performance (Pintrich, 2004; Yough & Anderman, 2006). Learners are mastery-
oriented if they seek to truly understand or master a task and compare current to prior
achievement (Yough & Anderman, 2006). Performance-oriented learners aim to compare their
performance to others and use those performers as a benchmark for their own performance (Yough &
Anderman, 2006). In terms of self-development, Army lieutenant colonels are encouraged to be
both mastery and performance-oriented, with a greater focus on mastery. Specifically, having a
mastery orientation relates to improving their leadership skills and using benchmarks of their
performance to inform the focus of their future self-development efforts.
To successfully utilize an MSF assessment, lieutenant colonels must be mastery-oriented
because Army leaders are encouraged to master their professional specialty while continually
improving their ability to lead diverse teams. As Army leaders gain authority and rank, they need
to routinely receive and incorporate both formal and informal feedback for themselves and for
those they lead. Table 2 lists the two assumed motivational influences.
Table 2
Assumed Motivation Influences
Utility (EVT) Lieutenant colonels need to understand the value of MSF
Goal Orientation (Mastery) Lieutenant colonels need to incorporate MSF
Organizational Influences. Organizational leaders need to acknowledge and understand
organizational culture when planning and implementing change efforts (Clark & Estes, 2008).
While organizational culture is difficult to define clearly, Schein (2004) notes that failing to
understand the culture's roots and power can allow the culture to undermine and overcome change efforts. A
detailed appreciation of culture can significantly impact the success of change efforts. Culture
operates at multiple, intersecting levels, including within groups, subgroups, and individuals.
Additionally, within and beyond these groups, culture can be shared unconsciously via attitudes
and behaviors (Clark & Estes, 2008).
There are two relevant aspects of organizational culture: cultural models and settings. As
highlighted by Clark and Estes (2008), cultural models are the shared perspective of how an
organization or community should work. Cultural settings are the measurable, visible
manifestations of a cultural model. The following sections will examine the relevant literature,
specifically cultural models and settings.
Cultural Models and Settings. Cultural models are the shared perspectives of how an
organization or larger community works or should work (Gallimore & Goldenberg, 2001).
Cultural models are often invisible and are generally accepted by those who hold them. If not
fully understood, cultural models can be almost unseen internal obstacles to change. Cultural
settings are the visible manifestations of cultural models that can be observed and potentially
measured (Rueda, 2011). Cultural models and settings influence each other, and their evolving
relationships impact organizational success. The next section highlights one cultural model and
one cultural setting related to the Army offering an improved MSAF assessment.
Reflection That Supports Leadership Development. The MSAF assessment was deemed
a low-cost and tailored option to support the stakeholder group and the Army community with
MSF (Fallesen, 2017; McAninch, 2016). Since 2008, when the MSAF was initiated, several
Army leaders have championed the utility of the MSAF assessment in Army literature and
doctrine (Casey, 2010; Odierno, 2014). The Army leaders encouraged personnel, especially
officers, to use the MSAF assessment to understand and fill the gaps in their self-awareness.
An MSF instrument supports the Army's broader organizational norm of reflecting on
individual behaviors and understanding the impact of leaders on their unit and community.
Additionally, the assessment supported Army doctrine requiring leaders to improve by assessing
their leadership style.
However, as previously mentioned, the MSAF was never fully embraced by the Army
community (Cavanaugh et al., 2016; McAninch, 2016). Participation rates and perceived utility
were low. Other issues mentioned earlier, such as the lack of an accountability mechanism
highlighted by Wong and Gerras (2015), indicate the MSAF assessment can be improved.
Additional research supports the value of reflection on leaders and leader development.
Roberts’ (2008) theory on leadership reflection offers a solution by acknowledging that
reflection is an essential competency required for effective leaders, especially in contemporary
workplaces that are more complex and multicultural. Roberts' principle is aligned with the U.S.
Army’s broad guidance to all leaders and the stakeholder group. Additionally, a study of first-
year and third-year university students identified that the frequency of self-reflection was a
strong predictor of caring and leader development (Park & Millora, 2012). Narrowing the focus
to MSF, as a mechanism to encourage reflection by leaders, studies highlight that MSF improves
employee communication, discussion, and knowledge sharing (Druskat & Wolff, 1999; Gibson
et al., 2007). Therefore, the next version should incorporate lessons learned from the MSAF
assessment and from successful assessments outside the Army. The improved Army MSF needs to be
nested with Army leadership doctrine and professional milestones, with feedback from mid-level
leaders like lieutenant colonels, to support reflection on leadership development within the
Army.
Accountability to Offer an Effective MSF Assessment. This research effort identified an
organizational gap for the Army as a cultural setting. Specifically, because the Army directs its
personnel to improve their leadership skills, the Army is expected to provide the time, training,
and tools that meet that end for its personnel. The available research concluded that the
MSAF assessment needs to be improved to strengthen its utility and effectiveness to the Army
community. The principle that building the capacity of an organization is crucial in improving
the institution and its accountability systems, as highlighted by Grubb and Badway (2003),
underscores the importance of offering an effective MSF program within the Army. The absence
of an effective assessment highlights that the Army is limiting the opportunity for leaders to
reflect on their leadership by examining feedback from their subordinates, peers, and
supervisors. Table 3 illustrates the assumed organizational influences mentioned in the preceding
paragraphs.
Table 3
Assumed Organizational Influences
Assumed Organizational Influences Type
Reflection supporting leadership development Cultural Model
Accountability to offer an effective MSF assessment Cultural Setting
Conceptual Framework
Constructing a conceptual framework communicates the research phenomena to be
examined (Maxwell, 2013). The conceptual framework also provides the model to explain the
tentative theory and is informed by the researcher's background and orientation (Maxwell, 2013;
Merriam & Tisdell, 2016). This framework describes the interactions of key stakeholders and
influences.
The conceptual framework for this effort is illustrated in Figure 2, with the U.S. Army as
the organization. This study's stakeholder group is Army lieutenant colonels. Viewing the figure
from left to right, lieutenant colonels interact with the knowledge and motivation influences.
The knowledge influences consider whether the lieutenant colonels know how to reflect on
performance, know how to receive feedback, and know how to incorporate feedback. The
motivation influences include understanding the value of MSF and incorporating MSF. Both the
knowledge and motivation influences overlap with each other and with the stakeholder group
since they are related and interdependent.
The organizational influences are relevant to the Army, which offers MSF via the MSAF
assessment. The organizational influences indirectly encompass the knowledge and motivation
influences and the stakeholder group, with limited direct engagement.
Ideally, if the stakeholder group effectively used an MSF assessment facilitated by the
Army's leadership, the assessment would help leaders identify their positive and negative
behaviors and their impact on others. It would thus produce more self-aware and potentially
higher-quality Army leaders who facilitate the Army achieving its organizational goal.
Figure 2
Conceptual Framework
Conclusion
This study seeks to understand the experiences of Army lieutenant colonels with the
MSAF assessment to determine if the Army needs to offer an improved MSAF assessment. The
literature outlined that MSF assessments are of value and that this value extends to the Army.
Using MSF assessments requires committing organizational resources and effort to design and
implement a tailored assessment tool that provides meaningful feedback and post-assessment
coaching to assessed employees. The review of the literature informed the assumed knowledge,
motivation, and organizational influences related to the research. The knowledge influences
included knowing how to reflect on performance, how to receive feedback, and how to
incorporate feedback. The motivation influences included understanding the value of MSF and
incorporating MSF. Finally, the organizational influences include reflection that supports
leadership development and accountability to offer an effective MSF assessment. The conceptual
framework highlights the interactions between the knowledge, motivation, and organizational
influences. Chapter Three will describe this study's methodological approach to validate the
assumed influences.
Chapter Three: Methodology
Introduction to the Methodology
The purpose of this study was to analyze and examine the knowledge, motivation, and
organizational influences for the Army to offer an improved MSF assessment. The analysis
generated a list of assumed influences that were systematically examined by considering the
following questions:
1. What are the knowledge and motivation influences related to having an improved
MSF assessment?
2. What is the interaction between knowledge and motivation and organizational
culture?
3. What are the recommended knowledge and skills, motivation, and organizational
solutions for having an improved MSF assessment?
This chapter describes the relevant stakeholders and the sampling criteria methods for the
interviews in this study.
Methodological Approach and Rationale
This research was exclusively qualitative and used one-on-one interviews. Qualitative
research seeks to understand the meaning of the stakeholder experiences using rich descriptions
collected and analyzed by the researcher (Creswell & Creswell, 2018; Merriam &
Tisdell, 2016). Qualitative semi-structured interviews facilitated exploring the knowledge,
motivation, and organization influences.
Sampling and Recruitment
Participating Stakeholders
The stakeholder group that was critical for the accomplishment of this study was Army
lieutenant colonels. They are particularly important due to their impact on organizations, length
of service, their prevalence within Army units, and their authority. As of May 2020, the
stakeholder group consisted of 9,040 lieutenant colonels (Active Duty Military Personnel by
Rank/Grade, 2020). As this was a qualitative study, random sampling was used to select and
interview 14 lieutenant colonels.
Due to organizational privacy restrictions, the contact information for the entire
population of lieutenant colonels was not available. The researcher used the Army’s lists of
lieutenant colonels selected for battalion command and key billets to solicit potential candidates.
The command and key billets lists are published annually and include the names of those
selected, with approximately 420 selectees for a battalion-level command or equivalent key
billets on each annual list. The selection of lieutenant colonels for a battalion-level command and
key billets is a highly rigorous and selective process that generally results in the best available
officers selected from the broader lieutenant colonel stakeholder group. Thus, the lists ensured
that the potential interviewees were high-performing lieutenant colonels as determined by the
Army’s internal metrics on previous performance success and long-term potential. The
researcher coded all lieutenant colonels listed as selectees for command from the 2018, 2019,
and 2020 battalion command and key billet lists, totaling 1,290 lieutenant colonels. Using a
random number generator, 10% of the 1,290 lieutenant colonels were
selected as potential participants. The researcher used the Army global email network to obtain
the email address for the 129 potential participants and then contacted each officer from a non-
Army university email address. The researcher then emailed each officer to solicit participation
in the study. Of the 129 potential candidates, the researcher was able to schedule 14 participants
for interviews.
Interview Sampling Strategy, Criteria, and Rationale
The criteria for this study focused on experience with the MSAF assessment. Participants
with this experience were appropriate since they could provide meaningful and relevant
observations. The researcher used the following criteria to screen and select interview
participants:
Criterion 1 (Experience with the MSAF Assessment)
Participants must have previously completed at least two MSAF assessments within the
last five years as the assessed soldier. This ensured that each participant had sufficient
experience to offer informed answers.
Criterion 2 (Experience with MSAF Feedback Reports)
Participants must have reviewed an individual MSAF post-assessment report or reports.
This criterion ensured that those interviewed had examined their post-assessment report and had
the requisite exposure to report feedback.
The emails soliciting participants included the two criteria as a prerequisite before the
scheduling of interviews. The interview questions also confirmed both criteria with each
participant.
Qualitative Data Collection and Instrumentation
Interviews
Interview Protocol
This study used semi-structured interviews due to the rich detail they can capture. Before
starting each interview, the researcher introduced himself, confirmed consent
to participate and to record the interview, and briefly described the nature of the research. The
protocol began with questions on the background of the participant, including their military
service history. The initial question built rapport and ensured the interview started with a topic
that was both comfortable and familiar to each participant. Subsequent questions sought answers
that inform the three research questions and were aligned with the knowledge, motivation, and
organizational influences. The interview questions initially determined the participant's
experiences with MSF assessments, within and beyond the Army. The questions then shifted
toward their experiences with the MSAF tool, including the perceived utility of MSF
assessments and goal orientation. Additional questions addressed issues regarding MSF in the
Army and then solicited recommendations for how to improve the utility of the current MSAF
assessment. The semi-structured protocol allowed for additional follow-on questions as
necessary.
Interview Procedures
Data collection, in the form of the interviews mentioned above, occurred in May and June
of 2020, during the Covid-19 pandemic. Given the geographic dispersion and participant work
commitments, there was no ideal set time for the interviews. The researcher selected times that
were convenient for each participant since they lived in either the United States or Europe. The
14 interviews ranged from 22 to 42 minutes in length and were all conducted by phone or video.
At the start of each interview, the researcher reconfirmed permission to record and take notes.
The researcher aimed to make each interview as conversational and informal as possible.
The researcher did not wear an Army uniform during the interviews, instead dressing in casual
professional attire, though some of the interviewees were in uniform. As recommended by Patton
(2002), the researcher used an audio recorder for all interviews and also took notes during the
interviews. Additionally, no later than 48 hours after each interview, a reflection memorandum
was created to supplement the notes captured during the interview (Bogdan & Biklen, 2007). The
reflection memoranda summarized the major takeaways, communication style, and tone of each
interview. The researcher also used a private, quiet location for each interview and did not
collect documents or artifacts.
Credibility and Trustworthiness
Since this research effort was exclusively qualitative, the connected concepts of
credibility and trustworthiness are essential to highlight. Credibility, as highlighted by Merriam
and Tisdell (2016), addresses how closely the research effort can be linked to reality, noting that
reality can never truly be captured and that every researcher has an impact on a research effort,
even when trying to limit bias (Maxwell, 2013). In short, credibility highlights whether research is
accurate, while trustworthiness is related to the research process.
This study used member checks or respondent validation to maximize credibility and
trustworthiness. This method allowed the researcher to confirm initial findings with respondents
to help increase the probability of credibility (Merriam & Tisdell, 2016). Once the interviews,
related notes, and reflection memoranda were completed, the researcher shared the results with
four randomly selected participants to solicit their confirmation. Doing so ensured that the
captured interpretation of their experience was as accurate as possible.
When previously serving in assignments with positional authority, e.g., battalion
executive officer and brigade operations officer in South Korea and Germany, the researcher
encouraged subordinates, peers, and supervisors to use all available tools to improve their
leadership skills and their self-awareness. Accordingly, when conducting in-person quarterly
counseling and professional development sessions with subordinates, the researcher encouraged
the use of the MSAF assessment. Additionally, the researcher had previously used the MSAF
assessment to be assessed and provide feedback to others on multiple occasions. These behaviors
highlight that the researcher approached this study with assumptions and biases. Specifically, based
on previous experiences with the MSAF, the researcher assumed that MSF assessments are a
useful tool. In addition to using tools such as the Institutional Review Board (IRB) to reduce the
impact of assumptions and biases, the researcher did not mention personal experiences with the
MSAF and maintained objective verbal and nonverbal reactions during the interviews and
communications with participants.
Ethics
Research involving human participants requires controls, preparations, and diligence by
the researcher to limit any potential harm (Glesne, 2011; Rubin & Rubin, 2012). Several such
procedures, supported by research guidelines and literature, serve this end. First, participants
must fully understand the scope of their participation and provide consent. Informed consent, as
highlighted by Glesne (2011), includes confirming participants are informed of and understand
three concepts: that their participation is voluntary, the potential impact of the research on their
well-being, and that their participation can be terminated at any time. The researcher
obtained informed consent using pre-interview communications and at the start of each
interview. During the interviews, an opening statement reminded participants that their
participation was voluntary and that removing their data from this research effort could occur at
any time.
Second, the confidentiality of data is essential to protecting participants when conducting
research, and thus, requires appropriate precautions (Rubin & Rubin, 2012). To protect the
confidentiality of participants, this study used pseudonyms for participants and organizations.
Additionally, identifying markers were removed. To further safeguard the confidentiality of
collected, stored data, participant names were coded against a master list and stored on a separate
device that was only accessible to the researcher. The gender of individual participants is also not
revealed when highlighting the interview statements in Chapter Four.
While the researcher is an active-duty lieutenant colonel, he is not currently in a
leadership or command role. Accordingly, and to limit any potential conflicts of interest, the
researcher did not have general military authority or military judicial authority over any
interviewees. Additionally, the participants did not include lieutenant colonels who currently
work with the researcher.
Limitations and Delimitations
Limitations impact the results of research and are beyond the control of a researcher.
Delimitations are choices by a researcher that define the boundaries of a study. Limitations of
this study included:
● Data collection was conducted within a short time frame, i.e., from May to June 2020.
Participation was dependent on personal and professional schedules during the initial
stages of the COVID-19 pandemic.
● The potential for untruthful answers from respondents during the interviews.
● Small sample limiting generalizability.
The delimitations included:
● Data collection was limited to the active Army component.
● Data collection was limited to Army officers and did not consider the majority of Army
personnel, i.e., civilians, soldiers, and NCOs.
● Data collection did not include company grade or flag officers.
● Data collection was limited to lieutenant colonels and excluded majors and colonels.
Chapter Four
Results and Findings
Offering an improved MSAF assessment within the Army was the focus of this study.
Chapter four presents the results and findings of this research related to the knowledge,
motivation, and organizational influences. As mentioned in the previous chapter, the Clark and
Estes (2008) gap analysis model guided the categorization of the influences. The Clark and Estes
framework can assess and potentially validate assumed influences based on the results of the
interviews. This chapter begins by highlighting the participating stakeholders and reviewing the
research questions, and it concludes with the results and findings.
Participating Stakeholders
Army lieutenant colonels were the stakeholder of focus for this study. Fourteen lieutenant
colonels participated in semi-structured interviews via a video application or the telephone. The
participants had all served in the Army for at least 14 years and three participants had over 20
years of service. The years of military service for all participants aligned with traditional Army
officer career timelines. Four of the fourteen participants were women (28.5%), while the
remaining ten were men (71.5%). The proportion of women participants in this study was 13.4%
higher than in the stakeholder group of 9,040 lieutenant colonels, in which women represent
15.1% (Active Duty Military Personnel by Rank/Grade (Women Only), 2020).
All of the participants had served in leadership positions such as unit commanders and
had also served in staff positions, i.e. managerial positions with less authority and responsibility.
Additionally, all participants had completed at least one tour or assignment in an active combat
zone. Eight of the 14 participants had served or were serving an overseas assignment in countries
such as Germany, Italy, or South Korea. Each participant acknowledged completing at least two
MSAF assessments within the last five years. Participants also indicated multiple uses of MSF
assessments, including the MSAF, before achieving the rank of lieutenant colonel and while
serving at their current rank of lieutenant colonel.
Each participant was assigned a unisex pseudonym that was not similar to their first
name. At the start of each interview, each participant provided a brief summary of their career.
No additional demographic information was collected during the interviews. Table 4
provides an overview of each participant based on the information they chose to share at the start
of each interview. The background and experience information offered by each participant was
different, and therefore the information in Table 4 is not uniform.
Table 4
Participant Pseudonyms, Background, and Experience
Participant
Pseudonyms
Background and Experience
Peyton - Infantry branch officer
- Nineteen years of service
- Multiple deployments to combat zones
Artemis - Engineer battalion commander in a cavalry regiment
- Recent assignment supporting a special operations command
- Multiple assignments in Europe
Avery - Aviation branch officer
- Previously assigned in Hawaii, Texas, and South Korea
- Two combat deployments in Iraq
Casey - Commissioned from Reserve Officer Training Corps (ROTC) university
program
- Recently served in the Army Headquarters
- Training battalion commander
Jay - Served as a staff intelligence officer for aviation, field artillery, and infantry
battalions
- Previous assignment as a RAND fellow
- Scheduled to transfer to Germany during the summer of 2021 to assume
battalion command
Jo - Signal branch officer
- Previously served as an ROTC university instructor
- Twenty years of service
Finley - Started their military career as a soldier in the National Guard
- Four combat tours in Afghanistan and Iraq
Sloan - Career spent in the special forces community
- Before joining the Army, the officer served as a Navy special forces officer
Salem - Eighteen years of service
- Combined arms battalion commander
- Previously an instructor at the United States Military Academy (USMA)
Jules - Initially a field artillery branch officer
- Later transitioned to the adjutant general corps
- Served in a military intelligence brigade in South Korea
Ash - Multiple leadership positions as a multifunctional logistics officer
- Served as an interagency fellow with the Department of Homeland Security
Kerry - Logistics branch officer
- Seventeen years of service
- Training battalion commander
Max - Started their military career in the Air Force and then transferred to the
Army as an armor branch officer
- Serving as the chief of plans for a corps-level unit
Rowan - Civil affairs branch officer
- Previous assignments in the special forces community
- Attended graduate school at the Naval Postgraduate School in California
Purpose of This Study and Research Questions
The purpose of this study was to analyze and examine the knowledge, motivation, and
organizational influences for the Army to offer an improved MSF assessment. The analysis
generated a list of assumed interfering influences that were systematically examined by
considering the following questions:
1. What are the knowledge and motivation influences related to having an improved
MSF assessment?
2. What is the interaction between knowledge and motivation and organizational
culture?
3. What are the recommended knowledge and skills, motivation, and organizational
solutions for having an improved MSF assessment?
Determination of Threshold Criteria
The KMO influences identified using the Clark and Estes (2008) gap analysis framework
required criteria to determine whether a finding was an asset or a need. After the 14 interviews
were reviewed and patterns were identified, the researcher determined whether the results
represented an asset or a gap. While the assumed influences differed in relative importance, the
researcher chose a consistent threshold of 10 of the 14 (71.4%) interview participants. Harding
(2013) stated that if a code applies to at least 25% of participants, it can be used as a subjective
standard. Therefore, if an assumed influence met or exceeded the 71.4% threshold, the influence
was confirmed as an asset. While the 71.4% threshold contains inherent subjectivity, the
researcher determined that it communicates a measure of consensus. Influences identified by 7 of
14 (50%) to 9 of 14 (64.3%) participants were deemed partially validated. The results are
grouped by KMO influence and, as applicable, include additional insights identified in the research.
Results and Findings for Research Question One (K and M Influences)
Research question one focused on the knowledge and motivation influences related to
having an improved MSF assessment. This section will highlight the findings for the knowledge
and motivation influences, respectively. The knowledge influences included knowing how to
reflect on performance, receive feedback, and incorporate feedback.
Results and Findings for Knowledge Influences
The first knowledge influence was reflecting on performance: the ability to examine past
behaviors to determine if they align with organizational expectations and norms. The majority of
the officers, 12 of 14 (85.7%), shared that they understood how to reflect during the feedback
process and that they engaged in such reflection. When considering MSF, Avery stated, “So it
was hard to chew on that sometimes. And get past the critique and how to make that piece better
by acknowledging what is accurate and something I need to improve on.” Jo noted, “Yes, some
of the comments highlighted a blind spot that I had and I really tried to work to fix that area.”
Jay highlighted that MSF allowed a review of their past performance: “These are my problems.
But it is helpful to see that, in writing…” The results confirmed that reflecting on performance
was important for understanding and utilizing MSF.
The majority of participants, 12 of 14 (85.7%), demonstrated the knowledge related to
receiving feedback when using an MSF assessment. Casey highlighted how the assessment
facilitated receiving feedback: “…you need to be quiet and listen more.” Jo shared how the
assessment confirmed that he could receive feedback: “…some of the comments highlighted a
blind spot that I had and I really tried to work to fix that area.” Artemis highlighted how an
assessment, and a later related in-person conversation with a subordinate, confirmed his ability to
receive feedback. After the sergeant explained the potential negative impact of Artemis’s sense of
humor, a conversation facilitated by the assessment, Artemis received the feedback and stated,
“I said, that’s totally fair, I understand that…”
Most participants indicated a clear understanding of the ability to incorporate feedback
into their actions as leaders, both from MSF assessments and more broadly: eleven of the 14
participants (78.6%) indicated that they were able to use feedback to improve their own
behaviors and to strengthen their leadership skills. When reviewing their reports, Avery learned
that “…my subordinates wanted more development and more time spent on, not like LPD
development. But helping them with their careers. Which is something I put more focus into.”
Several participants provided examples of how MSF was incorporated to adjust their future
behaviors. Peyton learned that “I was too overbearing at times. Through that and with interaction
with a lot of my subordinates, leaders, I became aware to have a more humble approach when
leading.” Peyton added later that the feedback “Enlighted me to some blindspots. As a younger
officer, I was extremely like a bull in a china shop. There was some feedback that showed me the
negative results that came from my behaviors.” Additionally, Jo noted, “….some of the
comments highlighted a blind spot that I had and I really tried to work to fix that area.” The
ability to incorporate feedback was confirmed as necessary and as a skill that can be positively
informed by MSF.
All three knowledge influences were identified as validated assets based on the
interviews. The majority of participants identified that having the knowledge to receive
feedback, incorporate feedback, and reflect on performance were necessary skills relating to the
use of MSF within the Army.
Results and Findings for Motivation Influences
The first research question focused on the knowledge and motivation influences related to
having an improved MSF assessment. This section includes the interview findings for the
motivation influences. The first influence, based on the expectancy-value theory, highlights that
lieutenant colonels need to understand the value of MSF. Goal orientation theory reinforces that
lieutenant colonels need to incorporate MSF to support their self-development.
The Value of MSF. All but one participant, 13 of 14 (92.9%), indicated that they valued
MSF and what such feedback provides. There were three main areas of value that were shared by
participants, including general utility, receiving input from a larger group of stakeholders, and
confirming self-perception.
Regarding the general value of MSF, participants remarked that it was useful for
uncovering critical areas for improvement and that, when taken seriously, MSF can positively
influence future leadership behaviors. Peyton, Sloan, and Rowan succinctly summarized the
value of MSF with statements such as, “I think it can be very enlightening,” “There is a lot of
value when taken seriously,” and “I think they [MSF] are extremely valuable.” Salem shared that
it can be particularly helpful by showing you how you are perceived by others: “…if leadership
is influence, then recognizing how you are perceived by those you lead and by your peers, seems
like it is worth having the conversation.” Sloan and Finley stated, respectively, “Being able to see
yourself is critical to leader development” and “…knowing yourself is so important to
understanding how to improve yourself.” Additionally, when considering the comments received
in previous MSAF assessments, Artemis shared that some comments resonated over time.
When describing the value of MSF, several participants linked the value to gaining
perspective on their actions and behaviors from a larger group. For example, Casey said MSF
was uniquely useful since it created the “… the opportunity to contact and gain perspective
across the spectrum of people you interact with and have interacted with.” Jay also stated,
“MSAF is valuable since it allows a multitude of different diverse groups to reflect on how they
see you.” Research question two will add additional insight as to why MSF from a larger group
of stakeholders was identified as important.
Participants also expressed value in MSF as a tool that confirms self-perception. Ash and
Casey described previous MSAF assessment reports as validation of their self-perception. Max
stated, “I think it was more the reinforcement of what I had already known.” Jay described MSF:
“I must admit that every time I get one of these back I am not surprised…I understand who I am.
A validation of who I am.”
Incorporating MSF. The second motivation influence was incorporating MSF within the
context of being goal-oriented. Thirteen of 14 participants (92.9%) confirmed the influence as
an asset that allowed them to work toward mastering their profession. When considering using
MSF to make adjustments to their leadership style, Sloan stated: “I think yes, there is not a way it
couldn’t.” Sloan continued, “…once you see yourself, it allows you to make corrections based on
that.” After expressing reservations about not always receiving quality feedback in MSAF
reports, Jo added, “I think as long as the people who take it, take it seriously and provide good
feedback. I think if the person is open to it they can actually gain some real insights and inform
self-development ways ahead.” When highlighting the challenges associated with incorporating
MSF, Avery noted, “…the hardest part is to accept the feedback and try to work with it when you
get it.” Peyton stated, “I think the more 360 you can get the better you can see yourself. But it’s a
two-edged sword. You have to be humble enough to listen to what is being said. Feedback that is
not received well is like pearls on a swine.”
Ten of 14 (71.4%) participants expressed concerns about the motivation drivers for
soldiers to follow through and take action or incorporate the lessons from their MSF
assessments. Their reservations were based on the lack of a requirement for a post-assessment
review after each MSAF assessment. The MSAF program does not require participants to review
their feedback reports after completing an assessment. Additionally, the Army does not require
the assessed soldier to share the results with a mentor or supervisor and a development plan is
not required. Peyton noted that since the MSAF is not tied directly to mentorship, feedback is not
always acted on: “Information undigested or undiscussed may or may not be integrated. From
my perspective, the aspect that was missing was the link to the mentorship, to make it effective.”
Ash agreed and stated, “Most often though, if you are left to interpret the results on your own, it
is not as useful. I think the times when I found it as an actual tool for growth was when my rater
sat down and talked to me about it.” The need to link MSF assessment results to performance
feedback between supervisors and soldiers was highlighted by Jules, “You can’t correct that self-
reflection or self-perception if that counseling doesn’t happen.” The absence of a post-
assessment accountability mechanism will be highlighted when reviewing the results and
findings for research question two.
Additional Insight Regarding Motivation Influences
Seeking MSF from Possible Detractors and Known Supporters. The interviews
identified one additional emergent insight regarding motivation: lieutenant colonels will seek
MSF from possible detractors and known supporters. Seven of 14 (50%) participants associated
the value of MSF with receiving feedback from subordinates, peers, and supervisors who may be
detractors or supporters. Artemis stated, “Noting that I was going to snag a few that either didn’t
know me very well or we had had negative contacts. I wasn’t going to exclude those folks.”
Peyton expressed, “It was one of those things, that a critic may provide better feedback than a
fan. So, finding those folks who were willing to be blunt was my going in position. I didn’t want
a bunch of cheerleaders.” Kerry provided a similar perspective, “If you use the multi-source
assessment tool and you actually seek out people that are not your number one supporters that
you will likely get candid and honest feedback.” Ash sought out subordinates upon whom the
officer had dispensed military justice as a unit commander: “So, when I was a company
commander I tried to make sure that I included, like a quarter of the people in my list who I had
had UCMJ or perhaps
another issue.” The insight confirmed that motivation to collect MSF from sources that could
potentially provide positive and negative insights was a validated motivation influence.
Summary of Validated Influences (Research Question One)
The interviews confirmed that the assumed knowledge and motivation influences were
identified as validated assets. The research also identified one additional motivation influence
related to seeking MSF from possible detractors and known supporters. In sum, the participants
validated that the knowledge and motivation influences, highlighted in Table 5, are relevant to the
Army offering an improved MSAF assessment.
Table 5
Validated Knowledge and Motivation Influences
Knowledge influences
- Reflecting on performance
- How to receive feedback
- How to incorporate feedback
Motivation influences
- Understanding the value of MSF
- Incorporating MSF
Results and Findings for Research Question Two (Interaction of K and M and O)
Research question two focused on the interaction of the knowledge and motivation
influences with organizational influences related to having an improved MSF assessment. The
interviews focused on two organizational culture influences. The first influence, a cultural
model, is that reflection supports leader development. Cultural models are the shared
perspectives of how an organization or larger community works or should work and are often
invisible and generally accepted (Gallimore & Goldenberg, 2001). The second influence is a
cultural setting that can be observed and potentially measured (Rueda, 2011).
Reflection Supporting Leader Development
The interviews identified that the links between the knowledge and motivation influences
and the Army’s culture are primarily driven by the performance evaluation system. Army
performance evaluations have a substantial impact on future promotions and selection for
competitive assignments. Therefore, positive evaluations are critical for an officer’s long-term
success within the Army. The emergent observation was the limited availability of performance
feedback from personnel outside of the performance evaluation structure.
Receiving Feedback From Outside the Evaluation System. The primary observation
from participants regarding reflection is the ability of an MSF assessment to provide feedback
from personnel outside of their formal evaluation hierarchy. The current Army performance
evaluation system most often only requires feedback from the immediate supervisor on
performance and the next higher supervisor on potential. For the majority of performance
evaluations, this top-down system only incorporates the perceptions and feedback of two Army
leaders. Ten of 14 (71.4%) participants identified that MSF offers a unique and useful
opportunity to receive feedback on their impact on subordinates, peers, and supervisors. Max
stated, “The wide feedback from the MSAF is valuable in which it allows a multitude of
different diverse groups to reflect on how they see you. Sometimes that is not always apparent if
you don’t have a system that forces that feedback.” Casey indicated that “…being able to reach
above, below, and to subordinates to provide input and have seniors and peers [providing
feedback] is pretty significant.” When discussing the MSAF, participants highlighted their
strategy to select subordinates, peers, and supervisors. For example, Jay shared, “So I tried to
pick as many of the clear direct reports that I had. As a company commander, I was looking for
the senior warrant officers, platoon leaders, and platoon sergeants that I was interacting with on a
daily basis.”
When discussing receiving MSF from personnel who are not their supervisors, 9 of 14
(64.3%) participants emphasized the value of feedback from subordinates. When comparing
feedback from peers and superiors to subordinates Casey stated, “Subordinates seem to be the
most informed and seemed to give the most legitimate input.” Casey later added, “I found it
useful since you could gain general impressions based on the three categories. Subordinates
seemed to have a more open approach to what they thought about the questions and what they
wrote in the comments.” Ash shared, “I can’t think of another way for us to have our
subordinates to have a voice in our development than if we do some kind of survey like this.”
Additionally, Sloan expressed a similar perspective, “It’s also an opportunity for subordinates to
vote on the effectiveness of their leader as well as peers and seniors.” The interviews identified
that MSF offers Army leaders a rare opportunity to receive feedback on their performance from
subordinates. This is especially relevant since the Army, like other military organizations, is a
top-down hierarchical organization in which officers, such as the stakeholder group, are empowered
with significant authority over their subordinates.
Lack of Quarterly Counseling on Performance. The Army requires officers to conduct
and also receive quarterly written counseling sessions during a performance evaluation period
(U.S. Department of the Army, 2019). Accordingly, the stakeholder group, all with at least 14
years of Army experience, is familiar with receiving feedback on their performance. However,
5 of 14 (35.7%) participants indicated that performance counseling sessions rarely happened as
expected during their career. Jules noted, “The required counseling within the Army is already
non-existent. In 20 years, I’ve had an actual counseling session with my supervisors or senior
rater, less than the number of fingers on this hand.” The sentiment was echoed by Rowan, “I
haven’t been counseled outside of being talking to about my OER in the last five, six, seven
years.” Participants concluded that MSF provides a rare opportunity to receive feedback on their
performance, especially when they do not receive feedback on performance as expected from
routine counseling.
The interviews highlighted that reflecting on leader development is a validated gap due to
limited opportunities to receive feedback from outside the performance evaluation hierarchy.
Additionally, the interviews highlighted a collective failure to follow established performance
counseling procedures.
Offering an Improved MSAF Assessment
Since 2008, the primary tool for MSF within the Army has been the MSAF assessment. All
participants had served in the Army for at least 14 years and were familiar with using the MSAF
assessment and its utility and issues. Eleven of the 14 (78.6%) participants held a negative
perception of the MSAF assessment. The interviews identified eight issues that can inform an
improved MSF assessment. The assessment issues, listed in Table 6, fall into three categories:
structural, utilization, and post-assessment.
Table 6
Issues Identified by Participants
Structural
- Selection Bias
- Instrument Design
- Lack of Vertical Development
Utilization
- Low Response Rates
- Linking MSAF Dates to Evaluations
- Length and Time Commitments
Post-Assessment
- Absence of an Impact
- Absence of a Post-Assessment Action
Selection Bias. A significant concern expressed by participants is how individual
subordinates, peers, and supervisors are selected for each assessment. When each assessment is
initiated, the assessed individual selects the potential raters, by name, to be invited to provide
feedback for all three source categories. Ten of 14 (71.4%) participants expressed concerns about
selection bias with the MSAF assessment. Peyton stated, “Of course, because you choose who
replies to it, you can shape your feedback based on the type of people that you pick to send the
thing to.” Casey highlighted their concerns, “….because the individual gets to select who they
apply for the questioning. And because of that a lot of times, people will not challenge
individuals that may have not seen eye-to-eye on or have had relationships. So, we generally
hunt within the social groups that we know and are comfortable with. Generally, the feedback is
pretty biased in that capacity.” When highlighting the utility of the MSAF, Ash stated, “If you
stay comfortable and only choose people who you know will give you good feedback, then I
don’t think that it is.” The officer also added, “I think the way we have used it in the Army where
we have allowed people to choose their own evaluators, for lack of a better word, does skew
your results pretty badly.”
Instrument Design. The MSAF includes two sections: the first with multiple, connected
web pages that solicit feedback using Likert scale questions and a final webpage seeking free-
form written comments. Nine of 14 (64.3%) participants expressed concerns related to the design,
including a particularly negative perception of the Likert scale section. Avery stated, “I think that
due to the repetitive nature of the questions some people get survey fatigue when they fill it out.”
Jay stated, “While it has been a while since I’ve done an MSAF, but I remember then taking
closer to 30 minutes to fill out the form.” When discussing the questions, Jo specified, “There are
way too many questions that people just click through it for what they think someone wanted to
hear.”
When discussing the MSAF design, 8 of 14 (57.1%) participants assigned more value to
the free-form comments than to the Likert scale data. Jo stated,
“I’ve always taken the comments as the most valuable part.” Jo continued, “The comments are
where you got the actual development from if you were to take it from that.” Artemis described
their method of reviewing each MSAF as, “I would always flip to the comments first, then go
back and maybe review the numbers a little bit.” Avery stated, “So for the Army version, the only
real benefit I got out of it was the comment portions. The comments were very useful.” Ash
compared the value of the two portions when the officer stated, “I really think the narrative
portions were more useful to me than the bar charts, and all of that. Those are nifty, but I think if
you are looking, particularly for those data sets where you have a particularly small respondents,
those bar charts get pretty skewed pretty quickly. If you focus in on the narrative part then I think
you are more likely to get what you need to work on.”
Lack of Vertical Development. The MSAF assessment uses the same set of questions
and structure across all officer ranks, also known as grades. Eight of 14 (57.1%) participants
confirmed that the lack of vertical development does not acknowledge the expected evolution of
leadership and management skills for officers, or the significant responsibility disparities
between junior and senior officers. When comparing the difference in expectations for junior and
senior officers, Peyton stated, “While there is a management aspect across all of them, when it
comes to the leader attributes that you are expecting from those two populations. It may even
need to be specified to the grade, but also to the job or specialty.” When comparing the different
expectations for commanding units at lower versus higher levels, Max noted, “…at the upper
levels of command, whether it’s battalion or brigade or higher, it’s almost more about
organizational leadership versus personal leadership.” Casey noted that the OER templates are
different for the three officer groups and that the MSAF assessment should mirror the evaluation
system. They stated that the MSAF assessment “should not be the same, just like our evaluations
have changed for the levels. It should fit the context.”
Low Response Rates. A significant issue identified by 11 of 14 (78.6%) participants was
routine and recurring low response rates when requesting an assessment. As mentioned, after an
MSAF is initiated, those selected to provide feedback receive an email that includes a link to
complete the online assessment. Once the deadline for the assessment passes, the assessed
individual can then access the post-assessment report. Additionally, individual or unit-level
response rates for completing MSAF assessments, as the assessed individual or as a feedback
provider, are not available to unit commanders. Moreover, it is not mandatory to respond to a
request for feedback using the MSAF assessment. Accordingly, participants indicated that the
frequent low response rates diminished the utility of previous MSAF assessments. Finley stated,
“…in general on a yearly basis, you send it out to nine people you might get feedback from
three. It is kind of limited.” Ash added, “But, I don’t think there are many recently where I have
heard people get enough responses to actually have worthwhile results. When you have four
people that respond, it is just not good.” Ash continued, “…there was a few where I didn’t get
enough results, so I didn’t really bother. For the ones where I thought I got enough, good
information, I do read all of them.”
Additionally, when selecting potential raters, several participants indicated that they
routinely exceeded the minimums for each source category to ensure that enough potential raters
completed the assessment. Their over-selection in each source category was based on their
previous negative experiences in which MSAF assessments did not meet the threshold for each
source category, resulting in an incomplete assessment without meaningful feedback.
For example, Max stated, “I generally exceeded the supervisor and peer categories because you
typically don’t get much feedback.” When highlighting the low MSAF response rates, several
participants also highlighted that the supervisors they selected to provide feedback often did not
provide feedback. Casey shared, “And seniors, I saw, were the worst at actually replying.”
Rowan highlighted the impact when the officer said, “I didn’t necessarily pick senior people that
I thought could give me useful feedback, i.e., like my brigade commander or like the deputy
commander cause I knew they were super busy and wouldn’t do it anyway.”
Linking MSAF Dates to Evaluations. From 2011 to 2018, Army officers were required
to add the date of a completed MSAF assessment to the documents that informed their evaluation
report, specifically the support form on which the officer listed their achievements and that is
provided to their supervisor. The date of the completed assessment was required to have been
within the past three years. However, there was no accountability mechanism to confirm that the
assessed officer started, completed, or even reviewed the results on an MSAF. Accordingly,
reports from the Center for the Army Profession and Leadership (CAPL) and participants
identified that the link to the evaluation was artificial with no connection between the
developmental assessment and performance evaluation reports. This link may have negatively
impacted motivation to utilize the MSAF assessment as a developmental tool (Cavanaugh,
Fallesen, Jones, & Riley, 2016; Fallesen et al., 2015). The majority of participants, 11 of 14
(78.6%), found linking the date of a purported assessment to an evaluation detrimental to
the effectiveness of the MSAF. Salem stated, “I think that the evaluation required you to initiate
one, but not necessarily to complete it or review it.” When describing the issue, Rowan stated, “I
think a lot of the push back with the MSAF 360 was the mandatory entry in the OER form.”
Finley added a similar perspective when they stated, “…then as I became, I guess with the link to
the OER, I became disenfranchised.”
Alternatively, after the mandatory requirement linking the MSAF date to the evaluation
was rescinded in 2018, participants reported a decline in their motivation to continue using the
MSAF. For example, Salem stated, “I stopped initiating the survey when the
Army stopped making me initiate for evaluation.” Describing the link between the assessment
and evaluations, Peyton stated, “We screwed up when we linked this to evaluation reports.
Because the moment it is linked to the evaluation report, it’s now about the report card and no
longer a development tool.”
Length and Time Commitments. The length of the assessment and the amount of time
required to complete the MSAF were identified as issues. Ten of 14 (71.4%) participants
expressed concerns that the assessment was too long and took too much time to complete. The
participants negatively associated the excessive length and time commitment of accessing and
using the website with the low response rates and their negative perception of the MSAF.
For example, Ash stated, “I don’t think it is seen as an overwhelmingly positive thing. I think it
is seen as a pain. Because it is so long.” Similarly, Salem shared, “…my perception, that others
like me didn’t take the tool seriously, which then gives me less confidence that the feedback is
accurate. That people aren’t just clicking the buttons to get it over with.” Rowan recommended,
“…making the MSAF more concise, so it doesn’t take as long.” Jo expressed a similar
perspective, “…the MSAF, what did it have, eight pages, of clicks the things. Way too long.”
When highlighting the Likert scale portion, Finley noted, “There are way too many questions
that people just click through it for what they think someone wanted to hear.”
Absence of an Impact. Eleven of 14 (78.6%) participants identified that the MSAF’s lack
of impact, even as a self-developmental tool, was an issue. Jules highlighted that “…there is no
teeth to the assessment itself to make people process the responses that people may have taken
the time to provide. And there is no teeth to require a change in that person’s leadership style if
they need it. There are no consequences.” Salem stated that since the assessment “wasn’t
required to be used during counseling; for example, people just tried to get it over with, check
the block and move on.” Salem continued, “For example, if the survey somehow had some
impact on the evaluation or at least if it was used, it doesn’t seem to be that difficult to require
that during counseling leaders used MSAF feedback as part of that. And that would completely
change if there was a way to compel people to do that.”
Regarding impact, some participants contrasted their experiences with the MSAF with a
new MSF assessment recently implemented that informs how the Army selects battalion
commanders. The Battalion Commander Assessment Program (BCAP) was briefly described in
chapter two. The BCAP MSF assessment was first implemented in early 2020 and nine
participants had experience with the new tool. According to those participants, the BCAP MSF
assessment is different from the MSAF in four important aspects: the assessed individual does
not select potential raters, it is administered to a small population of lieutenant colonels, the
assessment is shorter with more direct questions, and the assessment has an impact on how
future leaders are selected. For example, Jay stated, “I do think the benefit of that over the MSAF
was the selection process because it was randomized.” Rowan stated, “I have done a few BCAP
assessments on others. I did all of the BCAP assessments I was asked of me. And what
motivated me was knowing that it was going toward my peers that could be future battalion
commanders. So, if I saw someone who I thought, in my opinion, did not deserve to be a future
battalion commander, then I absolutely wanted the board to have that feedback. Conversely, if I
saw someone who should absolutely be a battalion commander, I wanted that feedback to be
presented to the board. The idea that we are influencing who could be battalion commanders is
important to me. I did not feel the same way about the MSAF.” When discussing the value of a
shorter MSF assessment, Jo noted, “I think the BCAP’s version with a lot less questions. I
thought that showed some promise since you could actually just take your time.” The direct nature
of the BCAP MSF questions was mentioned by Kerry: “And they had the occasional box where
you could type in, you know, and give an example. Even one question said, is this person a toxic
leader? Yes or no? I was like they are not messing around with that one.” The same participant
expanded on the utility of the direct BCAP MSF questions, “…I did get to do the BCAP
assessment for six others last year. And those are brilliant. Those are really good. They are not
pulling any punches. They are very direct. They are very specific. And it was targeted to
commanders. So, if you are gonna use the MSAF, then it should be in the same concept.” While
the MSAF and the BCAP MSF assessments are different in scale and purpose, participants used
their experiences with both to highlight their perceived concerns and recommendations to
improve the MSAF assessment.
Absence of a Post-Assessment Action. The final issue, highlighted by 10 of 14 (71.4%)
participants, was the absence of a post-assessment action by the assessed soldier. Potential
examples of actions include creating a development plan or reviewing the results with a mentor.
Peyton stated, “From my perspective, the aspect that was missing was the link to the mentorship,
to make it effective.” They also added, “Unless there is a development or engagement plan that is
developed out of that. It takes an exceptional person to take abstract information and turn it into
their own implementation plan, without having someone who can assist like a mentor or guide.”
Jules expressed the need for post-assessment counseling as such, “You can’t correct that self-
reflection or self-perception if that counseling doesn’t happen.” Ash highlighted the value of
post-assessment mentorship when they shared, “I do think it is a useful self-development tool.
Most often though, if you are left to interpret the results on your own it is not as useful. I think
the times when I found it as an actual tool for growth was when my rater sat down and talked to
me about it.”
The MSAF program offers post-assessment mentorship using the Virtual Improvement
Center (VIC). However, 8 of 14 (57.1%) participants indicated they were not aware that the
MSAF program offered post-assessment services. Max stated, “I was not. I didn’t know they
existed, nor did I use them” and Jo added a similar statement, “I don’t think I knew they existed
and I never used them.” The sole participant who had used the post-assessment services was
disappointed with the results. Casey described his experience, “I attempted to, and I was very
disappointed. I reached out, I was interested when I saw they had additional stuff. So I set up an
appointment to meet with one of the counselors or mentors that they have identified. Basically, it
turned into a book suggestion…. I wasn’t sure what the mentorship would provide and I was
disappointed by the engagement.” In sum, participants identified mentorship after an assessment,
from either an internal or external organization, as a method to improve the value of the assessment.
Summary of Validated Influences (Research Question Two)
Research question two focused on the interaction of the knowledge and motivation
influences with organizational influences related to an improved MSF assessment. Participants
highlighted two concerns within the Army’s cultural model that limit leader development: the
absence of performance feedback outside of the evaluation hierarchy and the lack of routine
performance counseling. Participants identified eight issues that are relevant to the Army
offering an improved MSF assessment. The influences associated with the Army’s cultural
model and setting were validated. Research question three will address recommended solutions
to facilitate an improved assessment based on validated influences.
Results and Findings for Research Question Three (KMO to Improve the MSAF
Assessment)
The participants overwhelmingly supported the Army having an MSF assessment (12 of
14; 85.7%). Responses from Avery, Kerry, and Jo included: “Yes, I think anyone could benefit,”
“Absolutely, I think yes,” and “I think yes. It has the ability to add good feedback.” Additionally,
participants encouraged the Army to offer an MSF assessment to all personnel, regardless of
rank or position. For example, Avery shared, “I would love to see junior NCOs go through an
MSAF process.” Sloan added a similar perspective when they stated, “Yes, I think every soldier
should have access to one of these tools.”
Additionally, 7 of 14 (50%) participants linked the MSF to an organizational effort to
reduce the possibility of harmful leaders reaching positions of substantial authority. Ash stated,
“We have gone an awful long way towards eliminating the kind of toxic or undesirable
leadership traits by helping people understand what they are doing wrong.” Kerry added, “I think
some of this toxic leader needs to be addressed and fixed earlier in careers. You can fix it in a
captain, but not for a lieutenant colonel.” Artemis highlighted support for MSF assessments
when the officer stated, “If you look at poor leaders, and I am specifically avoiding using the
word toxic, the impact is often not clearly seen at the higher echelons. So, some type of feedback
could be useful.” When considering how MSF can impact harmful leaders Rowan added,
“Getting rid of toxic leaders. If we really want to do that then we have to take these assessments
seriously.”
As noted previously, participants identified eight issues in three categories: structural,
utilization, and post-assessment. The next section reviews the participants’ recommendations,
when offered, that aim to mitigate the identified issues.
Selection Bias
Participants identified that selection bias negatively impacted the utility and value of
MSF when using the MSAF. To mitigate selection bias, participants recommended that the
selection of potential raters not be limited to the assessed soldier. Potential raters would be selected
from personnel who previously served with the assessed officer based on information from the
Army’s human resources databases. For example, Casey recommended: “find a way to
randomize the selections. We all know what units we were in, somehow, we know the years we
served. If there was a way to pull that data and figure out that timeline of your life.” Finley
recommended, “A way to make it more automatic with sending out the information based on the
unit you were in. Automatically send it to people that were in your organization without you
having to pick and choose.”
Instrument Design
The solutions offered included reducing the number of Likert scale questions, offering
additional opportunities for free-form comments, and making the questions shorter and more
direct. Jo recommended, “…trim the questions down to a one page of questions. A little bit
more fill in the blank questions that highlight better areas.” When elaborating on the value of the
comments section, one participant stated, “I think I would make it a shorter survey. With less of
that stuff on a scale and more of the stuff on a narrative associated with it. Because people value
personalized feedback and it is hard to get personalized feedback when you are just clicking
buttons.” Max recommended questions similar to the BCAP MSF assessment, “Can this person
build a team? Yes or no? Does this person hold a grudge? Yes or no?” The officer continued
“They are not pulling any punches. They are very direct. They are very specific.” Peyton
recommended an assessment with, “…drop-down menu of descriptive phases about the
questions. So, the question might be: How does this person lead in this type of situation? And
then they would give you a range of choices that were a lot more descriptive than just doing
numbers or I strongly agree.” The solutions offered by participants are linked to the concerns
about the length and time required to complete an assessment.
Lack of Vertical Development
The recommended solution was evolving the assessment as officers increase in seniority.
For example, Rowan shared, “I think the feedback that you need at the captain and below, at the
tactical level is different than what you need at the field grade level. And is certainly different
than what a general officer needs.” Jo stated, “I think it could be tailored. Company grade, even
lieutenant level and captain. Major and lieutenant colonel could probably be grouped together.”
The templates used for officer evaluations are slightly different for company grade, majors and
lieutenant colonels, and colonels and flag officers. Similarly, Casey stated the MSAF questions,
“…should not be the same, just like our evaluations have changed for the levels. It should fit the
context.” Table 7 summarizes participant recommendations for structural issues.
Table 7
Participant Structural Recommendations
Issue Recommendations
Selection Bias - Use human resource system to select potential raters
Instrument Design - A shorter instrument with more concise, direct questions
Lack of Vertical Development - Align instrument with officer evaluation templates
Low Response Rates
The participants did not offer direct recommendations to improve low response rates. The
solutions offered to improve response rates were indirect and linked to other identified issues,
such as reducing the length of the instrument and the time commitment.
Linking MSAF Date with Evaluations
In 2018 the requirement to add the date of a completed MSAF assessment to the
documents that informed OERs was rescinded. Therefore, this issue highlighted by participants
no longer requires attention. However, when discussing the MSAF date link to evaluations, the
participants also highlighted that the assessment does not need to influence performance
evaluations. Noting that the MSAF assessment has always been an exclusively developmental
tool, none of the participants argued that the MSAF needs to formally influence evaluations.
Salem stated, “…if you made MSAF feedback part of an evaluation, I think that is probably a big
leap for the Army to take right now.” Alternatively, participants did encourage that the results
from MSAF assessments be shared with the assessed individual’s supervisor to help
inform performance counseling without a direct link to evaluations. In sum, this research
identified that the MSAF assessment does not need to be linked to the performance evaluation
process. This consideration is discussed in an upcoming paragraph relating to the impact of
MSAF assessments.
Length and Time Required
Participants highlighted that the MSAF assessment was too long and required too much
time to complete. Participants recommended a shorter instrument, with fewer webpages, that
required less time to complete. Rowan recommended changing the MSAF assessment, “So, it
only takes 15 or 20 minutes to do one. Maybe they could get it down to one or two pages of
questions. Maybe a ten minute period. Then maybe people would be motivated to do them.” Ash
indicated, “I think I would make it a shorter survey. With less of that stuff on a scale and
more of the stuff on a narrative associated with it.”
In addition to the length and time concerns, four participants also offered additional
recommendations to improve how users interact with the MSAF webpage. Setting up an
assessment or providing feedback on another assessment currently requires accessing an Army
website using an Army Common Access Card (CAC). Using a CAC for the MSAF assessment
most often requires using an Army computer system or a personal computer with a CAC reader
and associated software. Jay indicated that the MSAF website was difficult to use, “…it was so
time-consuming to find the people and add them in the different categories.” Sloan highlighted the
issues with accessing the MSAF website when deployed, “So, and then you know you could get
into the hard science part of connectivity issues downrange, that’s a big part. Back in the early
days, you had to do all of this stuff on Army systems, and so you had to be on Army internet
instead of dirty internet.” Ash also highlighted challenges during deployments, “So I know I had
a couple of those where I know I didn’t get very good feedback because we were in a deployed
environment and just didn’t have enough access.” Ash also recommended removing the
requirement to access the MSAF assessment using a CAC card when highlighting a similar
safety survey, “They didn’t need a CAC card, they didn’t need all of that. They could sit there
and do it right there where they were sitting.” Table 8 summarizes the participant
recommendations for utilization.
Table 8
Participant Utilization Recommendations
Issue Recommendations
Low response rates - No direct recommendation
Linking MSAF date to evaluations - Rescinded in 2018, no action required
Length and time commitments - A shorter instrument
- Allow access to provide feedback without a CAC
Absence of an Impact
The MSAF assessment is exclusively a developmental tool and does not inform performance
evaluations. Therefore, the assessed individual fully controls the impact and use, if any, of the
post-assessment report. Again, participants did not recommend that MSAF assessments inform
evaluations; however, 9 of 14 (64.3%) recommended that post-assessment reports be shared
with the supervisors of assessed individuals. Avery stated, “I think the supervisor has to be
involved. They are great for personal self-reflection, but not everybody is going to want to do
that or take it on themselves. I think it is; also, it would be a great tool for supervisors to see
another side of the people they rate.” Salem added, “I would require that raters and senior raters
make use of it during counseling” and also stated, “…if you just asked for bosses to have
conversations about the feedback. That seems within reach.”
Beyond the impact for the assessed individual and his or her supervisor, participants also
highlighted the potential positive impact if unit commanders had access to MSAF reports for
those they directly supervise. Avery stated, “…being in a command position; I would love to see
some sort of MSAF type report on some of my subordinates. To confirm or deny my assessment.
I wish there was a better tool available that I could use…” Casey described how unit
commanders could use the assessments to improve their unit leader development programs:
“Some way to compile that information and share it with the leadership you could have
commanders develop leader development programs, topics, stuff, for areas, just like command
climate surveys. I think that would help. A targeted program.” Artemis indicated support for this
solution even though sharing MSAF reports with supervisors may create some concerns. They
said, “I think the fear would be, they are raters and senior raters going to see this feedback and
use it as insight into an evaluation report. But I’m not sure that fear is something that we want to
shy away from.”
Absence of Post-Assessment Action
Participants expressed concern that the MSAF assessment lacks a requirement to review
the post-assessment report or to take action based on the results. Solutions offered included
requiring a review of the report with a supervisor, as mentioned earlier. Other solutions offered
were requiring personalized coaching with a mentor after an assessment. Peyton recommended a
required discussion with a mentor that resulted in a development plan: “The linkage I see is a
tailored MSAF tailored by grade and branch that produces a plan that then is linked and
discussed with a mentor.” When reflecting on the experiences with the BCAP MSF assessment
Max stated, “I think what was of value was that you had an assigned individual who was able to
take you through what that feedback was. It allowed you the chance to talk about it and reflect on
it. And I think that in itself was immensely valuable.”
Several participants recommended using mentors outside of the chain of command or
evaluation hierarchy. Jay noted, “You could contract it out, you could use volunteers. There are
ways to get that mentorship and individualized feedback.” Jay encouraged external coaches,
beyond the officer’s chain of command, “Personalized, individual coaching were effective.
Certainly, possible with the MSAF results and a personal mentor, but it’s also helpful to get the
email from talent management asking: Do you want a coach? And then they assign one. Then
you aren’t left with how you start the discussion with a mentor, who may or not have time for
me.” Avery, also a recent participant in the BCAP MSF assessment, indicated that using
experienced external mentors provided better value since they, “…had also seen so many of
these surveys so he could quickly identify deviance.” Participants argued that the MSAF
assessment requires an impact and a post-assessment review of the report to increase its utility.
Therefore, participants recommended that supervisors have access to each post-assessment report
and that the immediate supervisor can review the results with the rated soldier. The participants
also recommended the use of external mentors to improve post-assessment report efforts. This
option already exists using the MSAF program’s VIC. The recommendations from participants
are summarized in Table 9.
Table 9
Participant Post-Assessment Recommendations
Issue Recommendations
Absence of an impact - Direct the review of post-assessment reports with the immediate supervisor
Absence of post-assessment action - Direct the review of post-assessment reports with the immediate supervisor
- Use external mentors outside the unit chain of command
Summary of the KMO to Improve the MSAF Assessment (Research Question Three)
Research question three focused on the solutions to inform an improved MSAF
assessment. The participants supported the Army having an MSF assessment and, importantly,
an improved assessment with solutions related to its structure, utilization, and post-assessment
actions.
Summary
This chapter presented the findings of the qualitative data analysis highlighted in chapter
3. All five knowledge and motivation influences were validated, and one emergent additional
motivation influence was identified. The two organizational influences were also validated with
multiple sub-themes for each influence. Eight issues for the assessment were identified, and the
interviews offered solutions to the majority of the issues. Chapter five will offer
recommendations to strengthen the current MSAF assessment. The chapter will also present an
implementation plan and suggestions for future research.
Chapter Five
Recommendations
Review of Research Questions and KMO Influences
The purpose of this study was to analyze and examine the knowledge, motivation, and
organizational influences for the Army to offer an improved MSAF assessment. The stakeholder
group of focus was Army lieutenant colonels. The analysis generated a list of assumed influences
that were systematically examined using the following questions:
1. What are the knowledge and motivation influences related to having an improved
MSF assessment?
2. What is the interaction between knowledge and motivation and organizational
culture?
3. What are the recommended knowledge and skills, motivation, and organizational
solutions for having an improved MSF assessment?
This research was based on the framework provided by Clark and Estes (2008), which
focuses on the knowledge, motivation, and organizational influences that contribute to
accomplishing goals. Knowledge consists of the skills required for effective performance.
Motivation drives individuals to start and complete tasks and is based on an individual’s belief in
their abilities (Clark & Estes, 2008; Krathwohl, 2002; Mayer, 2011; Rueda, 2011).
Organizational influences, such as structure, procedures, and available resources, impact the
ability of stakeholders to accomplish goals.
The knowledge influences were knowing how to reflect on performance, how to receive
feedback, and how to incorporate feedback. The motivation influences were understanding the
value of MSF and the desire to incorporate MSF. Finally, the organizational influences were
reflection supporting leadership development and accountability to offer an effective MSF
assessment.
Summary of Validated Influences (Research Question One)
All three knowledge influences were identified as validated assets based on the
interviews. The majority of participants identified that knowing how to reflect on performance,
how to receive feedback, and how to incorporate feedback are necessary skills relating to
the use of MSF within the Army. Both assumed motivation influences, seeking and incorporating
MSF, were identified as validated assets. The research also identified one additional motivation
influence related to seeking MSF from possible detractors and known supporters.
Summary of Interaction Between the K & M & O (Research Question Two)
Research question two focused on the interaction of the knowledge and motivation
influences with organizational influences related to an improved MSAF assessment. The
research identified two issues within the Army’s cultural model as validated gaps: the absence of
performance feedback from beyond the evaluation hierarchy and the lack of routine performance
counseling. The second organizational influence, accountability to offer an effective MSF
assessment, was also validated as a gap. Additionally, when considering the second
organizational influence, participants identified eight issues that inform an improved MSAF
assessment: selection bias, instrument design, lack of vertical development, low response rates,
linking MSAF date to evaluations, length and time commitments, absence of an impact, and an
absence of post-assessment action.
Summary of the KMO to Improve the MSAF Assessment (Research Question Three)
Research question three focused on the solutions to inform an improved MSAF
assessment. The participants supported the Army having an MSF assessment and, importantly,
an improved assessment. They offered solutions regarding the structure, utilization, and the post-
assessment requirements of the current MSAF assessment to improve its utility. The next section
will describe the recommendations to strengthen the current MSAF assessment based on this
study and supporting research.
Recommendations to Improve the MSAF Assessment
The MSAF assessment is used within the Army to solicit and employ MSF and is
available to the entire Army community. The assessment is only used for self-development and
concludes with an individual feedback report that is only available to the assessed soldier. The
report contains average ratings from subordinates, peers, and superiors and free-form comments
that describe the rated individual’s strengths and developmental needs. The assessed individual is
not required to take any further action after an assessment is completed, and the results are not
shared.
This research confirmed that the stakeholder group has the knowledge and motivation to
understand and use MSF. Therefore, the recommendations are focused on how the Army can
strengthen the MSAF assessment. The implementation plan and opportunities for future research
will be highlighted in the closing paragraphs.
Structural Issues
Selection Bias. This study identified a structural issue regarding how individuals are
selected to participate in each MSAF assessment. When an assessment is initiated, the assessed
soldier selects, by name, potential raters for all three source categories. As MSF scholars
highlight, the selection methods used by the MSAF assessment likely negatively impact the
accuracy and fairness of ratings (Gilliland & Langdon, 1998; Waldman & Atwater, 2001). Users
requesting an assessment are influenced by their conscious and unconscious biases when
selecting the subordinates, peers, and supervisors who provide MSF. Farr and Newman (2001)
highlight that a rater assessment system should seek to provide a balanced perspective,
both disagreement and agreement, from various sources not limited by the rated employee.
Accordingly, selecting participants for each MSAF assessment is best informed by the assessed
soldier and a separate unbiased source. Research indicates that including the rated employee in
the rater selection process likely does not negatively influence results and also transfers
ownership to the ratee (Bracken & Rose, 2011; Farr & Newman, 2001; Nieman-Gonder et al.,
2006). Therefore, the improved assessment needs to leverage the existing Army human resources
database to select a majority of the assessment participants. The rated soldier would select the
remaining potential raters not selected by the database.
The current Department of Defense database associates individual soldiers with a unit
level Unit Identification Code (UIC) while assigned to a small unit such as a company or battalion.
When a soldier initiates an assessment, a program will identify the current and previous UICs to
which the soldier is and was assigned. The request for raters will search 3-5 years in the past
since the average Army assignment is 2-3 years in length. The resulting personnel from all three
source categories, subordinates, peers, and supervisors, will automatically receive the request to
complete an MSAF assessment. While this change may result in the program automatically
selecting potential raters that did not have sufficient opportunities to assess the soldier, two
additional changes will support success. First, the assessment will allow a potential rater to opt
out of an assessment before inputting feedback if there was a lack of previous engagement with the
rated soldier. Second, the program would select 3-5 times the number of minimum raters for
each source category to ensure that more assessments exceed the minimum requirements.
Scholars and industry research maintain that MSF assessments that do not reach enough
respondents to ensure the feedback accurately reflects the assessed individual’s behaviors are not
valuable (Nowack & Mashihi, 2012; 3D Group, 2020). The mix of randomized selection and
selection from the rated individual would reduce selection bias from the assessed soldier.
Moreover, this change will support the partially validated influence of seeking feedback from
more stakeholders, including potential detractors and known supporters. Additionally, reducing
the requirement to select all potential raters, by name, will also reduce the time to start an
assessment.
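The database-driven selection logic described above can be sketched in code. The following is a minimal, hypothetical illustration only: the DemoHRDatabase class, the select_raters function, the five-year lookback, and the oversampling factor are assumptions drawn from this discussion, not features of any existing Army system.

```python
import random
from datetime import date, timedelta

class DemoHRDatabase:
    """Tiny in-memory stand-in for the HR/UIC database (hypothetical)."""
    def __init__(self, assignments, rosters):
        self._assignments = assignments   # soldier_id -> [(uic, start_date, end_date)]
        self._rosters = rosters           # uic -> {source category: [soldier_ids]}

    def assignments(self, soldier_id):
        return self._assignments.get(soldier_id, [])

    def personnel_in(self, uic):
        return self._rosters.get(uic, {})

def select_raters(soldier_id, hr_db, min_per_source=5, oversample=3, years_back=5):
    """Pick candidate raters from units the soldier served in within the
    lookback window, oversampling each source category so that opt-outs
    still leave enough responses to meet the minimum requirements."""
    cutoff = date.today() - timedelta(days=365 * years_back)
    candidates = {"subordinate": set(), "peer": set(), "superior": set()}
    for uic, start, end in hr_db.assignments(soldier_id):
        if end < cutoff:
            continue  # assignment ended outside the lookback window
    # gather everyone who shared the unit, excluding the assessed soldier
        for category, people in hr_db.personnel_in(uic).items():
            candidates[category].update(p for p in people if p != soldier_id)
    return {
        category: random.sample(sorted(pool), min(len(pool), min_per_source * oversample))
        for category, pool in candidates.items()
    }
```

In this sketch, the rated soldier would then add any remaining selections by name, mirroring the mix of randomized and self-selected raters the participants recommended.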
Instrument Design. The recommendations to improve instrument design are highlighted
in the length and time commitments discussion within the upcoming utilization issues section.
Lack of Vertical Development. Since the current OER has three different templates,
based on expectations for performance within each category of officer, vertical development
needs to be included in the improved assessment. This change will acknowledge the expected
evolution of leadership and management skills for officers during their careers while maintaining
a focus on critical ethical expectations such as integrity. Additionally, incorporating vertical
development can prepare officers for future development by assessing the skills required to
succeed at the next highest rank during post-report counseling sessions recommended later in
this chapter.
Utilization Issues
Length and Time Commitments. This study identified that the length of the assessment
and time required to complete the assessment negatively impacted perceptions and response
rates. This concern was primarily related to the time required to select potential raters for each
source category and the initial portion of the MSAF assessment with Likert scale questions.
Therefore, the Likert scale portion needs to be reduced to mirror the current BCAP MSF
assessment that generally offers three to four web pages with Likert scale questions and a single
page for free-form comments. The aim of reducing the number of scaled questions is to reduce
the length and time commitments required to complete an assessment.
Additionally, providing feedback to an assessment needs to no longer require the use of a
DoD CAC. Once an assessment is initiated, the MSAF website will send an email link that can be accessed on any
non-Army system. Since each potential rater selected to complete an MSAF assessment receives
a notification to their official Army email address, this would require forwarding or sharing the
assessment link from their Army email to a non-Army system. This additional option to complete
an MSAF assessment would permit selected raters additional flexibility and opportunity to
provide feedback using non-Army systems like a personal computer or a mobile phone.
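One common way to implement such an emailed link is a signed, single-use token that binds the assessment and the rater, so the link works off-network without exposing the assessment to tampering. The sketch below is a hypothetical illustration; the function names, URL, and parameters are assumptions, not part of the MSAF system.

```python
import hashlib
import hmac
import secrets
from urllib.parse import urlencode

# In practice this key would live in protected server configuration.
SERVER_SECRET = secrets.token_bytes(32)

def make_feedback_link(assessment_id, rater_email,
                       base_url="https://example.invalid/msaf/feedback"):
    """Build a signed link a selected rater could open on a non-Army system."""
    nonce = secrets.token_urlsafe(16)  # makes each issued link unique
    payload = f"{assessment_id}:{rater_email}:{nonce}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{base_url}?" + urlencode({"a": assessment_id, "n": nonce, "sig": sig})

def verify_feedback_link(assessment_id, rater_email, nonce, sig):
    """Server-side check that the presented link was issued for this rater."""
    payload = f"{assessment_id}:{rater_email}:{nonce}".encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

A production system would also expire tokens and mark them used after submission; the point here is only that off-network access need not sacrifice confidentiality.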
Post-Assessment Issues
Absence of an Impact and Absence of Post-Assessment Action. The MSAF
assessment is an exclusively developmental tool that does not influence evaluations or require
the sharing of the results. Therefore, by design, the potential impact of the MSAF assessment is
controlled by the assessed soldier. This research identified that this method, including the
absence of post-assessment action, resulted in low usage and negative perceptions of utility for
the instrument and program. Requiring a supervisor to review and discuss the report with the
rated soldier is supported by multiple studies. For example, Nowack (2009) highlighted that
greater transfer of learning and goal setting occurred when an external coach helped debrief the
results. Thach (2002) also determined that post-assessment coaching positively impacted
performance.
The improved assessment needs to allow each post-assessment report to be shared with
the assessed soldier’s immediate supervisor. The report would only be shared with the immediate
supervisor and no other member of the soldier’s chain of command. This is appropriate since the
immediate supervisor is responsible for routine counseling on performance and assessing
performance in annual evaluations. The report would only be available to the supervisor via the
MSAF website, using a CAC, to ensure the confidentiality and privacy of the assessed soldier.
Once a report is available, the supervisor would receive an email notification. Before
downloading the report each supervisor would acknowledge a disclaimer that certifies that the
report is a collection of unfiltered MSF, is not to be shared with others, and that the supervisor
must review the results with the assessed soldier. No additional requirements, such as adding the
date of the assessment or the date of the discussion of the report would be added to an evaluation
related document. The review of the post-assessment report, between the rated soldier and his or
her supervisor, would be a developmental opportunity to leverage MSF. From the supervisor’s
perspective, especially within the Army’s top-down evaluation system, the report will provide an
additional information source to make the most informed judgments on performance. This
change increases the impact by mandating that the assessed soldier and their supervisor consider
MSF. Additionally, this research identified that rated soldiers need the option to use an external
mentor to review post-assessment reports. The MSAF program currently offers post-assessment
counseling using the VIC, so the only action required is to increase communication that informs
the community of the availability of the opportunity.
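For illustration only, the notification-and-disclaimer flow described above can be sketched in Python. The class name, message wording, and method layout are the researcher's assumptions; the MSAF system's actual interfaces are not public.

```python
# Sketch of the supervisor report-release flow: a report becomes available,
# only the immediate supervisor is notified, and the download is gated behind
# a disclaimer acknowledgment. All names and text are illustrative.

from dataclasses import dataclass, field

DISCLAIMER = (
    "This report is a collection of unfiltered multi-source feedback. "
    "It is not to be shared with others, and you must review the results "
    "with the assessed soldier."
)

@dataclass
class PostAssessmentReport:
    assessed_soldier: str
    supervisor: str
    acknowledged: bool = False
    notifications: list = field(default_factory=list)

    def publish(self):
        # Step 1: notify only the immediate supervisor, no wider chain of command.
        self.notifications.append(
            f"To {self.supervisor}: a report for {self.assessed_soldier} is available."
        )

    def acknowledge_disclaimer(self):
        # Step 2: the supervisor certifies the disclaimer before any download.
        self.acknowledged = True

    def download(self) -> str:
        # Step 3: the report is released only after acknowledgment.
        if not self.acknowledged:
            raise PermissionError("Disclaimer must be acknowledged before download.")
        return f"Report for {self.assessed_soldier}"
```

In this sketch, attempting to download before acknowledging the disclaimer raises an error, which mirrors the certification requirement described above.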
The researcher acknowledges that implementing the final two recommendations, which
bring immediate supervisors into the MSAF assessment cycle, may meet with institutional
resistance. The resistance could be due to the previously mentioned legal and ethical concerns
regarding developmental tools impacting officer evaluations, even if informally, as
recommended. The addition of supervisors may also face resistance within and beyond the
stakeholder group since the change could challenge the perception that the MSAF is exclusively
a development tool. Allowing supervisors to review post-assessment reports and mandating the
review of the results with rated soldiers will inevitably create the perception that the MSAF is no
longer a solely developmental tool. This can be mitigated by keeping the MSAF assessment as
an optional self-development tool. Once the new assessment is initiated, all users will
acknowledge, via a disclaimer, the requirement to review the results with a supervisor. Only
those who choose to use the improved MSAF assessment would allow their supervisors to bring
their experience and expertise into their self-development.
Recommendation Summary
This study recommends the following changes to the MSAF assessment:
1. Select potential raters for each assessment using input from HR databases and the
assessed soldier.
2. Design a shorter instrument and reduce the length and time required to provide
feedback.
3. Allow access to the assessment, only when providing feedback, without a CAC.
4. Allow immediate supervisors access to post-assessment reports.
5. Require immediate supervisors to discuss the post-assessment report with the
rated soldier.
These changes will improve the value of the MSAF assessment as a self-development
tool. This research identified that the stakeholder group has the knowledge and motivation to use
MSF; however, the current instrument to leverage MSF can be improved. Doing so will increase
the utility of the assessment and strengthen self-development within the Army.
Implementation
Implementing the five changes to the MSAF assessment will require a four-phase plan on
a 24-month schedule. Figure 3 describes the process and its milestones; to provide flexibility,
each phase is not allocated a specific time frame.
Figure 3
Implementation Schedule
During the first phase, the MSAF program office will create a business case for approval.
The business case will include relevant background information on the current assessment and
the advantages and disadvantages of the recommended changes. The business case will first
obtain approval within CAPL and then at the appropriate Army senior level command. The
business case will include a legal review and approval before moving forward with the
subsequent phases. These initial reviews and the legal review are critical, especially with the
addition of a requirement for a supervisor to review the post-assessment report.
Phase two will implement the first and third recommendations: linking the MSAF
program with the HR database and allowing access without a CAC. These two recommendations
require changes to existing software and protocols. Linking the UIC database will require
creating software that can identify potential raters within the specified time frame for each
assessment. The software will also collect performance data, based on the number of selected
raters who opt out due to a lack of close working relationships, to further refine the rater
selection software over time.
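As a minimal sketch of this selection logic, under deliberately simplified assumptions (one assignment record per soldier per UIC with start and end dates; the Army's actual HR schema is not public), raters could be identified by date overlap within the same unit:

```python
# Illustrative rater-selection sketch: find soldiers who served in the same
# UIC as the assessed soldier with overlapping dates inside the assessment
# window, and track opt-out rates to refine the rules over time.
# Field names and the overlap rule are assumptions, not the Army's schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class Assignment:
    soldier: str
    uic: str       # unit identification code
    start: date
    end: date

def potential_raters(roster, assessed, window_start, window_end):
    """Return soldiers whose assignments overlap the assessed soldier's
    assignments, in the same UIC, inside the assessment window."""
    own = [a for a in roster if a.soldier == assessed]
    raters = set()
    for a in own:
        for b in roster:
            if b.soldier == assessed or b.uic != a.uic:
                continue
            overlap_start = max(a.start, b.start, window_start)
            overlap_end = min(a.end, b.end, window_end)
            if overlap_start <= overlap_end:
                raters.add(b.soldier)
    return sorted(raters)

def opt_out_rate(selected: int, opted_out: int) -> float:
    # Performance data fed back to refine the selection rules over time.
    return opted_out / selected if selected else 0.0
```

A high opt-out rate for a given selection rule would signal that the overlap criteria are too loose and should be tightened in later iterations.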
This phase also includes a software-driven solution, i.e., allowing raters to input feedback
from a non-Army system like a personal computer or a mobile phone. This allows greater
flexibility and additional opportunities for potential raters to complete an assessment. The use of
routine commercial security protocols can limit privacy concerns related to sending an
assessment request to a non-Army system. Currently, a potential rater receives an email
notification only at their official Army email address when they are selected. This limitation can
be mitigated by allowing soldiers, once they have logged into the MSAF web page, to register a
non-Army email address at which to receive links to complete assessments.
The third phase revises the MSAF instrument. This phase includes reviewing best
practices, feedback from the new BCAP and CCAP MSFs, and research to confirm the value of a
shorter, more efficient instrument. Once confirmed, the scale-based section will be reduced in
length. The remaining questions will be concise, direct, and focus on the most important
leadership traits as defined by Army doctrine. Once revised, the new MSAF instrument can be
tested with officers and NCOs who have experience with the previous version to confirm the utility
of the shortened assessment.
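For illustration only, a shortened scale-based section might be represented in software as a small set of direct items with a completion-time budget enforced at design time. The item wording, attribute labels, and per-item time budget below are the researcher's assumptions, not Army doctrine.

```python
# Hypothetical shortened instrument: a handful of direct, scale-based items,
# with a rough completion-time estimate used to keep the assessment short.

SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

ITEMS = [
    ("Leads", "Provides clear purpose, direction, and motivation."),
    ("Develops", "Invests time in developing subordinates."),
    ("Achieves", "Gets results while taking care of the team."),
    ("Character", "Acts consistently with Army values."),
    ("Presence", "Remains composed and confident under pressure."),
]

SECONDS_PER_ITEM = 20  # assumed budget for reading and rating one item

def estimated_minutes(items) -> float:
    return len(items) * SECONDS_PER_ITEM / 60
```

A design-time check such as `estimated_minutes(ITEMS) < 5` would keep the revised instrument well under the length that this research found discouraged raters.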
The final phase communicates the changes to the Army community. The MSAF program
office will develop an Army-wide communication plan that highlights the changes and the
improved utility of the assessment. The plan will highlight previously known issues, the changes
to the instrument, and the perceived utility to the community. A key cohort for engagement is
mid-level Army officers and senior NCOs, due to their current and future positions of authority
and influence. For example, training on the improved assessment is to be included in the
compulsory pre-battalion and brigade command training for lieutenant colonels and colonels.
Additionally, the MSAF program office will provide information that highlights assessment
usage and also collect feedback from users on perceived utility. Sharing this information with the
Army community will increase confidence in the program and the assessment.
Strengths and Weaknesses of the Approach
This approach includes both strengths and weaknesses. First, the strengths: the changes are
supported by the MSF literature and related studies. Second, the evolution of the assessment demonstrates
that the Army is constantly improving its development tools. Additionally, the changes will have
a marginal impact on managing the MSAF program once in place. Finally, the implementation
timeline is flexible and can be amended as required.
The weaknesses of this approach include several assumptions: first, that the MSAF
program and assessment will continue to be funded and supported; next, that the MSAF program
office has the resources to complete the implementation plan outlined above; and third, that the
legal review will conclude that sharing post-assessment reports with supervisors is an acceptable
practice. Finally, this approach is Army-centric and does not consider MSF programs within the
other U.S. military services.
Evaluation and Reporting
After implementing the improved MSAF assessment recommendations, the MSAF
program office will gather feedback on the perceived utility of the revised assessment. A group
of officers and NCOs who have used the new assessment can be identified by the Army
Human Resources Command and the MSAF program office. Those soldiers can be sampled via
email, using a blended evaluation tool with Likert-scale and open-ended questions,
roughly 14 days after the supervisor has downloaded the post-assessment report. Collecting and
reporting data is a continuous process during evaluation. The MSAF program office will compile
the feedback related to the improved assessment monthly and provide quarterly updates for the
first two years after implementation. The analysis will determine whether expectations were met
for each change and explain why or why not. The reports will
be easy to understand and provide results that will increase acceptance among stakeholders. The
outcomes will be available online to Army personnel and would facilitate an annual report that
highlights if the MSAF assessment requires additional changes. The MSAF program office will
review and use the data to inform future decisions.
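The evaluation cadence described above can be sketched as a simple scheduling and aggregation routine. The 14-day delay follows the sampling plan; the aggregation by calendar month is an illustrative assumption about how the monthly compilations might be computed.

```python
# Sketch of the evaluation cadence: a blended survey goes out roughly 14 days
# after the supervisor downloads the report, and Likert responses are compiled
# by month for the quarterly updates. Data shapes are illustrative.

from datetime import date, timedelta
from statistics import mean

SURVEY_DELAY = timedelta(days=14)

def survey_date(download_date: date) -> date:
    # Date on which the blended evaluation email would be sent.
    return download_date + SURVEY_DELAY

def monthly_likert_means(responses):
    """responses: list of (date, likert_score 1-5) pairs.
    Returns {(year, month): mean score} for the monthly compilation."""
    by_month = {}
    for d, score in responses:
        by_month.setdefault((d.year, d.month), []).append(score)
    return {m: mean(v) for m, v in by_month.items()}
```

Open-ended responses would be coded separately; only the scaled items lend themselves to this kind of automatic aggregation.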
Suggestions for Future Research
This research adds to the existing literature on the value of MSF by examining
the Army's MSAF assessment. Several topics for future research were
identified. Regarding potential rater selection, the appropriate mix of self-selection and selection
by an unbiased third party requires additional research. Second, identifying mechanisms to
encourage the use of an exclusively optional MSF assessment requires further analysis. Finally,
within the context of the COVID-19 pandemic and the resulting increase in working from home,
exploring how MSF assessments may evolve to retain the ability to capture valid information on
performance is relevant.
Regarding MSF in the Army, this study identified that the periodic performance
counseling directed by Army policy does not occur as required. Noting the
importance of meaningful and actionable periodic performance feedback, especially for junior
soldiers and officers, the lack of counseling deserves a thorough review. Additionally, the Army
is now using MSF assessments in the BCAP and CCAP as one of several inputs to select
lieutenant colonels and colonels for competitive command and key billet positions. Even though
these new assessments are smaller in scale, research into their impact would be of value for the
Army community and those interested in MSF.
Specific to the Army’s MSAF assessment, two opportunities for future research were
identified. Future research can explore mechanisms to increase response rates and use for the
optional assessment. Additionally, this study focused only on lieutenant colonels. Expanding this
research to ask the whole Army community, on a much larger scale, how to improve the current
MSAF assessment would be a valuable enterprise.
Impact on the Profession
While this research effort highlights several issues with leader development in the Army,
the researcher contends that additional discussion, within and beyond academia, on how the
Army develops leaders is constructive and necessary. Discussing how to provide accurate and
useful performance feedback to leaders is essential to create and maintain positive work
environments, especially in the most demanding of circumstances. Ideally the revised MSAF
assessment will improve self-development in the Army by showing leaders the impact of their
actions as they lead others. The Army is a microcosm of the United States and the world, and it
provides leaders at the local, national, and international levels. These leaders have frequent inputs
into and out of our societies much like a graceful owl flying amongst the branches of an old
forest. The researcher hopes, and is encouraged by this research, that the owl can see its impact
on its neighbors, big and small.
Conclusion
MSF is a departure from traditional methods through which employees and supervisors
receive performance feedback. While the names and practices have changed during the last
century, the idea of collecting and using rating data from several sources is not new. MSF
programs and assessments are generally accepted and are now a widely used method to inform
individual development by soliciting feedback from a larger group of stakeholders. The literature
review highlights that MSF assessments are of value, which can be extended to the Army. Since
2008 the Army has offered the web-based MSAF assessment to provide MSF. Annual reports
from the Army and other sources highlighted that the MSAF assessment suffered from several
issues that resulted in limited utility for the Army community.
This study focused on the U.S. Army improving its current MSAF assessment for the
self-development of its leaders. Using the Clark and Estes (2008) gap analysis model this study
examined the knowledge, motivation, and organizational influences related to improving the
Army’s web-based MSAF instrument. Interviews with lieutenant colonels validated the three
knowledge and two motivation influences as assets, suggesting that the requisite knowledge and
motivation were in place. However, gaps identified within the organization and the assessment
limited the utility of the MSAF assessment. This dissertation provides five recommendations
based on the research and supporting literature to improve the current assessment. A path to
implement and evaluate the improvements was also created.
Within the Army’s feedback and performance system the MSAF assessment is a unique
opportunity for Army leaders to increase their self-awareness of how they are perceived by their
subordinates, peers, and supervisors. Understanding the influences identified in this study will
improve the utility of the MSAF assessment as a self-development tool for Army leaders.
Appendix A: Interview Protocol
Interview Questions
1. Please tell me about your time in the Army.
2. Possible follow-ups based on the initial answer: duty assignments, positions, locations, deployments, long-term plans in and after the Army.
3. What is your experience with Multi-Source Feedback (MSF) assessments outside of the Army?
4. Describe the value of an MSF assessment from your perspective.
5. How many MSF assessments for yourself, including the MSAF, have you completed?
6. What caused you to initiate your previous MSF assessments?
7. Do you think the MSAF was a useful self-development tool? Why or why not?
8. How do you think your peers view the MSAF?
9. When selecting assessors for your previous assessments, how did you select subordinates, peers, and supervisors? Did you exceed the minimums for each category?
10. For the three groups of Army officers, should the MSAF be the same?
11. Did you read your feedback report after your assessment closed? If not, why?
12. How many times did you use the post-assessment services offered by the MSAF office? If so, were they of value?
13. Did you find your MSAF feedback reports useful? Please explain.
14. Did you make changes to your leadership style or behaviors based on your reports? If yes, please describe.
15. Do you think MSAF response rates are perceived as low? How would you tackle that challenge?
16. What was the biggest obstacle to receiving a meaningful feedback report with your previous MSAFs?
17. How would you change the current MSAF assessment?
18. Should the Army have an MSF assessment?
19. Who in the Army should use an MSF assessment?
20. Is there anything else you would like to add regarding the MSAF or the Army having an MSF assessment?
References
3D Group (2016). Current Practices in 360-Degree Feedback: A Benchmark Study of
North America Companies (5th ed.). Emeryville, CA: 3D Group.
3D Group (2020). Current Practices in 360-Degree Feedback: A Benchmark Study of
North America Companies (6th ed.). Emeryville, CA: 3D Group.
About the Army. (2020, July 9). Retrieved from https://www.goarmy.com/about/what-is-the-
army/history.html
Active Duty Military Personnel by Rank/Grade (2020, May 31). Retrieved from
https://www.dmdc.osd.mil/appj/dwp/dwp_reports.jsp#
Active Duty Military Personnel by Rank/Grade (Women Only) (2020, May 31). Retrieved from
https://www.dmdc.osd.mil/appj/dwp/dwp_reports.jsp#
Alexander, P. A., Schallert, D. L., & Reynolds, R. E. (2009). What is learning anyway? A
topographical perspective considered. Educational Psychologist, 44, 176–192.
Allen, S. (2008). A hunt for the missing 50 cents: One piece of the leadership development
puzzle. Organization Development Journal, 26(1), 19.
Army Posture Statement. (2016, March-April). Retrieved from
https://www.army.mil/e2/rv5_downloads/aps/aps_2016.pdf
Atwater, L., Waldman, A., Atwater, D., & Cartier, P. (2000). An upward feedback field
experiment: Supervisors’ cynicism, reactions, and commitment to subordinates.
Personnel Psychology, 53, 275–297.
Atwater, L. & Yammarino, F. (2001). Understanding Agreement in Multisource Feedback. In: D.
Bracken, C. Timmreck and A. Church, ed., The Handbook of Multisource Feedback, 1st
ed. San Francisco: Jossey-Bass, pp.204-220.
Aubrey, D. (2012). The Effect of Toxic Leadership (Army War College Research Paper).
Retrieved from Defense Technical Information Center. (ADA560645).
Bialik, K. (2017, August 22). U.S. active-duty military presence overseas is at its smallest in
decades. Retrieved from http://www.pewresearch.org/fact-tank/2017/08/22/u-s-active-
duty-military-presence-overseas-is-at-its-smallest-in-decades/
Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education: An introduction to
theory and methods (5th Ed.). New York, NY: Pearson. Chapter 5: Data analysis and
interpretation (pp. 159-172).
Bracken, D., & Rose, D. (2011). When does 360-degree feedback create behavior change? And
how would we know it when it does? Journal of Business and Psychology, 26(2), 183-
192.
Bracken, D. & Timmreck, C. (2001). Success and Sustainability: A Systems View of
Multisource Feedback. In: D. Bracken, C. Timmreck and A. Church, ed., The Handbook
of Multisource Feedback, 1st ed. San Francisco: Jossey-Bass, pp.478-492.
Bracken, D., Timmreck, C., & Church, A. (2001). The handbook of multisource feedback: The
comprehensive resource for designing and implementing MSF processes (1st ed.). San
Francisco: Jossey-Bass.
Brett, J. F., & Atwater, L. E. (2001). 360 degree feedback: Accuracy, reactions, and perceptions
of usefulness. Journal of Applied Psychology, 86, 930–942.
Brown, J., Lowe, K., Fillingham, J., Murphy, P., Bamforth, M., & Shaw, N. (2014). An
investigation into the use of multi-source feedback (MSF) as a work-based assessment
tool. Medical Teacher, 36(11), 997–1004.
Burns, W. A. (2017). A descriptive literature review of harmful leadership styles: Definitions,
commonalities, measurements, negative impacts, and ways to improve these harmful
leadership styles. Creighton Journal of Interdisciplinary Leadership, 3(1), 33-52.
Casey, G. W., Jr. (2010). Army Training and Leader development, FY 10-11. Washington,
DC: Department of the Army.
Cavanaugh, K. J., Fallesen, J.J., Jones, R. L., and Riley, R. P. (2016). 2015 Center for Army
Leadership Annual Survey of Army Leadership (CASAL): Military Leader Findings.
(Technical Report 2016-01). Fort Leavenworth, KS: The Center for Army Leadership.
Clark, R. E. & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Charlotte, NC: Information Age Publishing, Inc.
Creswell, J. W. & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed
methods approaches. SAGE Publications, Inc.
Dalton, M. & Hollenbeck G. (2001). A Model for Behavior Change. In: D. Bracken, C.
Timmreck and A. Church, ed., The Handbook of Multisource Feedback, 1st ed. San
Francisco: Jossey-Bass, pp.352-367.
Das, U. K., & Panda, J. (2017) The Impact of 360 Degree Feedback on Employee Role in
Leadership Development. Asian Journal of Management, Dec 28, 2017.
Drew, G. (2009). A "360" degree view for individual leadership development. Journal of
Management Development, 28, 581-592.
Druskat, V. U., & Wolff, S. B. (1999). Effects and timing of developmental peer appraisals in
self-managing work groups. Journal of Applied Psychology, 84, 58–74.
Eccles, J. (2006). Expectancy value motivational theory. Retrieved from http://www.
education.com/reference/article/expectancy-value-motivational-theory/.
Edwards, M. & Ewen, A. (2001) Readiness for Multisource Feedback. In: D.
Bracken, C. Timmreck and A. Church, ed., The Handbook of Multisource Feedback, 1st
ed. San Francisco: Jossey-Bass, pp.33-47.
Eggert, C., Harris, M., Entrekin, K., & Parry, R. (2016). Exploring the relationship of Multi-
Source Feedback on a military leader’s transformative leadership capability: An
exploratory qualitative inquiry. ProQuest Dissertations Publishing.
Fallesen, J.J. et al. (2015). 2014 Center for Army Leadership Annual Survey of Army Leadership
(CASAL): Military Leader Findings. (Technical Report 2015-01). Fort Leavenworth, KS:
The Center for Army Leadership.
Farr, J., & Newman, D. (2001). Rater Selection: Sources of Feedback. In: D. Bracken, C.
Timmreck & A. Church, ed., The Handbook of Multisource Feedback, 1st ed. San
Francisco: Jossey-Bass, pp.96-113.
Fleenor, J., Taylor, S., & Chappelow, C. (2008). Leveraging the impact of 360-degree feedback.
San Francisco: Pfeiffer.
Fletcher, C. (2014). Using 360-Degree Feedback as a Development Tool. In The Wiley Blackwell
Handbook of the Psychology of Training, Development, and Performance Improvement
(pp. 486–502). Chichester, UK: John Wiley & Sons, Ltd.
Foster, C. A., & Law, M. R. (2006). How many perspectives provide a compass? Differentiating
360-degree feedback from multi-source feedback. International Journal of Selection and
Assessment, 14, 288-291.
Gallimore, R. & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist,
31(1), 45-56.
Gibson, C. B., Porath, C. L., Benson, G. S., & Lawler, E. E., III. (2007). What results when firms
implement practices: The differential relationship between specific practices, firm
financial performance, customer service, and quality. Journal of Applied Psychology, 92,
1467–1480.
Gilliland, S. W., & Langdon, J. C. (1998). Creating performance management systems that
promote perceptions of fairness. In J. W. Smither (Ed.), Performance appraisal: State of
the art in practice. San Francisco: Jossey-Bass.
Glesne, C. (2011). Chapter 6: But is it ethical? Considering what is "right." In Becoming
qualitative researchers: An introduction (4th ed.). Boston, MA: Pearson.
Nieman-Gonder, J., Metlay, W., Kaplan, I., & Wolfe, K. (2006, May). The effect of rater
selection on rating accuracy. Poster presented at the 21st Annual Conference of the
Society for Industrial and Organizational Psychology, Dallas, TX.
Government Accountability Office. (2019). Army Readiness: Progress and Challenges in
Rebuilding Personnel, Equipping, and Training (GAO-19-367T). Washington D.C.:
GAO.
Greguras, G., Robie, C., Schleicher, D., & Goff, M. (2003). A field study of the effects of rating
purpose on the quality of multisource ratings. Personal Psychology, 56, 1–21.
Grossman, R., & Salas, E. (2011). The transfer of training: What really matters. International
Journal of Training and Development, 15, 103–120.
Grubb, W., Badway, N., & Bell, D. (2003). Community colleges and the equity agenda: the
potential of noncredit education. The Annals of the American Academy of Political and
Social Science.
Harding, J. (2013). Chapter 5: Using codes to analyze an illustrative issue. Qualitative data
analysis from start to finish (pp. 81-106). Thousand Oaks, CA: Sage Publications.
Hardison, C. M. et al. 360-Degree Assessments: Are they the right tool for the U.S. Military?
Santa Monica, CA: RAND Corporation.
Hedge, J., Borman, W. and Birkeland, S. (2001). History and Development of Multisource
Feedback as a Methodology. In: D. Bracken, C. Timmreck and A. Church, ed., The
Handbook of Multisource Feedback, 1st ed. San Francisco: Jossey-Bass, pp.15-32.
Hezlett, S. (2008). Using multisource feedback to develop leaders: Applying theory and
research to improve practice. Advances in Developing Human Resources, 10, 703-720.
Humphrey, D., Vogele-Welch, D., Jarvis, S., & Wallis, S. (2017). 360-degree Feedback Impact
on Leadership Development: A Multiple Case Study, ProQuest Dissertations and Theses.
Johnson, R. B., & Christensen, L. B. (2015). Educational research: Quantitative,
qualitative, and mixed approaches. (5th ed.). Thousand Oaks: SAGE.
Krathwohl, D. R. (2002). A revision of Bloom’s Taxonomy: An overview. Theory Into Practice,
41(4), 212–218.
Kim, K., Atwater, L., Patel, P., & Smither, J. (2016). Multisource feedback, human capital, and
The financial performance of organizations. The Journal of Applied Psychology, 101(11),
1569–1584.
Lepsinger, R., & Lucia, A. (2001). Performance Management and Decision Making. In: D.
Bracken, C. Timmreck and A. Church, ed., The Handbook of Multisource Feedback, 1st
ed. San Francisco: Jossey-Bass, pp.318-334.
London, M. (2001). The Great Debate: Should Multisource Feedback Be Used for
Administration or Development Only? In: D. Bracken, C. Timmreck and A. Church, ed.,
The Handbook of Multisource Feedback, 1st ed. San Francisco: Jossey-Bass, pp.368-388.
Mayer, R. E. (2011). Applying the science of learning. Boston, MA: Pearson Education.
Maxwell, J. (2013). Qualitative research design: An interactive approach (3rd ed.). Thousand
Oaks, CA: SAGE Publications.
McAninch, K. A. (2016). Catalyst for Leader Development: The Multi-Source Assessment and
Feedback Program. Carlisle, PA: U.S. Army War College.
Mccarthy, A., & Garavan, T. (2007). Understanding acceptance of multisource feedback for
management development. Personnel Review, 36(6), 903–917.
https://doi.org/10.1108/00483480710822427
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
implementation. (4th ed.). San Francisco: Jossey-Bass.
MILPER Message Number 11-282 (2011). Enhancement to the Officer Evaluation Reporting
System. Fort Knox, KY.: U.S. Army Human Resources Command.
MILPER Message Number 18-181 (2018). Elimination of Multi-source Assessment and
Feedback (MSAF) requirements for DA Form 67-10 series Officer Evaluation Reports
(OERs). Fort Knox, KY.: U.S. Army Human Resources Command.
Nowack, K. (2009). Leveraging multirater feedback to facilitate successful behavioral change.
Consulting Psychology Journal: Practice and Research, 61, 280–297.
Nowack, K. M., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging
360-degree feedback. Consulting Psychology Journal, 64(3), 157-182.
Odierno, R. (2015). Leader development and talent management the army competitive
advantage. Military Review, 95(7), 8-15.
Park, J., & Millora, M. (2012). The Relevance of Reflection: An Empirical Examination of the
Role of Reflection in Ethics of Caring, Leadership, and Psychological Well-Being.
Journal of College Student Development, 53(2), 221–242.
Patton, M. (2015). Qualitative research and evaluation methods (4th ed.). Thousand Oaks, CA:
Sage Publications.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in
learning and teaching contexts. Journal of Educational Psychology, 95, 667–686.
Roberts, C. (2008). Developing Future Leaders: The Role of Reflection in the Classroom.
Journal of Leadership Education, 7(1), 116–130.
Rubin, H., & Rubin, I. (2012). Chapter 6: Conversational partnerships. In Qualitative
interviewing: The art of hearing data (3rd ed.). Thousand Oaks, CA: SAGE Publications.
Rueda, R. (2011). The 3 dimensions of improving student performance. New York: Teachers
College Press.
Schein, E. H. (2004). The concept of organizational culture: Why bother? In E. H. Schein (Ed.),
Organizational culture and leadership (3rd ed., pp. 3–24). San Francisco, CA: Jossey-Bass.
Smither, J., Brett, J., & Atwater, L. (2008). What do leaders recall about their multisource
feedback? Journal of Leadership & Organizational Studies, 14, 202-218.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following
multisource feedback? a theoretical model, meta-analysis, and review of empirical
findings. Personnel Psychology, 58(1), 33-66.
Talent Management (n.d). Retrieved on August 15, 2020 from https://talent.army.mil/bcap/
Tavanti, M. (2011). Managing Toxic Leaders: Dysfunctional Patterns in Organizational
Leadership and How to Deal with Them. Human Resources Management, 6(83), 127-
136.
Thach, E. (2002). The impact of executive coaching and 360-feedback on leadership
effectiveness. Leadership and Organization Development Journal, 23, 205–214.
Tuckman, B., & Kennedy, G. (2011). Teaching Learning Strategies to Increase Success of First-
Term College Students. The Journal of Experimental Education, 79(4), 478–504.
U.S. Army Organization: Who We Are. (n. d.) Retrieved on September 5, 2018 from
https://www.army.mil/info/organization/
U. S. Department of the Army. (2017). Army Profession and Leadership Policy. Army
Regulation 600-100. Washington, D.C.: U.S. Department of the Army.
U. S. Department of the Army. (2017). Army Training and Leader Development. Army
Regulation 350-1. Washington, D.C.: U.S. Department of the Army.
U. S. Department of the Army. (2019). Army Leadership and the Profession. Army Doctrine
Publication (ADP) 6-22. Washington, D.C.: U.S. Department of the Army.
U. S. Department of the Army. (2019). Evaluation Reporting System. Army Regulation
623-3. Washington, D.C.: U.S. Department of the Army.
Waldman, D. & Atwater, L. (2001). Confronting Barriers to Successful Implementation Of Multi
Source Feedback. In: D. Bracken, C. Timmreck and A. Church, ed., The Handbook of
Multisource Feedback, 1st ed. San Francisco: Jossey-Bass, pp.463-477.
Wong, L., & Gerras, S. (2015). Lying to Ourselves: Dishonesty in the Army Profession. Carlisle,
PA: U.S. Army War College, Strategic Studies Institute.
Yough, M., & Anderman, E. (2006). Goal orientation theory. Retrieved from http://www.
education.com/reference/article/goal-orientation-theory/.