ASSESSING AND ARTICULATING THE IMPACT OF THE DANIEL K. INOUYE ASIA-
PACIFIC CENTER FOR SECURITY STUDIES: AN INNOVATION STUDY
by
Christine M. Gayagas
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2018
Copyright 2018 Christine M. Gayagas
ACKNOWLEDGEMENTS
This dissertation process has been a journey that would not have been possible without
the guidance and patience of many supporters. First, I would like to acknowledge my family
who endured my absence while I engaged with this program during my Dad’s successful battle
with brain cancer. Second, to my husband, Doug, thank you for your patience and understanding
during my struggles to push through to the next step. And to our daughter, Madi, who inspired
me with her neuroscience pursuits in college, thank you for virtually sharing this experience.
I owe much gratitude to my dissertation chair, Dr. Cathy Krop, for her keen insights and
recommendations, her responsiveness to my most mundane or urgent questions, and for her
enduring encouragement. She modeled the best of a learning experience in the classroom and in representing USC in an international setting, while inspiring me with her example in life. Thank
you to my dissertation committee members, Dr. Larry Picus and Dr. Monique Datta, who
provided sage advice and feedback that honed my methodology and streamlined my efforts.
I would like to also acknowledge our Global EdD program director, Dr. Sabrina Chong,
and her staff for their support regardless of where we were around the globe. Thank you to our
program chair, Dr. Mark Robison, for the constant intellectual challenge and interest in my
dissertation topic. Many thanks go to professors Dr. Ruth Chung, Dr. Helena Seli, and Dr. Kiley
Adolph for making statistics and KMO as enjoyable as possible.
Thank you to the Daniel K. Inouye Asia-Pacific Center for Security Studies for
welcoming this research. I dedicate this study to the hardworking staff and faculty at the Center
who strive to make a difference every day. My hope is that this study will help the Center
capture the impact of their efforts.
Lastly, I could not have been blessed with a more supportive or cohesive cohort. My
experience with C5 was intellectually rich with abundant social capital. Each of you and our
global experiences together made an impact that will serve as an inspiration for a lifetime.
Thank you to my BBFC partner, participants, and guest instructors, for the fun, challenging, and
needed stress-relieving workouts around the world.
TABLE OF CONTENTS
Acknowledgements
List of Tables
List of Figures
Abstract
CHAPTER ONE: INTRODUCTION
Background of the Problem
Importance of Addressing the Problem
Organizational Context and Mission
Organizational Performance Goal
Description of Stakeholder Groups
Stakeholders’ Performance Goals
Stakeholder Group for the Study
Purpose of the Project and Questions
Conceptual and Methodological Framework
Definitions
Organization of the Study
CHAPTER TWO: REVIEW OF THE LITERATURE
Conflict Prevention
Global Conflict Impact
Overview of Peace Operations and Conflict Prevention
Conflict Prevention in the Indo-Pacific Region
Challenges of Assessing the Impact of Conflict Prevention
Defining Assessment
Defining Impact
Practical Challenges of Assessing Impact
Approaches to Assessing Impact in Complex Environments
Methodological Approaches to Assessing Impact
Research Designs and Methods
Best Practices in Organizational Impact Assessment
Building Capacity Within DKI APCSS to Measure Impact
Knowledge and Skills Needs
Motivation Needs
Organizational Needs
Conclusion
CHAPTER THREE: METHODOLOGY
Purpose of the Project and Questions
Stakeholders of Focus
Methodological Framework
Assumed Influences on Performance
Critical Observations and Preliminary Scanning Data
Knowledge and Skills
Motivation
Organization
Population of Study
Data Collection
Surveys
Individual Interviews
Focus Group Interviews
Observations
Document Analysis
Validation of Influences
Trustworthiness of Data
Role of Investigator
Data Analysis
CHAPTER FOUR: FINDINGS AND RESULTS
Introduction
Participating Stakeholders and Methodological Framework
Validation of Assumed Needs
Knowledge Needs
Summary of Knowledge Related Findings
Motivation Needs
Summary of Motivation Related Findings
Organizational Needs
Summary of Organizational Resources Related Findings
Findings Summary
CHAPTER FIVE: PROPOSED SOLUTIONS AND IMPLEMENTATION
Overview of Validated Needs
Recommended Solutions for Implementation
Recommended Solution #1: Create a Strategy at the Center Level
Recommended Solution #2: Communicate Leadership’s Commitment to Assessments of Impact
Recommended Solution #3: Build Capacity for Staff and Faculty to Conduct Assessments and Articulate Impact
Recommended Solution #4: Implement Impact Assessment Process and Take Appropriate Action
Summary of Proposed Solutions
Implementation Plan
Implementation for Recommendation #1: Develop a Strategy at the Center Level
Implementation for Recommendation #2: Communicate Leadership’s Commitment to Assessment of Impact
Implementation for Recommendation #3: Build Capacity for Staff and Faculty to Conduct Assessments and Articulate Impact
Recommendation #4: Implement Assessment Process
Resource Requirements
Timeline
Implementation Constraints and Challenges
Evaluation Plan
Areas for Future Research
Conclusion
References
Appendix A Survey Protocols
Appendix B Individual Interview Protocols
Appendix C Focus Group Interview Protocols
Appendix D Document Analysis Checklist
Appendix E Observation Checklist
LIST OF TABLES
Table 1: Organizational Mission, Organizational Goal, Stakeholder Performance Goals, and Projected Stakeholder Timelines
Table 2: Modified Methodological Approaches to Assessing Impact (PricewaterhouseCoopers, 2007)
Table 3: Source of Assumed Needs
Table 4: Summary of Assumed Needs and Data Collection
Table 5: Summary of Assumed Knowledge Needs Validation Results
Table 6: Summary of Assumed Motivation Needs Validation
Table 7: Summary of Assumed Organizational Needs Validation
Table 8: Summary of Findings
Table 9: Summary of Findings and Associated Codes
Table 10: Summary of Recommended Solutions Associated with Validated Finding Codes
Table 11: Summary of Proposed Solutions, Action Steps, Resources, and Timelines
Table 12: Evaluation of Impact Assessment Process
LIST OF FIGURES
Figure 1. Gap analysis process.
Figure 2. Survey question: Extent that stakeholders understand their role in assessing and articulating impact.
Figure 3. Survey question: Select one definition of “Enhancing Stability.”
Figure 4. Survey question: Extent that staff and faculty agree that enhancing stability reflects the goal of impact.
Figure 5. Survey question: Extent stakeholders view assessing and articulating the Center’s impact as aligned with the Center’s mission.
Figure 6. Survey question: Extent stakeholders know how to relate strategy, design, and program development to creating impact.
Figure 7. Survey question: Extent staff and faculty know how to articulate the Center’s impact.
Figure 8. Survey question: Extent stakeholders know how to apply theories, best practices, methods, models, or rubrics to create a methodology to assess the Center’s impact.
Figure 9. Survey question: Extent stakeholders are aware of inherent biases in the process of assessing the impact of the Center.
Figure 10. Survey question: Extent staff and faculty value the importance of basing assessment and articulation of the Center’s impact on empirical evidence.
Figure 11. Survey question: Extent staff and faculty value the effort of assessing and articulating the Center’s impact as worthy of time and resources.
Figure 12. Survey question: Extent stakeholders are confident they have the ability to create a tailored methodology, model, or rubric to assess the Center’s impact.
Figure 13. Survey question: Extent staff and faculty view resources as at risk if the Center cannot assess and articulate its impact.
Figure 14. Survey question: Extent stakeholders have adequate resources to learn how to create a methodology, model, or rubric to assess and articulate impact.
Figure 15. Survey question: Extent staff and faculty have sufficient resources to collect and analyze data over time to assess impact.
Figure 16. Example of a strategy process.
Figure 17. Survey question: Extent stakeholders believe DKI APCSS will request adequate resources to create impact based on assessment results.
ABSTRACT
The Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS or referred to as the
Center) is a Department of Defense (DoD) regional center that aimed to assess and articulate its
impact. This innovation study focused on the knowledge and skill, motivation, and
organizational needs for the Center’s staff and faculty to meet its performance goal of assessing
and articulating its impact. As part of its mission, DKI APCSS builds partner capacity, shared understanding, and networks to enhance stability in the Indo-Asia-Pacific region. The Center educates, connects, and empowers international security practitioners through courses, workshops, and dialogues. Given the Center’s role in U.S. security cooperation efforts, the literature review focused on conflict prevention, the assessment of soft power impacts, social capital, and assessment approaches and methodologies. Applying an adapted version of Clark and Estes’ (2008) gap analysis model, the researcher validated 12 needs through a mixed-methods approach and suggested four solutions. Based on
the Chapter Two literature review and findings in Chapter Four, recommendations included
developing and communicating a Center strategy, conducting stakeholder training, and
developing and implementing an innovative approach to assess and articulate its impact. The
study concludes with an evaluation plan for the impact assessment process to improve upon its
purpose and perhaps share best practices with the four other DoD regional centers.
CHAPTER ONE: INTRODUCTION
According to the 2016 Global Peace Index, global levels of peace have continued to deteriorate over the last decade, with increased terrorism and battle deaths and a doubling of the number of refugees and internally displaced persons (Institute for Economics and Peace, 2016). Global and regional efforts to minimize conflict through treaties, alliances, and institutions have had varying degrees of success. The United States appropriated $572.7B for the Department of Defense (DoD) in 2016, of which $7.4B was focused on security cooperation to enhance stability and security through building capacity in partner nations and, in some cases, building relationships (Consolidated Appropriations Act of 2016; Skorupski & Serafino, 2016).
When the DoD faces budget pressures and must assess potential decrements, funded entities
must be able to assess and articulate their value and impact in support of DoD missions. In 2010,
the Office of Cost Assessment and Program Evaluation recommended options to reduce the DoD
budget by $500 million, including the elimination of some security cooperation programs
(Hanauer et al., 2014). The problem of practice addressed by this dissertation is the need to have
a mechanism to assess and articulate the impact of a security cooperation organization within the
DoD.
Background of the Problem
Globally, the impact of armed conflict costs 13.3% of the world’s gross domestic product (Institute for Economics and Peace, 2016). Although efforts for peacekeeping and peacebuilding are increasing, those investments amount to less than 2% of the economic cost of armed conflict (Institute for Economics and Peace, 2016). Furthermore, natural disasters cause 13 times
more deaths in countries that lack positive peace indicators such as a well-functioning
government, low levels of corruption, acceptance of the rights of others, good relations with
neighbors, and free flow of information (Institute for Economics and Peace, 2016). Countries
that invest in peace building and security cooperation are likely to have greater resilience and a
capacity to prepare for and respond to natural disasters (Institute for Economics and Peace,
2016). Consequently, these countries are likely to suffer fewer deaths when a disaster occurs.
Broadly, the most effective strategies for preventing conflict are multifaceted and may include both coercive options and soft power, or non-coercive, approaches (Jentleson, 2003; Lund, 1996; Oberg, Moller, & Wallensteen, 2009; Regan, 2000). As the United States continues to protect its national interests and contribute to global stability, the DoD is responsible for the nation’s security through military readiness and security cooperation efforts (Mattis, 2018). Security cooperation is considered a non-coercive, or soft power, approach towards stability and securing peace. Congress has funded, yet still questions the effectiveness of, some of DoD’s security cooperation efforts, such as building capacity in partner nations (McInnis & Lucas, 2015).
The U.S. investment in security cooperation includes several programs over which Congress has the responsibility to provide oversight. To maintain the U.S. commitment to invest in the programs and ensure their effectiveness, programs need to be able to assess their outputs, outcomes, and impact. Assessing the impact of programs such as security cooperation, operating in the complex and convoluted environment of conflict prevention, is a significant challenge, since many factors are involved and some efforts could take two to ten years to show impact (DoD, 2017; Oberg et al., 2009; W.K. Kellogg Foundation, 2004). Security cooperation may
include varied soft power activities such as military officer exchange programs, the selling of
material equipment to countries, executive education, building capacity within other nations’
institutions, and establishing communities of interest between practitioners.
Assessing soft power tasks is challenging, yet many organizations make efforts to
identify frameworks and procedures, employ data collection methods, and triangulate
information to determine the impact of an organization (Gordon & Chadwick, 2007). Numerous
non-profit agencies have similar missions and goals to build capacity in other countries and also
have a comparable challenge of attempting to assess and articulate the impact of their efforts.
Unique methodologies may need to be applied to various situations to best capture and describe
the organization’s impact. Numerous methodologies exist that can be applied singly or in modified form, but their validity and reliability vary with the factors and circumstances of the research (Mohr, 1995). An example methodology is a participatory action research approach
used in a community based organization, which includes a systemic inquiry and collaboration
with those studied (Mackinnon & Stephens, 2010). Other organizations, such as the Consultative Group on International Agricultural Research, determined that a common approach to assessing impact within the agricultural industry, such as an economic impact assessment, was not the most appropriate for their research entity. The organization therefore created an innovation framework, incorporating institutional context and learning concepts, to complement other impact approaches and better assess the impact of its efforts (Hall, Rasheed Sulaiman, Clark, & Yoganand, 2003). While the challenge of assessing and articulating impact in the social capital and soft power arenas remains, organizations have a responsibility to stakeholders to consider a tailored approach that improves their ability to capture impact. Other methodological approaches are addressed further in the literature review.
Importance of Addressing the Problem
The problem of not being able to assess and articulate the impact of a security
cooperation program is important to solve for a variety of reasons. DoD budgeted approximately
$21M in 2016 to operate the Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI
APCSS or referred to as the Center) and enhance stability in the Indo-Pacific region (Hirai,
2016). In a 2014 RAND study of DoD’s five regional centers, interviewees stated that DKI APCSS had a strong reputation (Hanauer et al., 2014). However, like the other centers,
DKI APCSS did not have a systematic method to assess and articulate the impact of the Center
in the Indo-Pacific region (Hanauer et al., 2014). Without the ability to assess the Center’s
impact as intended with an acceptable level of evidence and communicate the Center’s impact in
relation to its mission, DKI APCSS is at greater risk of not receiving funding sufficient to
conduct its mission or, in an extreme case, to exist as an institution. If DoD values DKI
APCSS’s mission, and leaders understand the impact of the Center, it is less likely they would
support a decrement in the Center’s budget. Similarly, clearly articulating the impact of an
organization based on triangulated evidence better supports sustaining or possibly increasing
funding should the impact be aligned with its mission in support of U.S. interests. Further,
articulating the impact of the Center to internal stakeholders, particularly if results are positive,
serves as a motivating factor (Eccles, 2013). Moreover, an impact assessment conducted with the additional purpose of learning allows the Center to identify areas of improvement and become more effective. When DKI APCSS can assess and articulate its impact with greater confidence,
this process could be shared with other DoD regional centers with similar missions in other
geographic regions. The process could potentially help other broader Defense Institution
Building programs that have struggled with similar challenges (Department of Defense Inspector
General, 2013). Furthermore, if this problem were solved, non-DoD entities such as non-profits
that strive to build capacity and communities of interest could potentially benefit from the
sharing of best practices to assess and articulate impact in complex environments (Simister &
Smith, 2010).
Organizational Context and Mission
There are five DoD regional centers, authorized under 10 U.S.C. Section 184, established to conduct security cooperation missions (Defense Security Cooperation Agency, n.d.-a). Senator Daniel K. Inouye initiated the founding of the Asia-Pacific Center for Security Studies, established in 1995 in Waikiki, on the island of Oahu, Hawaii. During a speech for the
opening of a new hall at the Center, Senator Inouye mentioned that critics may have an unrealistic expectation for this DoD Center to bring about peace through dialogue, but that it was also unrealistic to know only how to kill. Further, he said:
There must be a place where the military leaders of the region can safely put
down their arms and sit around a table to talk and to learn about one another.
This can only lead to a better understanding and greater tolerance. (Inouye,
2012, p. 2)
Over time, this ideal has served as a foundation for the Center’s role in building capacity through
executive education, building networks and relationships, and preventing conflict through
collaboration and cooperation. The mission statement published on the Center’s website and
charter is “DKI APCSS educates, connects, and empowers security practitioners to build partner
capacity, shared understanding, and networks to enhance stability in the Indo-Asia-Pacific
region” (DKI APCSS, 2017a, p. 1).
Guidance to the Center stems from the Office of the Under Secretary of Defense for Policy, which issues priorities of focus to DKI APCSS (DoD, 2015; Government Accountability Office [GAO], 2013). Noting a need to enhance oversight, DoD designated the Defense Security Cooperation Agency (DSCA) as the executive agent to provide oversight of the regional centers’ programming and resourcing (GAO, 2013). The senior military command for the Indo-Pacific region, formerly referred to as the Asia-Pacific region, is the United States Indo-Pacific Command (USINDOPACOM). As a mission partner, USINDOPACOM collaborates with the Center on strategic and operational regional issues, along with country end-state goals. Moreover, it collaborates with country teams to assist in identifying course participants in the region.
The Center consists of three main sections. The first section, the front office, comprises
the director and deputy director, and staff and administration including the public affairs office
and the strategy and assessment (S&A) office. The second entity is the College of Security
Studies where military and civilian faculty and college operations reside. The third section is the
dean of administration and business operations (DABO), which includes most of the support
staff such as admissions, alumni, resource management, and the regional engagement office that
coordinates workshops in the region. The Center conducts a variety of seminars and workshops
throughout the Indo-Pacific region, and in-residence courses ranging in topics from the Asia-
Pacific orientation course, to the 5-week advanced security course, to the transnational security
course (TSC), which is the most senior and strategic level program with flag officers and
ministerial level attendees. Course sizes vary but typically range from 80 to over 150
participants, with TSC serving a smaller population of senior leaders. Workshops normally run
from two to four days, both at the Center and in the region. At this time, workshop participants are generally not considered alumni. However, DKI APCSS courses have produced more than 10,000 alumni fellows, military and civilian security practitioners from over 100 countries (DKI APCSS, 2016). The Center also invites
participants from organizations such as USAID and other non-governmental organizations to
represent a ‘whole of government’ approach in the region (GAO, 2013). With a substantial
reach focused on foreign security practitioners in the Indo-Pacific region, the Center has
potential to have a significant impact in advancing security and enhancing stability.
Organizational Performance Goal
The Center in the Indo-Pacific has a strong reputation and appears to execute a variety of
courses well. Program survey feedback, implementation of projects developed by the
international fellows while at the Center, and anecdotal feedback are indicators that fellows had a
positive experience while participating in a course (Hanauer et al., 2014). However, in addition
to building capacity, networks, education, and empowerment, the mission of the Center includes
“enhancing stability in the Indo-Asia-Pacific region” (DKI APCSS, 2017a, p. 1). After the start
of this study, DoD published an assessment, monitoring, and evaluation (AM&E) instruction
memo that described a hybrid framework entailing a decentralized assessment and monitoring
program and a centralized evaluation approach (DoD, 2017). This strengthened program leadership’s responsibility for monitoring and determining outcomes; long-term outcomes are also referred to as impact (DoD, 2017). Thus, the need still exists to systematically assess and articulate, with a reasonable level of confidence, the impact of the Center in the Indo-Pacific region as it relates to enhancing stability.
The organization’s goal is that, by January 2020, DKI APCSS will assess and articulate the impact of the Center on Indo-Pacific stability as it relates to its mission. The Center projected this goal in conjunction with an updated mission statement and an effort to align DoD priorities,
the Center’s activities, and objectives. The timeline for meeting this goal allows for a transition in the Center’s directorship, development of a Center strategy and impact assessment methodology, and any necessary training and implementation of systematic surveys, interviews, and other suggested measures. Not meeting this goal will place the Center at risk of
not articulating its impact and not receiving the resources needed to conduct its mission. In a
worst-case scenario, it could lead to the Center’s downsizing or closure, creating a gap in providing a safe space to discuss security issues, build relationships, cooperate, collaborate, and enhance stability in the Indo-Pacific region.
Description of Stakeholder Groups
Three main stakeholder groups are instrumental in achieving the Center’s goal. The first consists of DoD policy staff and agencies, including DSCA and USINDOPACOM; the second is the alumni, or fellows; and the third is the DKI APCSS staff and faculty. The first stakeholder group, DoD, or the higher-level stakeholder
group, is critical in providing succinct guidance and setting expectations on enhancing stability
in the Indo-Pacific region.
The participants in the various courses at DKI APCSS are called fellows and are responsible for their own and shared learning. Upon graduation from a course, fellows are also referred to as alumni, and they serve as the second stakeholder group. They are a major stakeholder group because alumni are the means to effect impact in the region when they return to their respective positions
and countries. Alumni have not only been educated and empowered while attending a course,
but they can also impact stability in the region by leveraging relationships and networks built at
the Center. Alumni also have the opportunity to reach back to the Center for faculty expertise,
library and research resources, and for further connections. This stakeholder group is essential in
providing feedback to the Center through surveys, interviews, reports, and social media on the
impact they have in the region as they continue through their careers.
The third stakeholder group consists of the DKI APCSS staff and faculty. This group is
essential to planning, preparing for and executing the mission. Moreover, members of this
stakeholder group are involved in the monitoring and evaluation of the Center’s mission.
Without this stakeholder group’s appropriate knowledge and skill, motivation, and organizational
resources, the need to assess and articulate the impact of the Center will remain.
Stakeholders’ Performance Goals
Success in reaching goals for each stakeholder group will contribute to accomplishing the overall
organizational goal by 2020. Table 1 outlines DKI APCSS’ mission, organizational goal, and
stakeholder performance goals.
Table 1
Organizational Mission, Organizational Goal, Stakeholder Performance Goals, and Projected Stakeholder Timelines

Organizational Mission
DKI APCSS educates, connects, and empowers security practitioners to build partner capacity, shared understanding, and networks to enhance stability in the Indo-Asia-Pacific region.

Organizational Goal
By January 2020, DKI APCSS will assess and articulate the impact of DKI APCSS on Indo-Pacific security as it relates to its mission.

Stakeholder Performance Goals
DKI APCSS Staff and Faculty: By January 2019, DKI APCSS staff and faculty will develop the structure and plan to assess the Center’s impact in the Indo-Pacific region.
DKI APCSS Alumni: By Summer 2019, 40% of DKI APCSS alumni will provide feedback as part of the assessment strategy.
Department of Defense Agencies (Defense Security Cooperation Agency and U.S. Indo-Pacific Command): By Fall 2018, specific DoD agencies will provide succinct guidance on goals to advance security in the Indo-Pacific region.
Stakeholder Group for the Study
While each of the three stakeholder groups described in Table 1 plays an instrumental
role in meeting the overall organizational goal, the DKI APCSS staff and faculty must have the
capability to create and sustain the plan for assessing and articulating the Center’s impact. Thus,
the focus of this study is on this collective stakeholder group, the DKI APCSS staff and faculty,
with the goal to validate and understand their needs to develop the structure and plan to support
the impact assessment process.
Purpose of the Project and Questions
The purpose of this study was to conduct a needs analysis in the areas of knowledge and
skill, motivation, and organizational resources necessary to reach the organizational performance
goal of systematically assessing and articulating the impact of the DKI APCSS. The analysis
started by generating a list of possible needs. The researcher then systematically examined these
needs to focus on validated needs. While a complete needs analysis would focus on all
stakeholders, for practical purposes, the stakeholder focus in this analysis was on the DKI
APCSS staff and faculty.
As such, the questions that guided this study are the following:
1. What are the DKI APCSS staff and faculty knowledge, motivation, and organizational
needs related to creating a plan and structure to systematically assess its impact?
2. What are the recommended knowledge, motivation, and organizational solutions for the
DKI APCSS to systematically assess and articulate its impact?
Conceptual and Methodological Framework
The conceptual framework for this project is an innovation study adapted as a needs
analysis based on Clark and Estes’ (2008) gap analysis, a systematic and analytical method that helps clarify organizational goals and identify the gap between actual and preferred performance levels within an organization. The methodological framework is a mixed-
methods study that includes a qualitative case study with descriptive statistics. Personal knowledge and observation, scanning, theory, and related literature informed the assumed knowledge, motivation, and organizational needs. These needs were validated through surveys, focus groups and interviews, literature review, observations, and document analysis. Research-based solutions were then recommended and evaluated in a comprehensive manner.
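As a minimal, hypothetical sketch of how descriptive statistics can support this kind of needs validation, the short Python program below flags an assumed need as validated when the share of survey respondents who do not agree they have a given capability meets a cutoff. The 50% threshold, the Likert agreement cutoff, and the ratings are invented for illustration; they are not the instrument or criteria used in this study.

from statistics import mean

VALIDATION_SHARE = 0.50  # assumed cutoff: a need is "validated" when at least
                         # half of respondents do not agree they have the capability
AGREE_CUTOFF = 4         # assumed: ratings of 4-5 on a 5-point Likert scale count as agreement

def validate_need(ratings):
    """Return (validated, share_not_agreeing, mean_rating) for one survey item."""
    not_agreeing = sum(1 for r in ratings if r < AGREE_CUTOFF)
    share = not_agreeing / len(ratings)
    return share >= VALIDATION_SHARE, share, mean(ratings)

# Example: hypothetical responses to one knowledge-related item
# (1 = strongly disagree ... 5 = strongly agree)
item_ratings = [2, 3, 4, 2, 1, 3, 5, 2, 3, 2]
validated, share, avg = validate_need(item_ratings)
print(f"validated={validated}, non-agreement share={share:.0%}, mean rating={avg:.1f}")

In this hypothetical example, eight of ten ratings fall below the agreement cutoff, so the need would be flagged as validated; the qualitative data (interviews, observations, and documents) would then corroborate or reject that flag.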
Definitions
Building capacity: process to enhance competencies in people, institutions, or societies to
perform specific activities or functions.
Communities of interest: group of people operating to further a common cause.
Conflict prevention: a variety of activities focused on proactively averting the outbreak of a
conflict.
Design: framework or approach towards conducting an assessment.
Empower: process to make a person or institution more confident or stronger to do something.
Indo-Pacific: term used to describe the regional area formerly known as Asia-Pacific and now
includes reference to the Indian Ocean.
Peacebuilding: a process that facilitates peace and attempts to prevent violence by addressing
root causes and effects of conflict through reconciliation and capacity building, in addition to
political and economic transformation.
Resilience: ability of a country to manage and recover from significant events such as natural
disasters.
Security assistance: sub-set of security cooperation that grants, sells, or leases military
equipment, education, or training to foreign nations to further U.S. policies or goals.
Security cooperation: collective term for the U.S. Department of Defense to build relationships
that promote specific U.S. security interests and develop partner nation capabilities.
Social capital: the value of human capital in which social networks are central where
relationships are discernible with reciprocity, trust, and cooperation focused on a common
good.
Soft power: persuasive approach typically in international relations to shape others without
coercion and influence through appeal or attraction.
Stability: region that is relatively balanced and not easily overturned.
Treatment: intervention or participation in a program when referring to DKI APCSS.
Track 1 Diplomacy: official communication and interaction between governments.
Track 2 Diplomacy: unofficial communication and interaction between non-state actors,
typically retired or former government officials or representatives from the private sector or non-
profits.
Organization of the Study
This study is organized into five chapters, with the first providing key concepts and
background information concerning peace and stability and the importance of assessing the
impact of security cooperation organizations. This chapter also introduced the DKI APCSS
mission, stakeholders and goals and the concept of the innovation study. Chapter Two provides
a review of literature related to the scope of the study. This literature review also includes topics
such as challenges in assessing impact in other soft power or non-governmental organizations. It
also describes various approaches and the levels of validity using different methodologies.
Chapter Three describes the assumed needs for this study in addition to the methodology for
selection of participants, data collection and analysis. Chapter Four discusses the analysis and
assessment of data and results from surveys and interviews. Chapter Five provides suggested
solutions, based on data and literature, for addressing the needs and closing the performance gap
as well as recommendations for an implementation and evaluation plan for the solutions.
CHAPTER TWO: REVIEW OF THE LITERATURE
This chapter reviews literature as it relates to assessing the impact of conflict
prevention strategies and enhancing stability in a region. It begins with a review of the
effects of global conflict and addresses the importance of conflict prevention, focusing on
the front end of the peace operations spectrum. This includes an analysis of global
organizations and strategies, then further hones in on aspects of conflict prevention at a
regional level. The most significant portion of the review focuses on the challenges of
assessing the impact of conflict prevention measures through both a soft power and social
impact lens with consideration of various approaches and levels of validity. The review
also highlights best practices and challenges in developing methodologies to assess the
impact of an organization ultimately focused on enhancing stability in a region. Through
learning theories and the knowledge, motivation, and organizational structure model, the
last section of this chapter reviews the capacity building needed to assess and articulate
the impact of the Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS), also referred to as the Center, which aims to enhance stability in the Indo-Pacific
region.
Conflict Prevention
Broadly, studies argue that multiple strategies and multifaceted actions including
non-coercive or soft power type approaches coupled with coercive options are most
effective in preventing conflict (Dyke, 1997; Jentleson, 2003; Lund, 1996; Oberg et al.,
2009; Regan, 2000). However, others tout a narrower definition of conflict prevention
that is focused on primarily diplomatic measures that are non-coercive (Munuera, 1994).
A review of the impact of global conflict is worth consideration while deliberating either
approach to conflict prevention.
Global Conflict Impact
Although there has been a decrease in interstate and civil wars, other types of
conflict such as terrorism have caused increased loss of life and other negative effects
(Muggah & White, 2013). Global levels of peace continued to deteriorate in the last
decade with increased levels of terrorism, over 370,000 killed in direct violence, and the
number of refugees and internally displaced persons expanding two-fold (Crawford,
2016; Institute for Economics and Peace, 2016). While the calculation of costs can vary,
in general, costs accumulate before war in preparation, during war for sustainment and care of the wounded, and after war to replace damaged equipment and infrastructure (Chalmers, 2007; Crawford, 2016). Further, future medical expenses for the wounded could cost another trillion dollars, raising the total projected cost to $4.7 trillion in 2017 for the United States’ conflicts in Iraq, Afghanistan, Pakistan, and Syria, along with
homeland security costs from 2001-2016 (Crawford, 2016). Given the significant costs
and impact of conflict on humanity, some researchers argue that investments in
preventing conflict are worthwhile. Yet it can still be difficult to prove that the costs and
benefits support more preventive action (Chalmers, 2007; Dyke, 1997; Jentleson, 1998;
Muggah & White, 2013).
Overview of Peace Operations and Conflict Prevention
The spectrum of peace operations ranges from pre-conflict to post-conflict. It
includes a variety of conflict prevention strategies with debatable success (Dyke, 1997;
Meharg, 2009; Woocher, 2009). At the front end of peace operations, the concept of
conflict prevention aims at reducing risk factors that can lead to war. It also refers to
containment, decreasing intensity, or spread of conflict (Muggah & White, 2013).
Menkhaus (2006) identifies that conflict prevention can focus on preventing imminent conflict, or it can include long-term capacity building to develop measures to resolve conflicts before they escalate (Organisation for Economic Co-operation and Development/Development Assistance Committee [OECD/DAC], 2012). Another part of
peace operations includes peacebuilding, which often focuses on the back end of the
spectrum in a transition from war to peace. However, this term can overlap with the
notion of conflict prevention, and includes stages where there are threats of violence or
efforts to reduce the risk for violence. Moreover, peacebuilding can include a number of
activities at various stages of a conflict (Boutros-Ghali, 1992; Evans, 1993; Lederach,
1997; OECD/DAC, 2012). This study refers to the broad definition of peacebuilding as
efforts to prevent violent conflict from erupting (Fast & Neufeldt, 2005; Meharg, 2009).
However, it does not focus on aspects of preventive diplomacy or mediation. Fast and
Neufeldt’s (2005) description of a peacebuilding goal is to create peace through
sustainable interactions and respectful relations, along with building capacity, to address
causes of conflict. For the purposes of this study, enhancing stability is a by-product of
conflict prevention and peacebuilding. As discussed below, non-violent approaches to
conflict prevention include soft power strategies, official government Track 1 and
informal non-official Track 2 discussions, efforts by alliances and institutions, and
security cooperation.
Soft power strategies. Nations employ various soft power approaches as part of a combined strategy to influence others in a non-violent or non-kinetic manner; these approaches are often described as intangible dimensions of power (Nye, 2004; Solomon, 2014).
Jentleson (1998) posits that approaches can include developmentalist diplomacy, a longer-term approach, or preventive diplomacy, which tends to address more immediate or mid-term prevention of conflict through, for example, engagements, person-to-person dialogues, democracy building, or environmental preservation. Even so, the argument exists that these approaches likely need to be backstopped by threats or a show of force (Jentleson, 1998).
Global alliances and institutions. Conflict prevention strategies also include the
building of alliances and institutions that provide tools to mitigate conflict, but may not
be effective due to factors such as a lack of commitment and priority, or evolving
environments (“Whole of Society Conflict Prevention and Peacebuilding,” 2017;
Woocher, 2009). In the early 1990’s, preventive measures included political efforts
through the Organization for Security and Cooperation in Europe that were designed to
prevent conflicts before they emerge (Jentleson, 1998). Despite these efforts, numerous
conflicts emerged, including in Bosnia, Croatia, and Somalia, where the majority of these conflicts were ethnic struggles (Jentleson, 1998). In the early 2000s,
preventive measures broadened to address threats such as natural disasters, terrorism, and
transnational crime (Muggah & White, 2013). In more recent years, the United Nations’
secretary-general raised the visibility of conflict prevention by naming 2012 “the year of
prevention” (Muggah & White, 2013, p. 2). However, despite evidence suggesting that
prevention is worthwhile, challenges remain in gaining political traction and funding, partly due to a lack of understanding of probabilities and future risks (Muggah &
White, 2013).
Track 2. While alliances and governments engage in official or Track 1
discussions, Track 2 discussions are an unofficial approach to resolve issues and build
relationships with participants from non-profits, think tanks, and former government
officials (Dyke, 1997). The Carnegie Commission for Preventing Deadly Conflict and
the Council on Foreign Relations Center for Preventive Action are example organizations
that were established to facilitate these efforts (Jentleson, 1998). A hallmark example of
a successful Track 2 endeavor was the signing of the Oslo Accord in 1993 (Burgess &
Burgess, 2010). Although violence eventually returned between the Israelis and
Palestinians, these Track 2 efforts have generally been credited with reaching an
unprecedented level of peace through increased communication, sharing knowledge and
ideas, and fostering new relationships (Burgess & Burgess, 2010). Like the goal of
assessments in other soft power efforts, Burgess and Burgess (2010) recommended not only garnering lessons learned for future implementation but also validating the efficacy of theories applied in Track 2 discussions. Overall, the impact and results of these
discussions are often difficult to assess due to intangible outcomes. Furthermore, the
topic of Track 2 discussions requires more research (Allen & Sharp, 2017; Ball, Milner,
& Taylor, 2007).
Security cooperation. DoD invests in preventing conflict and advancing U.S. interests through shaping operations, which include filling gaps in institutional capability in partner countries (Hanauer et al., 2014; Woocher, 2009). Activities include engagements through military operations with partner nations to enhance interoperability and capacity and to share values. Security cooperation also entails the sale of U.S. military weapons to partner nations and participation in Defense Institution Building (DIB) programs to holistically sustain and increase partner effectiveness. DoD’s DIB programs are also focused
on developing partner nations to operate under civilian control and better employ forces
independently or in support of U.S. efforts (Marquis et al., 2016). Furthermore, DoD has
five regional centers that seek to build capacity and communities of interest through
education, empowerment, and networking for partner nations (Hanauer et al., 2014).
Security cooperation efforts also extend into other service educational settings through
the International Military Education and Training program (DSCA, n.d.b.).
Conflict Prevention in the Indo-Pacific Region
An evolution of economic, political, demographic, maritime security and other
factors in the Indo-Pacific region creates an increasingly complex security environment
which could destabilize or lead to additional conflict (Swaine, Eberstadt, Fravel, Herberg,
& Keidel, 2015). Due to the risks of potential conflict, countries implement prevention
measures such as regional alliances, institutions, and security cooperation efforts to
enhance stability.
Regional alliances. The UN has sought to decentralize some of its conflict
prevention efforts to a regional level. The Indo-Pacific region includes a number of
regional and sub-regional alliances that approach issues differently, including ASEAN’s
(Association of Southeast Asian Nations) mandate to monitor and prevent conflict
through its preferred policy of constructive engagement (Dyke, 1997; Haacke, 2009;
Muggah & White, 2013). Established alliances such as ASEAN have evolved and grown
to create new organizations such as the ASEAN Regional Forum (ARF). The ARF
consists of 28 countries, including the original 10 ASEAN countries and dialogue
partners involved in both official Track 1 discussions and unofficial Track 2 discussions
(ARF, 2017). The ARF took measures to address security issues in the region, such as
counter-terrorism and maritime security, but cooperation and exercises were limited to
table-top exercises and reflected a need for more capacity building in its members to
create greater impact (Haacke, 2009). Further, in 2012, ASEAN’s focus shied away from
challenges that could further cause instability in the region, including issues in the
southern Philippines and southern Thailand (Muggah & White, 2013). ASEAN
continues to receive critical assessments of its efforts due to lack of capacity in the
organization and the challenge to reach consensus amongst its members, yet there have
been successful bilateral agreements between members (Tonnesson, 2015). Despite the
on-going Rohingya refugee crisis, ASEAN played a role in Myanmar’s transition from a formally military-led government, which was considered progress towards a democratic, civilian-ruled society (Tonnesson, 2015).
Regional institutions. Entities such as the East-West Center, under the U.S.
Department of State, and Pacific Forum, a non-profit, reside in the Indo-Pacific region
and aim to build understanding and relationships. However, the impact of their roles on
conflict prevention is unclear (East-West Center, 2017; Pacific Forum Center for
Strategic and International Studies, 2016). Further, regional Track 2 entities, such as the
Council for Security Cooperation in the Asia-Pacific (CSCAP), emerged in 1993 at a strategic level to enhance dialogue, conduct research, and contribute to complex topics while remaining loosely tied to government officials (Ball et al., 2007; Council for Security
Cooperation in the Asia Pacific, n.d.). The number of Track 2 organizations in the Indo-
Pacific region has grown to approximately 150 entities. Their efforts and impact vary, ranging from bilateral dialogues to work on non-traditional security topics such as pandemics (Ball et
al., 2007).
Security Cooperation in the Indo-Pacific Region. In the Indo-Pacific region,
the U.S. military conducts security cooperation through foreign military sales (FMS) of
equipment and training, military exchanges, and joint exercises such as Pacific Pathways
and RIMPAC. In an effort to maximize accountable and sustainable employment of
partner nations’ forces towards collective security, DoD provides institutional capacity
building through a number of programs (DSCA, n.d.c.). Further, as the focus of this
study, DKI APCSS is one of DoD’s five regional centers, and is a means of security
cooperation within the Indo-Pacific region to build partner capacity, shared
understanding, and networks through executive education and its extensive alumni
program. Additionally, DKI APCSS conducts research projects and senior leader
engagements (Ross, 2016; Strategy & Assessment (S&A), 2017).
Building capacity. DoD's Quadrennial Defense Review in 2012 identified DoD's
strategy to prevent and deter conflict by building the capacity of alliances and advancing
common interests to promote stability (GAO, 2013). DoD's 2018 National
Defense Strategy also addresses the need to strengthen alliances and attract new partners
as mutually beneficial partnerships have served the United States well. Further, DoD
acknowledges that partners and allies are an important part of preventing war and
maintaining a free and open Indo-Pacific (Mattis, 2018). Another benefit of building
capacity in fellows through the Center is the possibility to reduce U.S. long-term
presence in some countries yet still help protect partner nations (GAO, 2013). DKI
APCSS also conducts capacity building aimed at enhancing partner nations' ability to
respond to non-traditional security challenges such as natural disasters.
Education and empowerment. Education contributes to enhancing capacity and
empowering students with knowledge (GAO, 2013; St. Laurent, 2013; Templeton, n.d.).
DKI APCSS also focuses on educating civilian and military students to become more
adept at decision making to enhance the defense of their own populations and resources
(GAO, 2013). Alumni who have increased knowledge can also improve access to more
information and research through functional networks (GAO, 2013). Executive
education at DKI APCSS serves as a means for building capacity for fellows
participating in courses.
Shared understanding and connecting. Building relationships can lead to
cooperation when the parties involved desire mutually beneficial outcomes or depend on
each other's assets and capabilities (Church & Rogers, 2006; Xu,
Cui, Qualls, & Zhang, 2017). The founder of DKI APCSS, Senator Inouye, shared that
dialogue may have been viewed by some as weak, but he stressed that it is a powerful
tool and “in order to achieve collaboration and compromise, much work must be done to
sincerely understand each other” (Inouye, 2012, p. 1).
Challenges of Assessing the Impact of Conflict Prevention
This section begins with clarifying the definition of assessment and the reference
of this term in this study. It further describes the definition of impact and the two types
of impact, soft power and social capital, in the context of this scholarship. Most
significantly, the next portion addresses challenges and key elements of attempting to
assess the impact of conflict prevention.
Defining Assessment
The term assessment takes on many meanings and is often invoked at the
beginning of a project to assess the landscape or the problem (Church & Rogers, 2006;
DoD, 2017; Marquis et al., 2016). Assessment is also used to describe impact, which
implies a point in time generally at the end of a project or some time after (Mohr, 1995).
The term can also carry different meanings across organizations: the military applies
assessment throughout the full spectrum of reviews, while the U.S. Department of State
may refer to assessment as part of an initial review (Joint Chiefs of Staff [JCS], 2017).
The area of impact assessment is the most far-reaching and underdeveloped (Church &
Rogers, 2006).
Further, the terms evaluation and assessment tend to overlap and are at times
synonymous, while both include formative and summative aspects. Formative
approaches help develop goals to be valuable and useful in improving a process (Church
& Rogers, 2006; Rothman, 1997), and summative approaches track, monitor, and assess
fulfillment of goals, aiming to capture effectiveness and results (Church & Rogers, 2006;
Scriven as cited in Rothman, 1997). As Rothman (1997) contends, formative and
summative assessments should be considered along a continuum. Moreover, evaluations
and assessments may be conducted at various levels, such as a holistic, systemic level
spanning all DoD institutional capacity building programs, a partial or sectoral level, a
single agency, or a single project (DoD, 2017; International Federation of Red Cross and
Red Crescent Societies [IFRC], 2005).
For the purposes of this study, the term assessment is used rather than evaluation
and incorporates both formative and summative characteristics. Evaluations can analyze
various aspects of a program such as relevance, sustainability, and effectiveness (DoD,
2017; Organisation for Economic Co-operation and Development/Development
Assistance Committee [OECD/DAC], 2012). Yet this study focuses on the internal
assessment and articulation of impact as part of an on-going program for DKI APCSS,
which further allows the Center to learn and improve by understanding the impact of its
design and activities. When assessing impact in a program, success or failure can result
from either the theory of change or the implementation of the program (Church &
Rogers, 2006; OECD/DAC, 2012).
Defining Impact
In assessing the impact of an organization, one seeks to identify the change that
has taken place and whether that change endures (Church & Rogers, 2006). The
Organization for Economic Cooperation and Development (OECD) defines impact as
"positive or negative, primary and secondary effects produced by an intervention,
directly or indirectly, intended or unintended," which can be aimed at institutional-level
capacity or policy changes (Church & Rogers, 2006; International NGO Training and
Research Centre [INTRAC], 2015; OECD/DAC, 2012). Conversely, many NGOs focus
on people's lives, and impact is aimed at the level of individuals or households (INTRAC,
2015).
Some view an impact assessment as the most rigorous and scientific form,
demonstrating change beyond a reasonable doubt, while others may prefer to conduct a
less rigorous performance assessment (Marquis et al., 2016). Given the challenges of
assessing impact, researchers also refer to impact indicators, or simply indicators, to
measure achievement (Church & Rogers, 2006; OECD/DAC, 2012). Assessing impact
also tends to blur definitionally with assessing outcomes (Anderson, 2001; INTRAC, 2015).
Mills-Scofield (2012) describes outcomes as "knowledge transferred and behaviors
changed" and notes that they "create meanings, relationships, and differences." For this
study, outcomes refer to short- and mid-term changes, and impact refers to broader,
long-term outcomes or results of the program and changed behaviors as they contribute
to an organization's mission. In 2015, Kirkpatrick and Kirkpatrick updated their model by
incorporating long-term indicators within its Level 4, or results, category in the "New
World" model (J. Kirkpatrick & Kirkpatrick, 2015). As it pertains to this research, impact
will be viewed through a soft power and social capital lens.
Determining soft power impacts. Soft power influences are considered to be
efforts that do not involve military hard power. Assessment of soft power elements, such
as Track 2 diplomacy, may be approached through one of the methodological options
listed in Table 2. However, military forces may also be involved in the work of soft
power, particularly through various aspects of peace operations, conflict prevention, and
security cooperation. Impact in this vein can be assessed as conceptual or instrumental,
or as total or partial (Meharg, 2009). Soft power impacts for the military are commonly
assessed through measures of effectiveness and performance (Meharg, 2009). As
mentioned earlier, security cooperation, and its application through DKI APCSS, is a
means of soft power and serves as the foundation for assessing impact.
Determining social capital impacts. Assessing aspects of social capital is a
consideration given the influence of relationships and networks in conflict prevention
(Church & Rogers, 2006). Methodologies highlighted in Table 2 also reflect approaches
that consider social impacts. Socialization processes through informal memberships or
structures serve as a method to internalize shared values and norms (Narayan & Cassidy,
2001). Conversely, an absent, negative, or unaccountable social capital impact could
indicate corruption (Narayan & Cassidy, 2001). This type of assessment blends art and
science and addresses not just the existence of relationships but their nature and type
(IFRC, 2005; James, 2009; Narayan & Cassidy, 2001). By incorporating an appropriate
approach and tool, such as factor analysis, a multivariate statistical technique, an
organization may identify impactful factors or dimensions of an intervention (Narayan &
Cassidy, 2001). Part of the DKI APCSS mission, to connect international fellows and
build shared understanding and networks, prompts the need to consider aspects of social
capital impact. The following section addresses the challenges in assessing social impact
and other intangible outcomes.
Practical Challenges of Assessing Impact
A number of factors contribute to the challenges of assessing impact in complex
environments, whether the impact results from soft power or social influences.
Challenges include the lack of robust methodologies and the difficulty of isolating
interventions to determine cause and effect in programs where impact can take 2 to 10
years to occur (Church & Rogers, 2006; Hanauer et al., 2014; Jentleson, 1998; Marquis
et al., 2016; Mohr, 1995; OECD/DAC, 2012). The following sections highlight pertinent
challenges for DKI APCSS.
Environment. In some cases, assessments take place in violent areas or fragile
states where events unfold non-linearly. Nonetheless, assessments are needed to improve
learning and effect better results or impact (Marquis et al., 2016; OECD/DAC, 2012).
Other environments may be more benign but face the same challenge of discerning
among multiple interventions to validate cause and effect (Moroney, Hogler, Kennedy-
Boudali, & Pezard, 2011). The vastness of the Indo-Pacific region also creates a
challenging environment for assessment: its 36 countries vary greatly in size and
capability and encompass over 50% of the world's population and more than 3,000
languages (United States Indo-Pacific Command [USINDOPACOM], 2018). An
organization responsible for enhancing stability in a region this large is challenged to
disaggregate data collection and subsequently assess contextually specific indicators
(Church & Rogers, 2006).
Intangibles. Numerous studies have identified the challenge of assessing a
program that is premised on building capacity and relationships (GAO, 2013; Marquis et
al., 2016). Intangible topics such as capacity, relationships, trust, and confidence levels,
particularly in varying cultural contexts, are a challenge to measure (James, 2010).
Unlike the health field and others with extensive research on impact indicators,
developing such indicators is a nascent endeavor for an organization such as DKI APCSS
(Church & Rogers, 2006; Hailey, James, & Wrigley, 2005). An additional complicating
factor is an organization's operating values: non-attribution policies could inhibit the
ability to disaggregate data by gender, region, or position without guaranteeing identity
protection for select survey respondents (DKI APCSS, 2017c).
Counterfactual reasoning and control groups. Practitioners and policy makers
struggle with measuring whether preventive action works, since obtaining evidence is
complicated (Leeuw & Vaessen, 2009; Marquis et al., 2016; Muggah & White, 2013).
Assessments may incorporate counterfactual reasoning, an attempt to capture and
measure what would have happened in the absence of the program and whether that
might have resulted in a different impact (Leeuw & Vaessen, 2009; OECD/DAC, 2012).
This consideration requires either a randomized control group or a quasi-experimental
method that uses a comparison group, where one connects a theoretical basis and an
alternative hypothesized path with empirical data as much as possible (Jentleson, 1998;
Leeuw & Vaessen, 2009; Mohr, 1995). To address the challenge of lacking a control
group, techniques such as propensity score matching can be applied to a comparison
group built ex-post to add rigor to the impact analysis, yet this requires a large sample
and still has shortfalls with bias and unobserved factors (Leeuw & Vaessen, 2009).
Another emerging option could be an application of Stochastic Linear Programming to
increase the validity of data collected without a control group (M. Powers, 2018).
Further, ethical issues emerge when possible harm is known and withholding preventive
action could increase risk to a control group.
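As a rough illustration of the propensity score matching technique noted above, the following sketch uses invented data rather than any DKI APCSS records; the covariate names and effect size are hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))            # covariates (e.g., rank, years of service)
    treated = rng.integers(0, 2, size=500)   # 1 = participated, 0 = ex-post comparison pool
    outcome = X[:, 0] + 2 * treated + rng.normal(size=500)

    # Step 1: estimate propensity scores P(treated | covariates).
    scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: match each treated unit to the comparison unit with the nearest score.
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(scores[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(scores[t_idx].reshape(-1, 1))

    # Step 3: estimate the effect as the mean matched outcome difference.
    effect = (outcome[t_idx] - outcome[c_idx[match.ravel()]]).mean()
    print(f"Estimated effect: {effect:.2f}")

The matching step is what substitutes for randomization; as the literature above notes, it cannot remove bias from unobserved factors.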
Data collection. Assessing impact often requires data collection over an extended period
of time, prompting longitudinal studies (Church & Rogers, 2006; Hanauer et al., 2014; Marquis et
al., 2016; Mohr, 1995; W.K. Kellogg Foundation, 2004). This also speaks to the larger challenge
in complex environments of whom to collect data from and how to reach respondents. Key data
collection for DKI APCSS conceivably entails surveying over 10,000 alumni who are dispersed
throughout the Indo-Pacific region and who, as part of their service, typically relocate regularly,
creating challenges in obtaining feedback and staying connected (DKI APCSS, 2016). Further
challenges include the manpower and resources to receive, analyze, and process voluminous data
as part of the assessment process. Applying emerging DoD tools such as the recent Content
Heuristic Unstructured Parsing and Predictive Electronic Tool (CHUPPET) plausibly eases the
process by identifying relevant themes and temporal trends while reducing analyst bias (M. J.
Powers, 2018).
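CHUPPET itself is not publicly documented in the sources cited here, but the general idea of machine-assisted theme identification can be sketched with off-the-shelf topic modeling; the sample responses below are invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    responses = [
        "The course strengthened my professional network across the region",
        "I applied the crisis management framework in my ministry",
        "Alumni workshops helped me stay connected with other fellows",
        "Our agency revised its disaster response plan after the program",
    ]

    # Weight terms by importance across the open-ended responses.
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(responses)

    # Factor the document-term matrix into two latent themes.
    nmf = NMF(n_components=2, random_state=0)
    nmf.fit(X)

    terms = tfidf.get_feature_names_out()
    for i, weights in enumerate(nmf.components_):
        top = [terms[j] for j in weights.argsort()[-3:][::-1]]
        print(f"Theme {i}: {', '.join(top)}")

A tool applied at scale would also tag responses by date to surface the temporal trends the paragraph mentions.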
Approaches to Assessing Impact in Complex Environments
This section describes approaches for assessing impact and starts with the need
for having a strategy and defining success. It follows with a discussion of the role of a
conceptual program design. The last section comprehensively reviews various designs
and methodologies, along with considerations for quantitative, qualitative, and mixed-
methods approaches.
Jentleson (1998) contends that assessing impact must allow a degree of relativity
yet, to provide credibility, a successful assessment must maintain a sufficient level of
reliability. Impact can be vague and difficult to state succinctly, particularly when
assessing intangibles; identifying linkages and applying other methods can add rigor to
the process and improve the precision and dependability of the assessment. Another
challenge identified in the research is the transitoriness of success: success may be a
brief moment in time or short-lived due to external influences, and the assessment is
likely conducted months to years after a program is
completed (Church & Rogers, 2006; Jentleson, 1998). Academics reason that there is a
lack of standard metrics to evaluate the results of preventive interventions. Muggah and
White (2013) contend that success cannot be defined in absolutes, such as conflict
avoidance, but must consider historical and contextual aspects along with the specific
goals of interventions. Rothman (1997) suggests that one should assess success against
goals and not an abstract definition of success. Rothman (1997) further expounds on the
need for an objective set of standards for conflict resolution that could help identify
success and allow for a user-friendly and replicable methodology. Within the DoD, goals
and objectives should be aligned with a strategy that is nested within the theater campaign
plan (Marquis et al., 2016).
Conflict prevention programs that lack a clear theory of change grounded in the
sources of conflict, validated underlying assumptions, and baseline data complicate the
implementation of a solid methodology for assessing impact (Church & Rogers, 2006;
Marquis et al., 2016; Mohr, 1995; OECD/DAC, 2012). Given the vastness of the Indo-
Pacific region, conducting baseline studies and developing program goals that involve
numerous countries from different parts of the region is more challenging due to the
large variance in cultures, histories, and sources of conflict (Church & Rogers, 2006;
Marquis et al., 2016). Programs also often require additional resources to conduct impact
assessments beyond a typical 3- to 5-year strategic plan (Church & Rogers, 2006; Church
& Shouldice, 2002; Curnan, LaCava, Sharpsteen, Lelle, & Reece, 1998).
Methodological Approaches to Assessing Impact
Leveraging methodologies can help assess, incorporate, or articulate the impacts
of change (Briard & Carter, 2013; IFRC, 2005; Lance, Guilkey, Hattori, & Angeles,
2014; Marquis et al., 2016). However, use of a methodology is not a panacea for
assessing impact. It provides a framework or an approach through processes to relate
needs or theories with results and changes, but it does not negate the inherent challenges.
While there are numerous categories of methodologies, the ideal methodology for an
organization operating in a complex environment is likely a combination of the processes
and tools from a suite of methodologies (IFRC, 2005; Marquis et al., 2016; OECD/DAC,
2008, 2012). Within this study, analysis of evaluation approaches is also applied to
assessments given the extensive similarities and overlap between evaluations and
assessments and their terminology. RAND highlighted frameworks from other
government agencies, including the DoS Managing for Results (MfR) model, USAID's
Interagency Security Sector Assessment Framework (ISSAF), and the Millennium
Challenge Corporation approach that leverages input from partner countries to conduct
assessments (Marquis et al., 2016). These each provide useful concepts and elements that
may be incorporated, but the frameworks are targeted at a level that may not be best
suited for a regional center. The table below outlines six of many other methodological
approaches with a brief summary of applicability and considerations for use in
innovating a process to assess impact.
Table 2 and the following sections provide more detailed descriptions and comparisons of
the methodologies and approaches.
Table 2

Modified Methodological Approaches to Assessing Impact (PricewaterhouseCoopers, 2007)

Logic Framework
Description: Structured approach that outlines intervention logic using key indicators.
Advantage: Highlights linkage of projects with desired outcomes.
Disadvantage: Complex environment may not reveal direct linkages of results.
Remarks: Micro level; tends to be a linear process.

Participatory or Action Evaluation
Description: Aimed at project level involving more stakeholders.
Advantage: Facilitates micro-level initiatives with more stakeholder involvement.
Disadvantage: Lacks linkage to macro or strategic level.
Remarks: Consider stakeholders at the micro level.

Theories of Change
Description: Focused on how theories can effect change.
Advantage: Intends to assess validity of intervention.
Disadvantage: Best employed during planning phase.
Remarks: Macro level and appropriate for course and program development.

3d Gen PCIA
Description: Incorporates other methodologies in this approach.
Advantage: Comprehensive and full spectrum.
Disadvantage: Challenge to get consensus on causes and goals.
Remarks: Micro and macro level; political aspects not appropriate for DKI APCSS.

Results Based
Description: Focus is on impact or intended results.
Advantage: Identified results can be shared with funders.
Disadvantage: Can overlook ways to improve program.
Remarks: Micro and macro level; impact must be further articulated.

Goal-Free
Description: Does not focus on goals but allows them to emerge.
Advantage: Provides flexibility in a fluid environment.
Disadvantage: Can appear to be reactive and fragmented.
Remarks: Micro and macro level; challenging for a large and diverse region.
Logic framework. This framework focuses on causal relationships of inputs,
activities, outcomes, and impact, and considers underlying assumptions and risks that can
help in assessing programs (Church & Rogers, 2006; INTRAC, 2015; OECD/DAC,
2012; W.K. Kellogg Foundation, 2004). This approach typically traces a flow of inputs
to impact in a linear fashion, and usually from a single perspective, in an attempt to
capture attribution (Church & Rogers, 2006; INTRAC, 2015). However, this directional
flow and focus on quantifiable indicators may fall short of a comprehensive assessment
and reflect small-scale examples that illustrate work performed, but not the impact,
particularly if impact is not realized until years after a project is complete (Curnan et al.,
1998; INTRAC, 2015; PricewaterhouseCoopers, 2007). DoD has applied a similar
framework to its Section 1206 program, which equips and trains foreign counterterrorism
forces and faces similar challenges in the assessment process (Marquis et al., 2016).
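The input-to-impact chain at the heart of the logic framework can be made concrete with a simple structure; the entries below are illustrative placeholders for a hypothetical executive education course, not Center data:

    # Each stage of the logic framework, traced linearly from inputs to impact,
    # with the underlying assumptions the framework asks planners to record.
    logic_model = {
        "inputs": ["faculty time", "course funding"],
        "activities": ["executive education course", "alumni workshops"],
        "outputs": ["fellows completing the course"],
        "outcomes": ["fellows applying security cooperation concepts"],
        "impact": ["enhanced regional stability (long term)"],
        "assumptions": ["fellows remain in positions of influence"],
    }
    for stage, entries in logic_model.items():
        print(f"{stage}: {', '.join(entries)}")

The strictly linear flow of such a chain is exactly the limitation noted above when impact is not realized for years.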
Participatory or action evaluation. This method may be applied for long-term
projects operating in complex and dynamic environments that require flexibility to effect
change (Church & Rogers, 2006; OECD/DAC, 2008, 2012; Rothman, 1997). A key
characteristic of this approach is the integration of assessors and other stakeholders, such
as leadership or faculty, to establish goals, validate assumptions, and develop assessment
measures (Church & Rogers, 2006; OECD/DAC, 2008; PricewaterhouseCoopers, 2007).
An advantage of this action research method is the continuous feedback loop in which
theory informs practice and practice informs theory (Ross, 2001; Rothman, 1997).
Rothman (1997) suggests that the intent of systematizing this process is to increase
flexibility in making progress toward the goals. As part of this approach, identifying
shared goals helps define success for interventions or strategies, and self-reflection can
spur changes needed to meet success (Rothman, 1997). This approach usually involves a
guide or facilitator, which could also incur costs for training sessions. Finally, this
approach is largely focused at a project or micro level, and if goals are not focused on
capturing impact, it may fall short of assessing the impact of an organization such as
DKI APCSS (PricewaterhouseCoopers, 2007; Ross, 2001).
Theory of change. This approach relies on applying a theory of change that is
based on one’s experiences, beliefs, or research, to the assessment of an intervention
(Bamberger, 2006, as cited in OECD, 2008). It also focuses on who, what, and how to
effect change to create an impact and can help align efforts from goals to impact
(OECD/DAC, 2012; PricewaterhouseCoopers, 2007). This approach is conducive to
discerning whether a failure to effect impact was due to faulty program performance or
an ineffectual theory (OECD/DAC, 2008; PricewaterhouseCoopers, 2007). Conversely,
an assessment conducted without applying a theory of change risks allocating additional
resources to determine the source of a shortfall in a program that did not meet its
intended outcomes or impact. Incorporating a theory of change is thus helpful because it
can contribute to learning and improving a program (Neufeldt, 2011;
PricewaterhouseCoopers, 2007). An innovative application of this approach appeared in
an impact study on entrepreneurial education, which faced similar challenges in that
impact was difficult to isolate because of the time required for impact to manifest. The
study sought a framework that captured rich yet robust, empirically validated data,
leveraged the theory of planned behavior, and identified intention and attitudes as the
best indicators of impact behavior (Fayolle, Gailly, & Lassas-Clerc, 2006).
Third generation Peace and Conflict Impact Assessment (PCIA) also known
as Aid for Peace. This approach has evolved since the original PCIA methodology. It
incorporates a number of methodologies, beginning with anticipating the impact of
interventions, continuing with monitoring and evaluation during the design and execution
phases, and including a post-project assessment (Bush, 2009; PricewaterhouseCoopers,
2007). The main focus of this approach is identifying a country's or area's needs and
designing interventions or projects, while considering the risks of the intervention,
followed by an assessment of the effects and impacts (PricewaterhouseCoopers, 2007).
Suggestions such as incorporating a participatory approach and integrating causal chains
are key points, yet emphasis is placed less on impacts and more on outputs and indicators
(PricewaterhouseCoopers, 2007). Elements of this methodology may not be applicable to
all projects since it includes political and economic aspects that may not pertain to
organizations more focused on areas such as security (PricewaterhouseCoopers, 2007).
Results based approach. Largely as a response to donor demands and public
accountability, this approach focuses on whether impact or intended results were
achieved through a program or intervention based on what was projected, purported, or
promised (OECD/DAC, 2008). If results are identified, organizations may inform donors
or other stakeholders on the impact of the organizations’ efforts. This approach can also
be participatory and may increase understanding of the challenges to conduct an
assessment without specifying indicators, conducting longitudinal studies or a conflict
analysis (OECD/DAC, 2008). However, if this approach focuses solely on results, it
misses opportunities to identify process shortfalls and ways a program could be better
executed. As with other approaches, this assessment requires a baseline study, which is
often absent (OECD/DAC, 2008).
Goal-free. At the other end of the spectrum is a unique approach to an
assessment that is without focus on goals or intended results, and it is also referred to as a
needs approach (Church & Rogers, 2006; OECD/DAC, 2008). This tactic supports the
idea that revealing unintended results is as informative as intended ones, and
subsequently, results can be compared to needs of an intended population (Church &
Rogers, 2006; OECD/DAC, 2008). As part of this approach, evaluators avoid interaction
with leadership and learning about goals to minimize bias of influence or potentially
flawed programs (Church & Rogers, 2006). The assessment may be more applicable in a
dynamic environment where previous goals are no longer applicable, and it may be more
objective without the bias of imposing program leadership goals (OECD/DAC, 2008).
However, this approach could require more resources, as the assessment would entail a
broader review without the focus of assessing impact against specific goals (Church &
Rogers, 2006; OECD/DAC, 2008).
Research Designs and Methods
Various research designs and assessment or evaluation methods may be applied as part of
the methodological approaches described above. There are numerous debates regarding the
application of quantitative or qualitative research methods, with each having its advocates,
adversaries, and moderates (Creswell, 2014; Merriam, 2009; Mohr, 1995; Reichardt & Cook,
1979; Scriven, 1972, 1976). The following sections provide more insight into quantitative,
qualitative, and mixed-methods approaches with regard to impact assessments.
Quantitative. Quantitative approaches aim to be statistically verifiable and outcome
oriented, with attempts for findings to be generalizable (Creswell, 2014; Mohr, 1995; Reichardt
& Cook, 1979). Quantitative approaches are generally viewed as the most objective and are
typically applied by positivists who seek objectivity (Mohr, 1995). A quantitative approach and
design are often thought of as numbers oriented and include experimental designs that
incorporate a control group and, ideally, randomized selection of participants (Mohr, 1995).
Non-experimental designs, such as the use of surveys, are also part of a quantitative design
(Creswell, 2014). Mohr (1995) classifies impact analysis designs as experimental and quasi-
experimental, with the former generally serving as the highest standard design due to its
selection mode. Internal validity, or the accuracy of findings, varies, with threats minimized by
implementing a randomized and centralized selection mode (Mohr, 1995; Reichardt & Cook,
1979). The quasi-experimental design leverages a comparison group that is not randomly
selected and so introduces elements of selection bias. Mohr (1995) further describes the ex-post
facto design as generally the weakest in principle because the lack of a control or comparison
group undermines the ability to show that observed effects are not due to chance. However, the
inability to employ a control or comparison group for ethical reasons serves as an impetus to
consider other approaches. Moreover, a quantitative approach falls short of capturing nuances
and clarifying data in complex environments (Mohr, 1995; Reichardt & Cook, 1979).
Qualitative. One may expect that if a quantitative approach fails to obtain rich data to
clarify a complex situation, or lacks validity due to the absence of a control group, another
approach such as a qualitative design that captures narrative descriptions may be more suitable
(Merriam, 2009; Reichardt & Cook, 1979). A qualitative design entails data collection methods,
such as interviews and focus groups, that develop meaning. The researcher then identifies themes
and inductively generates findings (Creswell, 2014; Merriam, 2009). Although a qualitative
design is not focused on statistics per se, there is still a goal to maintain quality, for instance by
standardizing interview questions and triangulating data. An example of a qualitative design that
tends to be free flowing is a naturalistic evaluation, where standards are different but clarified
and stated up front (Erlandson, 2012). Standards may address the credibility of the researcher by
conducting member checks, maintaining a reflexive journal, and leveraging referential adequacy
or appropriate documents (Erlandson, 2012). Additional goals could be addressed, such as
striving for transferability rather than generalizability by documenting rich descriptions, or
creating dependability in lieu of reliability by maintaining meticulous notes (Erlandson, 2012).
Qualitative research typically takes place in a natural setting rather than a sterile lab and focuses
on gathering multiple forms of data to triangulate findings (Creswell, 2014; Merriam, 2009).
This design may be the sole approach to a research study, or it may complement a quantitative
design in what is described as mixed methods.
Mixed-methods. Given that quantitative and qualitative approaches have their respective
advantages and disadvantages, a mixed-methods approach serves to maximize the benefits of
each (Creswell, 2014; Marquis et al., 2016). In the case of DKI APCSS, true experimental and
even quasi-experimental approaches are not likely feasible without a control or comparison
group. This inability to develop a control group stems from decentralized selection procedures
that involve vetting and coordination for the selection of international participants, not
randomized selection. However, utilizing survey methods from a quantitative approach has the
potential to add strong value to the collection of data, particularly over time. The rigid structure
of a quantitative approach may be limiting for an innovation study, but incorporating qualitative
approaches allows for more creativity (Creswell, 2014). Further, given the nature of building
networks and relationships, interviews and focus groups from a qualitative approach could be
integrated into the research design. While these dual pursuits may require more time and money,
they may constitute the most appropriate approach for a unique organization that seeks both
validity and rich data collection.
Best Practices in Organizational Impact Assessment
As mentioned earlier, applying a combination of methodologies tailored to a
specific organization is more likely to meet the intent of an assessment. Further, studies
indicate that consistently collected, triangulated information leads to effective
assessments, as does employing methods, models, or rubrics adapted to the specific
assessment (Curnan et al., 1998). INTRAC (2015) recommends assessing outcomes on a
consistent basis to provide early indicators that activities are on track toward making an
impact.
Indicators. A relatively recent trend is for organizations to integrate indicators
into assessments, particularly when impacts are intangible or not yet achieved (DoD,
2015; J. Kirkpatrick & Kirkpatrick, 2015). Indicators are most effective when they are
measured against a baseline (Marquis et al., 2016; Mohr, 1995). Further, indicators
should reflect details of what is being measured, have a unit of measurement, specify
quality and a timeframe, and identify target populations (Church & Rogers, 2006;
Marquis et al., 2016). Indicators should pass a test of both quantitative and qualitative
measures consisting of reliability, feasibility, and utility in decision making (Church &
Rogers, 2006). Assessments can also refer to well-researched indicators, as in the health
arena, to leverage areas such as behavioral change as it applies to conflict prevention.
When integrating indicators in a baseline study, Church and Rogers (2006) caution that
focusing solely on impacts could cause an organization to overlook more immediate
changes. In the case of DKI APCSS, DoD provides indicators of success in short-, mid-,
and long-term categories that somewhat correlate to Levels 1 through 4 of Kirkpatrick's
New World model (DoD, 2015; J. Kirkpatrick & Kirkpatrick, 2015). However,
determining whether the Center is meeting these indicators requires additional effort to
measure and assess them.
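One way to operationalize the indicator criteria above is to record each indicator as a structured record; this is a minimal sketch with a hypothetical indicator, not one drawn from DoD guidance:

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        measure: str        # what is being measured
        unit: str           # unit of measurement
        quality: str        # quality standard for the data
        timeframe: str      # when the indicator is assessed
        population: str     # target population

    example = Indicator(
        measure="Alumni reporting use of course concepts in policy work",
        unit="percent of survey respondents",
        quality="triangulated against interview data",
        timeframe="12 months after course completion",
        population="fellows from the two most recent course cohorts",
    )
    print(example)

Recording indicators this way also makes the baseline comparison explicit: the same record can be measured before and after an intervention.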
Models. Given the complexity and uniqueness of the regional centers, along with
the fact that impact assessments are a nascent endeavor, there is a shortfall in exemplars.
Models such as Kirkpatrick's New World model seek to help organizations determine the
results of training and focus on identifying intended results within Level 4 (J.
Kirkpatrick & Kirkpatrick, 2015). However, this model falls short of specifying the
details of seeking valid data, particularly for organizations that cannot rely on the use of a
control group. This model, along with others, stresses the importance of planning up
front, identifying goals and intended impact, incorporating a theory or plan to effect the
impact, and systematically seeking results through on-going assessments. Thus, the
Center has an opportunity to develop a model that best captures the impact of its unique
mission.
Articulating results. Upon completion of an extensive assessment of an
organization’s impact, it is critical to communicate the results in a manner that can
appropriately shape policy or resourcing (Zenisky & Laguilles, 2012). Clarifying aspects
of an assessment, such as the purpose, intended audience, and data collection methods,
adds rigor and credibility to the report (Zenisky & Laguilles, 2012). Further, articulating
results informs internal and external stakeholders for the purpose of learning and
improving as well as providing input to decision makers (Truman & Triska, 2001).
Building Capacity Within DKI APCSS to Measure Impact
Given the numerous challenges in assessing and articulating impact for a complex
mission, this study leverages Clark and Estes's (2008) process to identify and ascertain
the needs of the staff and faculty at DKI APCSS in order to build their capacity to
innovate a solution. This process analyzes assumed needs through a knowledge,
motivation, and organizational lens, then develops solutions based on the validated
needs. Chapter Three provides the in-depth methodology that validated, or in some cases
did not validate, the assumed needs.
Knowledge and Skills Needs
In order to achieve the stakeholder’s goal to systematically assess and articulate
the impact for the Center, knowledge and skills are needed to develop an appropriate
solution (Anderson & Krathwohl, 2001). The Center will require the full spectrum of
knowledge dimensions and cognitive processes to innovate a solution to assess and
articulate impact for their organization. Anderson and Krathwohl (2001) address the
factual, conceptual, procedural and metacognitive dimensions of knowledge needs that
will be assessed in this study. Further, the knowledge needs identified will require each
level of the cognitive process dimensions classified in Bloom’s revised taxonomy that
include remember, understand, apply, analyze, evaluate and create (Krathwohl, 2002).
Factual knowledge. Krathwohl (2002) describes factual knowledge as basic
facts, information and terminology as it pertains to a subject. Knowledge is a resource
and stakeholders have specific knowledge needs to be able to develop, implement, and
articulate an impact assessment plan for DKI APCSS. Staff and faculty need to know the
factual goals of the organization, including one specifically related to the need for
assessing and articulating the impact of the Center as it relates to its mission of enhancing
stability (Anderson & Krathwohl, 2001). Although staff and faculty were in the process
of building assessment tools focused on shorter-term outcomes that may eventually
contribute to measuring longer-term impact, they also need to know their roles in
contributing to the assessment of impact.
Conceptual knowledge. Conceptual knowledge consists of theories, principles,
and knowledge of underlying classifications (Krathwohl, 2002). Conceptual knowledge
needs consist of comprehension of the design schema, change theories and familiarity
with advantages and disadvantages of the methodologies and tools. This conceptual
knowledge may then be used to develop an assessment process at the Center and
subsequently contribute to assessing impact and improving the program.
Procedural knowledge. Krathwohl (2002) further explains procedural
knowledge as the procedures and skills conducted in a task that includes how to
accomplish the task step-by-step using certain techniques or methods. With conceptual
knowledge of the purpose of assessment, change theories and methodologies, staff and
faculty need the procedural knowledge to relate the design of the program to the applied
methodology and tools to assess impact, and subsequently to collect and analyze data,
then articulate the impact (Church & Rogers, 2006).
Metacognitive knowledge. The fourth main category in the knowledge and skills
classification is metacognitive, described as learning how to learn or reflecting on
knowledge strategies, planning approach, and monitoring progress (Krathwohl, 2002).
Metacognition is the most challenging aspect of capacity to assess but is a critical
element of self-reflection for staff and faculty, both individually and as an organization
(Anderson & Krathwohl,
2001). This organization, as part of the self-assessment process, must also be cognizant
of any inherent bias as well as be prepared for the results regardless of the type of impact
revealed. Staff and faculty will further set objectives, continue to build upon previous
knowledge and experience, and self-reflect throughout the process (Krathwohl, 2002).
Motivation Needs
General. In addition to the knowledge requirements, staff and faculty need
elements of motivation to develop and implement an impact assessment plan (Clark &
Estes, 2008; Dembo, 2004). This study identified assumed needs and causes, validated
motivation levels, and assessed causes that assisted in recommending solutions as part of
the innovation (Clark & Estes, 2008). Clark & Estes (2008) also describes three indices
of motivation as choice, mental effort, and persistence. Choice consists of the decision to
act in pursuit of the goal (Clark & Estes, 2008). DKI APCSS indicated interest in
pursuing the goal to assess impact yet a decision had not been made for stakeholders to
act on this pursuit. Persistence as an indicator with regard to motivation reflects that
work continues toward the goal of pursuing impact (Clark & Estes, 2008). Upon
commitment and implementation to assess impact, stakeholders at DKI APCSS will need
to persist with this endeavor, avoid distractions, and dedicate time to the process. The
third indicator is mental effort which reveals if there is on-going work to develop new
knowledge (Clark & Estes, 2008). Stakeholders at the Center need new knowledge in the
pursuit of assessing and articulating impact. Exploring indices of motivation and
validating assumed needs contributed to the development of solutions.
Other factors that influence the motivation of staff and faculty are interest, value,
self-efficacy, and goal orientation (Mayer, 2011). Mayer (2011) contends that if an
individual has personal or situational interest, he or she is more likely to learn or
perform. Similarly, if one values a task or sees its importance, motivation is higher than
if the task is not valued. Mayer (2011) also addresses self-efficacy: higher performance is
more likely when one believes he or she has the competence to accomplish the task.
Lastly, Mayer (2011) advises that focusing on the goal of accomplishing the task, rather
than outperforming the competition, will enhance motivation levels. Beliefs, including
perceptions of faculty culture, and work environments with changing or vague goals can
also contribute to levels of motivation (Clark & Estes, 2008). Assessing and enhancing
confidence at the individual and staff levels are also key to fostering motivation to
innovate an impact assessment plan (Clark & Estes, 2008).
Expectancy Theory. Eccles's (2013) expectancy theory, at its simplest, addresses
two key questions: "Can I do the task?" and "Do I want to do the task?" Although these
questions originated in a classroom-learning environment, they also apply to an
organization not situated in a classroom. Staff and faculty need to have confidence in
their abilities to develop an impact assessment for a complex organization, knowledge of
how to develop a robust and rigorous plan, and the ability to execute it. Similarly, this
study aims to validate whether staff and faculty have the intrinsic value and utility value
that influence motivation to assess the organization's impact (Eccles, 2013).
Organizational Needs
The third pillar of the Clark and Estes (2008) model highlights the need to
identify organizational gaps or needs that hinder achieving a performance goal.
Organizational needs can include alignment of goals, resourcing, and cultural
understanding (Clark & Estes, 2008). Schein (2004) describes culture as an abstract
influencer that resides in a group’s unconscious. Schein (2004) also addresses the
importance of considering cultural aspects within an organization, particularly in
professional fields such as college faculty, who create their own unique culture and
potentially hold strong norms and values.
Cultural models. Cultural models reflect an individual or organization’s values
or norms and are often automated actions (Gallimore & Goldenberg, 2001). Within the
DKI APCSS, the organization’s culture appeared to reveal an earnest determination to
educate, empower, and connect fellows in an inclusive environment. The staff and
faculty also appeared to have pride in conducting professional and quality programs in
support of a worthy mission. Thus, the Center’s cultural models reflected overall strong
values, positive attitudes, and a desire to execute programs professionally. While the
Center has routinely been queried about a reduction in resources, there have not been any
significant reductions in funding (DKI APCSS Strategy and Assessment (S&A), 2017).
These circumstances seemed to create pressure to respond, but it was unclear whether the
perceived worthiness of executing the mission overrode the need to establish a norm of
innovating a solution to assess impact.
Cultural settings. While cultural models are a reflection of values and beliefs
that can appear to be invisible, cultural settings tend to be visible manifestations of
cultural models, such as daily activities (Gallimore & Goldenberg, 2001). In some
circumstances, it could also be the absence of daily activities that establishes a cultural
setting (Gallimore & Goldenberg, 2001). Examples of cultural setting needs may include
focused or stable goals, improved communication, and realignment or an increase of
resources to conduct tasks, such as faculty preparation time. An assumed need in the
Center is an alignment of goals and the potential resources it will require to innovate a
solution to assess and articulate the organization’s impact (Clark & Estes, 2008;
Gallimore & Goldenberg, 2001). A review of organizational barriers such as the inability
to conduct longitudinal surveys may lead to a realignment of internal resources to
conduct assessments.
Conclusion
The purpose of this study was to identify and validate assumed influences needed to
innovate a solution for the DKI APCSS to assess and articulate its impact. This chapter
described the impact of global conflict and efforts to prevent conflict and enhance stability.
Further literature expounded on the challenges of assessing the impact of organizations involved
in conflict prevention and described methodologies for assessments. This chapter also addressed
promising practices to enhance assessments of organizations involved with conflict
prevention. The last part of the literature review discussed the knowledge, motivation, and
organizational assumed influences that pertained to the Center. Chapter Three will present the
study’s methodological approach to validate or not validate needs for the Center’s stakeholder
group to achieve its performance goal and the overall organization’s performance goal related to
assessing and articulating impact.
CHAPTER THREE: METHODOLOGY
Purpose of the Project and Questions
The purpose of this study was to conduct a needs analysis in the areas of knowledge and
skill, motivation, and organizational resources necessary to reach the organizational performance
goal of systematically assessing and articulating the impact of DKI APCSS. The analysis started
by generating a list of possible needs and then examined these methodically to focus on actual,
or validated, needs. While a complete needs analysis would focus on all stakeholders, for
practical purposes, the stakeholder focus in this analysis was on the Center's staff and faculty.
Given this stakeholder focus on staff and faculty, the following questions guided this
study:
1. What are the DKI APCSS staff and faculty knowledge, motivation, and organizational
needs to create a plan and structure to systematically assess and articulate the Center's
impact?
2. What are the recommended knowledge, motivation, and organizational solutions to
address the validated and actual needs and achieve the organizational performance goal?
Stakeholders of Focus
There are three main stakeholder groups: the Center’s staff and faculty, fellows and
alumni, and the DoD and the U.S. Indo-Pacific Command. All play an instrumental role in
meeting the overall organizational goals. However, the DKI APCSS staff and faculty must have
the capability to create and sustain the plan for assessing and articulating the Center’s impact.
Thus, the focus of this study was on this collective stakeholder group of DKI APCSS staff and
faculty, with the goal to understand their needs to develop the structure and plan to support
implementation of the impact assessment process.
Methodological Framework
The conceptual framework for this project was an innovation study adapted as a needs
analysis based on Clark and Estes's (2008) gap analysis. This systematic, analytical approach
assisted in clarifying organizational goals and identifying the needs between the actual and
preferred performance levels within an organization. The methodological framework was a
mixed-methods study that included a qualitative case study with descriptive statistics.
Observations, personal knowledge, and related literature informed the assumed knowledge,
motivation, and organizational needs. These needs were validated through surveys, interviews,
observations, literature review, and content analysis. Findings in Chapter Four resulted in the
validation of 12 of 15 assumed needs, of which six were knowledge related, three were
motivation related, and three were organization related. Research-based solutions are
recommended in Chapter Five and will be evaluated in a comprehensive manner.
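As a small illustration of the descriptive statistics component, the following sketch summarizes hypothetical Likert-scale survey items; the item names and values are invented, not the study's data:

    import pandas as pd

    # Hypothetical 1-5 Likert responses from six respondents on three items.
    survey = pd.DataFrame({
        "knows_role_in_assessment": [4, 3, 5, 2, 4, 4],
        "values_impact_assessment": [5, 4, 5, 3, 4, 5],
        "confidence_in_methods": [3, 2, 4, 2, 3, 3],
    })

    # Means, standard deviations, and quartiles per item.
    print(survey.describe())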
Based on the needs identified to meet organizational goals, this framework was adapted
into an innovation model. The innovation model identified assumed influences, validated them
through a systematic approach, and determined recommendations to create an innovative
solution to meet the organizational goal. Figure 1 below describes the Clark and Estes (2008)
process that was adapted for an innovation model.
Figure 1. Gap analysis process.
Assumed Influences on Performance
In Clark and Estes’ (2008) needs analysis using an innovation model, one identifies
assumed performance-related needs to reach organizational goals. The needs analysis
framework suggests the important step of identification of assumed knowledge, motivation, and
organization needs and argues that this step is typically omitted to the detriment of organizational
performance and effective generation of solutions to close performance gaps (Clark & Estes,
2008). This framework is being leveraged to ensure that real needs are identified and not
omitted. As such, a thorough investigation of organizational performance included three
components: (a) observations and preliminary scanning interviews with stakeholders; (b)
learning, motivation, and organization/culture theory; and (c) review of the literature as it
pertains to assessing impact in a security organization with a unique mission. In Chapter Two,
the literature review addressed global peace, purposes for taking action, and some of the various
approaches of conflict prevention to enhance stability. The main portion of the literature review
focused on assessing complex, intangible outcomes and impact as a result of efforts to educate,
empower, and build capacity, relationships, and networks. The latter portion focused on
learning, motivation and organizational theory to ascertain additional possible stakeholder needs
to reach organizational goals. The next portion of this chapter describes the results of
observations and preliminary scanning interviews. These results led to additional assumed
knowledge, motivation, and organizational needs identified for the Center to create a plan to
assess and articulate its impact.
Critical Observations and Preliminary Scanning Data
The researcher conducted observations and preliminary scanning interviews in the
organization of this study. As a result of these activities, knowledge and skills, motivation, and
organizational needs were identified. The following is a discussion of the knowledge,
motivation, and organizational assumed needs to implement the organizational goal to assess and
articulate the impact of DKI APCSS.
Knowledge and Skills
Preliminary scanning of the Center revealed that staff and faculty are dedicated subject
matter experts and were aware of the indicators of success that DoD provided to assess
performance. The staff and faculty demonstrated in weekly meetings and after action reviews
that they had a process and were proficient in assessing coursework and in many cases, short-
term outcomes. Yet, due to the complexity of the goal to assess and articulate the Center’s
impact, there was a variance in the staff and faculty’s knowledge as it pertained to knowing goals
and specific roles and methods to assess impact. Another key observation was that staff and
faculty need to understand that enhancing stability reflects the goal of impact in the Center’s
mission (Marimon, Mas-Machuca, & Rey, 2016). Further, additional preliminary scanning
indicated the need for staff and faculty to know how strategy, design and program development
related to creating and assessing impact. Most significantly, staff and faculty need to know how
to apply theories, best practices, methods, models, or rubrics to create a methodology to assess
the Center’s impact. Finally, when striving to articulate the Center’s impact, the staff and faculty
routinely referred to predominantly anecdotal examples. For the future, there was a need to
articulate the Center’s impact by leveraging more evidence-based and triangulated data. The
combination of the difficulty of assessing the Center's impact and the lack of a method for doing
so contributed to the assumed knowledge needs identified.
Motivation
As mentioned in the knowledge section, staff and faculty reflected a high level of
dedication to the Center’s mission (Kim, 2012). Preliminary scanning also reflected that staff
and faculty valued conducting thorough assessments of reactions and outcomes for each course,
workshop, and program that the Center conducted to improve its performance. It was unclear
whether the full stakeholder group valued the need to assess and articulate the Center’s impact
through empirical evidence and triangulated data. Based on the challenges and perceived effort
to determine the Center’s impact, staff and faculty may not have considered this task as
important as other competing demands such as preparing for and executing programs and
conducting research.
The S&A working group was striving to build a foundation of systematically assessing
outcomes with a goal to ultimately assess impact. This group, along with faculty leadership,
tended to prepare the Center’s leadership for briefings to the DoD and U.S. Indo-Pacific
Command. The Center's leadership demonstrated that it valued the ability to articulate the Center's impact
based on empirical evidence. Preliminary scanning also suggested that staff and faculty needed
greater self-efficacy regarding the task to create a tailored methodology to assess and articulate
the Center’s impact due to the complexity of the concept and process.
Organization
Preliminary scanning revealed that the culture at DKI APCSS fostered teamwork but
needed resources to learn how to create a design and methodology, model, or rubric for assessing
and articulating impact. Further, current programming requirements and executing programs
appeared to demand the staff and faculty's near full attention. Preliminary scanning thus showed
that staff and faculty would likely require the means to collect and analyze data over time to
conduct impact assessments. Further, there was a potential need for the Center to structurally
re-align staff to conduct and articulate the impact assessments. Staff and faculty also needed to
see utility in their efforts: assessing and communicating the Center's impact should provide
sufficient justification to request adequate resources to further enable the Center's impact.
Finally, this
stakeholder group needed to consider that assessing the Center’s impact was in alignment with
the Center’s mission to ultimately enhance stability. Table 3 provides a full summary of the 15
assumed needs that were generated as a result of scanning, literature review, theory, and
observations.
Table 3

Source of Assumed Needs (sources: Scanning, Literature Review, Theory, Observation)

Knowledge
- Staff and faculty need to know their roles in assessing and articulating impact (F) — x x x x
- Staff and faculty need to have a common definition of 'enhancing stability' as part of the mission statement in order to align roles and to be able to assess the Center's impact (F) — x x
- Staff and faculty need to understand that enhancing stability reflects the goal of creating impact in the Center's mission (C) — x x
- Staff and faculty need to know how strategy, design, program development, and their roles relate to creating and assessing the Center's impact (C) — x x x x
- Staff and faculty need to know how to relate program development to indicators of success and key performance indicators and assessing impact (P) — x x x
- Staff and faculty need to know how to articulate the Center's impact (P) — x x x x
- Staff and faculty need to know how to apply theories, best practices, methods, models, or rubrics to create a methodology to align roles and assess impact (P) — x x x x
- Staff and faculty need to be aware of any inherent biases to the process and purpose of assessing impact (M) — x x x

Motivation
- Staff and faculty need to personally value the importance of aligning roles and assessing and articulating the Center's impact based on evidence (Intrinsic Value) — x x x x
- Staff and faculty need to value the effort of assessing and articulating the Center's impact as worthy of time and resources (Cost Value) — x x x x
- Staff and faculty need to have confidence to create a methodology, model, or rubric to assess and articulate the Center's impact (Self-Efficacy) — x x
- Staff and faculty need to realize the Center could lose resources if its impact cannot be assessed and articulated (Expectancy Outcome) — x x

Organization
- Staff and faculty need adequate resources to learn how to align roles and create and implement a methodology to assess and articulate impact (Cultural Setting) — x x
- Staff and faculty need sufficient resources to collect and analyze data over time (Cultural Setting) — x x x
- Staff and faculty need to believe the Center will request adequate resources to create impact based on assessments (Cultural Model) — x x

Note. (F) Factual Knowledge; (C) Conceptual Knowledge; (P) Procedural Knowledge; (M) Metacognitive Knowledge
The next section will describe the population and data collection methods to validate or
not validate these assumed influences on stakeholder performance and attainment of the
organizational performance goal. Following these descriptions, Table 4 summarizes the assumed
needs and indicates whether a survey, interview, observation, or document analysis was leveraged
to validate the need.
Population of Study
The stakeholder of focus for this study was the Center's staff and faculty due to the need
for them to have the capability to create and sustain a plan for assessing and articulating the
Center's impact. The survey covered the Center's population of 112, the collective stakeholder
group of staff and faculty, with the goal of identifying and validating their needs to develop the
structure and plan to support the impact assessment process. Survey respondents remained
anonymous throughout the survey, analysis, and findings. To gain a deeper understanding of the
survey results, DKI APCSS leadership were also included in this study. Three interviewees,
selected through purposeful, stratified sampling of the Center's leadership, served as the
population for the individual leadership interviews. Each interviewee represented the Center's
front office, the College of Security Studies, or the Dean of Admissions and Business
Operations, and each was referred to only as a leader. Finally, two groups of staff and faculty
participated in two separate focus group interviews, with participants selected through
purposeful, stratified sampling. The first group consisted of peer supervisors and the second
group included non-supervisory peers. Selection of focus group participants was handled
confidentially, and participants were referred to only as a staff or faculty member, or as a
supervisory or non-supervisory focus group participant.
Data Collection
The researcher obtained permission from the University of Southern California’s
Institutional Review Board (IRB) and also complied with DoD requirements for conducting
surveys with DoD personnel. In order to determine the knowledge, motivation, and
organizational needs for the DKI APCSS to assess and articulate its impact, this study collected
data to validate the assumed needs related to reaching the organizational performance innovation
goal.
The researcher distributed a quantitative survey via email, with an online link, to the DKI
APCSS civilian and military staff and faculty, which consisted of 112 different email addresses.
Surveys were anonymous and collected only non-identifiable demographics categorized as staff,
faculty, or other (leadership). The survey consisted of 19 questions using a combination of
Likert scale and open-ended questions. The specific questions on the survey are listed in
Appendix A. The survey was pilot tested prior to its distribution to all participants. Responses
did not identify participants, and survey responses were kept on a password-protected computer
stored in a secure location.
Following the survey, the researcher conducted 45-minute semi-structured qualitative
interviews with three senior leaders in the Center as voluntary participants to further comprehend
knowledge, motivation, and organizational needs to assess and articulate impact, and to further
validate assumed causes for the influences. Participants' identification was kept confidential.
The interview included eight questions listed in Appendix B. Permission was obtained for
interviews to be recorded and transcribed, and transcripts were stored on a password-protected
computer in a secure location.
The researcher also conducted a 60-minute focus group interview with four voluntary
participants who were peer supervisors (College of Security Studies representative – 1, Center
Staff – 1, and Dean of Business Administration and Operations representatives – 2) to further
comprehend knowledge, motivation, and organizational needs to assess and articulate impact,
and to further validate assumed causes for the influences. Participants' identification was kept
confidential, and participants were referred to only as staff or faculty or as a supervisory focus
group participant. The focus group interview included seven questions listed in Appendix C.
Permission was obtained for interviews to be recorded and transcribed, and transcripts were
stored on a password-protected computer in a secure location.
The researcher also conducted a second 60-minute focus group interview with four
voluntary participants who were non-supervisory peers (College of Security Studies
representatives – 2, Center staff – 1, and Dean of Business Administration and Operations
representative – 1) to further comprehend knowledge, motivation, and organizational needs to
assess and articulate impact, and to further validate assumed causes for the influences.
Participants' identification was kept confidential, and participants were referred to only as staff
or faculty or as a non-supervisory focus group participant. The interview included the same
seven questions listed in Appendix C. Permission was obtained for interviews to be recorded and
transcribed, and transcripts were stored on a password-protected computer in a secure location.
Data collection also included a document analysis review of current guidance provided to
the Center on its mission and priorities; its policies, procedures, and purpose for conducting
surveys with fellows and alumni; and other current documents used to assess its objectives. The
researcher acknowledges that there may have been limitations to the document analysis such as a
delay in publication of updated or current documents. The checklist at Appendix D was used to
review the documents in a systematic manner. These data sources ensured triangulation of data
for purposes of trustworthiness.
Observations were also conducted to validate needs. The checklist in Appendix E guided
the observations for a methodical approach to validating needs. The researcher acknowledges
that there may have been limitations to the observation process as interactions that took place
may be unique and not fully representative of on-going interactions.
To validate knowledge, motivation, and organizational assumed causes, an online survey,
in-person interviews, document reviews, and observations were conducted. The following
sections further expound on the protocols for the survey, interviews, observations, and document
analysis.
Surveys
Upon approval from the IRB and DoD, surveys were administered online via a survey
program, in English, in the fall of 2017. Surveys did not collect any identifiable information but
indicated whether the respondent was in a category of staff, faculty, or other (leadership). The
survey was available for about three weeks, with weekly reminders noting that participation was
voluntary. There were 19 questions, 17 of them using Likert scale or multiple-choice responses;
the last two questions were open-ended. The survey included the same questions for the staff
and faculty. Within this stakeholder group of approximately 112 people, 35 were faculty and 77
were staff and leadership. Twelve DoD contractors were not included in the survey email;
while they play a key role in the Center, their non-participation did not affect results, as they
worked predominantly in the Center's IT section. There was a 41% response rate, with 46 of the
112 who were sent the survey participating. Of the 46 participants, 16 or 35% were faculty, 25
or 54% were staff, and 5 or 11% responded as leadership. Survey results were triangulated with
information obtained from follow-on interviews and focus groups.
Individual Interviews
The researcher conducted individual interviews with three senior leaders assigned to the
Center. Interviewees were selected by leadership position to represent the Center's front office,
the College of Security Studies, and the Dean of Admissions and Business Operations, in order
to obtain rich data based on leadership experiences. Participants were referred to only as
leadership in the findings. Interviews were conducted individually in a private conference room
at the DKI APCSS facility, and each interview took approximately 45 minutes. The researcher
obtained permission to record and transcribe feedback, which was handled in a confidential
manner. The interviews were conducted in English and included member checks.
Focus Group Interviews
Whereas individual leader interviews allowed for a deeper understanding of needs by
specific roles, the focus group interviews allowed for interaction among diverse participants to
generate new ideas while validating or not validating assumed faculty and staff needs.
Interviewees were selected through purposeful sampling to obtain rich data based on positions of
expertise. There were two focus group interviews. The first group consisted of peer supervisors
from the Center: one supervisor from the College, one supervisor from the Center's staff, and
two supervisors from the Dean of Admissions and Business Operations. This interview
followed a list of seven main questions. The second group of four participants consisted of
non-supervisory peers purposefully selected based on their roles within the Center: two members
of the College, one member of the Center staff, and one employee from the Dean of Admissions
and Business Operations. Interviews were conducted in a private conference room with the
group seated around a table at the DKI APCSS facility, and each interview took 60 minutes.
This interview also consisted of the same seven questions. The researcher obtained permission
to record and transcribe feedback, which was handled in a confidential manner. The interviews
were conducted in English and included member checks.
Observations
The researcher periodically observed meetings and program after action reviews in
various sections of the Center. The purpose of the observations was to identify discussions that
reflected and validated the staff and faculty’s knowledge, motivation, and organizational needs to
assess and articulate the Center’s impact.
Document Analysis
The following documents were analyzed to validate the need for creating a methodology
to assess and articulate the Center’s impact, to provide insight into existing strategies for
assessing outcomes, and where additional needs may exist: DoD guidance memorandums and
instructions related to performance and impact assessments for the regional centers; DKI APCSS
documents related to strategy development; DKI APCSS after action reports and their relation to
assessing impact; annual reports; press releases that pertain to impact; and DKI APCSS
assessment plans and charts. These documents provided insight into existing planning efforts to
assess the Center’s impact. The review sought evidence of plans, methodologies, models, or
rubrics for assessing and articulating impact.
Validation of Influences
The assumed needs for knowledge, motivation, and organization influences were
validated as a need, or not validated, through the triangulation of data from the survey results,
individual and focus group interview feedback, a thorough document analysis, and observations
as indicated in Table 4. Chapter Four presents an analysis of the data and findings, from which
the researcher developed solutions targeting the validated needs, as presented in Chapter Five.
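
One way to picture this triangulation logic is the simplified sketch below. This is illustrative only, not the study's actual decision procedure, which weighed the sources qualitatively; the function name and the three-way outcome are assumptions introduced for the example.

```python
def triangulate(survey, interviews, documents=None, observations=None):
    """Combine per-source readings for one assumed need into a status.

    Each argument is True (the source suggests the need exists), False (the
    source suggests the need is already met), or None (source not used).
    """
    signals = [s for s in (survey, interviews, documents, observations)
               if s is not None]
    if all(signals):
        return "validated"
    if not any(signals):
        return "not validated"
    return "partially validated"

# Example: the survey suggests the need is met, but interviews and
# observations suggest otherwise, yielding a partial validation.
print(triangulate(survey=False, interviews=True, observations=True))
```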
Table 4

Summary of Assumed Needs and Data Collection (data sources used to validate each assumed need)

Knowledge
• Staff and faculty need to know their roles in assessing and articulating impact (F): survey, interviews, observation
• Staff and faculty need to have a common definition of 'enhancing stability' as part of the mission statement in order to align roles and to be able to assess the Center's impact (F): survey, interviews, observation
• Staff and faculty need to understand that enhancing stability reflects the goal of creating impact in the Center's mission (C): survey, interviews, observation
• Staff and faculty need to know how strategy, design, program development, and their roles relate to creating and assessing the Center's impact (C): survey, interviews, document analysis, observation
• Staff and faculty need to know how to relate program development to indicators of success and key performance indicators and assessing impact (P): survey, interviews, observation
• Staff and faculty need to know how to articulate the Center's impact (P): survey, interviews, document analysis, observation
• Staff and faculty need to know how to apply theories, best practices, methods, models, or rubrics to create a methodology to align roles and assess impact (P): survey, interviews, document analysis, observation
• Staff and faculty need to be aware of any inherent biases to the process and purpose of assessing impact (M): survey, interviews, observation

Motivation
• Staff and faculty need to personally value the importance of aligning roles and assessing and articulating the Center's impact based on evidence (Intrinsic Value): survey, interviews
• Staff and faculty need to value the effort of assessing and articulating the Center's impact as worthy of time and resources (Cost Value): survey, interviews
• Staff and faculty need to have confidence to create a methodology, model, or rubric to assess and articulate the Center's impact (Self-Efficacy): survey, interviews
• Staff and faculty need to realize the Center could lose resources if its impact cannot be assessed and articulated (Expectancy Outcome): survey, interviews

Organization
• Staff and faculty need adequate resources to learn how to align roles and create a methodology to assess and articulate impact (Cultural Setting): survey, interviews
• Staff and faculty need sufficient resources to collect and analyze data over time (Cultural Setting): survey, interviews, document analysis
• Staff and faculty need to believe the Center will request adequate resources to create impact based on assessments (Cultural Model): survey, interviews

(F) Factual Knowledge; (C) Conceptual Knowledge; (P) Procedural Knowledge; (M) Metacognitive Knowledge
Trustworthiness of Data
To ensure the trustworthiness of data used in this project, the investigator implemented
the following measures: (a) data were triangulated across surveys, interviews, observations, and
document analysis; (b) survey respondents were assured of anonymity and interview participants
of confidentiality; (c) survey items were based on valid and reliable instruments; (d) the
investigator conducted member checks; and (e) the investigator obtained feedback from the
Center's leadership on adhering to a transparent process for data collection.
Role of Investigator
The investigator’s role in this project was to conduct an investigation for an innovative
approach to meet the organization’s performance goal of methodically assessing and articulating
the Center’s impact. The investigator served as a visiting academic at this Center for the purpose
of conducting this project. Also on a voluntary basis, the investigator also served on the Center’s
Foundation board to help the Center with additional resources as needed in support of their
mission.
Although the investigator did not have any employees or subordinates working in the
Center, the following measures were taken to reduce any perceptions of coercion or pressure for
participants:
• Survey participants remained anonymous, and interview participants' identities were kept confidential
• Information, identity, responses, and data were treated confidentially
• Surveys and interviews reinforced that participation was voluntary and each person had
the right to not participate in the research
• The researcher obtained permission to use documents that were produced for other
organizational purposes
• The investigator requested feedback from colleagues and the deputy director in the
organization regarding any potential misunderstandings
• The investigator coordinated with any concerned participants and leadership on how
descriptions of their work and points of view were published
• The investigator did not use or refer to the investigator's previous military rank as a senior
officer
Data Analysis
The researcher analyzed the data by reviewing survey responses and coding interview
feedback and documents. Descriptive statistics were applied to summarize survey results;
transcribed interviews were coded based on categories of assumed knowledge, motivation, and
organization needs; and triangulation of data was used to validate or not validate assumed needs.
Observation notes were also analyzed to confirm knowledge needs indicated in Table 4.
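
As an illustration of the descriptive statistics step, the following sketch tallies four-point Likert responses for a single survey item and reports the share of affirmative answers; the function and variable names are hypothetical, and the sample counts mirror Figure 2 rather than an actual data file.

```python
from collections import Counter

LIKERT = ["Not at all", "Somewhat not", "Somewhat", "Very much"]

def describe_item(responses):
    """Return per-level counts and the percent answering affirmatively."""
    counts = Counter(responses)
    affirmative = counts["Somewhat"] + counts["Very much"]
    return {
        "counts": {level: counts[level] for level in LIKERT},
        "pct_affirmative": round(100 * affirmative / len(responses), 1),
    }

# Illustrative data mirroring Figure 2 (0, 5, 16, 25 across the four levels).
sample = ["Somewhat not"] * 5 + ["Somewhat"] * 16 + ["Very much"] * 25
print(describe_item(sample))  # pct_affirmative: 89.1
```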
Limitations and Delimitations
Limitations of this study included the lack of literature as it specifically pertained to
assessing and articulating the impact of DoD’s unique regional centers. Other limitations
included the self-reported data in surveys that may contain individual bias or selective recall.
Delimitations of this study included the focus on the stakeholder group of staff and faculty, as it
would have been ideal to study all stakeholder groups to determine how to assess and articulate
the Center’s impact. Additional considerations involved the purposeful selection of interviewees
to ensure that limited resources were focused on participants who could provide the most
substantial feedback. The results of this study are most applicable to the other DoD regional
centers that have similar missions in other geographic regions. The results, or portions of the
results, may be further generalized to other DoD institutions that provide executive education,
build capacity, or build relationships and networks. Lastly, elements of assessing impact for the
DKI APCSS could also apply to other entities that strive to create impact through executive
education, building capacity, or building relationships and networks.
The researcher also served as a visiting scholar at DKI APCSS during the research period,
and there is a possibility of bias stemming from that faculty role. However, measures were taken
to capture various perspectives from the Center and mitigate the presence of inherent bias.
Following the implementation of the methodology described in Chapter Three, Chapter
Four describes the findings of survey feedback, analysis of individual leader interviews and
focus group discussions, as well as findings from document analysis and observations. These
results were triangulated to examine the validation or non-validation of the 15 assumed needs.
CHAPTER FOUR: FINDINGS AND RESULTS
The purpose of this research was to examine assumed needs in the areas of knowledge
and skills, motivation, and organizational resources necessary to reach the organizational
performance goal of systematically assessing and articulating the impact of the Daniel K. Inouye
Asia-Pacific Center for Security Studies (DKI APCSS). Initial analysis generated a list of
assumed needs, as presented in Chapter Three, that were examined and analyzed to validate or
not validate each one through a combination of survey, interview, and focus group results, as
well as observation and document analysis.
The research methodology and analysis were aimed at answering the two main research
questions below. This chapter focuses on the findings for the assumed needs that answer the first
question below. Based on the needs that are validated in Chapter Four, Chapter Five will include
suggested solutions that address areas needed for the Center to assess and articulate its impact.
1. What are the DKI APCSS staff and faculty knowledge, motivation, and organizational
needs to create a plan and structure to systematically assess and articulate the Center’s
impact?
2. What are the recommended knowledge, motivation, and organizational solutions to meet
the validated needs to meet the organizational performance goal?
Of the 15 assumed needs presented in Chapter Three and addressed in this chapter, nine
were validated, three were partially validated, and three were not validated. Following a brief
overview of the stakeholders that participated in the surveys and interviews, and the analysis
process, each assumed need and its findings will be described in greater detail along with a
summary of each knowledge, motivation, and organizational need category.
Participating Stakeholders and Methodological Framework
The focus of this study was on the collective stakeholder group of the DKI APCSS staff
and faculty, with the goal to understand their needs to innovate a structure and plan to support an
impact assessment process. Using the innovation model and Clark and Estes' (2008) gap
analysis approach, the researcher identified assumed knowledge, motivation, and organizational
needs and, through a systematic approach, validated or did not validate each of the assumed
needs. Based on the findings, the next chapter provides recommendations to create an innovative
solution to meet the Center's organizational goal.
The DKI APCSS staff and faculty are the central stakeholder group due to the need for
them to have the capability to create and sustain a plan for assessing and articulating the Center’s
impact. As discussed in Chapter Three, the population for the survey included the collective
stakeholder group of 112 staff and faculty with 46 respondents for a 41% response rate. The
researcher conducted three individual interviews with leaders from the main sections in the
Center followed by two focus group interviews. The first focus group consisted of four
supervisors who were selected to balance representation of the Center. The second group
comprised four non-supervisory peers who also represented various sections of the Center. The
interaction with this population through the survey, individual interviews, and focus group
interviews provided valuable input to the validation process of assumed needs.
Validation of Assumed Needs
The following sections detail the findings of each knowledge and skill, motivation, and
organizational assumed need, respectively. Each section begins with a table summary of the
assumed needs in that category, followed by a detailed discussion of the validation process and
findings.
Each assumed need was addressed in the survey and through a combination of the senior
leader interviews, focus group interviews, observations, or document analysis. The researcher's
validation process began with a review of the survey feedback, where 15 of the 19 questions
used a four-point Likert scale of "not at all", "somewhat not", "somewhat", and "very much".
The first step identified whether 85% or more of the survey participants responded "somewhat"
or "very much", which initially indicated an affirmative response. However, survey responses
can reflect varying levels of confidence or belief as compared to actual practice, and results
required further analysis (Jackson, Messick, & Braun, 2002). The next step analyzed interview
and focus group data as well as observation notes and document analysis to identify
corroborating or contradictory information to further triangulate data and
provide findings. The researcher will detail the analysis of each assumed need and provide a
conclusion of whether the need was validated, not validated, partially validated, or if a new need
was identified. An assumed need is considered “validated” if the data together suggest that
additional work is needed in that area to meet the stakeholder and organizational performance
goals related to assessing and articulating impact. Validated needs will be addressed with
possible solutions and recommendations in Chapter Five. An assumed need is considered “not
validated” if the data together suggest that the stakeholder group already has the knowledge,
motivation and organizational support needed to reach the performance goal and no additional
solutions are needed in the area. The first category of analysis addresses assumed needs in the
knowledge and skills arena.
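
A minimal sketch of the 85% screening rule described above appears below; the helper is hypothetical, and the first-pass reading shown here was always subject to being overturned by the qualitative sources, as the roles item demonstrates.

```python
def initial_screen(pct_affirmative, threshold=85.0):
    """First-pass reading of a Likert item before triangulation.

    An affirmative share at or above the threshold initially suggests the
    stakeholders already have the asset (need not validated); a lower share
    initially suggests a need. Interviews, focus groups, observations, and
    documents can still overturn either reading.
    """
    if pct_affirmative >= threshold:
        return "initially suggests need is met"
    return "initially suggests a need"

print(initial_screen(89.1))  # roles item: met on first pass, later only partially validated
print(initial_screen(60.9))  # strategy/design item: a need on first pass, later validated
```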
Knowledge Needs
The researcher initially identified eight assumed needs related to knowledge and skills.
As noted in Clark and Estes (2008), the knowledge categories consisted of needs that were
considered factual, conceptual, procedural, or metacognitive. The first category, factual,
included assumed needs such as whether staff and faculty had a common understanding of a
definition such as 'enhancing stability'. Another knowledge category was conceptual and
focused on the staff and faculty need to understand that enhancing stability reflects a goal of
creating impact in the Center's mission. A third category addressed the staff and faculty's
procedural knowledge and focused on their levels of knowledge of how to accomplish certain
tasks. The last knowledge category assessed the staff and faculty's metacognitive knowledge
and identified whether there was awareness of bias and reflection in the process of assessing
impact.
Findings from the survey, three senior leader interviews, two focus groups, observations
and document analysis concluded that of the eight assumed knowledge needs, five needs were
validated, two were not validated, and one was partially validated. As part of this innovation
study, the five validated needs, along with the one partially validated need, are those that
generated proposed solutions as part of the Center’s efforts to assess and articulate its impact,
and will be discussed in Chapter Five. Table 5 reflects a summary of the eight assumed
knowledge-related needs and their validation status.
Table 5

Summary of Assumed Knowledge Needs Validation Results

• Staff and faculty need to know their roles in assessing and articulating impact (F): Partially validated
• Staff and faculty need to have a common definition of 'enhancing stability' as part of the mission statement in order to align roles and to be able to assess the Center's impact (F): Not validated
• Staff and faculty need to understand that enhancing stability reflects the goal of creating impact in the Center's mission (C): Not validated
• Staff and faculty need to know how to relate strategy, design, and program development to creating and assessing the Center's impact (P): Validated
• Staff and faculty need to know how to relate program development to indicators of success and key performance indicators and assessing impact (P): Validated
• Staff and faculty need to know how to articulate the Center's impact (P): Validated
• Staff and faculty need to know how to apply theories, best practices, methods, models, or rubrics to create a methodology to assess the Center's impact (P): Validated
• Staff and faculty need to be aware of any inherent biases to the process and purpose of assessing impact (M): Validated

(F) Factual Knowledge; (C) Conceptual Knowledge; (P) Procedural Knowledge; (M) Metacognitive Knowledge
Understanding roles in DKI APCSS. The researcher projected that a solution was
needed for staff and faculty to understand their role at the Center to assess and articulate the
impact of DKI APCSS’s efforts in the Indo-Pacific region. Survey feedback, as shown in Figure
2, reflected that most participants believed they understood their role in assessing impact
to some degree. Survey results reflected that a majority, 41 of 46 respondents or 89%, either
indicated they somewhat (16 responses) or very much (25 responses) understood their role in the
process to assess and articulate impact.
Figure 2. Survey question: "To what extent do you understand your role in assessing and articulating the impact of DKI APCSS' effort to enhance stability in the Asia-Pacific region?" Responses: Not at all = 0, Somewhat not = 5, Somewhat = 16, Very much = 25.
Five survey respondents indicated they somewhat did not understand their role in the
assessment process. Interviews, however, revealed that roles were less clear. When asked,
"Could you describe your role in how the Center is doing their assessments and being able to
articulate it?" one leader emphasized the role as conveying the Center's efficiencies
quantitatively and "to do an assessment is easy when you have numbers," but also
acknowledged, "proving that we're worth a hoot is done by outcomes and that is a qualitative
assessment, so how do you make the qualitative quantitative is the biggest challenge." The
interviewee had also participated in an Ivy League program and asked peers and professors about
this type of assessment; "they didn't have any answers." Thus, while the leader described having
a role in the process, the leader indicated that his role in incorporating necessary qualitative data
was challenging and not as clear. Another leader broadly described his role in assessments as
"alerting people to this idea…if you're not thinking about the assessment before you get into
doing something, then you are probably not going to have a good assessment for sure and…you
may not get to what you intend to."
One participant from a focus group indicated that DKI APCSS receives feedback from
alumni but there was a lack of clarity in that role as part of a systematic process. The contributor
described how surveys are sent out at a 6-month mark to alumni, along with a survey to alumni
supervisors, and acknowledged that some information is collected but “we don’t have any system
set up to do anything with it.” A participant also noted that there is “no systematic standard
process for getting feedback on workshops.” Another staff member described a process that
captured anecdotes whenever staff or faculty members were traveling in the region, but indicated
he did not know "if anyone is contributing to that anymore." Although survey results indicated
that many staff and faculty knew their role in assessing and articulating impact, a tendency for
response bias to indicate a more desirable response or a greater level of agreement may exist
(Paulhus, 2002). Additionally, details in focus groups revealed that while some knew what their
role is or should be, roles were not entirely clear because systems were either not in place or not
known (Jackson et al., 2002). This contradictory information and the triangulation of data
resulted in a partially validated need.
Common definition of enhancing stability. A mission statement is critical to aligning
the staff and faculty’s understanding and efforts in creating and assessing impact. The survey
results indicated that 44 of 46 respondents or almost 96% identified the “prevention of conflict in
region” as the definition of enhancing stability, whereas only two respondents referred to
“increased trade agreements” as a specific definition. While “increased trade agreements”
contribute to stability particularly through an economic lens, the Center’s focus is predominantly
on security and prevention of conflict. During a staff meeting observation, a participant
described part of the Center’s efforts to foster relationships and networks so alumni are inspired
to “pick up the phone rather than a gun” when faced with a potential conflict. Further, a focus
group participant stated the “good governance that we’re teaching” is “giving them tools for
Figure 3. Survey question: "Select one definition of 'Enhancing Stability' that is most applicable to assess the Center's impact." Responses: Prevention of conflict in region = 44, More military hardware sales = 0, Increased trade agreements = 2.
stability in the region.” Another supervisor stressed that the “key to everything is relationship
building…breaking down cultural and ethnic and religious barriers…we’re all the same people
working together trying to promote peace." These descriptions of conflict prevention and
enhancing stability, the 96% agreement among survey respondents, and the observations
indicate that the Center has a general common definition of enhancing stability related to
prevention of conflict. Thus, the need to improve upon knowledge of the common definition of
enhancing stability is not validated, and there is no requirement for a solution.
Enhancing stability reflects the goal of impact in DKI APCSS's mission. The
survey results presented earlier revealed that the majority of staff and faculty shared a
common definition of enhancing stability. Furthermore, survey results from this study reflected
that almost all of the staff and faculty respondents (43 of 46 or almost 94%) somewhat or very
much agreed that the mission statement reflects the goal of enhancing stability to create impact
in the region.
Figure 4. Survey question: "To what extent do you agree that enhancing stability in the region reflects the goal of impact in the DKI APCSS mission?" Responses: Not at all = 0, Somewhat not = 3, Somewhat = 13, Very much = 30.
A senior leader explained that the Center fosters communication as part of its mission,
“we create stability because people talk to each other across the region.” The leader further
described the Center as an environment where fellows “don’t feel threatened, so now they go
back and as they get more senior they maybe will reach out to their contemporaries or classmates
or fellow alumni” and share a commonality impacting stability in the region.
A second survey question further expounded on this concept by querying participants
about the extent that they viewed assessing and articulating impact as in alignment with the
Center’s mission of enhancing stability. The responses in the Likert scale question in the survey
indicated that 21 of the 46 respondents or 46% very much agreed that there is alignment and 17
of the 46 or nearly 37% somewhat agreed. Only six respondents or 13% indicated there was
'somewhat not' alignment, and two participants or about 4% said there was no alignment. Figure
5 below reflects that 38 staff and faculty, or 83% of respondents, had a positive view of the
alignment. Further analysis corroborates that there is overall alignment between the Center's
mission and assessing and articulating impact.
Figure 5. Survey question: "To what extent do you view that assessing and articulating the Center's impact is in alignment with the Center's mission of enhancing stability?" Responses: Not at all = 2, Somewhat not = 6, Somewhat = 17, Very much = 21.
The Center currently works directly for the DoD and receives guidance from the
Assistant Secretary of Defense for Asian and Pacific Security Affairs (APSA). Document
analysis indicated that DKI APCSS aligned its mission and impact in an APSA approval
document in June 2017 stating, “The proposed program directly supports policy priorities within
the funding targets provided to the Center, leveraging regional partnerships to expand impact…”
(Helvey, 2017). Moreover, additional document analysis showed that the S&A office prepared a
charter, signed by the previous director, that demonstrated alignment of the Center's mission and
enhancing stability. The charter's mission statement reflected "DKI APCSS
educates, connects, and empowers security practitioners to build partner capacity, shared
understanding, and networks to enhance stability in the Indo-Asia-Pacific region” (DKI APCSS,
2017a). Given strong agreement of the relation between enhancing stability and DKI APCSS’s
mission in the survey, within the Center, and with the Center’s higher headquarters, this assumed
need is not validated and a solution is not required.
Need to relate strategy, design and program development to creating and assessing
impact. The researcher assumed that staff and faculty need to know how to relate strategy,
design, and program development to creating and assessing impact. As part of the strategy
process, it is critical that the creation of the strategy and subsequent development of programs
are proactively designed with the intent to create impact. In this Center, the specific impact to
assess is enhancing stability in the Indo-Pacific region. Further, another vital component of the
strategy process is conducting assessments to identify if the strategy and its programs are on a
glide path to create the intended impact of stability. This portion of the process will assist in
validating the Center’s currently approved programs and provide opportunities to modify efforts
as needed.
Figure 6. Survey question: "To what extent do you know how to relate strategy, design and program development to creating and assessing impact at the Center?" Responses: Not at all = 6, Somewhat not = 12, Somewhat = 24, Very much = 4.
Survey results revealed that 61% of the respondents somewhat (24 of 46 or 52%) or very
much (four of 46 or 9%) knew how to relate strategy, design, or program development to
assessing impact. Further, almost 40% of participants indicated they had no or somewhat no
knowledge of this process, indicating that more procedural knowledge is needed in this area for
a substantial share of the Center's participants.
During an interview, a senior leader commended DoD for improving its guidance to the
Center, which included a noble effort to specify outcomes. However, the challenge remained
that some of the deliverables "weren't necessarily developed (as) measurable." The lack of
measurable outcomes created a gap in procedural knowledge as part of the assessment process.
Observations revealed that there was a gap between the DoD guidance provided and the
Center's program plan, indicating the lack of a strategy document at the Center level. The S&A
office initiated a process to create a strategy for the Center in the fall of 2016, but due to the loss
of the supervisor trained in strategy development, coupled with the Center's leadership being in
transition with an interim director, efforts to create a strategy halted. Currently, the S&A chief
position is vacant and another military research analyst position was eliminated (Defense
Security Cooperation Agency [DSCA], 2018).
The two remaining positions and associated job
descriptions in S&A are focused on process management and legislative affairs, and thus do not
cover strategy development or assessment expertise. The lack of a strategy document
revealed a need for knowledge on how to develop and relate a strategy along with a system to
conduct assessments.
Similarly, a focus group participant identified that the Center previously applied the
Kirkpatrick model as part of its assessment plan. The Kirkpatrick model focuses on evaluating
training programs through a framework of Levels 1 to 4: Level 1 evaluates participants'
reactions to the training program, Level 2 focuses on their learning, Level 3 evaluates their
behavior, and Level 4 concentrates on evaluating impact or results (D. Kirkpatrick &
Kirkpatrick, 2006). The participant explained, "at one point we had a really good assessment
team" using this model, but when the officer in charge transferred and the program lead
departed, "the system basically fell apart." The S&A office refers to this model, but there is
neither a systematic process in place that links the model to a strategy nor a process to
implement Level 4 of the model, reflecting a lack of knowledge of how to relate program
development and assessing results or impact. The survey results, lack of measurable outcomes,
and lack of trained experts validate the need for a solution to create a knowledge path and
process that better links strategy to assessing impact.
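
To make the four Kirkpatrick levels concrete, the sketch below tags hypothetical feedback artifacts by level and flags levels with no data source; the artifact labels are illustrative (only the end-of-course reactions and the 6-month alumni surveys are mentioned in this study), not an inventory of the Center's actual instruments.

```python
# Hypothetical mapping of evaluation artifacts to Kirkpatrick levels.
kirkpatrick_coverage = {
    1: ["end-of-course reaction surveys"],          # reaction
    2: ["pre/post knowledge checks"],               # learning (assumed example)
    3: ["6-month alumni and supervisor surveys"],   # behavior
    4: [],                                          # results/impact: no mechanism
}

missing = [level for level, artifacts in kirkpatrick_coverage.items()
           if not artifacts]
print(f"Levels lacking a data source: {missing}")  # -> [4]
```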
Need to relate program development to indicators of success and assessing impact.
In addition to the publication of the National Security Strategy authored in the Executive Branch
of the U.S. Government and the National Defense Strategy from the U.S. Secretary of Defense,
DoD typically provides biennial guidance on policy priorities to the regional centers. In June
2015, the strategy, plans and capabilities office issued Center specific priorities to DKI APCSS
ASSESSING IMPACT 85
and partially described the strategic context of the region as “committed to working with key
partners to promote peace and security in the Asia-Pacific region” (DoD, 2015). The guidance
included priorities for broad categories such as “Support to Defense Reform and Institution
Building, Maritime and Border Security, and Regional Architecture (ASEAN)” as one of the first
three listed priorities. Subsequently, each priority further included country specific guidance.
Guidance for one country included “reinforce the importance of healthy civil-military relations
and civilian control of the military.” An example of a short-range indicator of success was
“Participant surveys reflect a deepened appreciation for the importance of civilian oversight and
respect for the rule of law.” A mid-range indicator of success for the same country described,
“Countries have adopted or implemented practices or recommendations emerging from
engagements.” An example of a long-range indicator of success, again for the same country,
demonstrated the difficulty in assessing a sensitive topic in a foreign country, “Defense
leadership shows an increased respect for and appreciation of international law and human rights,
including through removal of personnel with problematic human rights records." Although not
every priority had a list of indicators, and not every indicator was specifically measurable, they
were provided in an effort to better guide the regional centers and assist in assessments.
In an effort to determine the staff and faculty's procedural need to know how to relate
program development to indicators of success and assessing impact, an open-ended survey
question asked participants to "Please describe how DKI APCSS measures its impact beyond
outcomes." Of the 46 responses, 18 or 39% fell generally into either a 'did not know how the
Center measures its impact' or 'a process did not exist' category.
Examples of these responses were “no idea,” “not sure,” “don’t know and don’t see it,” “unsure
if it measures impact beyond course evaluations,” and “…we are too busy making things happen
there is no time to sit back and look at the accomplishment of mission to the region.” Twelve
responses or 26% included soliciting alumni for feedback, and leveraging alumni attendance and
actions. The responses did not address how the Center obtains the information from alumni but
suggested indicators of success such as “Alumni actions and impact within their own Country
and also with their fellow Alums in other Countries,” “alumni anecdotal evidence…” and
“statistics correlating with who participates in APCSS courses, and the size of its alumni
network.” Four responses included leveraging the success of fellows’ projects that alumni
initiate at the Center and implement when returning to their country. Examples of the responses
that related fellows’ projects to assessing impact were, “A good measure are the successful
Fellow’s projects and their impact on their country/agency,” “The only way I am aware of to
judge impact of our Center is based on completion/implementation/success of the Fellows
Projects” and “…FP completion stories.” Other responses indicated confusion with regard to the
open-ended request “impact beyond outcomes” and responded, “there are no impacts beyond
outcomes,” “What we call outcomes are really outputs,” and “don’t understand what ‘beyond
outcomes’ means.” Of the 46 responses, there were no references to relation of the strategy,
program development, and indicators of success. However, there was a comment that captured
one broad view of assessments:
The Center has been struggling with measuring its impact for years. In fact, the Center
focuses more on outcomes and often neglects its impact. As a result, the empirical
evidence of our achievements often gets reduced to the number of fellows and events we
conduct, plus some fellow’s projects we claim we help the fellows to achieve. However,
we do not know how these things produce an impact on security in the region. And we
also ignore other indicators of our impact.
Overall, these responses suggest there is a lack of knowledge of a systematic process for how the
Center assesses impact. Interviews further revealed that other processes take place at the Center
that have varying degrees of relation to program development and indicators of success.
A focus group participant suggested, “I would say the most formal feedback is related to
the EXSUM (Executive Summary) that goes up to OSD, PACOM…basically just says here is
what we’re trying to do in the course, following the priorities and here’s the outcomes. So there
is some feedback based on what we learned from the course.” Collectively, survey responses,
interviews, and focus groups suggest there is no or limited procedural knowledge for how the
Center may relate program development, indicators of success, and assessing impact. As a
result, to reach the organizational goal of the Center assessing and articulating its impact, there is
a need to develop a solution for staff and faculty to relate program development and assessing
impact.
How to articulate the Center’s impact. The researcher identified that when the staff
and faculty know how to best conduct an assessment of impact, these same stakeholders must
also know how to articulate the impact of the Center. Although 76% of the staff and faculty
indicated they, at a minimum, somewhat know how to articulate impact, survey results also
revealed that 13 or 28% somewhat lack knowledge on how to articulate the Center’s impact,
indicating there are some difficulties in articulating impact.
Interviews further revealed that it is difficult to articulate the impact of the Center. In response
to a question about articulating impact, a senior leader responded that the Center is doing it but it
is a “scattergun approach…for which we use available ammunition…it deserves maybe a level
or two more of thoughtfulness." Focus group participants also described progress with improved
civil-military relations in Myanmar resulting from alumni networking while at DKI APCSS, yet
it is challenging to articulate that progress because the ongoing Rohingya crisis appears to
override any progress that had been made previously. Based on the 28% knowledge gap in the
survey on articulating impact, interview comments, and focus group data, this was validated as a
current knowledge need. A solution will therefore be offered in Chapter Five to address this
knowledge need.
Figure 7. Survey question: "To what extent do you know how to articulate the Center's impact?" Responses: Not at all = 0, Somewhat not = 13, Somewhat = 24, Very much = 9.
How to apply theories, best practices, methods, models or rubrics to create a
methodology to align roles and assess impact. A critical element in assessing impact is
knowing how to select and apply appropriate approaches, some of which were described in
Chapter Two. Survey results reflected that 63% of staff and faculty (29 of 46 responding not at
all or somewhat not) indicated they did not know how to do this. Of those who indicated they
had knowledge of this process, 14 reported some knowledge and only three reported a
significant level of knowledge of the process.
Figure 8. Survey question: "To what extent do you know how to apply theories, best practices, methods, models, or rubrics to create a methodology to assess the Center's impact?" Responses: Not at all = 12, Somewhat not = 17, Somewhat = 14, Very much = 3.
Moreover, focus group interviews expounded on the complexity of how to assess impact,
which indicated a lack of knowledge of other methods or models to assess the Center’s impact.
A participant asked the group if impact should be assessed at the “individual, country, or sub-
region” level. Other comments reflected doubts about the ability for the Center to claim
“attribution or contribution” when identifying impact. A senior leader also shared a recognition
for the “limitation of just how much you can do (measurement) of a soft science, this art,
strategic aloha, it is so hard to measure.” In addition to the lack of knowledge, stakeholders
revealed that emphasis must be placed on this effort in order to be able to create a methodology.
This will be further explored and explained in the organizational resources portion. The majority
of survey results revealed a lack of knowledge of other means to assess impact, and group
comments about the difficulty substantiate this continuing need to know how to apply other
means to create a methodology to assess the Center's impact.
Awareness of inherent biases in the process and purpose of assessing impact. Based
on preliminary scanning, the researcher identified that an inherent bias could exist in the Center
that could influence the assessment of impact (OECD, 2002). With stakeholders serving as
employees of the Center, a potential bias may exist to overemphasize the Center's impact, as an
inability to assess and articulate impact could place DKI APCSS's existence at risk and
jeopardize the future employment of the stakeholders. Staff and faculty must exercise caution
that having a noble mission and doing a good job may or may not lead to intended impact, or
impact that can be readily measured. Survey results indicated that over half of the staff and
faculty (26 or 56% responded somewhat or very much) recognized that some level of inherent
bias exists in the Center as it relates to assessing impact.
Further, participants acknowledged earlier that the inherent goal and mission of the
Center to enhance stability and prevent conflict, along with mainly positive immediate feedback
from fellows, created an upbeat aura about the Center. In general, a positive sensing about the
mission is good for the Center in terms of organizational culture and performance. However,
one must be sensitive to the limitation of the aura if there is not a systematic way to validate that
the sensing ultimately leads to the Center creating its intended impact. This could lead to an
inherent bias about intended impact, since the sensing is supported by immediate feedback from
fellows, a Level 1 validation in the Kirkpatrick model, rather than by validation at Level 4 with
actual impact results.
Figure 9. Survey question: "To what extent are you aware of any inherent biases in the process of assessing the impact of the Center?" Responses: Not at all = 5, Somewhat not = 15, Somewhat = 21, Very much = 5.
Given that the Center conducts other courses and activities without a systematic
mechanism for collecting feedback over time, there is a risk of inherent bias in assessing impact.
The one exception to not having a systematic data collection process over time applies to the
recently implemented tracking of fellows’ projects that are initiated in the Center’s courses that
are typically 4 to 5 weeks long. Based on survey feedback suggesting biasness and the absence
of a comprehensive longitudinal mechanism to go beyond Level 1 reactionary results to
determine Level 4 results, this need was validated, and a recommendation will be provided in
Chapter Five.
Summary of Knowledge Related Findings
This section detailed the findings of eight assumed knowledge-related needs, of which
two were not validated, one was partially validated, and five were validated. The first assumed
need found not to be validated was factual knowledge of a definition of enhancing stability, as
staff and faculty appear to have a common, known definition. The second need not validated
was conceptual knowledge focused on whether this stakeholder group understood that
enhancing stability reflected the goal of creating impact. In both circumstances, triangulated
data of survey results, interviews, focus groups, and document analysis collectively determined
that the staff and faculty had a common definition of enhancing stability, and that there was
alignment within the mission statement to create impact by enhancing stability.
The partially validated need was a factual knowledge area that staff and faculty needed to
know their roles in assessing impact. Although survey results tended to imply that this
stakeholder group knew their role in assessing impact, interviews revealed that roles in the
assessment process were not well understood because a systematic assessment process was not in
place.
The five validated needs were predominantly focused on procedural knowledge. The
first four needs related to gaps along the spectrum of an assessment process: knowing how
strategy and staff and faculty roles related to assessment, relating program development to
indicators of success, applying theories or best practices to create a methodology, and
articulating impact. The absence of a strategy at the Center was a factor in staff and faculty
being unable to relate strategy to assessment. In addition to the survey results, interviews and
focus group comments further validated the procedural needs. The last validation in the
knowledge area was
metacognitive and acknowledged a potential for inherent bias in the assessment process,
determined by survey results and the absence of a process to validate impact. The next group of
findings is focused on needs in the area of motivation.
Motivation Needs
Following the identification of knowledge needs, Clark and Estes’ (2008) second key
component of this process requires analysis of motivation related needs to reach organizational
performance goals. The researcher identified four motivation related assumed needs as part of
this innovation study. The first assumed need was related to the staff and faculty’s need to
personally value the importance of assessing and articulating the Center’s impact based on
empirical evidence. Table 6 indicates this need was partially validated as feedback revealed that
this stakeholder group seeks credible validation of the impact they intend to make but recognizes
that empirical evidence is a high standard to achieve for a soft power organization. The second
need regarding the cost value of the staff and faculty’s effort to assess impact as worthy of time
and resources was also partially validated. The third need, the stakeholders' self-efficacy, or lack of confidence, to create a process to articulate the Center's impact, was fully validated. The last motivation need, regarding expectancy outcome, or the realization that the Center could lose resources if impact could not be articulated, was not validated, as stakeholders already understood that resources could be at risk. Table 6 depicts a summary of the motivation needs
findings and reflects that one of the assumed needs was validated, two were partially validated,
and one was not validated. A detailed description of the analysis and findings for these assumed
needs is addressed following the table.
Table 6
Summary of Assumed Motivation Needs Validation

Staff and faculty need to personally value the importance of aligning roles and assessing and articulating the Center's impact based on evidence (Intrinsic Value): Partially Validated

Staff and faculty need to value the effort of assessing and articulating the Center's impact as worthy of time and resources (Cost Value): Partially Validated

Staff and faculty need to have confidence to create a methodology, model, or rubric to assess and articulate the Center's impact (Self-Efficacy): Validated

Staff and faculty need to realize the Center could lose resources if its impact cannot be assessed and articulated (Expectancy Outcome): Not Validated
Personally value aligning roles and assessing impact based on empirical evidence.
Staff and faculty have predominantly relied on anecdotal evidence to support the Center’s
asserted impacts. Empirical evidence is intended to validate findings with greater accuracy and
without bias through the use of systematic data collection. In an effort to build capacity for the
Center to assess its impact, stakeholders with greater intrinsic value for using empirical evidence
can enhance the process (Pintrich, 2003). Survey results demonstrated that 39 of 46 participants (nearly 85%) somewhat or very much value the importance of providing empirical evidence when assessing and articulating the Center's impact. The remaining seven participants (15%) somewhat did not value assessing impact based on empirical evidence.
Figure 9. Survey question - Extent staff and faculty value the importance of assessing and articulating the Center's impact based on empirical evidence. [Responses, n = 46: Not at all = 0; Somewhat not = 7; Somewhat = 14; Very much = 25]
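As a minimal illustration of the arithmetic behind these Likert-scale summaries, the tally can be sketched in Python; the counts are taken from Figure 9, and the function and variable names are illustrative only, not part of the study's instruments.

    # Sketch: tally four-point Likert responses and report percentages.
    from collections import Counter

    LEVELS = ["Not at all", "Somewhat not", "Somewhat", "Very much"]

    def summarize(responses):
        counts = Counter(responses)
        total = len(responses)
        for level in LEVELS:
            n = counts.get(level, 0)
            print(f"{level:>12}: {n:2d} ({100 * n / total:.1f}%)")

    # Counts from Figure 9 (n = 46): 0, 7, 14, 25.
    responses = ["Somewhat not"] * 7 + ["Somewhat"] * 14 + ["Very much"] * 25
    summarize(responses)

Running this reproduces the figures reported above: 7 of 46 is 15.2%, and 39 of 46 (14 + 25) is 84.8%, or nearly 85%.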
The three interviews and two focus group comments reflected an overall desire to go
beyond anecdotal evidence. One leader commented that the assessment approach “deserves a
level or two more of thoughtfulness” beyond using whatever is handy to satisfy a particular
stakeholder. Another leader said, “It is critically important that we assess and we come up with
tangible, demonstrable evidence of impact.” Also, a leader described that people need to know
that “the work they are doing is worthwhile” and “attributable to their effort.” Yet other
comments also highlighted the challenges with the standard of empirical evidence, particularly
for a soft power institution that educates, connects, and empowers international practitioners with
an interest in security. Another leader described the Center's work as "soft science, this art, strategic aloha, it is so hard to measure…so there is recognition as to how hard this is." Furthermore, interviewees spoke of the challenge of identifying whether the Center's actions were correlated, causal, or otherwise related to impact, further complicating the use of empirical evidence. One difficulty highlighted was "trying to establish the linkage between the life experience at APCSS and things that happen in the region a month, six months, or five years down the road." The motivation to assess the Center's impact based on empirical evidence exists; stakeholders understand its importance for the Center and their work. At the same time, unresolved challenges can make the process feel perplexing, even overwhelming, which dampens motivation to attempt it. Establishing this distinction matters for the recommendations that follow: the gap is closely connected to the knowledge need of how to conduct such assessments and to the organizational need for a plan and process to do so. The next assumed need, that the impact assessment is worthy of time and resources, is heavily connected here as well, again suggesting that motivation exists alongside uncertainty about how to accomplish the work.
While the majority of survey respondents indicated that faculty and staff valued the importance of using empirical evidence, the interview and focus groups expressed doubt about the feasibility of gathering such evidence. Together, these findings suggested that this motivation need was partially validated and warranted a recommended solution.
Value the effort of assessing impact as worthy of time and resources. The researcher
assumed that staff and faculty need to value the effort of assessing and articulating the Center’s
impact as worthy of the time and resources. Survey results indicated that assessing impact was either somewhat worthy (13 of 46) or very much worthy (27 of 46) of the effort, while only 6 of 46 responded that it was somewhat not worthy of the effort.
Figure 10. Survey question - Extent staff and faculty value the effort of assessing and articulating the Center's impact as worthy of time and resources. [Responses, n = 46: Not at all = 0; Somewhat not = 6; Somewhat = 13; Very much = 27]
Leaders at DKI APCSS indicated in their interviews that there is a need to spend more
time on assessing impact. One leader described the importance of assessing impact for an
internal stakeholder group, “People need to know that the work that they’re doing is oriented on
something worthwhile. And preferably it is something that is visible to them and for which they
can recognize directly…it is a very powerful thing for the organization itself…" In addressing external stakeholders, the leader also noted DKI APCSS' responsibility to taxpayers, DoD, and USINDOPACOM as a mission partner. A leader portrayed how others can be enamored
with an impressive list of DKI APCSS activities and not get to “how has that advanced stability
in the region? Show me the connection here through our outcomes that we are seeking.”
Another leader said that there is a lot of emphasis placed on internal assessments of course
execution, but there is a need to spend more time on assessing impact. Another senior leader did not see the value in attempting to link a single course causally to impact, given the effort required, but suggested that the Center assess impact effectively by leveraging feedback through alumni.
One participant in a focus group reiterated the worthiness of assessing impact, "You can
busy your whole life doing goodness and in the end you never meet the achievement of what
you're trying to do because you weren't assessing along that path to ensure you are on the
trajectory to achieve and what does it mean when you get there?" However, another participant revealed previous faculty resistance to participating in assessments, with one saying, "I didn't come here to collect data. I came here to teach." Further observations revealed that some faculty do not see the value in efforts that are outside of their core expertise. Faculty demonstrated in observation meetings that there were competing demands for their time and that priorities were not as clear as desired, which negatively impacted motivation to persist. Staff observations and focus group comments also revealed that staff sought consistency in priorities so their efforts could stay consistent and thus persist toward established goals. The evidence revealed that this assumed need to value the effort of assessing impact was partially validated: motivation exists, yet the ability to begin and persist with this effort competes with priorities that have already been established, such as teaching and research for faculty. Further, the lack of knowledge of how to conduct the assessments, and a related lack of confidence, would require more time and effort outside of their main focus. The value of assessing impact will be further discussed in the organizational resources portion to address the Center's priorities and competing demands for faculty.
Extent that stakeholders have sufficient confidence to create a process to assess and
articulate impact. In addition to the staff and faculty’s need to know about a methodology,
model, or rubric to assess impact, they need to have the confidence to create a process along with
the appropriate tools. This need was validated as survey results indicated that more than half of the respondents, 28 of 46, had little or no confidence to create a methodology, model, or rubric to assess the Center's impact, as shown in Figure 11.
Figure 11. Survey question - Extent stakeholders are confident they have the ability to create a tailored methodology, model, or rubric to assess the Center's impact. [Responses, n = 46: Not at all = 13; Somewhat not = 15; Somewhat = 15; Very much = 3]
As it relates to DKI APCSS assessment goals, a focus group participant questioned,
“How do we measure those outcomes?” A different participant expressed concern about how to
approach the process and queried whether the assessments should be based on individuals,
countries, or regions. A senior leader also argued that the Center needed to assess impact better to survive, conceding that the consistent engagement, involvement, and presence through the Center is where the value lies but that it is "impossible to measure." Comments indicate that stakeholders value the need to assess impact, know it is important, and are willing to devote time and resources; however, they lack both the confidence and the knowledge of how to assess impact. The survey results, coupled with interviewee comments,
suggest that the staff and faculty lack confidence to create a process for assessing the Center's impact, and this is a validated need that requires a recommended solution.
The extent stakeholders view resources could be at risk if impact cannot be assessed and articulated. The researcher's initial scanning revealed that DoD queries to DKI APCSS about the impact of budget cuts served as a threat to resources. This led to an assumed need for the staff and faculty to understand that the Center could lose resources if they could not assess and, more importantly, articulate the Center's impact to external stakeholders. Survey results strongly indicated that 42 of 46 respondents (91%) viewed that resources could be somewhat (18) or very much (24) at risk without an ability to assess and articulate impact, while only four respondents viewed that resources were somewhat not or not at all at risk.

Figure 12. Survey question - Extent staff and faculty view resources could be at risk if the Center cannot assess and articulate its impact. [Responses, n = 46: Not at all = 2; Somewhat not = 2; Somewhat = 18; Very much = 24]
Each senior leader and both focus groups also indicated that the Center must be able to
assess and articulate impact or resources are at risk. One leader stated that, to maintain the existence of the Center, "We have to assess impact better if we hope to survive." Therefore, this assumed
need was not validated as the staff and faculty already have the motivation to assess the Center’s
impact to minimize the risk of lost resources.
Summary of Motivation Related Findings
Results from surveys, interviews, and focus groups led to one not validated need that
pertained to staff and faculty recognizing that resources could be lost if the Center’s impact
could not be articulated. The need to value the effort of assessing impact as worthy of time and
resources and the need for stakeholders’ intrinsic value to leverage empirical evidence in the
assessment process were both partially validated. The full validation of one motivation related
need focused on the staff and faculty’s lack of confidence in developing a method to assess
impact.
Under former President Obama's administration, the National Security Strategy emphasized the importance of the Pacific region, and the terms "Rebalance" and "Pivot to Asia" signaled the U.S. focus there. As such, DKI APCSS was viewed as the regional center to leverage this part of the strategy for DoD (Department of State [DoS], 2015; Hanauer et al., 2014). This previous overarching strategy, and the stable or increased funding that came without requiring empirical evidence, could be a contributing factor in some stakeholders not valuing the importance of empirical evidence in describing the Center's impact.
The difficulty of assessing the impact of the Center's efforts, which can take years to materialize, and the challenge of capturing the impact of increased capacity, relationships, networks, and empowerment further deter motivation to assess impact. Moreover, when staff and faculty must choose between preparing and executing well for a course or lecture and spending time on assessing impact, a portion indicated that it is not worth the effort to collect
evidence and articulate the Center’s impact. The next section will discuss the findings of the
organizational needs and address the cultural setting of priorities in the Center.
Organizational Needs
This study identified three key assumed organizational needs in support of the Center’s
efforts to create a process to systematically assess and articulate its impact. Two of the needs
were focused on the cultural setting in the DKI APCSS and the resources needed as part of the
assessment process. The needs specifically addressed resources to learn about methodologies
and aligning roles and those needed to collect and analyze data over time. The third need was
based on the cultural model of the Center and a need to believe that appropriate action will be
taken based on the impact assessment. Specifically, a cultural model refers to expectations or norms of shared thoughts or actions that have evolved over time, whereas a cultural setting reflects the routine actions of stakeholders in the organization (Gallimore & Goldenberg, 2001). A summary of the organizational needs is displayed in Table
7 and reflects that all three assumed needs were validated.
Table 7
Summary of Assumed Organizational Needs Validation

Staff and faculty need adequate resources to learn how to align roles and create and execute a methodology to assess and articulate impact (Cultural Setting 1): Validated

Staff and faculty need sufficient resources to collect and analyze data over time (Cultural Setting 2): Validated

Staff and faculty need to believe the Center will request adequate resources to create impact based on assessments (Cultural Model): Validated
Staff and faculty need adequate resources to learn how to align roles and create and
execute a methodology to assess and articulate impact. The researcher identified an assumed
need for resources to learn how to align roles, and create and implement a methodology, rubric,
or model to assess impact. Resources may range from funding to support training costs, to contracting a subject matter expert, to time to learn and then develop a process. This assumed need was addressed in a Likert scale question and an open-ended question in the survey, and again in the interviews and focus groups, and was determined to be validated. The need for adequate resources to learn a process to assess and articulate DKI APCSS's impact was first validated through the Likert scale survey question "To what extent do you have adequate resources to learn how to create a methodology, model, or rubric to assess and articulate DKI APCSS' impact?" The majority of respondents, 32 of 46 (69.6%), said they did not have adequate resources: 12 (26.1%) indicated not at all and 20 (43.5%) indicated somewhat not enough resources to learn how to create a process. Only one respondent (2.2%) indicated that very much resources were available, and 13 (28.3%) reflected they somewhat had the resources to learn how to create a process.
Figure 13. Survey question - Extent stakeholders have adequate resources to learn how to create a methodology, model, or rubric to assess and articulate impact. [Responses, n = 46: Not at all = 12; Somewhat not = 20; Somewhat = 13; Very much = 1]
Also in the survey, respondents commented extensively on an open-ended question, "From your perspective, what resources are needed to assess and articulate the Center's impact?" Eleven of the 46 respondents suggested that a dedicated subject matter expert (SME) in assessments is a needed resource to build the Center's
ability to learn how to conduct assessments. One respondent said, “I believe it’s vital to our
existence to dedicate labor resources solely to this effort and not on a part-time basis, we need a
team of experts.” Three of these 11 participants indicated that the SME should come from a
third party such as an external contractor.
A focus group of supervisors recalled a period of time when the Center was applying the
Kirkpatrick model where “Level 4 was completion of a really significant fellows project.” A
participant further described that at that time, a team of two was tasked with “Kirkpatrick
modeling and collecting the information and analyzing it and then providing the feedback for
potential change." However, participants also recalled that the task transferred to another section, a specific person left the Center, the officer transferred, and "the system basically fell apart." This created a gap in the learning of how the results or impact were being identified and
created a need to learn how it was being done. Additionally, others commented that "it wasn't resourced" and "people didn't have enough time [to do] what they had to do," and the effort eventually became defunct due to a lack of resources. Thus, when a model and process were in use, they depended on specific knowledgeable personnel, there was a shortfall in identifying and training replacements for continuity, and learning how to apply the Kirkpatrick model or another methodology remains a continuing need. The next section will further discuss the resource needs that are also related to this need for training.
Based on the triangulation of extensive comments from the survey, interviews, focus groups, and observations, the data suggest that staff and faculty need additional resources to learn how to align roles and to create and implement a methodology to assess and articulate impact,
and so this assumed need is validated. The next chapter will offer a proposed solution to address
this need.
Figure 14. Survey question – Extent staff and faculty have sufficient resources to collect and analyze data over time to assess impact. [Responses, n = 46: Not at all = 16; Somewhat not = 16; Somewhat = 13; Very much = 1]
Staff and faculty need sufficient resources to collect and analyze data over time. The
researcher identified the need for staff and faculty to have sufficient resources to collect and
analyze data over time as part of the impact assessment process. As part of the validation
process, this assumed need was specifically addressed in the survey with a Likert scale question,
“To what extent do you have sufficient resources to collect and analyze data over time to assess
impact?” The respondents indicated that 32 of 46 or almost 70% either had no resources (16 of
46 or 35%) and the other 16 of 46 or 35% had somewhat not sufficient resources. Only 28% or
13 of 46 responded that they somewhat had sufficient resources and one respondent or 2% of the
population indicated that very much resources were available to collect data over time. The
resource gap identified in these responses revealed that a suggested solution is needed for the
staff and faculty to collect data over time.
In response to the same open-ended question, "From your perspective, what resources are needed to assess and articulate the Center's impact?" one of the two largest categories of responses (11 respondents) related to the need to build and maintain a process that consistently obtains long-term feedback from the Center's alumni. These same 11 respondents also suggested building resources to collect feedback on the projects that alumni complete during the multi-week DKI APCSS courses. A participant identified the need for a "solid alumni network that would indicate how the lessons (learned) and relationships (built) during courses, workshops, and/or dialogues have changed the course of a developing situation toward the better." Without constant feedback from the alumni whose efforts and products create impact in the region, there is a dearth of information to conduct assessments, which leads to a tendency to rely on anecdotal evidence.
A group of five respondents in the open-ended survey question indicated that faculty
resources could be involved with the development of an assessment process. This effort would require an allocation of time to learn how to support the assessment process through a workshop, balanced with other requirements such as further faculty engagement with fellows projects and research time focused on the Center's assessments. One comment suggested, "Faculty need to be more involved with fellows after a course. Too many faculty seem to think they're only responsible for producing a course and its classwork." As a result, faculty were identified as capable of being involved in the assessment process, but their lack of time and competing priorities with teaching would require prioritizing resources should faculty be involved in collecting data over time.
Four respondents identified the need for leadership to demonstrate the will to conduct
assessments and commit by prioritizing tasks that include time to contribute to assessments. One
participant sought a “clearer desire from leadership and specific, unchanging tools to assess
baseline and impact.” Other topics in the survey included single responses that were related to
the need for a framework, tools, data mining social media, capital, leveraging the current
Regional Center Persons/Activity Management System, or software to collect other data over
time. Six participants indicated they did not know what resources were needed to learn how to conduct or implement assessments.
In a meeting observation, a participant identified a spreadsheet that aligned indicators of success provided by DoD with DKI APCSS activities; it included a color-coded bar chart reflecting an assessment of activities as related to each indicator of success. However, personnel shortages, including leadership in the S&A office, and the pending selection of the new director for the Center led to a pause in obtaining approval to use the tool and in implementing a system to conduct the assessments. The on-going shortage of personnel, particularly due to the elimination of a research position and two other positions not coded specifically for assessment expertise, leads to a continuing need for adequate resources to assess and articulate impact over time.
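A hypothetical sketch of such an alignment tool follows; the activity names, indicators, and traffic-light statuses are invented for illustration and do not reproduce the observed spreadsheet.

    # Sketch: map Center activities to DoD-provided indicators of success
    # with a simple color-coded status, mirroring the spreadsheet described above.
    from dataclasses import dataclass

    @dataclass
    class ActivityAssessment:
        activity: str   # a Center course, workshop, or dialogue
        indicator: str  # the indicator of success it supports
        status: str     # "green", "amber", or "red"

    assessments = [
        ActivityAssessment("Multi-week resident course", "Partner capacity built", "green"),
        ActivityAssessment("Regional security workshop", "Networks strengthened", "amber"),
    ]

    for a in assessments:
        print(f"{a.activity}: {a.indicator} -> {a.status}")

A structure of this kind would make it straightforward to roll statuses up by indicator when leadership asks where emphasis should fall.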
An important aspect of the organizational need for resources relates to the ability to allocate scarce resources within the Center. If the Center's leadership can associate the
amount of impact relative to certain activities and programs, it may help the Center with
evidence-based distribution of funding. As one leader commented, if resources provided are less
than requested or needed, then the Center must answer, “Where is the emphasis? Do you want to
cut a course or do you want to cut this equally? How do we want to do this? What people are we
bringing in? What countries? Who are the priorities?” A leader further said the Center is able
to address these issues but “it would be wonderful to say, ‘if we cut this course, this is what is
going to happen 5-10 years from that,’ I don’t think we can say that today.” Moreover, a leader
questioned the resourcing for tracking data once it is identified: "who tracks all that?" If the stakeholders had sufficient resources to learn how to create a process and, as importantly, to execute the assessment, these questions might be answered through the assessment process.
A senior leader felt there were limited resources existing in the Center to conduct
assessments but questioned whether there was a system to maximize collaboration. Another
leader cautioned, “by the nature of the business we’re in, the assessments process will be
evolutionary…but clearly we can improve, refine, maybe address broader constituencies, or be
able to focus and really address some very important stakeholders.” Collectively, senior leaders
indicated they could improve the process to systematically assess impact and how they could
allocate resources based on assessments of impact.
A leader described that fellows provide rather in-depth feedback at the conclusion of a
multi-week resident course, yet alluded to the shortage of data collected over time. The leader
further commented that, “Surveys really assess a product and the product is a course. How is the
admin support? What did you think of the professors? How did you like that plenary?
However, we get very little feedback on the impact of that course.” Thus, current limited
resources in the Center have predominantly been focused on Kirkpatrick’s Level 1 reactions to
capture fellows’ feedback at the end of a course, and not on collecting data over time (D.
Kirkpatrick & Kirkpatrick, 2006). Further the leader stated, “We need to have a mechanism to
assess at some period after they have left the course, so they can digest their experience, try to
apply it to their jobs.” Based on feedback from the stakeholder group, surveys are administered
at the six-month mark, but there are a limited amount of resources to collect and analyze these
data over time.
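To make the gap concrete, the four Kirkpatrick levels and where the Center's current collection sits can be sketched as a small data structure; the per-level notes are the researcher's characterization of the findings above, not official Center data.

    # Sketch: Kirkpatrick levels mapped to the Center's current data collection.
    KIRKPATRICK_LEVELS = {
        1: ("Reaction", "End-of-course evaluations; collected routinely"),
        2: ("Learning", "Knowledge and skill gains; collected inconsistently"),
        3: ("Behavior", "Application on the job; six-month surveys, limited analysis"),
        4: ("Results", "Regional impact over time; largely not collected"),
    }

    for level, (name, note) in KIRKPATRICK_LEVELS.items():
        print(f"Level {level} ({name}): {note}")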
However, most recently, some of these capabilities were applied to the fellows project
efforts to capture the impact of projects after fellows have returned to their jobs in the region. A
leader interview addressed a new systematic initiative to follow up with alumni on fellows
projects that currently extends to at least six months to a year after the fellows graduate from a
long course. The Center has allocated a faculty member to lead this effort and hired a contractor
to assist. As a result, the leader viewed it “as a pretty darn good model in terms of trying to
assess a programmatic event here, the fellows project.” It was also acknowledged that this was a
3-year process and took positioning “a faculty member with scar tissue of teaching and working
with fellows for years.” The tracking of fellows projects is an initiative to devote resources to
collect and analyze data over time for a specific project, and while this is a step in the right
direction, additional resources are needed to extend this further to capture the impact of the
Center’s holistic efforts. Despite a promising initiative of obtaining long-term feedback on
fellows’ projects, there are concerns about post-event assessments for workshops because as one
leader described, they are viewed as a “different animal” with participants typically not classified
as alumni for workshops held less than four or five days. At this time, the Center collects
ASSESSING IMPACT 109
evaluation sheets at the end of a workshop but there is neither a focus nor known resources
dedicated to capturing the impact of workshops over time.
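A minimal sketch of scheduling such longitudinal follow-ups appears below, assuming six-month and one-year touchpoints as described above; the intervals and names are illustrative only.

    # Sketch: generate follow-up dates for a graduated fellow.
    from datetime import date, timedelta

    FOLLOW_UP_MONTHS = [6, 12]  # months after graduation

    def follow_up_dates(graduation: date):
        # Approximate a month as 30 days for illustration.
        return [graduation + timedelta(days=30 * m) for m in FOLLOW_UP_MONTHS]

    for d in follow_up_dates(date(2018, 8, 1)):
        print(d.isoformat())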
Overall, consistent feedback from participants in the survey, interviews, and focus groups
revealed that a solution to address a shortfall in resources is needed to collect data over time as
part of the process to capture the Center’s impact. The Center has made progress by
systematically creating a process and allocating resources to collect data over time for the
fellows’ projects, which required leader emphasis and additional personnel. Based on the need
to assess the Center’s activities holistically and collect data over time, a recommended solution
will be addressed in Chapter Five.
Staff and faculty need to believe the Center will request adequate resources to
create impact based on assessments. The researcher identified that the staff and faculty must
believe that appropriate action will be taken to create impact based on results of the assessments.
In some cases, this may mean modifying the program if the assessment determines that parts of the Center's activities were not as impactful as intended; in others, it may mean submitting a request for additional funding to meet intended impact. While the assessment identifies impact in a summative manner, the process also supports the Center in a formative way to help improve its program. If this need is left unfulfilled, it could contribute to devaluing the effort of conducting assessments overall.
Appropriate action within the strategy lifecycle includes a number of efforts that result from the assessment process. Examples include identifying the most impactful aspects of programs and where additional resources may leverage that impact, modifying the program of courses, and making changes that would influence the on-going development of strategy.
Figure 16. Example of a strategy process. [Diagram: DoD guidance informs a cycle of developing strategy and program, executing the program, assessing the program and impact, and taking appropriate action (requesting resources or modifying the program), which feeds back into strategy development.]
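Read as a process, the cycle in Figure 16 can be sketched as an ordered loop; the stage names follow the diagram, while the loop itself is only an illustration of how assessment feeds back into strategy.

    # Sketch: the strategy cycle as an ordered, repeating loop.
    CYCLE = [
        "Develop strategy and program (informed by DoD guidance)",
        "Execute program",
        "Assess program and impact",
        "Take appropriate action (request resources / modify program)",
    ]

    for iteration in range(2):  # two passes show the cycle repeating
        for stage in CYCLE:
            print(f"Cycle {iteration + 1}: {stage}")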
The question, “To what extent do you believe DKI APCSS will request adequate
resources to create impact based on the assessment results?” was addressed in a Likert scale
survey question. Responses were split 50% with six responses indicating a not at all level of
belief and 17 responding somewhat not compared to the other 50%, 17 responding to having
somewhat belief and six indications of very much believe that action, such as requesting
resources, would be taken based on the assessment feedback.
Figure 15. Survey question – Extent stakeholders believe DKI APCSS will request adequate resources to create impact based on assessment results. [Responses, n = 46: Not at all = 6; Somewhat not = 17; Somewhat = 17; Very much = 6]
Furthermore, meeting observations revealed that gaps in trust existed within a section of the Center, which likely contributed to the survey feedback. The issue was addressed with multiple discussion sessions based on the book The Speed of Trust; however, the sessions were voluntary and not attended by the full section, so trust issues linger. If the stakeholder group has a higher level of trust in the organization, it can also positively affect motivation levels. Given that half of the respondents reflected a lack of belief in the process, coupled with remaining trust issues in sections of the Center, this need was validated, and feedback suggested a continuing need for a recommended solution.
Summary of Organizational Resources Related Findings
There were three assumed organizational related needs and all three were validated. The
researcher determined that recommended solutions are needed for the staff and faculty to learn
how to align roles, create a methodology to assess and articulate impact, and to execute the
process. Additionally, this stakeholder group will need an organizational solution to be able to
collect and analyze data over time as part of the assessment process. Lastly, there is a need for
the staff and faculty to believe that appropriate action will be taken if the assessment process
reveals that resources should be requested or other action should be taken to create the Center’s
intended impact.
Findings Summary
Overall the researcher projected 15 assumed needs of which eight were knowledge
related, four motivation related, and three related to organizational resources. Of the 15, nine
were validated, three were partially validated, and three were not validated. Findings also
revealed that the needs to meet DKI APCSS’ performance goal to assess and articulate impact
are less a motivation issue that needs to be solved, and more a knowledge and organizational
needs issue that needs to be addressed. Solving the knowledge and organizational needs would
also potentially further enhance motivation. The researcher acknowledges and findings
identified complicated knowledge needs and knowing how to assess impact in a complex soft
power environment is problematic with no easy answers. It will fall on the organization and the
organizational leaders to provide direction and support over the long run to meet the
stakeholders’ knowledge, motivation, and organizational needs to assess and articulate impact.
Table 8 provides a summary of the validation results by category.
Table 8
Summary of Findings

Knowledge
Staff and faculty need to know their roles in assessing and articulating impact: Partially Validated
Staff and faculty need to have a common definition of 'enhancing stability' as part of the mission statement in order to align roles and to be able to assess the Center's impact: Not Validated
Staff and faculty need to understand that enhancing stability reflects the goal of creating impact in the Center's mission: Not Validated
Staff and faculty need to know how to relate strategy, design, and program development to creating and assessing the Center's impact: Validated
Staff and faculty need to know how to relate program development to indicators of success and key performance indicators and assessing impact: Validated
Staff and faculty need to know how to articulate the Center's impact: Validated
Staff and faculty need to know how to apply theories, best practices, methods, models, or rubrics to create a methodology to assess the Center's impact: Validated
Staff and faculty need to be aware of any inherent biases to the process and purpose of assessing impact: Validated

Motivation
Staff and faculty need to personally value the importance of aligning roles and assessing and articulating the Center's impact based on evidence: Partially Validated
Staff and faculty need to value the effort of assessing and articulating the Center's impact as worthy of time and resources: Partially Validated
Staff and faculty need to have confidence to create a methodology, model, or rubric to assess and articulate the Center's impact: Validated
Staff and faculty need to realize the Center could lose resources if its impact cannot be assessed and articulated: Not Validated

Organization
Staff and faculty need adequate resources to learn how to align roles and create and implement a methodology to assess and articulate impact: Validated
Staff and faculty need sufficient resources to collect and analyze data over time: Validated
Staff and faculty need to believe the Center will request adequate resources to create impact based on assessments: Validated
Findings related to the 12 validated or partially validated needs for the stakeholder group of staff and faculty comprised six knowledge-related needs, three motivation-related needs, and all three of the organizational resource needs. These findings revealed that staff and faculty have
a solid understanding of the Center’s mission and a common definition for ‘enhancing stability’
that was best described by conflict prevention. Further, this stakeholder group conveyed they
understood that enhancing stability reflects the goal of creating impact in the Center’s mission.
Of the six validated needs in the knowledge area, findings indicated the staff and faculty need additional knowledge of the strategy process and its relation to program development and to indicators of success as these relate to assessing impact. Additional knowledge is needed on how to
apply theories, best practices, methods, models, or rubrics to create a methodology to assess
impact. Lastly in the knowledge area, the staff and faculty need to address inherent biases in the
process while assessing impact.
Among the four motivation-related needs, one assumed need was not validated because staff and faculty already understood the Center could lose resources if its impact cannot be assessed and articulated. Two needs were partially validated: one solution is required for staff and faculty to value the importance of assessing and articulating the Center's impact, and a second for stakeholders who did not value the effort as worthy of time and resources. The one fully validated motivation need was the staff and faculty's lack of confidence to create a methodology for the impact assessment process.
The last area of organizational resources validated the three assumed needs focused on
resources to learn how to align roles, create a methodology to assess and articulate impact, and to
execute the assessment process. Moreover, this stakeholder group validated the need for
sufficient resources to collect and analyze data over time. Lastly, the staff and faculty need to
believe the Center will request adequate resources or take appropriate action to create impact
based on assessments. Chapter Five will provide recommended solutions for the nine validated
needs and three partially validated needs.
CHAPTER FIVE: PROPOSED SOLUTIONS AND IMPLEMENTATION
This chapter focuses on providing recommended solutions for the assumed needs that
were validated in Chapter Four. Chapter Five also answers the researcher’s second question,
“What are the recommended knowledge, motivation, and organizational resource solutions to
meet the validated needs to achieve the organizational performance goal?” for the Daniel K.
Inouye Asia-Pacific Center for Security Studies (DKI APCSS) to assess and articulate its impact.
Following a brief overview of the validated needs, the next section of this chapter
provides recommended solutions. The third section discusses the strategies and action steps for
implementing the solutions. The next two areas address the resource requirements and timeline
for implementing the recommendations. The last portion will discuss the implementation
constraints and challenges, and areas for further research.
Overview of Validated Needs
Of the 15 assumed needs, analysis of survey results, interviews, document analysis, and
observations resulted in nine validated and three partially validated needs that generated
recommendations for the staff and faculty of the DKI APCSS to assess and articulate its impact
in the Asia-Pacific region. The table below summarizes the validated and partially validated
needs and identifies a code for each one, which will be associated with a proposed solution.
Table 9
Summary of Findings and Associated Codes

Knowledge
KPV1 - Staff and faculty need to know their roles in assessing and articulating impact: Partially Validated
KV2 - Staff and faculty need to know how to relate strategy, design, and program development to creating and assessing the Center's impact: Validated
KV3 - Staff and faculty need to know how to relate program development to indicators of success and key performance indicators and assessing impact: Validated
KV4 - Staff and faculty need to know how to articulate the Center's impact: Validated
KV5 - Staff and faculty need to know how to apply theories, best practices, methods, models, or rubrics to create and implement a methodology to assess the Center's impact: Validated
KV6 - Staff and faculty need to be aware of any inherent biases to the process and purpose of assessing impact: Validated

Motivation
MPV7 - Staff and faculty need to personally value the importance of aligning roles and assessing and articulating the Center's impact based on evidence: Partially Validated
MPV8 - Staff and faculty need to value the effort of assessing and articulating the Center's impact as worthy of time and resources: Partially Validated
MV9 - Staff and faculty need to have confidence to create a methodology, model, or rubric to assess and articulate the Center's impact: Validated

Organization
OV10 - Staff and faculty need adequate resources to learn how to align roles and create and implement a methodology to assess and articulate impact: Validated
OV11 - Staff and faculty need sufficient resources to collect and analyze data over time: Validated
OV12 - Staff and faculty need to believe the Center will request adequate resources to create impact based on assessments: Validated

K=Knowledge; M=Motivation; O=Organization; V=Validated; PV=Partially Validated
For the purposes of providing recommended solutions, partially validated needs are included and
will be further referenced simply as validated needs. Each of these needs will be addressed in
one of the four solutions provided in the next section.
Recommended Solutions for Implementation
The DKI APCSS staff and faculty’s 12 validated needs are addressed in one of the four
holistic proposed solutions designed to guide the Center in creating capacity to assess and
articulate its impact. While these solutions are presented separately with each addressing
specific validated needs, they all work in concert and build upon one another. The first
recommended solution is aimed at ensuring there is a broad roadmap for the Center’s actions by
developing a DKI APCSS strategy. This strategy also becomes instrumental in conducting
impact assessments and addresses both knowledge and motivational needs for the staff and
faculty to reach its performance goal. The second solution addresses leadership communicating
its commitment to conduct assessments due to the time and resources this process requires.
Following communication of leadership’s support for assessing and articulating the Center’s
impact, the third advised solution focuses on building capacity for the staff and faculty to have
the skills and procedural knowledge to conduct impact assessments. The fourth projected
solution recommends implementation of a DKI APCSS process for conducting impact
assessments and taking appropriate action as a result of assessments. Table 10 highlights the
four recommended solutions and lists the associated code for the validated findings that each
solution addresses. A greater explanation of each recommendation follows the table summary.
Table 10
Summary of Recommended Solutions Associated with Validated Finding Codes

#1 – Create a strategy at the Center level: KV2, MPV7, MPV8
#2 – Communicate leadership commitment to assess impact: MPV7, MPV8, OV11, OV12
#3 – Build capacity for staff and faculty to conduct impact assessments: KPV1, KV2, KV3, KV4, KV5, KV6, MV9, OV10
#4 – Implement impact assessment process and take appropriate action: OV11, OV12

K=Knowledge; M=Motivation; O=Organization; V=Validated; PV=Partially Validated
Recommended Solution #1: Create a Strategy at the Center Level
The researcher recommends that DKI APCSS create an overarching strategy at the Center
level to guide the Center’s program and activities development in support of Department of
Defense (DoD) guidance (Athapaththu, 2016). A strategy must be in place to serve as the
foundation for assessing impact. The process of developing a strategy also sets conditions for
the staff and faculty to eventually assess its impact. Also, as mentioned in Chapter Two, having
a strategy at a central level is often associated with motivation. It provides guidance and
direction, clarifies goals and vision, and motivates individuals to work toward something. It is
also motivating as it provides an opportunity to involve diverse stakeholders in its creation
(Athapaththu, 2016).
This strategy development and stakeholder commitment also becomes instrumental in
conducting impact assessments. The development of a strategy specifically addresses the
knowledge need for the staff and faculty to know how to relate strategy, design, and program
development to creating and assessing the Center’s impact (KV2).
A Center level strategy could affirm or modify the Center’s vision and mission and
provide a foundation for future assessments. The process could also incorporate guidance
provided through influencing documents such as the National Security Strategy, the National
Defense Strategy, and other critical inputs from USPACOM as a mission partner to DKI APCSS
to ensure alignment. The inclusion of stakeholders could provide perspectives from various
internal and external stakeholders and facilitate buy-in of the strategy. One result of the strategy
development would be the establishment of goals and objectives that would align with DKI
APCSS’s mission. Another benefit of a Center level strategy is that the framework allows the
organization to innovate while striving for mutually agreed upon end states.
Recommended Solution #2 - Communicate Leadership’s Commitment to
Assessments of Impact
This recommended solution includes a critical step for leadership to communicate the
Center’s determination to conduct impact assessments, as a leader’s deep commitment to a new
policy is essential. Moreover, when the Center communicates positive answers to the following
questions, conditions are conducive to moving forward: (a) Is there good reason for
implementing this policy? And does it solve a problem and empower others to “eventually
introduce other changes” (Fowler, Robertson, Fowler, & Frances, 2000, p. 278); (b) Is this
appropriate for our Center? Is it consistent with the Center’s priorities, culture, and values?; and
(c) Is there sufficient support among key stakeholders (Fowler et al., 2000, p. 281)?
Communicating this commitment often and in different ways addresses both organizational and
motivational needs for stakeholders to meet their performance goal of assessing and articulating
the Center’s impact.
Upon communicating the determination to assess impact, leaders can appropriately
prioritize resources to the extent needed. This communication of commitment is an important
step to building trust. A crucial related step is to show through actions, such as the allocation of
resources, that the commitment is there. Based on leadership committing to a specific level of
effort, this recommended solution addresses needs related to staff and faculty’s intrinsic
motivation and the need for their time and work conducting assessments to be worthy of their
efforts (MPV7 and MPV8). It further addresses the needs for staff and faculty to obtain adequate
resourcing to collect data over time and believe that appropriate actions will be taken based on
impact assessment results (OV11 and OV12). While resourcing for the efforts cannot be
guaranteed, it is reasonable to project that leadership that articulates the impact assessment endeavor will also support efforts to request adequate resources and subsequently take appropriate action based on the assessment results.
Recommended Solution #3 - Build Capacity for Staff and Faculty to Conduct
Assessments and Articulate Impact
As leadership articulates its commitment to assess the Center’s impact, the researcher
recommends that staff and faculty build their capacity to conduct assessments. This
recommended solution specifically targets the staff and faculty’s conceptual and procedural
knowledge and skills needs to know their roles, relate strategy, and understand methodologies
and approaches. Further, it will address how staff and faculty articulate impact and their awareness of any inherent biases (KPV1, KV2, KV3, KV4, KV5, and KV6). Moreover, when the staff and
faculty are resourced with the required education, training, time, and practice to build an impact
assessment, this recommended solution will also address the staff and faculty’s level of
confidence needed to assess impact (OV10 and MV9). Kirkpatrick and Kirkpatrick (2006)
describe evaluating results and impact as the most challenging part of this evaluation model
(Level 4 in the Kirkpatrick model). Assessing impact in a large and diverse region further
complicates the process. Ensuring staff and faculty have the training or guidance from a subject
matter expert (SME) would improve their knowledge, skills, and confidence to conduct impact
assessments.
Upon identification of the team lead for the impact assessment process, it is
recommended that the SME conduct professional development or training for staff and faculty to
understand the intricacies of the process. Calling upon literature in Chapter Two, training could
include identification of the objectives of the impact assessment, options for approaches to
consider, and guidance to select an approach appropriate for DKI APCSS
(PricewaterhouseCoopers, 2007). The Center could also plan for the use of select tools identified
by staff and faculty to conduct assessments that are feasible and sustainable commensurate with
staffing training. Examples may include use of longitudinal surveys and faculty case studies that
should include a baseline assessment. Additional details will be provided in the implementation
plan.
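As a hedged sketch of how a baseline assessment pairs with a later longitudinal survey, the comparison might look like the following; the 1-4 scale, the scores, and the respondent counts are hypothetical placeholders.

    # Sketch: compare a baseline assessment to a six-month follow-up.
    def mean(xs):
        return sum(xs) / len(xs)

    baseline = [2, 2, 3, 1, 2]   # e.g., self-rated capability before a course
    follow_up = [3, 3, 4, 2, 3]  # same respondents at the six-month mark

    print(f"Baseline mean:  {mean(baseline):.2f}")
    print(f"Follow-up mean: {mean(follow_up):.2f}")
    print(f"Mean change:    {mean(follow_up) - mean(baseline):+.2f}")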
Recommended Solution #4 – Implement Impact Assessment Process and Take
Appropriate Action
A summative assessment process must be implemented in order to accurately articulate
the impact of DKI APCSS (OECD/DAC, 2008). Further, impact assessment results would likely
trigger follow-on actions. Several key ingredients contribute to the successful implementation of a new policy or plan, including a leader's commitment to the effort. Upon Center leadership's
communication to conduct impact assessments, there are a number of steps required to execute
the assessment. When staff and faculty have completed training to build capacity and identified
the best impact assessment approach for DKI APCSS, the Center could continue to prepare
implemention of this process.
With leadership commitment and broad consensus of the concept of the new policy,
preparation or mobilization of the new policy is another essential step (Fowler et al., 2000).
These tasks can include ensuring staff and faculty are sufficiently trained, have sufficient
resources, retain flexibility to adapt as needed, and ensure on-going professional development
opportunities related to assessment impact are available. A goal for implementation of the new
policy would be to institutionalize the process to endure challenges such as leadership turnover,
changeover of employees, or declines in budget (Fowler et al., 2000). Similar to on-going
formative assessments to improve performance, implementing an impact assessment provides a
method for leadership to take appropriate action based on the impact results. Without a
summative process, the system lacks an impetus to take action to answer, "Are we doing the right things?" Again, resources cannot be guaranteed, but leadership taking action builds trust, allows staff to witness leadership's support, and addresses organizational resources and follow-on action (OV11 and OV12).
Key steps to implementing the impact assessment likely include preparing alignment of staff positions as needed, conducting a baseline assessment, executing an impact assessment, and articulating the impact. These steps will be further described in the implementation plan that
follows.
Summary of Proposed Solutions
The four recommended solutions above address the 12 validated needs for DKI APCSS
to assess and articulate its impact. Each suggested solution builds upon the previous one and
contains action steps, resource requirements, and a projected timeline, as presented below. The
first suggestion is focused on the development of a strategy for the Center and provides a basis
for conducting assessments in addition to addressing knowledge and motivational needs. The
second proposed solution suggests leadership articulate its commitment to conducting impact
assessments that supports both motivational and organizational needs by building trust and
prioritizing resources. The third recommendation flows from the second and targets the building
of capacity for staff and faculty to learn the requisite concepts and procedural knowledge needed
to implement a process for impact assessments. The final suggested solution is to implement the
assessment and subsequently, articulate the impact and take action on results as appropriate.
While these suggested solutions were presented separately, they are closely connected in terms
of building needed knowledge, motivation, and organizational support. The next section
provides more details on the action steps, resource requirements, and a projected timeline for
each of the recommended solutions.
Implementation Plan
The Center is currently under the leadership of a new director and implementation of the
recommended solutions may overlap with initiatives resulting from the recent transition of
leadership. In light of this, implementation of portions of the recommended actions may have
already begun as part of a separate action.
The Center is executing its program for Fiscal Year (FY) 2018 and programming for FY
2019 is in final approval stages. The FY 2019 program consists of courses, workshops,
dialogues, and visits, including a new proposed maritime security course. Leadership, staff, and faculty will be constrained in the time available to conduct additional training to learn about various assessment processes and implement a new system. Further, the tasks themselves will demand more time, and the Center may need to identify which priorities should shift as appropriate.
The following outlines the four proposed solutions, the action steps associated with each
solution, the estimated resource needs for each solution, and estimated timelines. These are
critical steps that would keep implementation of the proposed solutions moving forward.
Table 11
Summary of proposed solutions, action steps, resources, and timelines

#1 Create a Center level strategy
  Identify lead (Time): April-Jun 2018
  Incorporate stakeholders (Time): May-Aug 2018
  Develop strategy (Time and space): Aug-Dec 2018
  Communicate strategy (Time and expertise): Jan 2019

#2 Communicate leadership commitment to assessment
  Understand estimated resourcing requirements (Leader time, expertise, and estimate): Jan 2019
  Communicate leadership decision (Leader, staff, and faculty time): Feb 2019
  Resource (Priorities for time, training, implementation): Mar 2019

#3 Build capacity for staff and faculty to conduct assessments and articulate impact
  Identify expertise (Civilian or military manning; contractor option): Apr 2019
  Conduct professional development/training (Personnel, time, expertise): May 2019
  Select approach (Personnel; faculty expertise): Jul 2019
  Select tools: longitudinal surveys and case studies (Personnel, time, expertise): Jul 2019

#4 Implement impact assessment process
  Prepare (Personnel and time): Aug 2019
  Realign Center positions and assets (Personnel, time, expertise): Sept 2019
  Conduct baseline assessment (Personnel, time, expertise): Oct 2019
  Conduct summative/impact assessment (Personnel, time, space, expertise): Oct thru Dec 2019
  Articulate Center's impact (Personnel, time, expertise): Jan 2020
Jan 2020
Implementation for Recommendation #1 - Develop a Strategy at the Center Level
The following action steps and identification of resources facilitate development of a strategy for DKI APCSS. The Center's strategy is essential to scope the approach for future activities and serves as the foundation for follow-on recommendations and activities.
Identify an office and team to lead strategy development. This action calls for the Center's leadership to identify and appoint a lead who will build a team and develop the Center's strategy (Athapaththu, 2016). Given the personnel shortages and current job descriptions in the S&A office, a feasible option is to leverage select faculty and a third party.
Build a comprehensive strategy development team incorporating internal and external stakeholders. Including key stakeholders, both internal and external, contributes to a more comprehensive strategy that considers different perspectives (Athapaththu, 2016). Inclusion of the Center's stakeholders and internal influencers also creates greater buy-in for the completed strategy.
Develop and communicate the strategy. The Center is better postured when stakeholders know and understand the Center's strategy. Upon approval, the strategy should be communicated to the staff, faculty, and other stakeholders so they can visualize and understand the Center's approach for the way ahead (Athapaththu, 2016).
Implementation for Recommendation #2 - Communicate Leadership’s Commitment
to Assessment of Impact
The comprehensive process of preparing the Center's strategy provides useful insight into the current environment, competing priorities, and the resources required to implement the strategy. The following implementation steps provide clarification and commitment to stakeholders.
Prepare communication messages. Upon completion of strategy development and leadership's commitment to assessing impact as part of the strategy, leadership can prepare to communicate its intent, its commitment, and how it would support staff and faculty needs to begin this process. Part of the communication preparation is for leadership to understand what is needed to plan, prepare, and execute impact assessments by having good estimates of time and other resources. In preparing to implement a new policy such as conducting impact assessments, it helps the leader to set the conditions, identify and align priorities, and prepare for a process of mutual adaptation (Fowler et al., 2000). Given competing demands, a solid estimate would provide the leader with the information needed to communicate current priorities and the potential effect a new policy may have on those priorities, operations, and scarce resources. Leadership can also prepare to communicate whether impact assessments will be conducted on an annual, biennial, or other identified basis, which may influence the level of resources required within a specified period of time. As discussed in Chapters One and Two, Center leadership communication should address the need for and expectation of an assessment program with a higher level of rigor, one that incorporates a systematic approach and improves upon the current reliance on anecdotal evidence, a low-validity and insufficient method. Further, given the Center's operating budget of approximately $21M annually, it is recommended that leadership communicate the extent to which it is willing to expend human capital and financial resources toward this effort.
Communicate intent to conduct impact assessments and level of priority. Based on the messages prepared, the researcher recommends that the Center's leadership communicate its level of commitment to conducting assessments and the extent of evidence required. This action addresses stakeholder motivation if efforts to conduct impact assessments are aligned with leadership's priorities (Athapaththu, 2016; Eccles, 2013). Staff and faculty could also gain confidence that leadership wants to know the Center's impact, and they can request additional resources if needed to create a greater impact.
Implementation for Recommendation #3 - Build Capacity for Staff and Faculty to
Conduct Assessments and Articulate Impact
Staff and faculty need the capacity to develop and execute a methodology to assess and articulate impact. As such, a deliberate and resourced plan to build capacity is further described below.
Identify expertise. The Center needs expertise to train staff and faculty to build capacity in support of the assessment process. The Center's current manning document includes a Strategy and Assessment (S&A) unit, but it is understaffed due to a departure and the elimination of the operations research position (Defense Security Cooperation Agency [DSCA], 2018). Further, the two remaining positions are currently focused on process management and legislative affairs, without the training or expertise needed for conducting impact assessments. Thus, the researcher recommends a review of options to garner the expertise to build capacity for the staff and faculty as part of the impact assessment process for DKI APCSS, including identifying the requisite skill set for the incoming S&A Chief, providing S&A employees with professional assessment training, or identifying an external SME. Although DoD has provided instructions on centralized evaluations, since DKI APCSS retains the responsibility to assess outcomes, of which long-term outcomes are also considered impact, the Center should consider all options (DoD, 2017). Options for identifying an external SME may include outreach to the DoD Joint Staff, which conducts analysis and lessons learned (Powers, 2018); DoD's Center for Global Health Engagement, which conducts impact assessments; or a third-party private contractor that supports impact assessment endeavors. This SME would serve as the lead coach to engage with Center staff and faculty, educate on best practices, build teams, and institutionalize the assessment process.
Conduct professional development/training for staff and faculty. As Chapter Four’s
findings revealed, the staff and faculty have a validated need to build capacity for conducting
systematic assessments. Stakeholders who have the training and knowledge to conduct
assessments are likely to be more confident and motivated to support this process (Clark & Estes,
2008).
Select approach. Building capacity for the staff includes gaining knowledge of approaches to conducting impact assessments and understanding the best methodologies to apply to DKI APCSS given its mission. It would also encompass leveraging current databases or creating a feasible database, and integrating the impact assessment with the ongoing strategy assessment process (PricewaterhouseCoopers, 2007). Following identification of the SME, staff and faculty collectively need to learn and discuss approaches and determine which one is best suited for the Center. Moreover, with a strategy in place, DKI APCSS is better postured to select or innovate an approach that is appropriate for the Center's process to assess impact.
Identify design and focus level. Upon selection of an approach, DKI APCSS can further scope the focus level for its impact (PricewaterhouseCoopers, 2007). While education and fellows projects may begin at an individual level, group exercises and organization and country workshops may raise the level of impact. DKI APCSS could learn about and then choose a combination of individual or tactical, country, sub-region, or organizational (for example, ASEAN) levels at which to assess impact (PricewaterhouseCoopers, 2007). Further, as referenced in Chapter Two, Mohr described elaborate designs that should be considered if a counterfactual group can be identified; otherwise, the ex post facto model may serve as a more appropriate design (Mohr, 1995).
Longitudinal surveys. DKI APCSS solicits feedback from participants and their supervisors six months after the end of a multi-week course, and the Center recently began systematically tracking the completion of fellows projects. However, as discussed in the Chapter Four findings, the six-month course feedback and alumni updates are not systematically collected and analyzed. The systematic collection of feedback to analyze information is a key step in assessing DKI APCSS' impact (Church & Rogers, 2006; Curnan et al., 1998). As part of the training, appropriate staff and faculty would need to learn how to prepare questions that obtain the best feedback and address improvements and impact. Moreover, staff and faculty also conduct numerous short courses and workshops, contribute to conferences and journals, and host hundreds of visits throughout the year without a mechanism to capture impact. The assessment team's efforts to learn how to capture a holistic picture of the Center's impact would further provide feedback that better informs leadership and future decisions.
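To make the idea of systematic longitudinal analysis concrete, the following is a minimal sketch, in Python, of how six-month follow-up scores might be aggregated by course and cohort so that trends can be compared over time. It is illustrative only; the course names, survey items, and scores are hypothetical and do not represent DKI APCSS data or systems.

```python
# Minimal sketch: aggregating six-month follow-up survey scores by course
# and cohort so trends can be compared over time. All field names and
# records are hypothetical illustrations, not DKI APCSS data.
from collections import defaultdict
from statistics import mean

# Each record: (course, cohort_year, survey_item, score on a 1-4 scale)
responses = [
    ("APOC", 2018, "applied_learning", 3),
    ("APOC", 2018, "applied_learning", 4),
    ("APOC", 2019, "applied_learning", 4),
    ("ASC", 2018, "network_use", 2),
    ("ASC", 2019, "network_use", 3),
]

# Group scores by (course, cohort, item) for longitudinal comparison
by_group = defaultdict(list)
for course, year, item, score in responses:
    by_group[(course, year, item)].append(score)

for (course, year, item), scores in sorted(by_group.items()):
    print(f"{course} {year} {item}: mean={mean(scores):.2f} (n={len(scores)})")
```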
Select tools. DKI APCSS has a repository of data in systems that may or may not be well suited to collecting and accessing data, particularly over an extended timeframe as suggested above. Furthermore, with over 11,000 alumni and outreach to a growing number of alumni over time, tools can be helpful for further analyzing systematically collected data. As an example, survey instruments such as Qualtrics or Survey Monkey generate automated reports and provide statistical information. These tools also consolidate narrative feedback, but may produce a plethora of data that is time-consuming to analyze. Additional tools mentioned in Chapter Two, such as CHUPPET, have an ability to identify key words or phrases that may assist in the analysis process (Powers, 2018).
If an existing system does not support the Center's needs for collecting, storing, and analyzing data, the Center can internally design a database with its IT section. Alternatively, in conjunction with DSCA, the Center may consider serving as a pilot to modify a system that all regional centers could leverage. Recognizing the challenge that some DoD guidance and orders are classified Secret or above (DoD, 2017), DKI APCSS must consider how much information is needed and aggregated at an unclassified level. Furthermore, depending on the level of access and input needed for faculty, many of whom do not maintain clearances because they are not U.S. citizens, consideration must be given to a feasible and sustainable system for access and review. Ultimately, whatever system is implemented, stakeholders need training to ensure a smooth process of input, review, and analysis.
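If the Center did design a database internally, a minimal sketch might resemble the following, assuming a lightweight engine such as SQLite. The table and column names are hypothetical placeholders for the kind of unclassified alumni and follow-up tracking structure discussed above, not an actual Center design.

```python
# Sketch of an internally designed, unclassified tracking database, assuming
# something like SQLite. Table and column names are hypothetical
# illustrations of the structure discussed, not a Center system.
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for illustration
conn.executescript("""
CREATE TABLE alumni (
    alumni_id INTEGER PRIMARY KEY,
    country   TEXT NOT NULL,
    course    TEXT NOT NULL,
    year      INTEGER NOT NULL
);
CREATE TABLE followups (
    alumni_id    INTEGER REFERENCES alumni(alumni_id),
    months_after INTEGER,   -- e.g., 6, 12, 24 for longitudinal tracking
    item         TEXT,      -- survey item identifier
    score        INTEGER    -- 1-4 Likert response
);
""")
conn.execute("INSERT INTO alumni VALUES (1, 'Fiji', 'APOC', 2018)")
conn.execute("INSERT INTO followups VALUES (1, 6, 'applied_learning', 4)")

# Join follow-up responses back to alumni records for analysis
for row in conn.execute(
    "SELECT a.country, f.months_after, f.item, f.score "
    "FROM followups f JOIN alumni a USING (alumni_id)"
):
    print(row)
```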
Additional tools such as interviews and focus groups are key to many qualitative approaches (Clark & Estes, 2008; Creswell, 2014; Merriam, 2009). When key stakeholders learn how to conduct interviews and focus groups, the Center benefits from a process for obtaining rich data in a systematic manner. NASA also learned to incorporate these qualitative measures, in addition to designing and administering surveys, when attempting to assess its engineer leadership program (Wilson, Kirkpatrick, & Magee, 2018). Further, as addressed in Chapter Two, Mohr supported a qualitative approach, recognizing that counterfactual groups are not possible in all situations (Mohr, 1995), and cited Scriven's modus operandi model as a consideration (Scriven, 1976). This approach is not as rigorous as statistically valid causal inference, but its use of operative reasoning is considered an improvement over the sole use of anecdotal evidence (Mohr, 1995).
Faculty case studies. DKI APCSS staff and faculty maintain expertise in a variety of areas and could have an opportunity to apply their research to assessing impact through the use of case studies. As part of a systematic approach to assessing impact, faculty could be guided to produce case studies on countries, sub-regions, functional topics, or organizations as part of a baseline and an impact assessment (Creswell, 2014; Merriam, 2009). Ideally, these case studies would also be part of the longitudinal projections to compare changes based on DKI APCSS contributions against an established baseline.
Implementation for Recommendation #4 - Implement Assessment Process
Preparation. With broad consensus on the concept of a new policy such as conducting impact assessments, preparation, or mobilization, of the new policy is another essential step (Fowler et al., 2000). These tasks could include ensuring staff and faculty are sufficiently trained, have sufficient resources, retain flexibility to adapt as needed, and have access to ongoing SME support or professional development opportunities related to impact assessment. A goal for implementation of the new policy is to institutionalize the process so that it endures challenges such as leadership turnover, changeover of employees, or declines in budget (Fowler et al., 2000). Implementing an impact assessment provides a method for leadership to take appropriate action; without a summative process, the system lacks an impetus to act. Again, resources cannot be guaranteed, but leadership taking action builds trust as staff witness leadership's support, and it addresses organizational resources and follow-on action (OV11 and OV12).
Realign Center positions and assets as needed. This action would maximize the Center's human capital. Further, if needed, the Center could modify current job positions to support the assessment process or request additional personnel.
Conduct baseline assessments. An impact assessment is most effective when a comparison can be made against a baseline that establishes the starting point (Church & Rogers, 2006; Curnan et al., 1998). It is also ideal to begin an assessment with a control group that does not participate with DKI APCSS. However, given the challenge of creating a control group without violating political sensitivities and creating potential moral issues, DKI APCSS can consider de facto control groups. An example of this is the proposed Burma Human Rights and Freedom Act, which would prohibit members of the Myanmar military from participating in certain military training activities due to alleged human rights abuses in Rakhine State (McCain, 2018).
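A minimal sketch of the underlying arithmetic follows: comparing change from a baseline between participants and a de facto comparison group, in the style of a simple difference-in-differences calculation. All scores are hypothetical, and any real application would need to address the validity caveats discussed in Chapter Two; the result supports at most a claim of contribution, not attribution.

```python
# Sketch of comparing change from a baseline between participants and a
# de facto comparison group (a simple difference-in-differences style
# calculation). All numbers are hypothetical illustrations.
from statistics import mean

baseline = {"participants": [2.1, 2.3, 2.0], "comparison": [2.2, 2.0, 2.1]}
followup = {"participants": [3.2, 3.4, 3.1], "comparison": [2.4, 2.3, 2.2]}

# Mean change from baseline for each group
change = {
    group: mean(followup[group]) - mean(baseline[group])
    for group in baseline
}
print(f"Participant change: {change['participants']:+.2f}")
print(f"Comparison change:  {change['comparison']:+.2f}")
print(f"Difference (contribution estimate): "
      f"{change['participants'] - change['comparison']:+.2f}")
```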
Conduct summative/impact assessment. Unlike ongoing, frequent formative assessments, a summative or impact assessment may be conducted relatively infrequently. Yet, similar to formative assessments, the impact assessment is most effectively conducted when projected as part of a deliberate plan and timeline. As such, it is recommended that DKI APCSS schedule its impact assessment in accordance with the Center's annual FY program and its developed strategy. To avoid the influence of inherent bias, the Center could address bias mitigation and prepare to receive the results of the impact assessment, regardless of the feedback (Thaler & Sunstein, 2008). At the conclusion of the assessment, staff and faculty may then provide feedback to identify where and how the Center most effectively creates its intended impact or where it needs to make modifications.
Articulate and communicate the Center's impact. Based on the results of the impact assessment, DKI APCSS can leverage the Public Affairs Office, leadership, and staff and faculty to articulate the Center's impact to stakeholders, including all Center personnel and budget offices in DoD and USINDOPACOM.
Resource Requirements
Chapter Four addressed organizational needs, some of which were resource oriented. These needs also interacted with and influenced motivation levels. The next sections address the need to identify and provide guidance on priorities to ensure that resources are appropriately allocated to assessing impact alongside competing demands.
Staff and faculty time. The DKI APCSS staff and faculty have taken on increasingly greater responsibilities as the number of courses and workshops has grown over the years. Further, gaps resulting from hiring delays have created more challenges for staff and faculty. Therefore, the possible addition of another task, preparing for and conducting impact assessments, would tax a valuable resource: time. To support this new policy, staff and faculty would need to allocate time to contribute to development of the strategy, participate in training as appropriate, and conduct or support designated portions of the assessment. As the process is institutionalized, time could be further allocated for ongoing professional development and for training new personnel rotating in, in addition to contributing to the assessment process as needed.
Human capital. While time for current staff and faculty is critical, acquiring the appropriate staff expertise and skills is also important. If positions do not specify needed skills, expertise, and experience, the Center may need to add or modify job descriptions for future hiring to support the impact assessment process.
Financial. To ensure adequate funding to implement this assessment process, there may be financial considerations for resourcing. First, as an example, if staff must work overtime to complete assessment tasks, civilian pay budgets should reflect appropriate overtime pay. Should DoD staff assist the Center in a temporary duty status, or should a consultant be recruited to conduct training, this service expense should be included. If a new tracking or database system is needed for long-term data collection and analysis, budgets should reflect the new equipment, training, and maintenance. These financial considerations should be focused solely on impact assessments of long-term outcomes and not on the holistic evaluation capacity that DoD is responsible for maintaining in a centralized manner (DoD, 2017).
Space. Meeting and office space should be considered when reviewing resources for the assessment process. Particularly when large courses are in session, seminar rooms and workspace are at a premium. Ensuring adequate meeting and workspaces are available will facilitate the planning and execution process when conducting periodic assessments, whether that entails interviews or focus groups, for example.
Tools. While financial resources may be needed to procure hardware or systems, tools to support the assessment process are essential to capturing, maintaining, and analyzing information pertaining to impact assessments. These tools may be developed internally, or they may require modification of a current system or procurement of a new database system, as an example (OECD/DAC, 2008).
Timeline
As reflected in Table 11, the timeline for conducting the activities is generally chronological, with some overlap, and supports the building of staff and faculty capacity to conduct an impact assessment. Upon completion of the preparation and execution of the assessment, the next step of the continuous cycle is to articulate the impact of DKI APCSS. The impact assessment is intended to be conducted periodically over time and should be programmed on a regular basis, ideally at intervals of no less than six months and no more than two years.
Implementation Constraints and Challenges
DKI APCSS maintains a fast pace of programmed activities, visits, planning, and responses to unprogrammed security-related events in the Indo-Pacific region. As such, the implementation of a new policy is an investment of time and resources that competes with programmed activities and other high priorities. While the new director makes decisions on the Center's priorities, external influences may constrain the Center's ability to execute the policy in a timely manner. Further, the Center's ability to adjust positions and hire long-term staff is also subject to external influences.
When conducting the impact assessment, the inherent challenge of capturing impact in a large, diverse, and complex region remains. Further, soft power activities, including building relationships, providing executive education, and empowerment, fall in the realm of the social sciences, which is not conducive to quantitative science standards (Athapaththu, 2016). Moreover, intentionally incorporating a control group can create ethical challenges. Lastly, many factors influence the stability of a region, and any claim of attribution of DKI APCSS impact must be approached carefully. It is more appropriate for the Center to aim for a claim of contribution.
Evaluation Plan
An evaluation of the implementation of a new policy contributes to determining its effectiveness. DKI APCSS leadership could communicate its intent to conduct impact assessments after the strategy has been developed. To ensure the Center continues to learn through the process and can determine its effectiveness in assessing its impact, the Center may include a review of implementation as part of an evaluation plan. This evaluation could be used to identify whether the Center is meeting its goals and objectives to determine and articulate the Center's impact in the Indo-Pacific region. Applying a model such as Kirkpatrick's New World approach in the evaluation plan could provide the Center with the ability to sense stakeholders' reactions to the leader's communication of intent to conduct impact assessments (Level 1) (J. Kirkpatrick & Kirkpatrick, 2015). Leadership would likely also have interest in whether stakeholders have gained the knowledge needed, as part of the training in building capacity, to learn about, select, and plan for a methodology in the impact assessment process (Level 2). The next area of review is whether the learning and new behaviors have been applied in conducting the assessment process (Level 3). The last step is identifying the results of the impact assessment and its effectiveness in articulating DKI APCSS' impact, regardless of the results (Level 4) (J. Kirkpatrick & Kirkpatrick, 2015). The following table provides evaluation suggestions for each action step within the four recommended solutions.
Table 12

Evaluation of Impact Assessment Process

| Recommended Solution | Recommended Action Steps | Evaluation |
| --- | --- | --- |
| #1 Create a Center-level strategy | Identify lead | Observe for completion and consistency |
| | Incorporate stakeholders | Document for breadth and completion |
| | Develop strategy | Observe for completion |
| | Communicate strategy | Record extent of communication |
| #2 Communicate leadership commitment to assessment | Prepare communication messages | Document information flow |
| | Communicate leadership decision | Record extent of communication of commitment, clarification of priorities and resources, and reactions |
| | Resource | Document extent of prioritization of policy and allocation of resources; solicit feedback at town hall and faculty meetings |
| #3 Build capacity for staff and faculty to conduct assessments and articulate impact | Identify expertise | Establish selection of internal civilian, military, DoD, or contractor option |
| | Conduct professional development/training sessions | Document scheduling, attendance, and quality of sessions; attendees complete training surveys |
| | Select approach | Establish meetings for inclusive discussion and selection of an assessment approach |
| | Select tools: longitudinal surveys, case studies, interviews and focus groups | Diagnose selection and acquisition of tools as needed |
| #4 Implement | Prepare | Record allocation of resources |
| | Realign Center positions and assets | Document modifications and changes |
| | Conduct baseline assessment | Establish scope of assessment; review baseline assessments |
| | Conduct summative/impact assessment | Observe assessment process, including addressing feedback |
| | Articulate Center's impact | Record and review communication (e.g., articles, speeches, documents, engagements) |
| | Take action based on results of assessment | Document responses to action taken based on assessment results |
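As an illustrative aside, the mapping from the four New World Kirkpatrick levels to the kinds of evidence summarized in Table 12 can be expressed compactly, as in the sketch below. The metric descriptions are hypothetical placeholders, not a Center system.

```python
# Sketch mapping the four New World Kirkpatrick levels to the kinds of
# evidence discussed above; the metric names are hypothetical placeholders.
evaluation_plan = {
    1: ("Reaction", ["town hall feedback on leadership's stated intent"]),
    2: ("Learning", ["post-training surveys on assessment methodology"]),
    3: ("Behavior", ["observed application of the assessment process"]),
    4: ("Results", ["documented articulation of the Center's impact"]),
}

for level, (name, evidence) in evaluation_plan.items():
    print(f"Level {level} ({name}): {'; '.join(evidence)}")
```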
Areas for Future Research
This study focused on the needs of DKI APCSS to assess and articulate its impact in the Indo-Pacific region. As the Center continues to conduct formative and summative assessments as part of the strategy process and impact assessments of long-term outcomes, feedback could reveal the areas that are most impactful and perhaps identify areas that need more emphasis. The Center will likely have additional areas of interest for further research. As an example, what are the causes of certain human behaviors, and what can the Center employ to enhance alumni learning and the implementation of behaviors that better contribute to stability once alumni have returned to their environments following a course? As mentioned in the counterfactual discussion of the pending Burma Human Rights and Freedom Act, this research has revealed other areas to highlight, such as the impact of omitting specific countries' militaries from attending DKI APCSS courses during periods of human rights abuses. Another area for research is assessing the impact of greater collaboration among the five regional centers; this could include designating center leads with greater focus on functional areas such as women, peace, and security. Given the broad-ranging topics in soft power and social capital impacts, numerous areas warrant additional research that could enhance the Center's approach to creating impact and enhancing stability.
Conclusion
The DKI APCSS invited research on the Center’s ability to assess and articulate its
impact. The Center formatively assesses its courses extensively to improve its performance and
address the question, “How are we doing?” As part of its effort to determine its impact and learn
from the process, the Center has also asked, “Are we doing the right things?” to create the
intended impact of enhancing stability in the region.
An extensive literature review revealed the complexities and challenges of conducting impact assessments, particularly for a soft power institution focused on building capacity, shared understanding, and networks in a large and diverse region. The literature offered various approaches for conducting an impact assessment and discussed the advantages and disadvantages of each. Ultimately, a model designed for a unique organization is likely to be most effective in assessing its impact. The ability to design and apply a model also requires the staff and faculty to have the knowledge, motivation, and organizational capacity to do so. The researcher identified 15 assumed knowledge, motivation, and organizational resource needs, then conducted a Center-wide survey; three individual leader interviews; two focus group interviews, one with a peer-level group and one with a supervisor-level group; observations; and document analysis. Findings revealed that 12 of the assumed needs were validated and led to four separate recommended solutions to help move the organization toward its goal of assessing and articulating its impact. The suggested solutions included building a Center-wide strategy, communicating leadership's commitment to assessment, building capacity for staff and faculty, and implementing the impact assessment along with taking appropriate follow-on action. While this research focused on the DKI APCSS, these findings are likely generalizable to other regional centers and potentially, to some extent, to other DIB-related programs. The regions, specific programs, resources, and structures may vary, but the concept of a systematic, prioritized, and resourced approach to assessing impact may be generalized to other centers.
As mentioned at the beginning of this study, and as Senator Inouye addressed in his 2012 speech at DKI APCSS, conflict, natural disasters, and war are devastating and expensive, and there should be other options. Options may include the pursuit of conflict prevention by enhancing stability and empowering partner nations to govern, provide their own security, and be resilient in the face of natural disasters. The challenge remains in the difficulty of assessing these options with valid evidence, yet one must ponder whether there is a better option than educating, empowering, and connecting security practitioners in an effort to enhance stability. If not, it appears worth the investment to make the effort to enhance stability and assess its impact.
REFERENCES
Allen, N., & Sharp, T. (2017). Process peace: A new evaluation framework for Track II diplomacy. International Negotiation, 22, 92–122. http://doi.org/10.1163/15718069-12341349
Anderson, L., & Krathwohl, D. (2001). A taxonomy for learning, teaching, and assessing: A
revision of Bloom’s taxonomy of educational objectives. (L. W. Anderson & D. R.
Krathwohl, Eds.) (Complete E). New York, NY: Longman.
Anderson, M. (2001). Measuring peace: Indicators of impact for peace practice.
Association of Southeast Asian Nations Regional Forum. (2017). Retrieved April 11, 2017, from
http://aseanregionalforum.asean.org/about.html
Athapaththu, H. K. S. H. (2016). An overview of strategic management: An analysis of the concepts and the importance of strategic management. International Journal of Scientific and Research Publications, 6(2), 124–127.
Ball, D., Milner, A., & Taylor, B. (2007). Track 2 security dialogue in the Asia-Pacific:
Reflections and future directions. Asian Security, 2(3), 174–188.
http://doi.org/10.1080/14799850600920445
Briard, S., & Carter, C. (2013). Communities of practice and communities of interest: Definitions
and evaluation considerations. Ontario, Canada. Retrieved from
http://niagaraknowledgeexchange.com/wp-content/uploads/2013/12/Communities-of-
Practice-Interest_Nov2013.pdf
Burgess, H., & Burgess, G. (2010). Peacemaker’s toolkit: Conducting track II peacemaking. (H.
A. Coyne & N. Quinney, Eds.), United States Institute of Peace. Washington, D.C.
Retrieved from
http://www.migrantclinician.org/files/MCN_Track_II_Continuous_Diabetes_mono.pdf
Bush, K. (2009). Aid for peace: A Handbook for applying peace and conflict impact assessment
(PCIA) to peace III projects, (September), 1–58. Retrieved from
http://eprints.ulster.ac.uk/24859/
Chalmers, M. (2007). Spending to save? The cost‐effectiveness of conflict prevention. Defence
and Peace Economics, 18(1), 1–23. http://doi.org/10.1080/10242690600821693
Church, C., & Rogers, M. M. (2006). Designing for results: Integrating monitoring and
evaluation in conflict transformation programs. Search for Common Ground. Retrieved
from
http://ildigital.olivesoftware.com/Olive/ODE/Mofet/default.aspx?href=MOF/2011/01/10
Church, C., & Shouldice, J. (2002). The evaluation of conflict resolution interventions: Framing
the state of play. INCORE International Conflict Research. Northern Ireland, UK.
Chairman of the Joint Chiefs of Staff. (2017). Joint publication 3-20: Security cooperation.
Washington, D.C.
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Charlotte, NC: Information Age Publishing.
Consolidated appropriations act of 2016. (2015, Dec 16). Targeted News Service. Retrieved from http://libproxy.usc.edu/login?url=http://search.proquest.com.libproxy1.usc.edu/docview/1752735317?accountid=14749
Council for Security Cooperation in the Asia Pacific. (n.d.). Retrieved April 9, 2017, from
http://www.cscap.org
Crawford, N. C. (2016). US Budgetary Costs of Wars through 2016: $4.79 Trillion and
Counting. Retrieved from
http://watson.brown.edu/costsofwar/files/cow/imce/papers/2016/Costs of War through 2016
FINAL final v2.pdf
Creswell, J. W. (2014). Research Design. Los Angeles, CA: SAGE Publications, Inc.
Curnan, S., LaCava, L., Sharpsteen, D., Lelle, M., & Reece, M. (1998). W.K. Kellogg Foundation evaluation handbook. Battle Creek, MI: W.K. Kellogg Foundation.
Daniel K. Inouye Asia-Pacific Center for Security Studies. (2016). Daniel K. Inouye Asia-Pacific
Center for Security Studies fact sheet. Retrieved from http://www.apcss.org/wp-
content/uploads/2016/DKI_APCSS_Fact_Sheet_Oct2016.pdf
Daniel K. Inouye Asia-Pacific Center for Security Studies. (2017a). Daniel K. Inouye Asia-
Pacific Center for Security Studies charter. Honolulu, HI.
Daniel K. Inouye Asia-Pacific Center for Security Studies. (2017b). Daniel K. Inouye Asia-
Pacific Center for Security Studies mission. Retrieved June 15, 2017, from
http://apcss.org/about-2/mission/
Daniel K. Inouye Asia-Pacific Center for Security Studies. (2017c). Daniel K. Inouye Asia-
Pacific Center for Security Studies Non-Attribution Policy. Retrieved June 12, 2018, from
https://apcss.org/about-2/public-affairs/news-media-policy/
Defense Security Cooperation Agency. (n.d.-a). DoD Regional Centers (RC). Retrieved from
http://www.dsca.mil/programs/dod-regional-centers
Defense Security Cooperation Agency. (n.d.-b). Institutional Capacity Building. Retrieved June
18, 2018, from http://www.dsca.mil/programs/institutional-programs
Defense Security Cooperation Agency. (2018). Daniel K. Inouye Asia-Pacific Center for
Security Studies Joint Table of Distribution.
Dembo, M. H. (2004). Motivation and learning strategies for college success: A self-
management approach. http://doi.org/10.1353/csd.2004.0072
Department of Defense. (2015). Regional Center FY 16-17 Policy Priorities. Washington, D.C.:
Department of Defense.
Department of Defense. (2017). DoD Instruction 5132.14: Assessment, monitoring, and
evaluation policy for the security cooperation enterprise. Washington, D.C.: Department of
Defense.
Department of Defense Inspector General. (2013). Defense Institution Reform Initiative program
elements need to be defined (Report No. DODIG-2013-019). Alexandria, VA: Inspector
General United States Department of Defense.
Department of State. (2015). Department of State Executive summary: Quadrennial diplomacy
and development review. Washington, D.C.: Department of State.
Dyke, N. B. (1997). Conflict prevention: Strategies to sustain peace in the post-cold war world. Aspen, CO: The Aspen Institute.
East-West Center. (2017). Retrieved March 5, 2017, from https://www.eastwestcenter.org/about-
ewc/mission-and-organization
Eccles, J. (2013). Expectancy-value motivational theory. Education.com. Retrieved from www.education.com/print/expectancy-value-motivational-theory/
Erlandson, D. A. (2012). Naturalistic Evaluation. In C. Secolsky & D. B. Denison (Eds.),
Handbook on measurement, assessment, and evaluation in higher education (pp. 483–499).
New York, NY: Routledge Taylor and Francis Group.
Fast, L. A., & Neufeldt, R. C. (2005). Envisioning success: Building blocks for strategic and
comprehensive peacebuilding impact evaluation. Journal of Peacebuilding and
Development, 2(2), 24–41. http://doi.org/10.1080/15423166.2005.450817375284
Fayolle, A., Gailly, B., & Lassas-Clerc, N. (2006). Assessing the impact of entrepreneurship
education programmes: a new methodology. Journal of European Industrial Training,
30(9), 701–720.
Fowler, F. C., Robertson, G., Fowler, L. C., & Frances, C. (2000). Policy Implementation:
Getting people to carry out a policy. In D. A. Stollenwerk (Ed.), Policy Studies for
Educational Leaders (pp. 269–301). Oxford, Ohio: Prentice Hall.
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist, 36(1),
45–56. http://doi.org/10.1207/S15326985EP3601
Gordon, J., & Chadwick, K. (2007). Impact assessment of capacity building and training: Assessment framework and two case studies. Canberra, Australia: Australian Centre for International Agricultural Research.
Government Accountability Office. (2013). Building partner capacity; Actions needed to
strengthen DoD efforts to assess the performance of the regional centers for security studies
(Report No. GAO-13-606). Washington, D.C.
Haacke, J. (2009). The ASEAN regional forum: from dialogue to practical security cooperation?
Cambridge Review of International Affairs, 22(3), 427–449.
http://doi.org/10.1080/09557570903104057
Hailey, J., James, R., & Wrigley, R. (2005). Praxis Paper No. 2 Rising to the challenges:
Assessing the impacts of organisational capacity building. Oxford, England: The
International NGO Training and Research Centre.
Hall, A., Rasheed Sulaiman, V., Clark, N., & Yoganand, B. (2003). From measuring impact to
learning institutional lessons: An innovation systems perspective on improving the
management of international agricultural research. Agricultural Systems, 78(2), 213–241.
http://doi.org/10.1016/S0308-521X(03)00127-6
Hanauer, L., Johnson, S. E., Springer, C., Feng, C., McNerney, M. J., Pezard, S., & Efron, S.
(2014). Evaluating the impact of the Department of Defense regional centers for security
studies. Santa Monica, CA: RAND.
Helvey, D. (2017). Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS)
Fiscal Year (FY) 2018 program plan. Washington, D.C.: Department of Defense.
Hirai, J. (DKIAPCSS). (2016). Personal Communication. Honolulu, HI.
International Federation of Red Cross and Red Crescent Societies. (2005). Impact handbook. Washington, D.C.: International Federation of Red Cross and Red Crescent Societies.
Inouye, D. K. (2012). Asia Pacific Center Maluhia Ribbon Cutting speech. Honolulu, HI: Office
of Senator Daniel K. Inouye.
Institute for Economics and Peace. (2016). Global peace index. Retrieved from
http://visionofhumanity.org/app/uploads/2017/02/GPI-2016-Report_2.pdf
International Non-Governmental Organization Training and Research Centre. (2015). Outputs,
outcomes, and impact. Retrieved April 11, 2017, from https://www.intrac.org/wpcms/wp-
content/uploads/2016/06/Monitoring-and-Evaluation-Series-Outcomes-Outputs-and-
Impact-7.pdf
Jackson, D., Messick, S., & Braun, H. (2002). The role of constructs in psychological and
educational measurement. (H. I. Braun, D. N. Jackson, & D. E. Wiley, Eds.) (16th ed.).
Mahwah; N.J: L. Erlbaum Associates.
James, R. (2009). Dealing with the dilemmas in monitoring and evaluating capacity building.
Retrieved from https://www.intrac.org/wpcms/wp-content/uploads/2016/09/Dealing-with-
the-Dilemmas-in-Monitoring-and-Evaluating-Capacity-Building.pdf
James, R. (2010). Monitoring and evaluating learning networks. Retrieved from
http://www.intrac.org/data/files/resources/679/Monitoring-and-Evaluating-Learning-
Networks.pdf
Jentleson, B. (1998). Preventive diplomacy and ethnic conflict: Possible, difficult, necessary. In
D. Lake & D. Rothchild (Eds.), The international spread of ethnic conflict: fear, diffusion,
and escalation. Princeton, New Jersey: Princeton University Press.
Jentleson, B. (2003). The realism of preventive statecraft. In conflict prevention: Path to peace
or grand illusion? (D. Carment & A. Schnabel, Eds.). Tokyo: United Nations University
Press.
Kim, S. E. (2012). Assessing the impact of mission attachment on agency effectiveness in U.S.
federal agencies. International Review of Public Administration, 17(3).
Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating Training Programs: The Four Levels
(Third). San Francisco, CA: Berrett-Koehler Publishers.
Kirkpatrick, J., & Kirkpatrick, W. (2015). An Introduction to the new world Kirkpatrick model.
Retrieved June 26, 2017, from
http://www.kirkpatrickpartners.com/Portals/0/Resources/White Papers/Introduction to the
Kirkpatrick New World Model.pdf
Krathwohl, D. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4). http://doi.org/10.1207/s15430421tip4104
Lance, P., Guilkey, D., Hattori, A., & Angeles, G. (2014). How do we know if a program made a
difference? A guide to statistical methods for program impact evaluation. Chapel Hill,
North Carolina: MEASURE Evaluation.
Leeuw, F., & Vaessen, J. (2009). Impact evaluations and development; NONIE guidance on
impact evaluation. Washington, D.C.: NONIE - Network of Networks on Impact
Evaluation. Retrieved from www.worldbank.org/ieg/nonie
Lund, M. (1996). Preventing violent conflicts: A strategy for preventive diplomacy. Washington,
D.C.: United States Institute of Peace Press.
Mackinnon, S., & Stephens, S. (2010). Is participation having an impact? Measuring progress in
Winnipeg’s inner city through the voices of community-based program participants.
Journal of Social Work, 10(3), 283–300.
http://doi.org/http://dx.doi.org/10.1177/1468017310363632
Marimon, F., Mas-Machuca, M., & Rey, C. (2016). Assessing the internalization of the mission.
Industrial Management and Data Systems, 116(1), 170–187. http://doi.org/10.1108/IMDS-
04-2015-0144
Marquis, J. P., Mcnerney, M. J., Zimmerman, S. R., Archer, M., Boback, J., & Stebbins, D. (2016). Developing an assessment, monitoring, and evaluation framework for U.S. Department of Defense security cooperation. Santa Monica, CA: RAND.
Mattis, J. (2018). 2018 National Defense Strategy of the United States Summary. Retrieved from
https://www.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-
Summary.pdf
Mayer, R. E. (2011). Applying the science of learning. Boston, Massachusetts: Pearson
Education, Inc.
McCain, J. (2018). Burma Human Rights and Freedom Act. Retrieved May 29, 2018, from
https://www.govtrack.us/congress/bills/115/s2060/text
McInnis, K., & Lucas, N. (2015). What is "building partner capacity?" Issues for Congress. Washington, D.C.: Congressional Research Service.
Meharg, S. J. (2009). Measuring what matters in peace operations and crisis management. Kingston, ON, Canada: McGill-Queen's University Press.
Merriam, S. B. (2009). Qualitative Research (Second). San Francisco, CA: Jossey-Bass.
Mills-Scofield, D. (2012). It’s not just semantics: Managing outcomes v. outputs. Retrieved
April 11, 2017, from https://hbr.org/2012/11/its-not-just-semantics-managing-outcomes
Mohr, L. (1995). Impact Analysis for Program Evaluation (Second). Thousand Oaks: SAGE
Publications, Inc.
Moroney, J. D., Hogler, J., Kennedy-Boudali, L., & Pezard, S. (2011). Integrating the Full Range
of Security Cooperation Programs into Air Force Planning: An analytic primer. Santa
Monica, CA: RAND.
Muggah, R., & White, N. (2013). Is there a preventive action renaissance? The policy and practice of preventive diplomacy and conflict prevention: Executive summary. Oslo, Norway: Norwegian Peacebuilding Resource Centre (NOREF).
Munuera, G. (1994). Preventing armed conflict in Europe: Lessons from recent experience.
Alencon, France: Imprimerie Alenconaise.
Narayan, D., & Cassidy, M. F. (2001). A dimensional approach to measuring social capital:
Development and validation of a social capital inventory. Current Sociology, 49(2), 59–102.
Neufeldt, R. C. (2011). "Frameworkers" and "circlers": Exploring assumptions in impact assessment. In B. Austin, M. Fischer, & H. J. Giessmann (Eds.), Advancing conflict transformation: The Berghof handbook II. Opladen/Farmington Hills: Barbara Budrich Publishers. Retrieved from www.berghof-handbook.net
Nye, J. (2004). The Benefits of Soft Power. Retrieved April 7, 2017, from
http://hbswk.hbs.edu/archive/4290.html
Oberg, M., Moller, F., & Wallensteen, P. (2009). Early conflict prevention in ethnic crises, 1990-
1998. Conflict Management and Peace Science. 26(1), 67-91. Retrieved from
https://doi.org/10.1177/0738894208097667
Organisation for Economic Co-operation Development. (2002). Glossary of Statistical Terms.
Retrieved June 5, 2018, from https://stats.oecd.org/glossary/detail.asp?ID=3681
Organisation for Economic Co-operation Development/Development Assistance Committee.
(2008). Guidance on evaluating conflict prevention and peacebuilding activities. Paris,
France. Retrieved from https://www.oecd.org/dac/evaluation/dcdndep/39774573.pdf
Organisation for Economic Co-operation Development/Development Assistance Committee.
(2012). Evaluating peacebuilding activities in settings of conflict and fragility: Improving
learning for results. DAC Guidelines and Reference Series. Paris, France: OECD.
http://doi.org/10.1787/9789264106802-en
Pacific Forum Center for Strategic and International Studies. (2016). Retrieved from
www.pacforum.org
Paulhus, D. (2002). Socially Desirable Responding: The evolution of a construct. In H. I. Braun,
D. N. Jackson, & D. E. Wiley (Eds.), Role of Constructs in Psychological and Educational
Measurement (pp. 49–69). Mahwah; N.J: Lawrence Erlbaum Associates, Publishers.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in
learning and teaching contexts. Journal of Educational Psychology, 95(4), 667–686.
http://doi.org/10.1037/0022-0663.95.4.667
Powers, M. (2018). Stochastic Linear Programming. Washington, D.C.: U.S. Department of
Defense, Joint Staff J7.
Powers, M. (2018). Content Heuristic Unstructured Parsing and Predictive Electronic Tool
(CHUPPET) Next. Norfolk, VA: U.S. Department of Defense, Joint Staff J7.
PricewaterhouseCoopers. (2007). A monitoring and evaluation framework for peacebuilding.
Regan, P. (2000). Civil wars and foreign powers: Outside intervention in intrastate conflict. Ann Arbor, MI: University of Michigan Press.
Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods. Qualitative and Quantitative Methods in Evaluation Research, 7–32.
Ross, M. H. (2001). Action evaluation in the theory and practice of conflict resolution. Peace
and Conflict Studies, 8(1), 1–15.
Ross, T. (2016). Leveraging security cooperation as military strategy. The Washington
Quarterly, (Fall), 91–103.
Rothman, J. (1997). Action evaluation and conflict resolution training: Theory, method and case study. International Negotiation, 2, 451–470.
Schein, E. (2004). The concept of organizational culture: Why bother? In Organizational culture and leadership (3rd ed., p. 464). San Francisco, CA: Jossey-Bass.
Scriven, M. (1972). Evaluating action programs; readings in social action and education. In C.
Weiss (Ed.), Evaluating action programs; readings in social action and education (pp.
123–136). Boston, Massachusetts: Allyn and Bacon, Inc.
Scriven, M. (1976). Maximizing the power of the causal investigations: The modus operandi model. Evaluation Studies Review Annual, 101–118.
Simister, N., & Smith, R. (2010). Monitoring and evaluating capacity building: Is it really that difficult? Praxis Paper 23. Oxford, England. Retrieved from http://www.intrac.org/data/files/resources/677/Praxis-Paper-23-Monitoring-and-Evaluating-Capacity-Building-is-it-really-that-difficult.pdf
Skorupski, B. J., & Serafino, N. M. (2016). DoD security cooperation: An overview of authorities and issues. Washington, D.C.: Congressional Research Service.
Solomon, T. (2014). The affective underpinnings of soft power. European Journal of
International Relations, 20(3), 720–741. http://doi.org/10.1177/1354066113503479
St. Laurent, J. A. (2013). Building partner capacity: Key practices to effectively manage Department of Defense efforts to promote security cooperation (Report No. GAO-13-335T). Washington, D.C.: United States Government Accountability Office.
Swaine, M., Eberstadt, N., Fravel, M., Herberg, M., & Keidel, A. (2015). Conflict and
cooperation in the Asia-Pacific region: A strategic net assessment. Washington, DC:
Carnegie Endowment for International Peace. http://doi.org/10.1007/978-3-531-91337-7
Templeton, D. J. (n.d.). A framework for assessing the impact of capacity building. In International Association of Agricultural Economists' 2009 Conference (p. 12). Australian Centre for International Agricultural Research. Retrieved from http://ageconsearch.umn.edu/bitstream/51716/2/Assessing the benefits of capacity building_D Templeton_IAAE.pdf
Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, Connecticut: Yale University Press.
Tonnesson, S. (2015). ASEAN's Rohingya challenge. Oslo, Norway: Peace Research Institute Oslo (PRIO). Retrieved from https://www.prio.org/utility/DownloadFile.ashx?id=114&type=publicationfile
Truman, C., & Triska, O. H. (2001). Modelling success: articulating program impact theory.
Canadian Journal of Program Evaluation, 16(2), 101–112.
United States Indo-Pacific Command. (2018). USINDOPACOM Area of Responsibility.
Retrieved June 12, 2018, from http://www.pacom.mil/About-USPACOM/USPACOM-
Area-of-Responsibility/
W.K. Kellogg Foundation. (2004). Using logic models to bring together planning, evaluation and action: Logic model development guide. Battle Creek, MI: W.K. Kellogg Foundation.
Whole of Society Conflict Prevention and Peacebuilding. (2017). Retrieved March 4, 2017, from
WOSCAP.eu
Wilson, D., Kirkpatrick, J., & Magee, K. (2018). Training for Mission Success. Retrieved June 6,
2018, from https://www.kirkpatrickpartners.com/Portals/0/Resources/Articles/KP NASA
article PDF May TD Magazine.pdf?ver=2018-05-02-075356-007
Woocher, L. (2009). Preventing violent conflict: Assessing progress, meeting challenges. United
States Institute of Peace. Washington, D.C.: United States Institute of Peace.
Xu, L., Cui, N., Qualls, W., & Zhang, L. (2017). How socialization tactics affect supplier-buyer
co-development performance in exploratory and exploitative projects: The mediating effects
of cooperation and collaboration. Journal of Business Research, 78(September), 242–251.
Zenisky, A., & Laguilles, J. (2012). Reporting assessment results in higher education. In C.
Secolsky & B. Denison (Eds.), Handbook on measurement, assessment, and evaluation in
higher education (pp. 593–610). New York, NY: Routledge.
APPENDIX A
Survey Protocols
This survey was distributed to all Daniel K. Inouye Asia-Pacific Center for Security
Studies (DKI APCSS) staff and faculty via email with a link for an online survey. The purpose
of the survey was to identify and validate assumed knowledge, motivation, and organizational
influences for the DKI APCSS participants needed to meet organizational goals. Following is
the email message and the content of the survey consisting of a brief narrative and questions with
clarifying directions for completion.
Aloha DKI APCSS Ohana,
You are invited to take an anonymous survey designed to help improve the Center’s
ability to assess and articulate its impact. Your answers will help validate knowledge,
motivation, and organizational levels at DKI APCSS as it relates to assessing impact. Select
interviews will also provide complementary information to contribute to the research project. If
you receive an invitation for an interview, please fill out this brief survey also.
This survey should take you less than seven minutes to complete and there is no
personally identifiable information collected. Your input is valuable and your time to complete
this survey is appreciated. These efforts aim to create an effective and sustainable assessment plan for DKI APCSS.
Should you have any questions, please feel free to contact either the researcher at #(808)
218-1408 or the S&A office at #(808) 971-4060. Mahalo for your consideration.
Please click on the link below to begin the short survey:
LINK
Aloha Staff and Faculty,
Thank you for taking time to participate in this survey focused on assessing and articulating
impact at the Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS or also
referred to as the Center). Your participation is voluntary and appreciated. Your responses for
this on-line survey will not have any identifiable data and your responses will be anonymous.
1. Please indicate your role at the DKI APCSS:
Faculty _______ Staff ________
2. To what extent do you understand your role in assessing and articulating the impact of DKI
APCSS’ efforts to enhance stability in the Asia-Pacific region?
Not at all Very Much
1 2 3 4
3. To what extent do you agree that enhancing stability in the Asia-Pacific region reflects the
goal of impact in DKI APCSS’ mission?
Not at all Very Much
1 2 3 4
4. To what extent do strategy, design and program development relate to creating and assessing
impact at the Center?
Not at all Very Much
1 2 3 4
5. To what extent do you know how to relate strategy, design and program development to
creating and assessing impact at the Center?
Not at all Very Much
1 2 3 4
6. To what extent do you know how to articulate the Center’s impact?
Not at all Very Much
1 2 3 4
7. To what extent do you know how to apply theories, best practices, methods, models, or
rubrics to create a methodology to assess the Center’s impact?
Not at all Very Much
1 2 3 4
8. To what extent are you aware of any inherent biases in the process of assessing the impact of
the Center?
Not at all Very Much
1 2 3 4
9. Select one definition of ‘enhancing stability’ that is most applicable to assess the Center’s
impact:
_______Prevention of or less conflict in the Indo-Asia-Pacific Region
_______More military hardware sales
_______Increased trade agreements
10. To what extent do you value the importance of assessing and articulating the Center’s
impact based on empirical evidence?
Not at all Very Much
1 2 3 4
11. To what extent do you value the effort of assessing and articulating the Center’s impact as
worthy of time and resources?
Not at all Very Much
1 2 3 4
12. To what extent are you confident that you have the ability to create a tailored methodology,
model, or rubric to assess the Center’s impact?
Not at all Very Much
1 2 3 4
13. To what extent do you view resources could be at risk if the Center cannot assess and
articulate its impact?
Not at all Very Much
1 2 3 4
14. To what extent do you have adequate resources to learn how to create a methodology,
model, or rubric to assess and articulate DKI APCSS’s impact?
Not at all Very Much
1 2 3 4
15. To what extent do you have sufficient resources to collect and analyze data over time to
assess impact?
Not at all Very Much
1 2 3 4
16. To what extent do you believe DKI APCSS will request adequate resources to create impact
based on the assessment results?
Not at all Very Much
1 2 3 4
17. To what extent do you view that assessing and articulating the Center’s impact is in
alignment with the Center’s mission of enhancing stability?
Not at all Very Much
1 2 3 4
18. Please describe how DKI APCSS measures its impact beyond outcomes:
______________________________________________________________________________
______________________________________________________________________________
19. From your perspective, what resources are needed to assess and articulate the Center’s
impact?
______________________________________________________________________________
______________________________________________________________________________
APPENDIX B
Individual Interview Protocols
The following interview questions were used to identify and validate assumed knowledge, motivation, and organizational needs for the Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS) staff and faculty. Three interviews based on purposeful sampling sought to obtain rich data to further validate assumed needs. Interviews were conducted in a private conference room at DKI APCSS.
Narrative: Thank you for participating in this interview. As a reminder, this is a voluntary
interview and you are free to stop the interview at any time. Do I have your permission to record
this interview for the purpose of transcribing your responses? Your feedback will remain
anonymous and will only be categorized as Center staff. We are scheduled to take no longer
than 45 minutes. Do you have any questions before we begin?
1. Please describe your role in assessing and articulating the Center’s impact in the Indo-
Asia-Pacific region.
2. Could you describe any known theories, best practices, methods, models, or rubrics to create a methodology to assess the Center’s impact?
3. What is your perspective on the importance of assessing and articulating the Center’s
impact?
4. Can you describe whether one would view the time and resources required to assess and articulate the Center’s impact as worthwhile?
5. What consequences, if any, would one identify if the Center does not have the ability
to assess and articulate its impact?
Organizational:
6. What resources would help the Center to create a methodology, model, or rubric to
assess and articulate the impact of the Center?
7. What resources would the Center need to collect and analyze data over time (e.g.,
several years)?
8. What structural changes would you recommend to support the Center’s employees
with having the ability to assess its impact?
Narrative (Based on time): Thank you for your time and input. At this time I would like to
conduct a member check and ask you to validate the information you provided.
APPENDIX C
Focus Group Interview Protocols
The following interview questions were used to identify and validate assumed knowledge, motivation, and organizational needs for the Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS) staff and faculty. There were two focus group interviews based on purposeful sampling that sought to obtain rich data to further validate assumed needs. The first group consisted of peer supervisors and the second group consisted of non-supervisory peers. Interviews were conducted in a private conference room at DKI APCSS with four selected staff and faculty members in each group for a total of eight participants.
Narrative: Thank you for participating in this interview. As a reminder, this is a voluntary
interview and you are free to stop the interview at any time. Do I have your permission to record
this interview for the purpose of transcribing your responses? Your feedback will remain
anonymous and will only be categorized as either staff or faculty. We are scheduled to take no
longer than 60 minutes. Do you have any questions before we begin?
What are your positions at DKI APCSS (staff or faculty) (Four respondents)? _______________
1. Please describe a definition of ‘enhancing stability’ as part of the Center’s mission statement.
2. Please describe how enhancing stability connects the Center’s mission to its goal of creating impact in the region.
3. Could you describe any known theories, best practices, methods, models, or rubrics to create a methodology to assess the Center’s impact?
4. What is your perspective on the importance of assessing and articulating the Center’s
impact?
5. What consequences, if any, would one identify if the Center does not have the ability
to assess and articulate its impact?
6. What structural and resource changes would you recommend to support the Center’s
employees with having the ability to assess its impact?
7. Can you describe how you foresee leadership using the results of an assessment of the
Center’s impact?
APPENDIX D
Document Analysis Checklist
Document analysis was helpful to triangulate data in validating knowledge, motivation,
and organizational needs to assess and articulate the impact of the Daniel K. Inouye Asia-Pacific
Center for Security Studies (DKI APCSS). The researcher acknowledges that there may be
limitations to the document analysis such as a delay in publication of updated or current
documents. The following checklist guided the review and analysis of specific documents.
Document Checklist:
1. What is the type of document (e.g., Memorandum of Instruction, Working Presentation)?
2. What is the date of the document?
3. Which organization was responsible for authoring the document?
4. Who was the target audience for the document?
5. What was the purpose of the document?
6. What are the main topics of the document?
7. To what extent does the document refer to any of the stakeholder needs for assessing
impact?
8. To what extent does the document refer to any need to link strategy development to
assessing impact?
9. To what extent do indicators suggest DKI APCSS needs to create a methodology to assess the impact of DKI APCSS?
10. What is the most significant message in the document?
11. Are there any critical gaps in the document?
APPENDIX E
Observation Checklist
Observation of stakeholders was beneficial in the triangulation of data to validate
knowledge, motivation, and organizational needs to assess and articulate the impact of the Daniel
K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS). The researcher acknowledges
that there may be limitations to the observation process as interactions that took place may be
unique and not representative of on-going interactions. The following checklist guided the
conduct and analysis of observations as a part of this study.
Observation Checklist:
1. What is the type of activity (e.g., planning meeting, after action review)?
2. What is the date of the activity?
3. Which office was responsible for organizing the activity?
4. Who were the participants, by role, in the activity?
5. What was the purpose of the activity?
6. What are the main topics of the activity?
7. To what extent did the activity refer to any of the stakeholder needs for assessing impact?
8. To what extent did the activity refer to any need to link strategy development to assessing
impact?
9. To what extent did the activity suggest DKI APCSS needs to create a methodology to assess the impact of DKI APCSS?
10. What were the most significant discussions in the activity as they relate to assessing impact?
11. Were there any critical gaps in the discussions as they relate to assessing impact?
Abstract
The Daniel K. Inouye Asia-Pacific Center for Security Studies (DKI APCSS, or the Center) is a Department of Defense (DoD) regional center that aimed to assess and articulate its impact. This innovation study focused on the knowledge and skill, motivation, and organizational needs of the Center’s staff and faculty to meet the performance goal of assessing and articulating the Center’s impact. As part of its mission, DKI APCSS builds partner capacity, shared understanding, and networks to enhance stability in the Indo-Asia-Pacific region. The Center educates, connects, and empowers international security practitioners through courses, workshops, and dialogues. Given the Center’s role in U.S. security cooperation efforts, the literature review focused on conflict prevention, the assessment of soft power impacts, social capital, and assessment approaches and methodologies. Applying an adapted version of Clark and Estes’ (2008) gap analysis model, the researcher validated 12 needs through a mixed-methods approach and suggested four solutions. Based on the Chapter Two literature review and the findings in Chapter Four, recommendations included developing and communicating a Center strategy, conducting stakeholder training, and developing and implementing an innovative approach to assess and articulate the Center’s impact. The study concludes with an evaluation plan for the impact assessment process so that the Center can improve the process over time and perhaps share best practices with the four other DoD regional centers.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
The principal, the achievement gap and Children's Literacy Initiative: a promising practice
Welcoming and retaining expatriate teachers in an international school
Making a case for teaching religious literacy in Ethiopian schools: an innovation study
Critical thinking, global mindedness, and curriculum in a Saudi Arabian secondary school
Systemic multilayered assessment of global awareness in undergraduate students: an innovation study
Creativity and innovation in undergraduate education: an innovation study
Fundraising for construction for the Bangalore Conservatory: an innovation study
Preparing students for the 21st century labor market through liberal arts education at a Chinese joint venture university
Establishing domestic science, technology, engineering, and mathematics (STEM) programs in the global market: an innovation study
Efficacy of non-formal education programs in educational outcomes of marginalized Filipino children: an evaluation study
Chinese students’ preparedness for university studies in the United States
A gap analysis on improving teacher retention in kindergarten: a case of a private kindergarten in Hong Kong
Asia-Pacific employer perspectives of MBA graduates and the impact on MBA hiring
Leading the country in TVET: Don Bosco Technical Vocational Education and Training Center
Sustained mentoring of early childhood education teachers: an innovation study
The impact of Connecticut legislators on teacher diversity
Foreign-language teachers' needs to achieve better results: the role of differentiated instruction
Learner-centered teaching in Uganda: an analysis of continuing educational needs
Exploring mindfulness with employee engagement: an innovation study
Closing the “college aspirations - enrollment gap” in America’s urban public high schools: an innovation study
Asset Metadata
Creator
Gayagas, Christine M. (author)
Core Title
Assessing and articulating the impact of the Daniel K. Inouye Asia-Pacific Center for Security Studies: an innovation study
School
Rossier School of Education
Degree
Doctor of Education
Degree Program
Global Executive
Publication Date
08/09/2018
Defense Date
08/07/2018
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
assessing impact,DKI APCSS,executive education,international fellows,OAI-PMH Harvest,regional center
Format
application/pdf (imt)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Krop, Cathy (committee chair), Datta, Monique (committee member), Picus, Larry (committee member)
Creator Email
cgayagas@hawaii.rr.com,Crissy@GayagasEnterprises.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c89-64493
Unique identifier
UC11671831
Identifier
etd-GayagasChr-6704.pdf (filename),usctheses-c89-64493 (legacy record id)
Legacy Identifier
etd-GayagasChr-6704.pdf
Dmrecord
64493
Document Type
Dissertation
Rights
Gayagas, Christine M.
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA