Sensemaking in Risk Assessments: An Innovation Study
Eric Saylors
Rossier School of Education
University of Southern California
A dissertation submitted to the faculty
in partial fulfillment of the requirements for the degree of
Doctor of Education
August 2023
© Copyright by Eric Saylors 2023
All Rights Reserved
The Committee for Eric Saylors certifies the approval of this Dissertation
Kenneth Yates
Marcus Pritchard
Dr. Adrian Donato, Committee Chair
Rossier School of Education
University of Southern California
2023
Abstract
The purpose of the study is to measure the gaps in knowledge, motivation, and
organizational support of homeland security professionals when assessing risk. Graduates of a
master's program in homeland security serve as the participants of focus. The study uses a mixed-methods, sequential explanatory design consisting of quantitative surveys, qualitative interviews,
and document reviews. The quantitative survey includes questions to assess the levels and
degrees of knowledge, motivation, and organizational support of risk assessments. The results of
the study indicate gaps in knowledge and organizational support balanced with a solid
motivation to succeed. The survey received over 400 participants, giving it a 98% confidence level.
The qualitative, semi-structured interviews consisted of ten participants selected from the survey
to help explain the results of the survey and provide insight on ways to close gaps. The findings
from the interviews indicated consistent patterns in the gaps discovered in the survey and
possible solutions to close the gaps. Significant gaps in conceptual knowledge and organizational
support, including cultural setting, indicate a need for additional training and education.
Recommendations map a graduate-level curriculum based on the gaps found in knowledge and
organizational support. The proposal includes an implementation and evaluation plan based on a
four-step process, including desired results, critical behaviors, levels of learning, and initial
reactions.
Table of Contents
Abstract .......................................................................................................................................... iv
List of Tables ............................................................................................................................... viii
List of Figures ..................................................................................................................................x
Chapter One: Introduction ...............................................................................................................1
Organizational Context and Mission ...................................................................................1
Organizational Performance Status......................................................................................2
Importance of Addressing the Problem ...............................................................................5
Organizational Performance SMART Goal .........................................................................5
Description of Stakeholder Groups ......................................................................................6
Stakeholder Group of Focus and Performance Goal for the Study .....................................7
Purpose of the Project and Questions ..................................................................................8
Overview of the Conceptual and Methodological Framework ............................................8
Definitions............................................................................................................................9
Organization of the Project ................................................................................................10
Chapter Two: Review of the Literature .........................................................................................11
Influences on the Problem of Practice ...............................................................................11
History of the Problem .......................................................................................................12
Attempts to Fix the Problem ..............................................................................................22
Graduate Programs and Risk Management........................................................................23
The Role of IHSR Alumni .................................................................................................25
Stakeholder Knowledge, Motivation, and Organizational Influences ...............................27
Summary ............................................................................................................................48
Chapter Three: Methods ................................................................................................................49
Conceptual and Methodological Framework .....................................................................49
Overview of Design ...........................................................................................................51
Participating Stakeholders .................................................................................................53
Data Collection and Instrumentation .................................................................................55
Data Analysis .....................................................................................................................57
Credibility and Trustworthiness .........................................................................................58
Ethics..................................................................................................................................62
Limitations and Delimitations............................................................................................63
Chapter Four: Results and Findings ...............................................................................................65
Participating Stakeholders .................................................................................................65
Determination of Assets and Needs ...................................................................................69
Results and Findings for Knowledge Causes.....................................................................69
Results and Findings for Motivation Causes .....................................................................95
Results and Findings for Organizational Causes .............................................................112
Summary of Validated Influences ...................................................................................128
Chapter Five: Conclusions ...........................................................................................................131
Organizational Performance Goal....................................................................................132
Description of Stakeholder Groups ..................................................................................132
Goal of the Stakeholder Group for the Study ..................................................................133
Purpose of the Project and Questions ..............................................................................133
Introduction and Overview ..............................................................................................134
Recommendations for Practice to Address KMO Influences ..........................................134
Integrated Implementation and Evaluation Plan ..............................................................143
Strengths and Weaknesses of the Approach ....................................................................162
Limitations and Delimitations..........................................................................................164
Future Research ...............................................................................................................165
Conclusion .......................................................................................................................165
References ....................................................................................................................................168
Appendix A: Sample Survey Items Measuring Kirkpatrick Levels 1 and 2................................180
Appendix B: Sample Blended Evaluation Items Measuring Kirkpatrick Levels 1–4. ................182
Open-Ended Questions for Revisiting Level 1 and Level 2 ............................................182
Five-Point Scale Questions for Evaluating Level 3 Critical Behaviors ...........................182
Level 4 Indicators and Results Sample Metrics. ..............................................................183
Appendix C: Outline of the Information Displayed on the Dashboard .......................................185
List of Tables
Table 1: Organizational Mission, Goal, and Stakeholder Goal 7
Table 2: Summary of Assumed Knowledge Influences on Stakeholder’s Ability to Achieve
the Performance Goal 36
Table 3: Summary of Assumed Motivation Influences on Stakeholder’s Ability to Achieve
the Performance Goal 41
Table 4: Summary of Assumed Organizational Influences on Stakeholder’s Ability to
Achieve the Performance Goal 47
Table 5: Data Sources 52
Table 6: Survey Respondents’ Level of Government 67
Table 7: Survey Respondents’ Domain 67
Table 8: Survey Respondents’ Level of Education 68
Table 9: Interviewees’ Level of Government 68
Table 10: Survey Results of The Definition of Risk 71
Table 11: Survey Result from the Common Components of a Risk Assessment 75
Table 12: Survey Result of Discerning Connected Events for Independent Events 79
Table 13: Survey Result of Discerning Connected Events for Independent Events 80
Table 14: Metacognitive knowledge 93
Table 15: Summary of Stakeholders’ Value of Expert Judgements 101
Table 16: Summary Statistics of Motivational Attribution Survey Results 109
Table 17: Summary Statistics of Cultural Models Survey Results 118
Table 18: Summary Statistics of Cultural Settings Survey Results 126
Table 19: Knowledge Assets or Needs As Determined by the Data 129
Table 20: Motivation Assets or Needs As Determined by the Data 129
Table 21: Organizational Assets or Needs As Determined by the Data 130
Table 22: Summary of Knowledge Influences and Recommendations 136
Table 23: Summary of Motivation Influences and Recommendations 139
Table 24: Recommendations for the Influence Based on Theoretical Principles 141
Table 25: Outcomes, Metric, and Methods for External and Internal Outcomes 146
Table 26: Critical Behaviors, Metrics, Methods, and Timing for Evaluation of Students and
Alums of the IHSR 149
Table 27: Required Drivers to Support Critical Behaviors 150
Table 28: Evaluation of the Components of Learning for the Program 156
Table 29: Components to Measure Reactions to the Program 157
Table C1: Dashboard Information 185
List of Figures
Figure 1: Gap Analytical Framework 50
Figure 2: Survey Participants Actively Engaged in Risk Assessments 66
Figure 3: Survey Results of the Definition of Risk 72
Figure 4: Components of a Risk Assessment Survey Results 76
Figure 5: Survey Result: Statistics is a Useful Predictive Tool to Better Understand the
Future 84
Figure 6: Survey Results: Probability is a Useful Explanatory Tool 85
Figure 7: Metacognitive Knowledge 88
Figure 8: Metacognitive Knowledge 89
Figure 9: Metacognitive Knowledge 90
Figure 10: Metacognitive Knowledge 91
Figure 11: Metacognitive Knowledge 92
Figure 12: Summary Statistics of Metacognitive Knowledge 94
Figure 13: Motivation Value 97
Figure 14: Motivation Value 98
Figure 15: Motivation Value 99
Figure 16: Motivation Value 100
Figure 17: Summary of Stakeholders’ Value of Expert Judgements 101
Figure 18: Motivation Attribution 105
Figure 19: Motivation Attribution 106
Figure 20: Motivation Attribution 107
Figure 21: Motivation Attribution 108
Figure 22: Summary Statistics of Motivational Attribution Survey Results 109
Figure 23: Cultural Modeling 113
Figure 24: Cultural Modeling 114
Figure 25: Cultural Modeling 115
Figure 26: Cultural Modeling 116
Figure 27: Cultural Modeling 117
Figure 28: Summary Statistics of Cultural Models Survey Results 119
Figure 29: Cultural Settings 122
Figure 30: Cultural Settings 123
Figure 31: Cultural Settings 124
Figure 32: Cultural Settings 125
Figure 33: Summary Statistics of Cultural Settings Survey Results 127
Figure 34: Results of the Pilot Risk Curriculum 160
Chapter One: Introduction
Over the last 200 years, the study of risk has become a foundational subject in education across the world (Hubbard, 2020; Lewis, 2019). Universities and government institutions seek to define,
frame, quantify, analyze, and manage risk through various methods. In the context of homeland
security, risk is typically defined as the probability of negative consequences (Hopkin, 2018). In
the context of risk, systems theory provides reliable guidance for educational institutions to
prepare practitioners, educators, and students to better define, discern, and manage risk in a
tightly coupled, modern world (Haimes, 2009). A risk assessment is a systematic approach to
characterizing the nature and magnitude of a negative event (Skoko, 2013). A risk assessment is
the first necessary step prior to implementing risk management actions. Risk management is defined as the implementation of controls to limit the negative consequences of an event. The Institution for Homeland Security and Risk (IHSR)¹ is a leading foundation for educating homeland security
professionals. The IHSR is critical in preparing government leaders to assess and manage risk
(Bellavita, 2006; Kessler & Ramsay, 2013; Kiltz, 2011). However, IHSR has no curriculum
dedicated to risk assessment or management. When alumni graduate from the master’s program
at IHSR, they lack a fundamental tool to meet the IHSR's mission.
¹ Information derived from organizational websites and documents not cited to protect anonymity.
Organizational Context and Mission
The mission of IHSR is to decrease risk and increase security by providing graduate-level
education to homeland security professionals. A graduate school located in the United States
since 2003, IHSR offers numerous programs focused on assisting leaders in the homeland
security domains to develop policies, strategies, programs, and organizational elements to
prepare for and respond to public safety threats across the nation. The IHSR now offers a
master’s program (MA) and leadership programs. Graduates of the programs are military
members, first responders, and Department of Homeland Security (DHS) professionals (U.S.
Coast Guard, U.S. Customs and Border Protection, U.S. Citizenship and Immigration Services,
the Cybersecurity and Infrastructure Security Agency, the Federal Emergency Management
Agency (FEMA), the Federal Law Enforcement Training Center, United States Immigration and
Customs Enforcement, the United States Secret Service, the Transportation Security
Administration, the Science and Technology Directorate, the Office of Intelligence and Analysis,
the Office of Operations Coordination, the DHS Countering Weapons of Mass Destruction
Office, and the Management Directorate). As of 2021, the alumni consist of roughly 1430
graduates in leadership positions.
As a practitioner’s institution of higher education, 100% of IHSR members are
committed to managing risk in an increasingly complex and connected world. Managing risk in a
modern world requires bridging gaps between intergovernmental, interagency, and civil-military
organizations. Risk viewed through a systems theory lens helps different agencies understand
how each is affected by, prepares for, and responds to negative events (Lewis, 2019; Montuori,
2011; Yunkaporta, 2020).
Organizational Performance Status
The organizational performance problem at the root of this study is homeland security
professionals’ failure to properly assess risk. The IHSR’s main focus is training and educating
the nation's homeland security professionals. Per IHSR’s agreement with its sponsor, graduates
of its program will better protect the nation from manmade and natural risks. Since 2003, IHSR
has educated roughly 1430 leaders in homeland security, including multiple FEMA directors.
The IHSR’s master’s program offers a top-ranked education and is free to government-employed
students, attracting the nation’s top leaders. However, IHSR does not have standards and
benchmarks for teaching risk based on knowledge, motivation, and organizational gaps in the
field. Failure to establish standards or benchmarks based on empirical results can result in a loss
of funds from its sponsor and a failure of government institutions to protect their citizens.
Related Literature
Numerous studies illuminate the challenges of assessing risk. Risk assessment typically
consists of expert elicitation and ranking/scaling tools. Risk judged by expert elicitation without
calibrated experts has a high failure rate (Colson & Cooke, 2020; Cooke & Goossens, 2004).
Leaders frequently misapply classical risk models based on scoring methods and ordinal scales,
producing poor results (Hubbard & Evans, 2010). When estimating risks using scoring methods
and ordinal scales, participants consistently underestimated risks by 55% (Slovic, 2020). In
addition, evidence suggests when people assess risk, they are overconfident in the accuracy of
their responses (Fischhoff et al., 1977; Fischhoff, 2021; Slovic, 2020). Experiments dating back
to the 1970s consistently show participants are extremely overconfident in their risk assessment
(Fischhoff et al., 1977; Fischhoff, 2021). Literature suggests the varying definitions of risk and
subsequent strategies for assessment are an influence on the problem.
Definitions of risk range from a measure of uncertainty (positive gain or loss) to a
probability of a loss (Hubbard, 2020). Varying definitions of risk propagate four different
schools of thought around risk: actuarial, economics, wargaming, and business management. The
various domains of thought influence alumni’s risk measurement and application strategies
(Lewis, 2019).
Measurement of risk falls into two general categories: objective and subjective (Hubbard,
2014). Objective risk is a numerical result from quantitative tools such as statistics, probability,
and the risk expectancy theory (Hubbard, 2014; Lewis, 2019). Students of business management
and actuarial studies use the tools of objective risk to assess and analyze risk. Lewis (2019)
suggested the strategies of objective risk include expected utility theory, Monte Carlo
simulations, frequency probability, and value at risk. Recent studies in complexity science and
catastrophe theory criticize the limitations of objective risk strategies (Cirillo & Taleb, 2020;
Haimes, 2009; Moreto et al., 2014; Taleb, 2007).
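To make these objective tools concrete, the following minimal Python sketch (with invented parameters, not figures from the cited sources) runs a Monte Carlo simulation of annual losses and reports an expected loss and a 95th-percentile loss, a rough stand-in for a value-at-risk figure:

    import random

    random.seed(1)
    trials = 100_000
    p_event = 0.05          # assumed annual probability of the negative event
    mean_loss = 2_000_000   # assumed average loss in dollars when the event occurs

    losses = []
    for _ in range(trials):
        if random.random() < p_event:                          # does the event occur this year?
            losses.append(random.expovariate(1 / mean_loss))   # sample a loss size
        else:
            losses.append(0.0)

    losses.sort()
    expected_loss = sum(losses) / trials
    loss_95 = losses[int(0.95 * trials)]    # 95th-percentile annual loss
    print(f"expected annual loss: ${expected_loss:,.0f}")
    print(f"95th-percentile annual loss: ${loss_95:,.0f}")

The same structure extends to frequency distributions fitted from historical data when such data exist.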
Subjective risk is a relative result of qualitative frameworks, such as normal accident
theory, self-organized criticality, Bayesian belief networks, and network theory (Hubbard, 2014;
Lewis, 2019). Students of wargaming and economics use tools from subjective risk to assess and
analyze risk. The strategies of subjective risk include threat, consequence, vulnerability,
probabilistic risk analysis, exceedance probability, and model-based resource allocation (Lewis,
2019). Recent publications criticize the limitations of subjective risk, as the results can prove
inconsistent depending on which experts weigh in on the process (Hubbard, 2014, 2020).
Systems theory offers an overarching framework to categorize types of risk depending on
the state of the system and the timeframe (Haimes, 2009; Kaplan & Garrick, 1981; Lewis, 2014;
Montuori, 2011). Systems theory focuses on the relationships of individual parts of the systems
and includes influences of feedback loops (Meadows, 2008). A reductive approach, such as
objective risk strategies, is valid if a stabilizing feedback loop is present (Hubbard, 2014;
Meadows, 2008). A holistic process such as subjective risk is helpful if there is a runaway
feedback loop (Cirillo & Taleb, 2020; Meadows, 2008; Montuori, 2011). The literature on
relevance realization (Irving & Vervaeke, 2016; Vervaeke et al., 2012) may offer guidance on
how to help leaders use systems theory to discern which category of risk they are assessing and
how to choose the appropriate strategy.
Importance of Addressing the Problem
The problem of homeland security professionals properly assessing risk is important to
solve for a number of reasons. First, on a societal level, risk is inequitable (Beeson, 2020; Moreto
et al., 2014; Wallace & Wallace, 1998). Frequently, the most vulnerable members of society
suffer from poor risk assessment and management. Second, on a national level, the 21st century
is not like the 20th century (Lewis, 2014). In a century laden with terrorism, financial collapse, war, climate change, novel animal and cyber viruses, and evolving sociopolitical change, leaders need a conceptual framework for discerning risk (Haimes, 2009; Hubbard, 2020; Kaplan & Garrick, 1981). And finally, at an organizational level, leaders of federal, state, and
local government require additional training and education in risk assessment and analysis, and
many of these leaders go to the IHSR for their graduate education.
Creating a homeland security program in 2003, IHSR attracts leaders from across the
country to learn about how to secure their jurisdiction. Risk assessment, analysis, and mitigation
are key functions of homeland security, yet the term “risk” itself is hard to define, let alone act
upon (Haimes, 2009; Hubbard, 2020; Kaplan & Garrick, 1981). If the nation’s leaders and
educators at IHSR lack a methodological process to categorize, assess, and analyze risk, then the
nation's organizations and society at large may not survive the 21st century.
Organizational Performance SMART Goal
The goal at IHSR is to reduce risk by preparing leaders to navigate an increasingly
complex world. The IHSR attempts to achieve its goal by promoting the study and application of
risk and security as a discipline, specifically by creating shared educational programs that
include diverse organizational leaders across the nation. The IHSR’s federal sponsor established
the organizational goal after the failures to assess, discern, and mitigate the risks of the 9/11
attacks. The IHSR’s sponsor identifies communication links and shared knowledge between
federal, state, and local leaders, requiring all to assess and reduce risk at a micro- and macro-
level. The IHSR’s goal of navigating a complex world requires viewing the United States as an
interconnected system potentially affected at the macro-level by micro-level events on a global
scale. The IHSR’s goal is novel but lacks published benchmarking or standard metrics. The
general objective of reducing risk is clear; however, the goal lacks clarity in tactics. Exploring
the current state of homeland security professionals’ knowledge, motivation, and organizational
(KMO) gaps will help set standards and benchmarks for IHSR’s future programs. Failure to
study the problem of homeland security professionals’ ability to assess and discern risk
perpetuates a blind spot to risk assessment that permits local events to cascade into national
catastrophes.
Description of Stakeholder Groups
The study consists of three stakeholders required to meet performance goals, including
IHSR’s sponsor, faculty and staff, and alumni. The IHSR’s sponsor is a government agency
under the National Preparedness Directorate (NPD). The IHSR develops all programs in
partnership with the federal government to meet NPD goals. FEMA is the governing stakeholder
of IHSR and its programs.
IHSR faculty and staff accomplish the organization’s mission by developing and
providing meaningful curriculum to meet the sponsor’s and organizational mission. The IHSR’s
alumni are the stakeholders that implement the curricula to ultimately accomplish FEMA's and IHSR's
mission.
Stakeholder Group of Focus and Performance Goal for the Study
While all three stakeholders contribute to the organization's mission and goal, the alumni represent the group most responsible for implementing the mission and
closing performance gaps in the field. Therefore, IHSR alumni are the focus of this study. The
stakeholders’ goal, supported by the IHSR, is that by May of 2024, 100% of IHSR alumni will
accurately assess the probability and consequences of events (risk). The organization’s sponsor
and director established and approved this new stakeholder SMART goal in the 2024 curriculum
review meeting. A failure to effectively discern risk based on a methodological process
established for alumni of IHSR creates inconsistencies and gaps in assessing, analyzing, and
responding to risk, leaving the nation less secure. The performance gap is 100%. This is an
innovation study. Table 1 outlines the organization’s mission and goals.
Table 1
Organizational Mission, Goal, and Stakeholder Goal
Organizational mission
The mission of IHSR is to strengthen the national security of the United States by providing
graduate-level educational programs and services that meet the immediate and long-term
leadership needs of organizations responsible for homeland defense and security.
Organizational SMART performance goal
By May of 2024, IHSR will utilize systems theory to discern various types of risk related to
homeland security.
IHSR alumni staff SMART goal
By May of 2024, 100% of IHSR master’s degree alumni will assess the probability and
consequences of events (risk) in alignment with systems theory. The gap in performance is
100%.
Purpose of the Project and Questions
The purpose of this innovation study was to conduct a needs analysis in the areas of
KMO resources necessary for IHSR master’s degree students to achieve their stakeholder goal of
assessing the probability and consequences of events (risk) in alignment with systems theory by
May 2024. The analysis generates a list of possible needs for IHSR master’s degree students to
accomplish their goal and then systematically examines the needs to ascertain which are actual or
validated. While a complete needs analysis would focus on all stakeholders, for practical
purposes, the stakeholders of focus in this analysis are IHSR master's degree students. Two
research questions guided this study:
1. What are IHSR master’s degree alumni’s knowledge, motivation, and organization
needs related to assessing the probability and consequences of events (risk)?
2. What are the knowledge, motivation, and organizational recommendations for
improving IHSR master’s degree student abilities to assess the probability and
consequences of events (risk)?
Overview of the Conceptual and Methodological Framework
Clark and Estes’s (2008) gap analysis, a systematic, analytical method that helps to
clarify organizational goals and identify the KMO influences, is adapted to an exploratory model
and implemented as the conceptual framework. Context-specific research, as well as general
learning and motivation theory, generates the assumed influences of KMO barriers that impact
IHSR alumni's performance in the area of assessing risk. The
methodological framework is a mixed-methods case study consisting of a survey and individual
interviews.
Definitions
The following definitions provide clarity for their use throughout this study.
● Consequences refer to the negative impact on a system from a threat (Meadows, 2008)
and are also described as a function of the time of the event, the state, the vulnerability,
and the resilience of the system (Haimes, 2009).
● Feedback loop is the process by which systems self-correct based on reactions from other
systems in the environment (Meadows, 2008). Feedback loops represent both stabilizing
loops and runaway loops.
● Resilience is the ability of a system to recover from a major disruption within an
acceptable time and cost (Haimes, 2009).
● Risk is the measure of the probability and severity of consequences (Kaplan & Garrick,
1981).
● Risk analysis is the prediction of risk based on the state of the system, the likelihood of
the threat, and the consequences (Kaplan & Garrick, 1981).
● Risk assessment is a systematic exercise to discover potential future risk-based events
(Haimes, 2009).
● State of a system refers to a system’s vulnerability and resilience at a moment in time
(Haimes, 2009).
● System is a group of interacting, interdependent parts that form a complex whole
(Montuori, 2011). A system consists of three things: elements, interconnectedness, and a
purpose (Meadows, 2008).
● Systems theory refers to a coherent set of basic concepts and axioms that assists with
understanding the parts of a system in the context of the whole by looking at
relationships, synergies, feedback loops, and emergence (Meadows, 2008).
● Threat is an initiating event (Haimes, 2009).
● Vulnerability is the inherent state of the system open to an exploit that can harm the
system (Haimes, 2009).
Organization of the Project
This study is organized into five chapters. This chapter provided the key concepts and terminology commonly found in a discussion about risk assessment in homeland security. In addition, this chapter introduced the organization's mission, goals, and stakeholders, as well as the initial
concepts of gap analysis. Chapter Two provides a review of current literature surrounding the
scope of the study as well as details the assumed influences. Chapter Three describes the
methodology when it comes to the choice of participants, data collection, and analysis. Chapter
Four analyzes and assesses the data and results. Chapter Five provides solutions, based on data
and literature, for closing the gaps as well as the formulation of an integrated implementation and
evaluation plan for the solutions.
Chapter Two: Review of the Literature
Risk assessment is foundational to homeland security. However, universities and
government institutions struggle with defining, framing, and teaching risk assessments in our
current environment. Failure to assess and discern different types of risk leaves society
vulnerable to collapse (Hubbard, 2020; Lewis, 2019b). This chapter first reviews the history of
the problem, attempts to fix the problem, and challenges to human-based risk assessments. Next,
the chapter reviews the role of the alumni of IHSR, followed by the explanation of the KMO
influences lens used in this study. Finally, the chapter focuses on IHSR alumni’s KMO
influences and presents the conceptual framework.
Influences on the Problem of Practice
Risk assessment depends on a mixture of practitioners at multiple levels in the
government (Bellavita, 2019). These practitioners are collectively known as homeland security
professionals and occupy positions at the local, state, federal, and tribal levels of government.
Homeland security professionals include a diverse group of people, ranging from firefighters,
U.S. Coast Guard members, and police officers to U.S. Secret Service agents (Bellavita, 2006). The
IHSR educates homeland security leaders. Influential stakeholders of risk assessments include
the educators at the IHSR, the board members who approve the curriculum, students attending
classes, and the alumni who shape the domain of homeland security (Comiskey, 2018). The
alumni tend to occupy leadership roles in the various government institutions and have the greatest
influence on risk assessments in the homeland security enterprise (Pelfrey & Kelley, 2013).
Alumni of the IHSR are the focus of this study. The factual, procedural, conceptual, and
metacognitive knowledge passed on at the IHSR greatly impacts the alumni. In addition, the
interpersonal motivational factors of attribution and task value beliefs impact professionals’ risk
assessments. Finally, cultural models and settings shape the behavior of homeland security
professionals.
The research suggests factors such as catastrophe theory, normal accident theory, self-
organized criticality, and systems theory impact the lenses of risk assessment (Lewis, 2019).
Catastrophe theory addresses cascading failures of components that spread across systems
(Lewis, 2019). Normal accident theory describes how one or two normal events mix together in
unexpected ways, creating catastrophic results (Lewis, 2019; Perrow, 1999). Self-organized
criticality holds that some catastrophic events are inevitable and unpredictable (Bak et al., 1988;
Lewis, 2014). And finally, systems theory explores how individual parts connect, creating
systems of feedback loops (Arnold & Wade, 2015; Lewis, 2019; Meadows, 2008; Montuori,
2011; Perrow, 1999).
History of the Problem
The history of the problem of assessing risk starts with the basic definition of risk
(Hubbard, 2020). Risk is variously defined as (a) a single dimension, variance; (b) two dimensions, expectation and loss; or (c) three dimensions, a combination of threat, consequences, and vulnerability. To compound the problem, the multiple dimensions of risk include a mixture of
objective and subjective metrics (Colson & Cooke, 2020; Hubbard, 2014). Finally, systems
theory takes a holistic approach to risk and focuses on the relationship of events as opposed to
the dimensional metrics (Arnold & Wade, 2015; Meadows, 2008).
The Definition of Risk
The Oxford English Dictionary defines risk as a chance or possibility of danger, loss,
injury, or other adverse consequences (Hopkin, 2018). However, the literature suggests the
foundational definition of risk is a two-bodied problem. Frank Knight defined risk as a measure
of uncertainty in his seminal text Risk, Uncertainty and Profit (Hubbard, 2020; Knight, 1921).
Considered a classic text among economists, Frank Knight’s definition is in stark contrast with
John Maynard Keynes’s definition of risk as the probability of a loss or sacrifice (Hubbard,
2020; Keynes, 1909). These two foundational definitions create confusion among professionals
trying to assess risk (Hopkin, 2018). One definition quantifies risk as variance, while the other
quantifies risk as a function of probability and negative consequences. To compound the
problem, Knight’s definition of risk suggests there can be an upside of risk equal to the
downside. Bellavita (2019) pointed out that the background of homeland security professionals
varies greatly, creating radical subjectivity concerning the definition of risk.
Risk as One Dimension: Variance
Variance as a measure of risk influences business, finance, and project management
(Arici et al., 2018; Fabozzi et al., 2002). Harry Markowitz won the Nobel Prize in Economics for
modern portfolio theory (MPT), using variance to measure risk (Fabozzi et al., 2002).
Capitalizing on MPT and Knight's definition, the international guide to risk-related definitions (ISO
Guide 73) defines risk as the effect of uncertainty (Tranchard, 2018). In addition, the Institute of
Internal Auditors defines risk as the uncertainty of an event occurring (Čular et al., 2020).
However, risk as a measure of variance assumes a certain level of knowledge about events
derived from historical data that may not be realistic in everyday life (Hopkin, 2018). Assessing
risk requires more than calculating the standard deviation of sets of data in the physical world
when some events are novel (Lewis, 2019). Hubbard (2020) contended a one-dimensional view
of risk as uncertainty is too limiting to homeland security professionals tasked with increasing
safety and security by limiting the impact of negative events.
Risk as Two Dimensions: Vector Quantity
The alternative definition of risk is an expected loss (Hubbard, 2020). The critical aspects
of the expected loss expand beyond Knight’s single dimension and express risk as a function of
two parts: expectations and loss. Loss as consequences, effects, implications, or injury is
quantified in dollars or human lives (Aven et al., 2018; Haimes, 2009; Hopkin, 2018; Hubbard,
2014, 2020; Lewis, 2019). Loss is a negative event, departing from the former definition that
allows for a positive outcome of equal magnitude. The dimension of expectations falls into two
conceptual measurements: objective and subjective (Haimes, 2009; Hubbard, 2014; Kaplan &
Garrick, 1981).
The Shape of Risk: Objective Metrics
A survey performed in 2008 suggests 11% of all risk assessments use objective estimates
(Hubbard, 2020). Objective expressions of expectations are a probabilistic metric (Kaplan &
Garrick, 1981). Probability is the result of an equation producing a product ranging from zero to
one, where zero indicates the event is impossible and one indicates a guaranteed event (Lewis,
2019). Three standard equations produce objective probabilities: Pascal’s triangle, Laplace’s a
posteriori probability, and Bayesian belief networks (DHS, 2014; Lewis, 2019). Each calculation
of probability typically requires software and a useful data set. The limitation of calculating
probability is based on the user’s limited understanding of the math and software to produce
useful results (Hanea et al., 2021). Cooke (2020) claimed that the scarce availability of useful data sets on real-world events, coupled with the requirement for advanced math, necessitates a subjective expression of expectations (Colson & Cooke, 2020).
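As an illustration of two of these calculations, the short Python sketch below (the incident counts and likelihoods are invented for illustration) computes a Laplace-style a posteriori probability from historical counts and then applies a single Bayesian update when a new indicator is observed:

    # Laplace's rule of succession: P(event) ≈ (k + 1) / (n + 2),
    # where k events were observed in n comparable periods.
    k, n = 3, 20                      # assumed: 3 incidents in 20 years of records
    laplace_estimate = (k + 1) / (n + 2)

    # Minimal Bayesian update: revise the estimate after an indicator
    # (e.g., elevated threat reporting) is observed.
    prior = laplace_estimate
    p_indicator_given_event = 0.80    # assumed likelihoods
    p_indicator_given_no_event = 0.10
    evidence = (p_indicator_given_event * prior
                + p_indicator_given_no_event * (1 - prior))
    posterior = p_indicator_given_event * prior / evidence

    print(f"Laplace estimate: {laplace_estimate:.3f}")
    print(f"posterior after the indicator is observed: {posterior:.3f}")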
The Shape of Risk: Subjective Metrics
The same survey from 2008 suggests 89% of all risk assessments use subjective estimates
(Hubbard, 2020). Subjective expressions of expectations are a judgment metric (Hanea et al.,
2021). Judgement is a qualitative assessment performed by experts used to predict the
plausibility of negative events occurring when empirical data are unavailable. Roger Cooke
developed the classical model for aggregating expert judgment in 1985 (French et al., 2011). The
classical model attempted to solve the problems of ill-informed experts or experts applying the
wrong models or frameworks to potential scenarios (Colson & Cooke, 2020). The classical
model evolved into a best practice of using panels of experts, structured elicitation protocols, and
aggregation methods. Experts require calibration prior to estimating the likelihood of events
(Hubbard, 2020). Calibration is a process to debias experts and help them form an intuition for
understanding the probabilistic models they must replace (Colson & Cooke, 2020; Hubbard,
2020; Wiper et al., 1994). Lack of calibration results in overconfidence in the experts’ input and
resistance to adjust judgments when conditions change (Arbona et al., 2019; Fischhoff et al.,
1977).
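A minimal sketch of what checking calibration can look like in practice (the interval data below are invented): an expert who is well calibrated at 90% confidence should capture the realized value in roughly nine out of ten of their stated intervals, and a much lower hit rate signals the overconfidence described above.

    # Each tuple: (expert's 90% lower bound, upper bound, realized value).
    judgments = [
        (10, 50, 42),
        (0, 5, 7),
        (100, 300, 180),
        (20, 40, 55),
        (1, 2, 1.5),
    ]

    hits = sum(1 for low, high, actual in judgments if low <= actual <= high)
    hit_rate = hits / len(judgments)
    print(f"stated confidence: 90%, observed hit rate: {hit_rate:.0%}")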
Risk as Three Dimensions: Set of Triplets
In 2008, DHS published the first Risk Lexicon (DHS Risk Steering Committee, 2008;
Lewis, 2019), defining risk as a set of triple variables: threat, consequences, and vulnerability
(TVC). Since 2008, DHS has published two updates. The lexicon defines threat as a natural or
manmade occurrence, individual, entity, or action that has or indicates the potential to harm life,
information, operations, the environment, or property (DHS Risk Steering Committee, 2008).
Threat is the likelihood of an attack based on expert judgment (DHS, 2014). Consequence is the
effect of an event measured in human, economic, mission, or psychological impact (DHS, 2014).
And finally, vulnerability is a qualitative or quantitative expression of the level to which an
entity, asset, system, network, or geographic area is susceptible to harm (DHS, 2014).
Vulnerability is the likelihood that an attack is successful (DHS, 2014). The DHS Risk Lexicon
provides a framework of risk to help organize what can go wrong, how likely it is, and what the
impacts are (Kaplan & Garrick, 1981; Lewis, 2019). As a subjective framework, the lexicon
offers simplicity but receives criticism for its inability to produce useful results (Hanea et al.,
2021; Hubbard, 2014, 2020). Rozell (2015) cautioned against using subjective frameworks in
homeland security due to inconsistencies and suggested using quantitative risk models coupled
with expert review, while Jenkin (2006) argued the subjective perception of risk is more relevant
to assessing risk in the public.
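A hedged sketch of how the TVC framing is commonly operationalized (the scenario numbers are invented, and actual DHS methods are more elaborate): each scenario's risk is scored as the product of threat likelihood, vulnerability (the probability the attack succeeds), and consequence.

    scenarios = {
        # name: (threat likelihood, vulnerability, consequence in $ millions)
        "port cyber intrusion":  (0.30, 0.60, 250),
        "chemical plant attack": (0.05, 0.40, 900),
        "transit bombing":       (0.10, 0.20, 400),
    }

    def risk_score(tvc):
        threat, vulnerability, consequence = tvc
        return threat * vulnerability * consequence   # expected loss in $ millions

    for name, tvc in sorted(scenarios.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
        print(f"{name}: risk = {risk_score(tvc):.1f}")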
The multiple conceptual and practical definitions of risks in the literature present
challenges for homeland security professionals (Hubbard, 2020). Depending on the practitioner's
background, they may view risk as a measure of variability or uncertainty (Hubbard, 2020), or
the practitioner may view risk as a mix of probability coupled with consequences (Hopkin,
2018). However, the multiple procedures to objectively calculate probability and consequences
based on reliable data sets can leave a user overwhelmed (Hanea et al., 2021). Finally, the
subjective approach may be the only alternative when facing novel events, but standards for
expert judgment in risk assessment require multiple protocols, calibrations, and
interventions to produce meaningful results (Colson & Cooke, 2020). Stakeholders need to know
the definition of risk, the components of a risk assessment, and the concepts related to statistics
and probability.
Risk as Systems Thinking: Connected Events and Catastrophe Theories
As opposed to reducing events into fixed dimensions and components, systems theory
focuses on the relationships of components in a system (Arnold & Wade, 2015). Meadows
(2008) categorizes the relationships of systems into positive feedback loops or stabilizing
feedback loops. The interconnectivity of components determines the potential of feedback loops.
Events that impact or change connected components are connected events (Meadows, 2008). Connected events have the potential to self-replicate, run across networks, and cascade (Lewis,
2019). Connected events can produce outcomes beyond the sum of their components, occupying
the domain of catastrophe theories.
In 1979, Charles Perrow published normal accident theory (NAT) to explain how two
systems connected to one another in unexpected ways can accelerate extreme events (Lewis,
2019; Weick, 2004). Lewis (2019) categorizes NAT into a class of catastrophe theories, or
events that follow a non-linear response. Normal accident theory is just one of many theories of
catastrophes following a power law (Lewis, 2019). A non-linear response is a power law
(Hubbard, 2020). Power laws result from component relationships within systems (Lewis, 2014;
Taleb, 2007). Network theory quantifies the degree of coupling between components as the
average number of links connecting each node (Lewis, 2019). Whereas traditional risk
assessments are based on analyzing one component at a time, a systems approach includes
connected components as well (Arnold & Wade, 2015).
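As a small illustration of the coupling measure described above (a toy graph, not data from Lewis), the average number of links per node can be computed directly from a list of connections between infrastructure components:

    # Undirected toy network of interdependent infrastructure components.
    links = [("power", "water"), ("power", "hospital"),
             ("water", "hospital"), ("power", "telecom"),
             ("telecom", "hospital")]

    degree = {}
    for a, b in links:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1

    avg_degree = 2 * len(links) / len(degree)   # each link has two endpoints
    print(degree)
    print(f"average links per node (degree of coupling): {avg_degree:.2f}")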
Hubbard (2020) described NAT as a common failure mode where stress on a system can
increase the chances of all components failing in parallel. Perrow (1999) used the 1979 Three
Mile Island nuclear power meltdown, the Bhopal Gas tragedy, the Chernobyl meltdown, and the
Fukushima Daiichi disaster to demonstrate how risk increases in levels of magnitude when
tightly coupled systems fail under the same stress. Haimes (2009) ties Perrow’s NAT into the
risk definition by expanding the multidimensional definition of risk to include the timeframe of
events. If one negative event can trigger additional negative events in a very tight timeframe,
then the risk assessment must consider the state of the system in addition to other factors.
NAT is the catalyst for the application of complexity theory to risk assessments (Lewis,
2014; Lewis, 2019). Complexity theory categorizes environments as ordered and unordered
(French, 2015). Unordered environments create emergent or novel behavior (French, 2013). The
emergent or novel behavior can produce unexpected results beyond the sum of their components
when triggered, resulting in power laws (French, 2015; Lewis, 2019). Complexity theory further
defined Perrow's NAT and Haimes's (2009) function of the timeframe of events as a combined
state of the system.
Bak et al.’s (1988) self-organized criticality (SOC) adds additional literature on the risk
of complex systems and catastrophe theory. Based on the study of how sand piles collapse
(Lewis, 2014), physicist Per Bak developed a general theory called punctuated equilibrium, which
described how simple components coupled together can form complex systems with the potential
to catastrophically collapse (Lewis, 2014). Punctuated equilibrium occurs when a system is in a state of SOC: the system appears stable under stress with no indications of failure
until it suddenly collapses, producing a punctuated return to equilibrium (Lewis, 2014). Hubbard
(2020) categorizes Bak's punctuated equilibrium as a cascade failure, wherein the failure of one
component starts a chain reaction of failure in series. Lewis (2014) uses SOC as a tool to assess
the chain reactions in biological extinction, financial collapses, electrical grid failures, and
wildfires based on network theory. Network theory provides tools to quantify relationships in
systems suffering from SOC (Lewis, 2011). Lewis uses a specific type of network known as the
Amaral-Meyers (Nunes Amaral & Meyer, 1999) network to assess risk in terms of the degree of
connectivity, percolation, and network density. SOC is an additional phenomenon impacting risk
assessment.
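The sandpile mechanism behind SOC can be sketched in a few lines of Python (a toy one-dimensional model with invented parameters, not the full Bak-Tang-Wiesenfeld formulation): grains are dropped one at a time, any site at or above a threshold topples onto its neighbors, and a system that has looked stable for many steps occasionally produces a large avalanche.

    import random

    random.seed(7)
    n = 30
    sites = [0] * n          # heights of a toy one-dimensional sandpile
    threshold = 2
    avalanche_sizes = []

    for _ in range(2000):
        sites[random.randrange(n)] += 1    # drop one grain at a random site
        toppled = 0
        unstable = [i for i, h in enumerate(sites) if h >= threshold]
        while unstable:
            i = unstable[0]
            sites[i] -= 2                  # topple: shed one grain to each neighbor
            toppled += 1
            if i > 0:
                sites[i - 1] += 1          # one grain to each in-bounds neighbor;
            if i < n - 1:                  # grains pushed past an edge fall off
                sites[i + 1] += 1
            unstable = [j for j, h in enumerate(sites) if h >= threshold]
        avalanche_sizes.append(toppled)

    print("largest avalanche:", max(avalanche_sizes), "topplings")
    print("share of drops causing no avalanche:",
          round(avalanche_sizes.count(0) / len(avalanche_sizes), 2))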
Hubbard (2020) added an additional category to system risk called a positive feedback
loop, defined as a change of one component creating a feedback loop that exacerbates the change
of other components. Meadows (2008) slightly altered Hubbard’s title from positive feedback
loop to a reinforcing or runaway feedback loop. Reinforcing loops occur when a system’s
elements have the ability to reproduce themselves or grow as a constant fraction of themselves
(Meadows, 2008). In a positive feedback loop, A causes B, and B also causes A. The potential
for feedback loops is one additional state of the system to consider when assessing risk through the lens of systems theory.
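A minimal numerical sketch of the two loop types (invented parameters): a reinforcing loop grows by a constant fraction of its own value each step, while a stabilizing loop closes part of the gap back to a goal after a one-time shock.

    reinforcing = 100.0
    stabilizing = 100.0
    growth_fraction = 0.10    # reinforcing loop: grow by 10% of itself each step
    correction = 0.30         # stabilizing loop: close 30% of the gap to the goal
    goal = 100.0

    for step in range(1, 11):
        reinforcing += growth_fraction * reinforcing
        if step == 1:
            stabilizing += 50.0                      # one-time disturbance
        stabilizing += correction * (goal - stabilizing)
        print(step, round(reinforcing, 1), round(stabilizing, 1))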
Mandelbrot (2021) considered the ability of an event to self-replicate through a system as
the ability to scale. Mandelbrot quantified risk through the domain of fractal geometry, creating
systems that can scale and self-replicate (Mandelbrot & Hudson, 2010). Much of Mandelbrot’s
work related to ecosystems and financial markets and led to the foundation of the Fractal Market
Hypothesis as a model for risk assessment (Hsieh & Peters, 1993; Sornette & Johansen, 1997).
Mandelbrot’s ideas are the main theme of Nassim Taleb’s view of risk in his popular
publications. Taleb argued risk is best understood through two lenses: systems that scale and
systems that do not (Cirillo & Taleb, 2020). Taleb (2007) uses the metaphor of two separate
countries to represent the different systems: Extremistan (scaling) and Mediocristan (non-
scaling). Lewis (2014) clarified Taleb’s premise of risk in Book of Extremes, dividing the world
into two systems as well: disconnected systems that follow a bell curve and systems that are
connected and follow a power law. Mandelbrot, Taleb, and Lewis agreed that connected,
scalable systems that follow a power law require different models and definitions to assess risk
(French, 2015; Lewis, 2014; Mandelbrot, 2021; Taleb, 2007).
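The practical difference between the two kinds of systems can be shown with a short simulation (toy parameters, not calibrated to any real hazard): losses drawn from a bell curve cluster near their mean, while losses drawn from a heavy-tailed Pareto (power-law) distribution occasionally dwarf everything previously observed.

    import random

    random.seed(3)
    n = 100_000
    bell_curve_losses = [random.gauss(10, 3) for _ in range(n)]
    power_law_losses = [10 * random.paretovariate(1.5) for _ in range(n)]

    for name, losses in (("bell curve", bell_curve_losses),
                         ("power law", power_law_losses)):
        print(f"{name}: mean = {sum(losses) / n:.1f}, worst single loss = {max(losses):.1f}")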
Challenges to Risk Assessment: Heuristics and Bias
Humans are not computers (Hubbard, 2020). The domain of judgment and decision-
making (JDM) addresses the human challenges to risk assessment. Kahneman and Tversky’s
seminal paper on decisions under risk exposed humans’ short coming when assessing risk
(Kahneman & Frederick, 2002). Kahneman and Tversky experimented on college students and
discovered they made suboptimal choices 87% of the time when assessing risk. These findings
countered the expected utility theory and suggested humans operate from heuristics bound by
bias.
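For context, the expected utility benchmark those participants were judged against can be stated in one calculation. The sketch below uses a stylized pair of gambles of the kind used in these experiments (with a linear utility function, so expected utility reduces to expected value):

    # Two stylized options of the kind presented to experimental participants.
    gamble_a = [(0.80, 4_000), (0.20, 0)]   # 80% chance of $4,000, otherwise nothing
    gamble_b = [(1.00, 3_000)]              # a certain $3,000

    def expected_value(gamble):
        return sum(p * payoff for p, payoff in gamble)

    print("expected value of A:", expected_value(gamble_a))   # 3200
    print("expected value of B:", expected_value(gamble_b))   # 3000
    # A has the higher expected value, yet participants in classic experiments
    # tended to prefer the certain option B, violating the normative benchmark.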
Pope and Schweitzer (2011) and Cirillo and Taleb (2020) confirmed Kahneman and
Tversky’s findings by proving experts and professional statisticians make errors in risk
assessment 85% of the time. Critics of Kahneman’s results claim the conclusions based on 95
college students ’ performance are insufficient in predicting domain experts ’ performance when
operating in a high-stakes environment. However, evidence suggests that humans in a high-
stakes environment also violate expected utility theory (Post et al., 2006). In a study involving
151 game show contestants over 5 years, participants failed to discern the optimal solution to
maximize rewards under risk, confirming Kahneman's conclusions (Post et al., 2006).
Pope and Schweitzer (2011) expanded the scope of Kahneman's theory to professional
golfers. A study of 2.5 million putts made by professional golfers showed these domain experts
make choices based on the outcome of prior situations as opposed to optimal solutions 85% of
the time (Pope & Schweitzer, 2011; Simon, 1955). College students, game show contestants, and
professional golfers struggle with discerning the optimal solution when facing quantitative
choices under risk. Further studies suggest that a bounded rationality influences the inability to
select rational answers under risk.
Ideas Bounded by Experience
When multiple optimal choices exist in complicated environments, humans operate under
a bounded rationality (Simon, 1955). The idea of a bounded rationality surfaced in 1955 when
Hermon Simon claimed cognitive ability and social constraints limit human perception. Chen
and Zhu (2019) use bounded rationality to explain the security risk associated with complicated
systems such as the internet of things (IoT). With novel systems such as IoT, optimal solutions
are beyond the perceiver ’s limit and require automatized algorithms (Chen & Zhu, 2019). A
person ’s bounded rationality, or the combination of experience and context, restricts decisions to
prior problems and solutions.
While Chen and Zhu accept bounded rationality as inherent in human limitations, Simon
proved his theory through an experiment based on the sale of one's house. Ninety-five
participants in the experiment selected various optimal payoffs when contemplating the
timeframe and context of their life (Simon, 1955). Every participant's choice was rational when considering the context, and every choice was different. Building off Simon's work, Bearden
(2008) claimed that a process branded as heuristics is the primary tool used to make decisions
within one's bounded rationality. Heuristics are mental shortcuts used to reach answers quickly
based on experience instead of analytics (Bearden et al., 2008). In three experiments, 36
Columbia college students used heuristics over rational processes where an optimal solution
existed (Bearden et al., 2008). Due to heuristics and bounded rationality, the students fell 36%
below the optimal answer (Bearden et al., 2008).
In addition to bounded rationality and heuristics, other people's choices greatly influence one's decisions under risk (Gordji et al., 2018). Instead of seeking an optimal solution for
themselves, people tend to consider the impact on other people over an answer that best befits
them (Gordji et al., 2018). In a phenomenon discovered by studying a series of competitive games,
players first consider the profit or loss of other players before considering their own (Gordji et
al., 2018). However, before overcoming heuristics, bounded rationality, and the propensity to
compete, leaders must first determine the environment of risk they face. Distinct risk
environments require different strategies for selecting optimal solutions, and humans need to
adapt their methods to unfamiliar environments.
Attempts to Fix the Problem
Snowden (2005) created the Cynefin framework to help users understand their
environment. The Cynefin framework divides the world into two domains: ordered and
unordered (Pauleen, 2017; Snowden, 2005). The Cynefin framework classifies ordered
environments as simple or complicated. Ordered environments respond predictably and follow a bell curve. The Cynefin framework classifies unordered environments as complex or chaotic. Unordered
events respond erratically and tend to follow power laws. Snowden offers different strategies to
operate in each environment. Bellavita (2019) argued the Cynefin framework is useful for
organizing and understanding homeland security environments for graduate students studying
risk. Bellavita (2019) used the Cynefin framework to view events through a cause-and-effect
relationship, guiding homeland security professionals to learn differing strategies for each
environment. Although the Cynefin framework is useful conceptually to homeland security students to
understand multiple environments, it still lacks tools to quantify risk objectively.
Hubbard (2020) advocated for pure objectivity and offered free spreadsheets online to run
probabilistic scenarios that follow a bell curve or power law to assist any professional in
assessing risk. But even Hubbard contended that without proper training, the spreadsheets are
useless. French, Hanea, Bedford, and Gabriela expand on Cooke’s classical model to improve
subjective expert elicitation (Hanea et al., 2021). French developed training courses in structured
expert judgment, adapting the model to terrorism and geo-political risk. In addition, Cooke
created software, called Excalibur, to help calibrate and prime experts to express structured judgment on risk. However, there is no evidence of any of the above being used in the homeland security
professions. DHS developed the risk lexicon to clarify definitions (DHS, 2014). However, the
Society for Risk Analysis published a glossary for risk in 2018, offering different definitions of
risk from those of DHS (Aven et al., 2018). The literature continues to offer conflicting definitions, concepts, and procedures for risk.
Graduate Programs and Risk Management
Hubbard (2020) calls classical graduate programs that include risk management “the four
horsemen” as a cynical analogy to the Apocalypse (p. 81). Hubbard’s four horsemen are (a)
actuaries, (b) war quants, (c) economists, and (d) management programs. Hubbard argued that
each domain holds a piece of the puzzle for risk assessment, but alone each fails dramatically.
But unlike the four horsemen, the field of homeland security is not a fully established domain
and has room to evolve and grow (Plant et al., 2011). As a result, homeland security graduate
programs employ diverse academics and professionals teaching from multiple fields and
domains. This diverse group mixes components from Hubbard’s horsemen into curricula across
the nation (Comiskey, 2018). To codify risk management into the curriculum, the International
Society for Preparedness, Resiliency, and Security identified nine knowledge domains as a part
of homeland security studies, including risk management. A 2018 study shows 73% of homeland
security graduate programs teach risk management (Comiskey, 2018).
The Strengths of Graduate Programs on Risk Assessment
Research suggests graduate programs in risk management are succeeding in critical
thinking, diversity of theory, and experiential learning. Critical thinking is a core skill needed for
risk assessment (Clement, 2011; Comiskey, 2018; Danko, 2020; Pelfrey & Kelley, 2013; Plant et
al., 2011). A 2013 study suggests alumni of graduate programs are 53% more effective in
engaging in critical thinking (Pelfrey & Kelley, 2013). In addition, the emerging domain of
homeland security offers a diversity of theory for risk assessment. A study of 953 college
department heads and faculty affiliated with homeland security education (Comiskey, 2018)
shows theory associated with risk management derives from the domains of (a) emergency
management (79.6%, n = 133), (b) criminal justice/criminology (62.2%, n = 117), (c) cyber security (64.7%, n = 108), (d) (n = 100), (e) national security (58%, n = 97), (f) security studies
(56.9%, n = 95), (g) political science (55.7%, n = 93), and (h) law and justice (53.9%, n = 90).
Finally, graduate programs on risk management succeed with experiential learning (Danko,
2020). A 2020 study of 68 students with a 95% confidence level using the Student Assessment of
Learning Gains instrument showed graduate students perceived significant gains in their
understanding, skills, and integration of risk assessment using experiential learning via
simulations and case studies. The study showed a statistically significant correlation between the
use of experiential learning and perceptions of gains in real-world complex problems (Danko,
2020).
The Weakness of Graduate Programs on Risk Assessment
The literature suggests graduate programs on risk assessment struggle with consistency and expert elicitation. As an emerging domain, homeland security struggles to frame risk assessment consistently. Three hundred fifteen colleges offer over 700 security degree programs with no standard approach to risk assessment (Comiskey, 2018). Divergence is normal in emerging disciplines; however, as Hubbard (2020) posited, holding only one piece of the puzzle can lead to overconfidence and failure. Second, graduate programs struggle with teaching expert elicitation for risk assessment (Pelfrey & Kelley, 2013). In a 2008 study, Hubbard found that none of the students surveyed had used or even heard of expert elicitation techniques. A review of the literature
failed to produce evidence of expert elicitation techniques used in graduate programs in
homeland security.
The Role of IHSR Alumni
IHSR alumni are the stakeholders of focus. The following explores literature describing
the alumni, what they learn, and why their role is so important to risk assessment. Alumni of
IHSR are diverse public-sector practitioners from federal, state, local, and tribal agencies who
study homeland security (Bellavita, 2019). Twenty years after the terrorist events on 9/11,
homeland security is still not a firmly established discipline, yet alumni still assess and prioritize
risk in the homeland enterprise (Brody, 2020). The diverse backgrounds and the relative
positions of the agencies present multiple challenges for the alumni, including (a) the nature of
federalism, (b) lack of standard risk assessment methodology, (c) unclear application of risk
assessments, and (d) a lack of political will. Although the challenges are daunting, the alumni
can enhance the enterprise of homeland security by coordinating and establishing risk-based
priorities through regionalized risk assessments (Brody, 2020).
Due to the diverse backgrounds and missions of the alumni, learning at IHSR revolves
around radical subjectivity. Radical subjectivity refers to a process of acknowledging individual interpretations of risk while defending one's observations of risk to others (Bellavita, 2019, p. 5).
Radical subjectivity is tacit knowledge (Weichselgartner & Pigeon, 2015). Bellavita (2019)
argued that focusing on the exchange of ideas via tacit knowledge keeps homeland security
knowledge continuously evolving and helps expand the understanding of homeland security as a
social enterprise. Alumni progress through Kolb’s cycle of learning (Bellavita, 2019), moving in
a loop from concrete experience to reflective observation, to abstract conceptualization, to active
experimentation, and back to concrete experience. The processes are intended to create lifelong
learning (Bellavita, 2019).
The alumni are introduced to risk assessment through the lens of critical infrastructure
(Taquechel & Lewis, 2017). Alumni use NAT, SOC, and exceedance probability when discussing
risk in complex systems. Sources of risks include natural disasters, terrorist events, and major
accidents (Lundberg & Willis, 2015). Brody (2020) identified current trends of the increasing
risk that alumni face in real life: (a) natural disasters are getting worse and occurring more
frequently, (b) manmade disasters are less probable yet more consequential, (c) systems are more
interconnected and interdependent than ever before, and (d) government is losing credibility
while asked to deliver more security. Considering the breadth and depth of increasing risks, the
diverse alumni of IHSR are critical stakeholders for coordinating and establishing risk-based
priorities across the homeland enterprise. Brody emphasizes that (a) there is no single
jurisdiction or sector of the economy that can assess all risks, (b) all are interconnected, (c)
federal homeland security strategy does not define or prioritize national, state, or local risks, and
(d) the federal government controls the allocations of resources. In summary, the literature
suggests the diverse alumni of IHSR are the most influential stakeholder for assessing risk across
the nation at all levels.
Conceptual Framework: Clark and Estes's Gap Analysis
Clark and Estes’ (2008) Turning Research into Results describes a process of applying
research from performance studies to practical results. A gap analysis is an approach used to
close the performance gap between an agency’s current status and a future performance goal.
Clark and Estes's gap analysis relies on diagnosing the causes of performance gaps, which fall into three types: KMO influences. Each general type of cause describes an influence negatively affecting performance. The overall goal of Clark and Estes's gap analysis is to identify the major influences on a stakeholder within the framework and improve performance toward a set goal.
In this study, the elements of Clark and Estes’s (2008) gap analysis provide the
framework to produce a needs analysis for an innovation study surrounding homeland security
professionals’ ability to assess risk. As the alumni of IHSR are the stakeholder group most likely
to improve risk assessment, they are the focus of this study. The following section will address
the homeland security professional’s assumed influences.
Stakeholder Knowledge, Motivation, and Organizational Influences
Knowledge, motivation, and organizational culture influence the behavior of students
(Rueda, 2011). Literature on homeland security professionals' knowledge influences focuses on factual, conceptual, and metacognitive elements. Literature describing homeland security professionals' motivational influences focuses on the value and attribution variables. Finally, the literature on homeland security professionals' organizational influences focuses on cultural models and settings. The study explores these domains of influence to the limits of the current literature and synthesizes them into a coherent theme within Clark and Estes's
(2008) gap analytical framework.
Knowledge and Skills: Factual, Conceptual, and Metacognitive
The research suggests knowledge and skill are considerable influences on risk assessment. The study translates the required homeland security professional's knowledge and skill into Clark and Estes's framework. Clark and Estes (2008) described how to adapt the results of performance research into practical results using the active ingredients of knowledge. Anderson and Krathwohl (2001) divided knowledge into four hierarchical types: factual, conceptual, procedural, and metacognitive (Clark et al., 2008; Rueda, 2011). In addition, each type of knowledge spans six hierarchical cognitive processes: remembering, understanding, applying, analyzing, evaluating, and creating (Anderson & Krathwohl, 2001; Gaillard & Mercer, 2013).
Weichselgartner and Pigeon (2015) add to the lexicon of knowledge specific to homeland
security professionals and create two categories of knowledge: explicit and tacit. The advent of
big data and learning algorithms interacting with human experience requires explicit and tacit
terms to explain the phenomenon of computer and human learning in risk assessment (Nonaka,
2008; Weichselgartner & Pigeon, 2015). Nonaka (2008) explained that a computer quickly
produces explicit knowledge, transmits it electronically, and stores it in databases. Tacit
knowledge is personal knowledge gained from experience. Research suggests explicit and tacit knowledge influence each other in the context of risk, a recurring theme in risk assessment as metrics shift from objective to subjective.
Research recommends enhancing knowledge in two general situations: first, when practitioners do not know how to accomplish their goals, and second, when practitioners expect novel challenges that exceed their current problem-solving capacity (Clark et al., 2008). Lewis
(2020) claims novel challenges increase as the world’s interconnectivity increases, requiring a
massive enhancement to homeland security professionals’ knowledge of risk. Enhancing
knowledge is a result of learning, defined as a change in behavior resulting from experience (Rueda, 2011). Hubbard's (2020) The Failure of Risk Management builds on Lewis's claims and asserts that learning in the domain of risk assessment is a failed practice in today's complex world because of how tacit knowledge is applied.
The following section emphasizes the types of knowledge vital for homeland security
professionals to assess risk. The frameworks organize the types and subtypes of knowledge
identified in the literature most influential in assessing risk. Specifically, this dissertation
discusses the definitions of risk, cognitive applications of risk assessments, and metacognitive
factors related to biases.
Factual Knowledge Influences
Factual knowledge refers to knowledge that is basic to specific disciplines (Anderson &
Krathwohl, 2001; Rueda, 2011). Research suggests factual knowledge influences homeland
security professionals’ risk assessment. Factual knowledge includes terminology, basic required
components, and elemental concepts required to understand how to solve problems in a given
domain. In the context of homeland security, stakeholders need to know what is meant by risk
assessment (Hubbard, 2020; Lewis, 2019). The literature points to gaps in terminology,
specifically the definition of risk itself (Hubbard, 2014, 2020). In addition, research exposes gaps
in detailed knowledge of the components of risk (Lewis, 2019). And finally, gaps exist in the
elements of statistics and probability (Colson & Cooke, 2020; Taleb, 2007). The combination of
the above factual knowledge gaps creates challenges in assessing risk for homeland security
professionals.
Definition of Risk: Uncertainty or Expected Loss. The literature suggests the definition
of risk has a large influence on homeland security professionals’ risk assessment. The definition
of risk is a crucial type of factual knowledge required by homeland security professionals.
Research indicates there are two broad definitions of risk: risk as uncertainty and risk as a
function of expected loss (Hopkin, 2018). Risk as uncertainty influences the domains of finance
and economics as a measure of variance in a bell curve (Hubbard, 2014). Risk as a function of
expected loss influences the domains of actuaries, war studies, and emergency management. The
risk metric falls into two general domains: qualitative and quantitative. Quantitative risk involves
objective probabilistic measures derived from data sets. In Nonaka’s terms, this is a form of
explicit knowledge (Nonaka, 2008; Weichselgartner & Pigeon, 2015). Qualitative risk involves
subjective measures derived from expert judgment and elicitation (Aven et al., 2018; Haimes,
2009; Hanea et al., 2021; Hubbard, 2020; Kaplan & Garrick, 1981). In Nonaka’s terms, this is a
form of tacit knowledge. Each metric is context-bound, depending on the availability of data and
the novelty of events (Hanea et al., 2021). Domains with extensive data can emphasize explicit
knowledge, while disciplines with limited data emphasize tacit knowledge. In a survey, Hubbard found that 74% of practitioners used the qualitative definition of risk, 16% used the quantitative definition, and neither group used both (Hubbard, 2020). The literature suggests confusion among practitioners about which definition of risk to use and how to measure it. Vague definitions of risk are a serious issue because the definition chosen drives the components of the risk assessment.
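As a simple illustration of the two definitions, the following Python sketch (with hypothetical scenario values, not figures drawn from the literature or the study data) computes risk once as expected loss and once as uncertainty over the same set of scenarios.

# Hypothetical scenarios: (probability of occurring this year, loss in $ millions).
scenarios = [(0.60, 0.0), (0.30, 2.0), (0.08, 10.0), (0.02, 50.0)]

# Risk as expected loss: probability times consequence, summed over scenarios.
expected_loss = sum(p * c for p, c in scenarios)

# Risk as uncertainty: probability-weighted spread of outcomes around that mean.
variance = sum(p * (c - expected_loss) ** 2 for p, c in scenarios)
std_dev = variance ** 0.5

print(f"Risk as expected loss: ${expected_loss:.2f}M per year")
print(f"Risk as uncertainty (std. dev. of loss): ${std_dev:.2f}M")

The same scenario set yields two different risk numbers, which is one reason practitioners who do not share a definition can talk past one another.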
Components of Risk: Variance, Probability, and Consequences. Research suggests homeland security professionals must know the components of a risk assessment to be successful (Haimes, 2009; Hopkin, 2018; Hubbard, 2020; Kaplan & Garrick, 1981). As noted, risk is defined as either a level of uncertainty or expected loss (Aven et al., 2018). Each definition has different components. Risk as uncertainty consists of a statistical measure of variance derived
from a data set modeled into a bell curve (Hubbard, 2020). Risk as expected loss is a
combination of probability and consequences (Hopkin, 2018). Probability is either derived
objectively or subjectively (Hanea et al., 2021). Objective probability is a calculation from one
of three procedures, Pascal’s triangle, Laplace, or Bayesian, resulting in a number from 0 to 1
using explicit knowledge (Lewis, 2019). Subjective probability comes from expert judgment
using tacit knowledge usually elicited through a structured process (Colson & Cooke, 2020).
Research reveals that homeland security professionals rarely know the components of
risk assessment (Colson & Cooke, 2020; Hubbard, 2020). In a 2008 survey, Hubbard (2020) suggested that fewer than 60% of homeland security professionals understood the components of one
definition of risk. Hubbard (2020) argued gaps in knowledge of the fundamental components of
risk assessment create challenges in the domain of homeland security and lead to gaps in
conceptual knowledge.
Conceptual Knowledge Influences
Anderson and Krathwohl (2001) defined conceptual knowledge as categories,
classifications, principles, generalizations, theories, or models. Much of the literature suggests
conceptual knowledge greatly influences homeland security professionals’ risk assessments.
Practically, conceptual knowledge in homeland security is the understanding of relationships and
how things are connected (Weichselgartner & Pigeon, 2015). Meadows (2008) described the
knowledge of relationships as thinking in systems. Haimes (2009) refers to this as understanding
the system’s state. Lewis (2019) asserted risk assessment in homeland security requires multiple
system lenses to assess risk correctly.
Connected and Independent Events
Stakeholders need to be able to compare and discern connected events from independent
events. Independent events (IEs) have no memory and occur as single occurrences (Lewis, 2014; Meadows, 2008). Independent events vary in magnitude and still require risk assessment. Classical tools such as statistics and Laplacian probability measure IEs (Hubbard,
2014; Hubbard, 2020; Lewis, 2019). Independent events occur in an ordered environment and
behave as simple or complicated events (French, 2013, 2015; Williams & Hummelbrunner,
2020). In contrast, connected events (CEs) have memory, producing cascades and feedback
loops (Hubbard, 2020; Lewis, 2019; Meadows, 2008). Connected events can self-replicate and propagate across a network of nodes and links. Network theory, SOC, and fractal market theory
measure CEs (Lewis, 2019). Connected events operate in an unordered environment and behave
as complex or chaotic events (French, 2013, 2015; Williams & Hummelbrunner, 2020).
Connected events produce power laws, while IEs produce bell curves (Cirillo & Taleb, 2020b; French, 2013; Hubbard, 2020; Lewis, 2019; Taleb, 2007).
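The practical difference in the tails can be shown with a short Python sketch; the parameters are hypothetical and are chosen only so that both models share the same mean.

import numpy as np

rng = np.random.default_rng(seed=7)
n = 1_000_000

independent = rng.normal(loc=100, scale=30, size=n)   # bell curve, mean 100
connected = (rng.pareto(a=2.0, size=n) + 1) * 50       # power law, mean 100

threshold = 500  # an "extreme" event, five times the mean
for name, sample in [("independent (normal)", independent),
                     ("connected (power law)", connected)]:
    exceedance = (sample > threshold).mean()
    print(f"{name}: P(loss > {threshold}) = {exceedance:.6f}")

Under the bell-curve model the extreme event is essentially impossible, while under the power-law model it occurs in roughly 1% of trials, illustrating how IE tools can underestimate CE risk.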
The inability of homeland security professionals to discern and compare IEs and CEs can
lead to poor risk assessments (French, 2013; Hubbard, 2020; Lewis, 2019). Taleb (2007) asserted that the failure to distinguish CEs from IEs is the leading cause of disasters. Lewis's Book of Extremes (2014) and Bak's Sand Pile (2014) emphasized the need for practitioners to understand
the increasing trend toward CEs as world interconnectivity increases. A risk assessment must
include novel tools for CEs. But Hubbard (2020) contended that less than 16% of practitioners
understand the difference between CEs and IEs, leading to failures in risk assessment. Objective
probability tools derived from the frequency of occurrence and designed to assess IEs produce underestimates when applied to CEs (Taleb, 2007). Not only do homeland security professionals
need to discern CEs from IEs, but they also need to understand what statistics and probability
can and cannot do.
Statistics and Probability
Stakeholders need to be able to interpret results from statistics and probability. Cooke
(2020) asserted that many CEs are novel and therefore have no data to support or bound their
likelihood of occurring. Cooke suggests homeland security professionals must understand what
statistics are telling them in these circumstances. Statistics is an explanatory tool for large amounts of data, and Taleb's (2007) central argument is that novel events lack large amounts of data and therefore produce no meaningful statistics. Absence of evidence is not evidence of absence.
Perrow’s NAT highlights several tragic events that had no meaningful data in the form of
statistics to help practitioners assess the risk (Lewis, 2019).
Lewis (2019) contended that objective probability is either a priori or a posteriori. A
priori comes from prior knowledge of events, calculated from all possible outcomes. A posteriori
probability comes from knowledge of events after they have happened, calculated from statistics.
Homeland security professionals must be able to interpret results from both a priori and a
posteriori probability to assess risk using objective methods (Hubbard, 2020; Lewis, 2019).
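As a minimal illustration of the two objective approaches, the following Python sketch computes an a priori probability by enumerating equally likely outcomes and an a posteriori probability from a hypothetical observed record (the example values are invented).

from fractions import Fraction

# A priori: enumerate all equally likely outcomes before any observation,
# e.g., the probability of rolling at least one six with two dice.
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
favorable = [o for o in outcomes if 6 in o]
a_priori = Fraction(len(favorable), len(outcomes))
print(f"A priori P(at least one six) = {a_priori} = {float(a_priori):.3f}")

# A posteriori: estimate a probability from frequencies observed after the fact,
# e.g., a hypothetical record of 7 incident years out of 50 observed years.
incident_years, total_years = 7, 50
a_posteriori = incident_years / total_years
print(f"A posteriori P(incident in a year) = {a_posteriori:.2f}")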
Hanea et al. (2021) added to the literature on risk assessment with the interpretation of
subjective probability. Subjective probability comes from expert judgment and elicitation
(Colson & Cooke, 2020; Cooke & Goossens, 2004; Hubbard, 2020; Lewis, 2019). In a 2008
survey, Hubbard (2019) noted 89% of risk assessments use subjective estimates. Subjective
probability can produce results on the likelihood or uncertainty of an event (Aven et al., 2018).
But valid subjective probability requires experts calibrated to the environment or type of system they are judging (Bonanno et al., 2021a; Wiper et al., 1994). Calibration is rare in risk analysis. Without calibration, many experts voicing opinions about risk are not domain experts but people with limited knowledge of the domain (Hubbard, 2020).
In the same survey, Hubbard (2020) stated that 44% of all variables in risk assessments
used subjective estimates, and none of them calibrated their participants. Non-calibrated,
subjective probability creates a significant problem in risk assessments because the lack of
conceptual knowledge of how to interpret subjective probability can lead to misleading
conclusions. The lack of conceptual knowledge of how to interpret subjective probability is cited in the 9/11 Commission Report as a leading cause of U.S. intelligence ignoring the signals of the terrorist attacks (National Commission on Terrorist Attacks Upon the United
States, 2004). Expert calibration before making judgments concerning probabilities is critical for
homeland security professionals to understand before interpreting the results (Bonanno et al.,
2021b; Colson & Cooke, 2020; Cooke & Goossens, 2004; Hubbard, 2020; Lewis, 2019). Cooke (2020) and Lewis (2019) both contended that practitioners need to understand the expert elicitation and calibration process because humans are victims of bias.
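The following Python sketch gives a greatly simplified sense of performance-weighted aggregation. It is loosely inspired by, and far simpler than, Cooke's classical model, and the calibration scores and estimates are invented for illustration only.

# Each expert answered seed questions with known answers; the calibration
# score here stands in for how well their stated ranges captured the truth.
experts = {
    "expert_a": {"calibration": 0.80, "estimate": 0.10},  # estimated P(event next year)
    "expert_b": {"calibration": 0.35, "estimate": 0.40},
    "expert_c": {"calibration": 0.55, "estimate": 0.20},
}

total_weight = sum(e["calibration"] for e in experts.values())

# Performance-weighted pool: better-calibrated experts count for more.
weighted = sum(e["calibration"] * e["estimate"] for e in experts.values()) / total_weight

# Unweighted average for comparison (the "random expert" problem).
unweighted = sum(e["estimate"] for e in experts.values()) / len(experts)

print(f"Performance-weighted estimate: {weighted:.3f}")
print(f"Unweighted average:            {unweighted:.3f}")

The point of the sketch is only that weighting by demonstrated calibration pulls the pooled estimate toward the better-calibrated experts rather than treating every voice equally.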
Metacognitive Knowledge Influences: Biases and Heuristics
Metacognitive knowledge is the
awareness of one’s cognitive processes and a better understanding of when and why people take
actions (Rueda, 2011). Research suggests metacognitive knowledge levels can significantly
influence homeland security professionals’ risk assessment.
The landmark publication of "Prospect Theory: An Analysis of Decision Under Risk" in 1979 introduced the effect of human bias on risk assessments (Kahneman & Tversky, 1979; Slovic, 2020). Kahneman and Tversky's study discovered a number of belief and decision-making biases in the field of JDM (Slovic, 2020). In the context of homeland security, biases are a tendency to think and behave in a way that interferes with rationality or impartiality (Hubbard, 2020). Bias shapes heuristics, or mental shortcuts to answers. Examples of major
bias are (a) anchoring, or the tendency for the last number in a participant’s mind to directly
affect the next number estimated; (b) availability, or the tendency to overestimate the probability
of events with greater recall in memory; (c) independency, or the tendency to treat all events as IEs; (d) loss aversion, or the perceived value of losing an item being greater than the value of gaining the item; and (e) recency, or the impact of timing on the perceived probability of an event
(Hubbard, 2020; Kahneman & Tversky, 1979). Biases limit experts’ ability to estimate risks. In
addition, most experts are blind to their own biases (Pope & Schweitzer, 2011). Research
suggests this creates a significant problem in risk assessment if experts are not aware of
unconscious heuristics and biases and, as a result, make inaccurate judgments with extreme
confidence (Fischhoff et al., 1977; Slovic, 2020).
Table 2 shows the stakeholders’ influences and the related literature.
Table 2
Summary of Assumed Knowledge Influences on Stakeholder's Ability to Achieve the Performance Goal
Assumed knowledge influences Research literature
Factual
Stakeholders need to know the definition of
risk.
Haimes, 2009; Hanea et al., 2021; Hopkin, 2018; Hubbard, 2020; Kaplan & Garrick, 1981; Weichselgartner & Pigeon, 2015
Stakeholders need to know the components of
a risk assessment.
Aven et al., 2018; Colson & Cooke, 2020;
Hanea et al., 2021; Hopkin, 2018; Hubbard,
2020; Lewis, 2019
Conceptual
Stakeholders need to be able to discern CEs from IEs.
Anderson & Krathwohl, 2001; French, 2013,
2015; Haimes, 2009; Hubbard, 2014;
Hubbard, 2020; Lewis, 2014; Meadows,
2008; Taleb, 2007; Weichselgartner &
Pigeon, 2015; Williams & Hummelbrunner,
2020
Stakeholders need to be able to interpret
results from statistics and probability.
Aven et al., 2018; Bonanno et al., 2021;
Colson & Cooke, 2020; Cooke &
Goossens, 2004; Hanea et al., 2021;
Hubbard, 2020; Lewis, 2019; National
Commission on Terrorist Attacks Upon the
United States, 2004; Taleb, 2007; Wiper et
al., 1994
Metacognitive
Stakeholders need to reflect on bias
influencing risk assessment.
Fischhoff et al., 1977; Fischhoff, 2021;
Hubbard, 2020; Kahneman & Tversky,
1979; Slovic, 2020
Motivation: Attribution and Value
Motivation is the process of a goal-directed activity transforming into an instigated and
sustained action (Schunk et al., 2012). Current literature proposes that proper motivation is
critical for homeland security professionals (Cirillo & Taleb, 2020; Eccles & Wigfield, 2020;
Torres-Barrán et al., 2021; Werner & Ismail, 2021; Woo, 2021). Recent research into motivation
emphasizes that personal beliefs developed related to learning tasks and performing activities are
key factors to success (Eccles & Wigfield, 2020). Knowledge informs professionals how to do things, while motivation gets them going, keeps them going, and tells them how much effort to
expend (Clark et al., 2008). Motivation is an important factor in performance gaps beyond
knowledge; just because professionals know how to do something does not guarantee they will
do it. Clark and Estes (2008) described motivation-driven performance in three facets: active
choice, persistence, and mental effort. The active choice is the extension of intention into action.
Persistence is the drive to continue the action despite distractions. Mental effort is the drive to
innovate novel solutions to complex problems. Rueda (2011) argued that motivational variables
are an assumed influence when professionals fail at a task.
Motivational variables consist of four categories: self-efficacy, attribution, value, and
goals (Rueda, 2011; Schunk et al., 2012). Self-efficacy and competence beliefs are professionals’
personal judgments concerning their capabilities (Bandura, 2000). Attributions include the
reasons for completing tasks and a personal belief in control (Weiner, 2005). Value refers to the
importance of the task and its impact on the world (Eccles & Wigfield, 2020). And finally, goals
relate the task to something external the professional wants to achieve (Locke & Latham, 2012).
Understanding and studying context-driven motivational variables in homeland security
professionals is key to recognizing what beliefs and processes influence risk assessments.
The Motivational Variable of Value: Expert Judgement
Value refers to the importance one assigns to tasks (Rueda, 2011). The research suggests
the motivational variable of value influences homeland security professionals’ performance
(Cirillo & Taleb, 2020; Eccles & Wigfield, 2020; Torres-Barrán et al., 2021; Werner & Ismail,
2021; Woo, 2021). Lewis (2014) argued the world is becoming more complex. From
environmental impacts on wildfires to bubbles in the stock markets, Lewis outlined 10 case studies in Book of Extremes of what Hubbard (2019) called cascading failures or feedback-loop catastrophes. Each case study displays either an element of fractal geometry (Mandelbrot & Hudson, 2010), self-organized criticality (Bak et al., 1988), or black swans (Cirillo & Taleb, 2020; Taleb, 2007). Cooke, Lewis, and Taleb argued that the aforementioned events fall into
Snowden's Cynefin framework as unordered events that cannot be easily quantified by objective probability (Pauleen, 2017; Snowden, 2005). The alternative to objective probability
is subjective probability using expert judgment (Colson & Cooke, 2020). Literature concerning
subjective probability is critical of its efficacy, creating a stigma against its use (Hanea et al.,
2021; Hubbard, 2020). However, the literature also suggests tools used to produce objective
probabilities in a complex environment are less accurate and potentially misleading (Taleb,
2007). French (2013) contended that despite the potential shortcomings of expert judgment
eliciting subjective probabilities based on tacit knowledge in risk assessments, they are more
valuable than objective tools based on explicit knowledge used outside of their intended domain.
Research suggests that professionals motivated by attainment values expend the required
effort to assess risk using expert elicitation (Abernethy et al., 2021; Eccles & Wigfield, 2020;
Weiner, 2005). Systematically calibrating experts to assess risk takes effort but produces high
value attainment. An expert risk assessment performed systematically to assess risk in complex
environments saves lives, while poor risk assessments using experts cost lives (Hubbard, 2020;
Lewis, 2014). Research shows participants using expert elicitation indicate high value
attainment. A survey from 2020 shows participants in an expert risk assessment elicitation scored
the experience as either good or excellent on a 5-point Likert scale (Bonanno et al., 2021). Risk
assessment via expert judgment produced positive results in case studies in the domains of
terrorism (Woo, 2021), internationalization (Zdziarski et al., 2021), supply chains (Torres-Barrán
et al., 2021), geo-politics (Werner & Ismail, 2021), and cyber security. In addition, Taleb has several published works, including his testimony before the U.S. Congress, showing the importance of expert judgment in complex environments. Taleb emphasized the negative value of using objective probability in place of subjective probability when operating in a complex
environment (Taleb et al., 2009; Taleb, 2007). The evidence and theory support the positive
value of expert judgment used to assess risk in a structured format.
The Motivational Variable of Attribution: Belief You Can Improve
Attribution is the belief one has about why they are successful or not (Rueda, 2011).
Research suggests homeland security professionals need to believe their actions will produce
positive outcomes to maintain persistence and expend mental effort when assessing risk (Colson
& Cooke, 2020; Han & Stieha, 2020; Hubbard, 2020; Lewis, 2014a; Michel-Kerjan, 2015;
Yeager & Dweck, 2020). And when it comes to motivation gaps, research reveals that belief is
almost everything (Clark et al., 2008). Professionals’ belief about reasons for success or failure
at a task, as well as the degree of control they have in affecting outcomes, is the attribution
variable of motivation (Rueda, 2011). Motivation theorist Weiner (2005) assigned three
dimensions to attribution: stability, locus, and control. Stability regards the permanency or
transiency of an attribution. Practitioners can assign attributions such as intelligence or
mathematical competence into a fixed or malleable category, greatly impacting their persistence
or mental effort expended. Locus refers to the origin of influencers, such as internal or external
factors. Events such as Hurricane Katrina have an external locus to homeland security
professionals, whereas the prevention efforts and response plan have an internal locus. And
finally, control categorizes events into things a professional can or cannot control (Weiner,
2005). Practitioners may not be able to control events physically, but they can influence the risk
assessment of events.
Literature regarding the attribution dimension of stability centers on the concept of a
growth mindset. Pioneered by Yeager and Dweck (2020), the growth mindset anchors the
practitioner's belief regarding their attributions into a malleable state. Cheng (2021) argued that instilling transient attributions, as opposed to fixed attributions, has the greatest impact on
motivation. Recent studies in neuroscience show the brain is much more malleable or plastic
than understood in the past (Yeager & Dweck, 2020). However, the studies also show that a
belief in attribution stability can counteract the brain’s natural plasticity, creating an artificially
fixed attribute (Han & Stieha, 2020).
Michel-Kerjan (2015) argued static attributions are a significant barrier to risk assessments and that leaders must know they can adjust to complex environments, especially regarding resource allocation. Purposely shifting resources and budgets to novel threats in response to changing environments requires the belief that we can learn and change. Using 257 sources of survey and archived data, a 2021 study noted that a growth mindset was associated with substantial impacts on resource allocation, including greater use of budgets to expand practices into emerging trends (Abernethy et al., 2021). Lewis (2014) argued that as the world's interconnectivity and
complexity increase, the internal models used to evaluate the external environment must change
(Cheng et al., 2021; Cirillo & Taleb, 2020; Colson & Cooke, 2020; Hubbard, 2020; Lewis, 2014;
Lewis, 2019). These new models are beyond the classical statistical models and probability
equations taught in the educational system (Taleb, 2007b).
Assessing risk in the connected world can feel overwhelming (Slovic, 2020). Many
classical tools taught in universities, such as statistics and probability, fail in a complex
environment (Taleb, 2007). Current and future practitioners must believe their attributions are
plastic enough to keep up with the environment (Cheng et al., 2021). To expend the effort to
change, practitioners must believe their actions will produce positive results (Eccles & Wigfield,
2020; Hubbard, 2020; Weiner, 2005). The research shows that a stakeholder's perception of themselves, their plasticity, and their positive outlook have considerable influence on risk assessment. Table 3 shows the stakeholder's motivational influences and the related literature.
Table 3
Summary of Assumed Motivation Influences on Stakeholder's Ability to Achieve the Performance
Goal
Assumed motivation influences Research literature
Value
Stakeholder needs to consider expert
judgment useful for themselves.
Abernethy et al., 2021; Clark et al., 2008;
Colson & Cooke, 2020; Han & Stieha,
2020; Hubbard, 2020; Lewis, 2014;
Michel-Kerjan, 2015; Rueda, 2011; Taleb,
2007; Yeager & Dweck, 2020
Attribution
Stakeholders must believe they can adapt to
complex environments.
Bak et al., 1988; Bonanno et al., 2021; Cirillo
& Taleb, 2020; Colson & Cooke, 2020;
Eccles & Wigfield, 2020; French, 2013b;
Hanea et al., 2021; Hubbard, 2020; Lewis,
2014; Mandelbrot & Hudson, 2010;
Pauleen, 2017; Snowden, 2005; Taleb,
2007; Torres-Barrán et al., 2021; Weiner,
2005; Werner & Ismail, 2021; Woo, 2021;
Zdziarski et al., 2021
Organizational Influences: Cultural Models and Cultural Settings
Organizational influences are the areas of culture, structure, policies, and practices
(Schein & Schein, 2017). Evidence suggests that organizational influences can impede
performance even when practitioners have the proper knowledge and motivation (Rueda, 2011).
Factors such as the quality of personal interactions and the practitioner’s perception of
institutional barriers affect how teammates behave and think. Schein and Schein (2017)
described organizational culture in terms of elements such as observed behavior, climate, formal rituals, espoused values, formal philosophy, group norms, rules of the game, the identity of self, embedded skills, habits of thinking, shared meanings, and root metaphors. Schein and Schein
generalized culture as a dynamic process that includes everything an organization has learned
through its evolution. Rueda (2011) divides culture into cultural models and cultural settings.
Cultural models are the invisible aspect of a group, including the shared worldview of how the
world currently works and how it should work; cultural settings are the visible aspects of an
organizational structure, such as structure, practices, and policies (Gallimore & Goldenberg,
2001). This next section focuses on the literature addressing gaps within cultural models and
cultural settings.
Cultural Model Influences
Cultural models are the invisible shared mental schema and understanding of the world
(Gallimore & Goldenberg, 2001). When identifying cultural gaps, Clark and Estes (2008)
referred to the alignment of someone’s core beliefs concerning important policies, procedures,
and communications. Schein and Schein (2017) described an individual’s core beliefs as a
pattern of basic assumptions formed by a group as it adapts to internal and external forces while
solving problems. New members learn successful beliefs as the correct worldview. Worldviews are the invisible aspect of culture that people use to perceive, think, and feel relative to the
problems they face (Clark et al., 2008; Rueda, 2011; Schein & Schein, 2017).
Bellavita (2019) uses the Cynefin framework as a useful worldview for homeland
security professionals to meet the challenges of emergent events. The Cynefin framework
divides the world into five different environments: the simple, complicated, complex, chaotic,
and disordered. Navigating the Cynefin framework requires a change in core beliefs from a simple, reductive view of the world to one that includes a complex, holistic view (Pauleen, 2017).
A shift from a reductive belief to a holistic belief is systems thinking (Arnold & Wade, 2015;
Meadows, 2008; Montuori, 2011). Research suggests homeland security professionals can
benefit from a culture that aligns with systems thinking.
Bellavita (2006) defined a strategy for homeland security culture as a pattern of
consistent behavior over time that is both intentional and emergent. Although the cultural model of homeland security is intentional, Bellavita (2006) suggested gaps exist in dealing with emergence. Emergence occurs when a number of simple components interact to form a novel, complex, unpredictable system. Bellavita tied the gaps in dealing with emergence back to Lewis,
Taleb, and Snowden’s environment of complexity (Lewis, 2014; Pauleen, 2017; Taleb, 2007).
Arnold, Meadows, and Montuori argued that solving problems in a complex environment requires systems thinking (Arnold & Wade, 2015; Meadows, 2008; Montuori, 2011).
Both Taquechel (2017) and Taleb (2007) presented empirical evidence that increasing
interconnectivity increases emergence. Bellavita (2006, 2019) argued the homeland security
enterprise specifically needs a culture of willingness to change to keep up with the emergence of
our environment. Systems thinking allows for rapid change in practices when the environment
changes. Systems thinking starts with four basic questions: (a) what can be known, (b) what is
known, (c) how is it known, and (d) how can the knowledge prove useful (Yunkaporta, 2020)?
This humble approach prevents practitioners from locking into fixed models of thought.
Yunkaporta (2020) suggested asking the four basic questions in complex environments allows
space for solutions that work but are not fully understood, such as practices of indigenous
peoples. A synthesis of the literature suggests the cultural model of homeland security professionals requires alignment with a systems approach to solving problems in the current environment.
Cultural Settings Influences: Policy
Cultural settings are the visible aspects of an organization (Gallimore & Goldenberg,
2001). The literature suggests cultural settings greatly influence homeland security professionals’
behavior and patterns of thought (Lewis, 2019). In the context of homeland security, cultural
settings refer to the people, documents, and actions that constitute everyday life in the
organization (Rueda, 2011). Miettinen et al. (2012) defined the people and actions of an
organization as the social context. An individual’s behavior and social context greatly influence
each other (Miettinen et al., 2012; Rueda, 2011). Policies, procedures, and the structure of the
organization are cultural settings creating ingrained patterns of thought and actions (Clark et al.,
2008; Schein & Schein, 2017). Schein and Schein (2017) added to the analysis by categorizing
the visible cultural settings as artifacts. Artifacts are one of three ways Schein and Schein
analyzed the structure of culture. Artifacts include the language, products, and policies of an
organization. This study focused on the policy impacts of artifacts such as the risk lexicon and
the risk-informed decision-making on homeland security professionals.
Literature reveals the cultural setting of risk assessment has continued to develop since
9/11 (Lewis, 2019). At the federal level, Lewis (2019) described how, following the events of September 11, 2001, President Bush issued Executive Order 13231, creating the President's Critical Infrastructure Protection Board (PCIPB) with the mission of emergency preparedness.
PCIPB led to the creation of DHS, an amalgamation of 22 agencies under one umbrella with the
task of assessing the risk of, preventing, preparing for, responding to, and recovering from
disasters (Lewis, 2019). The mixing of diverse agencies resulted in a shift in cultural settings for
all and required the creation of artifacts to clarify meaning for the new organization (Lewis,
2019).
DHS’s risk lexicon attempts to establish common terms for homeland security
professionals to align behavior and patterns of thought (DHS, 2014). The lexicon defines risk as
a subjective measure based on expected loss using a mix of threats, consequences, and
vulnerabilities. The lexicon does not include risk, defined as uncertainty, and offers no objective
metrics. Hubbard (2020) is critical of the DHS lexicon because of what it lacks. Specifically, he
communicated the lexicon provides no clear procedure to determine threats, consequences, and
vulnerabilities or how to combine them. Hubbard pointed out that risk assessments based on an
unstructured, qualitative assessment used differently by different organizations result in an
arbitrary scoring scheme that lacks meaning. Marti et al. (2021) referred to Hubbard’s criticism
of the lexicon as the problem of the random expert. The lexicon, lacking any objective metrics, creates a reliance on expert elicitation. But with no structured process, the expert elicitation becomes
random and haphazard. In a 2021 study involving 44 post-professional elicitations, Marti et al.
noted a structured approach for risk assessment using experts resulted in significantly better
results than a random process. Experts using structured processes outperformed the random
experts 95% of the time (Marti et al., 2021). The DHS risk lexicon creates a cultural setting of
random experts assessing risk to assist decision-making.
The definition of risk in federal artifacts influences behavior and patterns of thought
through risk-informed decision-making. Lewis (2019) described how, building off the risk
lexicon, a federal artifact called the National Infrastructure Protection Plan (NIPP) creates
courses of action based on risk assessments. The course of action he referred to includes the
allocation of funds. Lewis (2019) and Hubbard (2020) argued that the allocation of funds injects
politically motivated risk assessments that influence behavior. Slovic (2020) reasoned that artifacts such as the NIPP allow powerful cognitive biases, political hyper-partisanship, and deep social prejudices to produce nonrational judgments in risk assessments. As an example, Slovic pointed to federal funds diverted toward racial injustice while little attention or concern was given to nuclear war. Earlier studies show that the degree to which events feel good or bad, as opposed to a ranking of expected loss, influences risk assessment if left to random experts (Alhakami & Slovic, 1994). In a recent survey, Hubbard found that structured risk assessments
shaped only 47% of strategic decisions (Hubbard, 2020). The NIPP shifted from a policy to
assess risk to a tool to allocate funds for political needs.
Finally, evidence suggests that the physical environment is shifting, requiring updated
policies to address systems thinking. In Book of Extremes, Lewis (2014) presented 14 case
studies of modern events that live in the complex or chaotic environments of the Cynefin
framework. The events represent a trend toward non-linear failures due to an increasingly
interconnected world. The black swans, normal accidents, and cascading feedback loops are
increasing. Both Cooke (2020) and Lewis (2019) argued that objective measurements fail to
assess risk and require a shift in policy toward systems thinking. Meadows (2008) described a
system as a set of things interconnected to produce a pattern of behavior over time. Policy
surrounding systems thinking must address three parts of the environment: elements,
interconnections, and a function or purpose (Meadows, 2008). Meadows argued this is how one
can understand the potential behavior of our environment. Currently, the DHS (2014) lexicon
defines a system as a combination of things integrated for a specific purpose but offers no way to assess them. In addition, the DHS lexicon excludes the concept of integrated systems that serve different purposes, which can cause catastrophic cascading collapses. A synthesis of the literature
reveals the cultural setting of homeland security professionals requires policies to address a
systems approach to solving problems in the current environment. Table 4 shows the stakeholder's organizational influences and the related literature.
Table 4
Summary of Assumed Organizational Influences on Stakeholder's Ability to Achieve the
Performance Goal
Assumed organizational influences Research literature
Cultural models
Stakeholders need to create a culture that
values system thinking.
Arnold & Wade, 2015; Bellavita, 2006, 2019; Clark et al.,
2008; Gallimore & Goldenberg, 2001;
Montuori, 2011; Rueda, 2011; Schein &
Schein, 2017; Taquechel & Lewis, 2017;
Yunkaporta, 2020
Cultural settings
Stakeholders need to create policies that
address complexity.
Alhakami & Slovic, 1994; DHS, 2014;
Gallimore & Goldenberg, 2001; Hubbard,
2020; Lewis, 2014b; Lewis, 2019; Marti et al., 2021; Meadows, 2008;
Miettinen et al., 2012; Rueda, 2011; Schein
& Schein, 2017; Slovic, 2020
Summary
Risk assessment is challenging. Risk is either variance or expected loss measured with
objective and subjective tools. Catastrophe theory presents risk assessment as either CEs or IEs.
DHS and others have attempted to clarify risk assessment issues by creating lexicons and
glossaries with minimal impact. At the same time, human bias may undo most efforts to
standardize risk assessments into a helpful form. Through the lens of a gap analysis, the literature
suggests the domains of knowledge, motivation, and organizational culture influence homeland
security professionals’ ability to assess risk. The assumed influential dimensions of knowledge
are factual, conceptual, and metacognitive aspects. Specifically, stakeholders must know the
definition of risk, the components of risk assessments, the concepts of statistics and probability,
the difference between CEs and IEs, and the influence of bias on risk assessment. The
assumed influential dimensions of motivation are value and attribution. Stakeholders need to
value expert judgment and feel optimistic that their actions will produce positive outcomes.
Finally, the assumed influential dimensions of organizational support are cultural settings and
cultural models. Stakeholders need to be part of a culture that aligns with systems thinking and
need policies that address complexity.
Chapter Three: Methods
The purpose of this innovation study was to conduct a needs analysis in the areas of
KMO resources necessary for IHSR master’s degree students to achieve their stakeholder goal of
assessing the probability and consequences of events in alignment with systems theory by May
2024. The analysis generated a list of possible needs for IHSR master’s degree students to
accomplish their goal so that these could be examined to determine which are actual or validated.
Two research questions guided this needs analysis study:
1. What are IHSR master’s degree alumni knowledge, motivation, and organization
needs related to assessing the probability and consequences of events (risk)?
2. What are the knowledge, motivation, and organizational recommendations for
improving IHSR master’s degree student abilities to assess the probability and
consequences of events (risk)?
Conceptual and Methodological Framework
Clark and Estes’s (2008) gap analytical framework is a systematic method to clarify
organizational goals and the relevant gaps between preferred performance and realized
performance. The framework categorized relevant gaps into either knowledge, motivation, or
organizational barriers (Clark et al., 2008; Rueda, 2011). The assumed influences on gaps in knowledge, motivation, and culture used for this study stem from the published literature. The research instruments of surveys and interviews validated the relevant influences on the stakeholders' needs, resulting in practical solutions for stakeholders to close the relevant gaps
(Clark et al., 2008; Saunders et al., 2015). Figure 1 diagrams the workflow in Clark and Estes’s
framework.
Figure 1
Gap Analytical Framework
Note. Adapted from Turning Research Into Results: A Guide to Selecting the Right Performance
Solutions by R. E. Clark & F. Estes, 2008. Information Age Publishing. Copyright 2008 by
Information Age Publishing.
Clark and Estes’s (2008) gap analysis framework starts by identifying the organization’s
performance goals. Next, the framework measures the distance between the actual performance
and desired performance, known as the gap (Clark et al., 2008). Analysis of the gap produces
specific causes. Solutions are implemented and evaluated based on validated causes. Finally,
outcomes from the solutions are measured against the original goals to seek out any novel gaps.
The framework categorizes identified causes into three elements: (a) knowledge, (b)
motivation, and (c) organization. Knowledge is further divided into four categories: (a) factual,
(b) conceptual, (c) procedural, and (d) metacognitive (Clark et al., 2008; Rueda, 2011). Factual
knowledge is the ability to understand parts of information related to the topic. Conceptual
knowledge is the ability to organize factual knowledge into categories or classifications.
Procedural knowledge is the ability to use knowledge to complete specific tasks. And
metacognitive knowledge is the ability to transfer prior knowledge into a novel context.
Motivation is also divided into three deeper categories: (a) choice, (b) persistence, and (c) mental effort (Clark et al., 2008; Rueda, 2011). Choice is the decision to take action toward a
goal. Persistence is the ability to maintain action toward a goal despite obstacles. Mental effort is
the energy expended to maintain the action required to overcome obstacles.
Clark et al. (2008) categorized organizational barriers into cultural models and cultural settings. Cultural models refer to the non-visible ways of thinking and acting
that persist in a group of people. Cultural settings are the visible policies, doctrine, and artifacts
of the organization. The analysis of cultural models and cultural settings provides guidance to
remove barriers to achieving the desired performance goals. The combination of KMO barriers is
the lens used in the framework to understand human behavior and organizational performance.
Overview of Design
The following section outlines the methodological design of the study, including the data
sources in relation to the research questions, participating stakeholders of the study, and data
collection instruments. The data sources were a mix of survey and interview questions. The
participating stakeholders’ relationship to the research questions, accompanied by sampling
criteria and recruiting strategies, justifies their use. The study was an explanatory-sequential,
mixed-methods study using quantitative surveys and qualitative interviews (Creswell &
Creswell, 2017). The data collection instruments included an online survey and semi-structured
interviews. Survey questions reflected the assumed influences identified through the published
literature on KMO barriers. Interview questions reflected the assumed influences identified
through the published literature on KMO barriers and increased the depth of understanding. The
study organized the survey and interview questions into tables based on the conceptual
framework. Table 5 outlines the research questions and the data sources used for the study.
Finally, the appendixes provide the specific language used for the data collection instruments.
Table 5
Data Sources
Research questions Method 1 Method 2
What are IHSR master’s degree alumni knowledge,
motivation, and organization needs related to assessing the
probability and consequences of events (risk)?
Survey Interviews
What are the knowledge, motivation, and organizational
recommendations for improving IHSR master’s degree
student abilities to assess the probability and consequences
of events (risk)?
Survey Interviews
Participating Stakeholders
The participating stakeholders were alumni of the IHSR master's program. The alumni are leaders in the homeland security enterprise from local, state, federal, and tribal governments. These stakeholders represent a diverse swath of backgrounds and specific missions.
However, they all have a common mission to assess and reduce risk in the homeland domain.
The need to assess risk in differing environments makes them an excellent choice for
understanding how and why the nation performs risk assessments at all levels of government.
Their common graduate education makes the stakeholders an excellent gauge of gaps in our
education system and how to close them. Participants consist of people who graduated from the
IHSR master’s program.
Survey Sampling Criteria and Rationale
Criterion 1. Graduates from the IHSR master’s program.
Survey Sampling (Recruitment) Strategy and Rationale
The first phase of this explanatory-sequential mixed-methods study was the quantitative
survey (Creswell & Creswell, 2017). The alumni of IHSR represent the group most responsible
for assessing risk at the federal, state, and local levels of government (Bellavita, 2019).
Therefore, IHSR alumni were the intended participants of the survey. The study used the alumni
network to recruit participants to complete an online survey via email and social media groups.
There are roughly 1,430 alumni members, and they frequently exchange surveys to understand
better how to improve risk assessment and emergency management. The survey was a
nonprobability, purposeful sample choosing unique participants from government agencies that
contribute to risk assessment and are alumni of IHSR (Merriam & Tisdell, 2016). With a population of 1,430, the goal was a minimum of 360 respondents for a 95% confidence level.
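As a rough check on that target, the following Python sketch applies Cochran's sample-size formula with a finite population correction, assuming a 5% margin of error and maximal variance (p = .5); the assumptions are illustrative rather than the exact calculation used for the study.

import math

def required_sample(population, z, margin=0.05, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2            # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite population correction

alumni = 1430
print("95% confidence:", required_sample(alumni, z=1.96))   # roughly 300 respondents
print("98% confidence:", required_sample(alumni, z=2.326))  # roughly 390 respondents

Under these assumptions, the target of 360 respondents comfortably exceeds the minimum needed for a 95% confidence level.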
Interview and/or Focus Group Sampling Criteria and Rationale
Criterion 1
Graduates from the IHSR master's program. Direct emails were sent to alumni of the IHSR program using the alumni network. One of the interview questions asked whether they had attended the IHSR master's program.
Criterion 2
Completed the above survey. The final questions in the survey asked the participant if
they were willing to be interviewed. If they agreed, there were blank fields to leave contact
information, including name, email, and phone number.
Criterion 3
Agreed to an interview, as noted above.
Criterion 4
Provided answers that differed from those of typical survey respondents, allowing for maximum variation (Creswell & Creswell, 2017). The survey results were filtered based on the willingness
to be interviewed and correct responses to the most commonly missed question concerning
statistics, probability, CEs, and IEs. The remaining participants were bucketed into levels of
government to achieve a diverse representation. Finally, the interviewees were selected based on
diversity of time and experience.
Interview and Sampling (Recruitment) Strategy and Rationale
The qualitative assessment was the second element of the mixed-methods study. The
sampling strategy for the study was a purposeful strategy with a goal of maximum variation
(Merriam & Tisdell, 2016). I contacted 10 participants from the survey respondents who
indicated they were willing to be interviewed.
To offer a better-informed explanation of the quantitative results (Creswell, 2014), the participants willing to be interviewed whose survey responses showed maximum variation were selected for the interview to increase the depth and breadth of understanding of the topic. Sampling based on the quantitative results allows for the selection of participants who provide divergent viewpoints (Creswell, 2014). The interviews followed the survey.
The interviews were a semi-structured protocol focused on participants’ experiences and
reflections on risk assessment.
Selecting three to ten participants from the survey results for the interviews is representative for a phenomenological study (Creswell & Creswell, 2017). The goal was to interview ten participants to gain insights into the KMO gap analysis of risk assessment.
Questions from each assumed influence from the KMO crosswalk provided a diverse explanation
of the qualitative results.
Data Collection and Instrumentation
The data collection proceeded in two distinct phases. The first was quantitative sampling
in the form of an online survey, and the second involved purposeful sampling in the form of an
interview (Creswell & Creswell, 2017). The strategy was that the quantitative results guided the
qualitative process. The qualitative process explains significant results from the quantitative
process. The interviewees were selected from the survey respondents as a form of follow-up on outliers to help tease out the greatest KMO influences among the root causes (Creswell & Creswell,
2017).
The quantitative survey focused on assumed influences within the KMO framework
identified in the literature. Questions were adapted from Pintrich (1991) and developed to address other significant influences identified in the literature. The survey was administered online. All surveys were administered in English.
The use of a survey provided quantifiable data to the research questions while clearly and
statistically identifying gaps in risk assessment through the lens of KMO barriers. The survey
questions focused on IHSR master’s degree alumni’s knowledge, motivation, and organization
needs related to risk assessment. The credibility and validation of the instruments are rooted in
the application of Clark and Estes’s (2008) gap analysis and the extensive literature review.
The qualitative interviews focused on assumed influences within the KMO framework
identified in the literature. The use of interview questions provided a qualitative understanding of
the research questions through the lens of KMO barriers. The interviews expanded on the result
of the surveys, offering a further explanation of why we have gaps in risk assessments and how
we can close the gaps. The interviews provided a deeper and richer explanation of IHSR’s KMO
needs related to risk assessment. The combination of the instruments provided trustworthiness,
validity, and reliability of the results.
Surveys
The survey was administered online via Qualtrics and distributed by email through the
alumni network. The survey was open for 4 weeks, from February 1 to
February 28, 2023. The data collection occurred in Qualtrics and was exported to a local file.
The study piloted the survey before use with a focus group of alumni and current
instructors at IHSR to ensure reliability. The focus group ensured clarity and relevance while
also providing an expert review of the questions asked and options for answers (Robinson &
Firth Leonard, 2018). The focus group was an experienced group of professionals in the
homeland security domain who practice and teach risk assessment. The survey consisted of 41
items, including questions on demographics, KMO elements, and interview willingness.
The majority of the KMO items used a 4-point Likert scale to assist with quantification. The
survey is included in the appendix.
Interviews
The interview protocol used a semi-structured approach to allow for structure and
flexibility (Creswell & Creswell, 1994). The underlying design of the protocol provided for the
recording and analysis of specific data. At the same time, the flexibility of probing questions
opened doors for further depth on the topic and other topics that warranted exploration (Merriam
& Tisdell, 2016). In addition, the semi-structured approach afforded the ability to skip questions
that may have already been answered or reorder the questions if the conversation naturally led
to a later topic.
Data collection from the interviews was based on handwritten notes, video recordings,
audio recordings, and transcriptions. Each interview included 12 questions and took about 1
hour. The interviews were conducted via Zoom during office hours. Zoom captured the audio and
video recordings and the transcriptions. The platform allowed for interviews with alumni
worldwide, increasing the study’s population.
Data Analysis
Data analysis is an ongoing, sequential process (Creswell & Creswell, 1994). The
analysis involves segmenting and taking apart the data and putting it back together into
meaningful structures. This study used sequential steps to encode the interview and survey
results into the KMO framework. The quantitative survey and qualitative interview data were
analyzed separately in this explanatory-sequential mixed-methods design. Then the results of the
quantitative surveys were integrated into the results of the qualitative data. Integrating the two
methods connected the quantitative results to the qualitative interviews, informing one another
(Creswell & Creswell, 2017).
Quantitative data analysis was performed in Qualtrics at the conclusion of the survey.
Each survey question was coded to an assumed KMO influencer based on the literature review.
The survey results were categorized into the KMO framework revealing the root KMO needs.
The data were displayed in raw form and graphed.
The qualitative interview data analysis followed a six-step process (Creswell & Creswell,
2017). First, the data were organized and prepared by transcribing the interviews and organizing
notes. Second, I read all the data to determine the interviews’ overall meaning, including (a)
what is the tone of the ideas and (b) what is the impression of the depth of expression. Third, I
coded the interviews into brackets. Fourth, I generated themes and descriptions for the brackets.
Fifth, I created abbreviations for each category and alphabetized the codes. Sixth, I organized
each category into the KMO framework. The coding of the interviews into the KMO framework
connected the results of the quantitative surveys to the qualitative interviews. The interview
results help explain the survey results with the needs analysis of the KMO framework (Clark et
al., 2008; Creswell & Creswell, 2017).
Credibility and Trustworthiness
The goal of research was to contribute credible and trustworthy knowledge to the field
(Merriam & Tisdell, 2016). Since qualitative studies are based on different assumptions and
worldviews than traditional research, different strategies are required to keep the results
believable and valid (Merriam & Tisdell, 2016). This study maintained credibility and
trustworthiness by deploying multiple criteria used in similar qualitative approaches. Credibility
and trustworthiness were maintained by checking interpretations with interviewees, peer
reviewing emergent findings, creating rich descriptions, deploying maximum variation in
interview subjects, and clarifying researcher bias (Merriam & Tisdell, 2015).
First, the study used member checking. Inspecting interpretations with interviewees post-
interview is member checking (Creswell & Creswell, 2017). This study took tentative
interpretations and findings back to the participants from whom they were derived to ensure they
were plausible (Merriam & Tisdell, 2015). Member checking is a common strategy in qualitative
studies and ensures participants’ statements are not misconstrued or twisted into misleading
conclusions. Member checking protects both the subjects and the researcher from inherent bias
overpowering worldviews (Creswell & Creswell, 2017).
Second, the study used peer review. In addition to interviewees providing feedback,
outside experts were also used in the research process. The study’s results and the process were
peer-reviewed via discussions with knowledgeable practitioners. Peer review maintains the
congruency of any emerging interview findings (Merriam & Tisdell, 2016). Examination of
tentative interpretations based on the conclusions of the research by other experts maintains the
credibility and trustworthiness of novel findings and helps control for researcher bias (Merriam
& Tisdell, 2016; Patton, 2014).
Third, the study used rich, thick descriptions. Vivid narratives ensure interviewees can
provide enough description to contextualize their responses to match the question (Creswell &
Creswell, 2017). Rich and thick descriptions are achieved by allowing enough description to pair
the research context with the readers’ situations and context (Merriam & Tisdell, 2016). Profuse
descriptions allow the readers to transfer the knowledge gained from the interviewee to their
situations, creating a useful study while also avoiding the misapplication of non-relevant results.
Fourth, the study used maximum variation. Maximum variation is purposefully seeking
diversity in the interviewee sample selection (Merriam & Tisdell, 2016). Seeking variation
ensures a greater range of application of the finding to differing readers (Creswell & Creswell,
2017; Merriam & Tisdell, 2016). Different types of homeland security professionals increase the
range in which the finding can be applied across the domain while identifying similar patterns.
Finally, I clarified my bias and assumptions, a practice known as reflexivity.
Reflexivity is the critical reflection by the researcher on worldviews, theoretical
orientation, and relationship to the study that can affect the investigation (Merriam & Tisdell,
2016). To tease out reflexivity, I commented on past experiences and how these experiences
shape interpretations (Creswell & Creswell, 2017). Past experiences include statements of how
events in my life were impacted by the research problem that helped frame the connection
between the research and the study and how the impact of past experiences may shape
interpretations of the data (Creswell & Creswell, 2017).
Validity and Reliability
Validity refers to the issue of asking the correct questions, whereas reliability examines
the consistency of the answers to those questions (Creswell, 2014; Salkind, 2016). Quantitative
studies require well-designed instruments to ensure both validity and reliability (Creswell &
Creswell, 2017). The following presents the strategies the study used to increase confidence in
the survey sample, administer the survey, achieve a sufficient response rate, member check the
interview data, and handle the bias inherent in non-response.
The sampling design increases confidence in the survey sample (Babbie, 2020; Creswell
& Creswell, 2017). The population of the survey is the members of the alumni association
network. The network consists of roughly 1,430 members. All members of the association have
an email listed and are accustomed to participating in alumni surveys. The sampling design is a
single stage using random sampling of the alumni list. No stratification of the population was
used. The survey was administered online via the alumni network list. Ideally, I would have liked
a census of the population; however, with a population of 1,430, my goal was a minimum of 360
respondents for a 95% confidence level.
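As a rough check on that target, assuming the conventional margin-of-error formula with a finite population correction and maximum variability (p = .5):

\[
e = z\sqrt{\frac{p(1-p)}{n}}\sqrt{\frac{N-n}{N-1}}
  = 1.96\sqrt{\frac{.25}{360}}\sqrt{\frac{1{,}430-360}{1{,}430-1}} \approx .045
\]

That is, 360 respondents from a population of 1,430 correspond to a margin of error of roughly ±4.5% at the 95% confidence level.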
Pilot testing is important to establish a survey’s validity in terms of the scores of the
instruments (Creswell & Creswell, 2017). The survey was pilot tested against 10 members of the
sample population. The pilot testing provided an initial evaluation of the internal consistency
with an opportunity to improve questions, format, and instructions (Creswell & Creswell, 2017).
In addition, follow-up member checking in the form of cognitive interviews with the pilot focus
group helped maximize validity and reliability (Robinson & Firth Leonard, 2018). The cognitive
interview helped to understand what the participants were thinking during the survey questions
to ensure I asked the right questions. The expert group provided open feedback in the interview
on the relevance of each question to the research questions. In addition, cognitive interviews can
address reliability issues by assessing consistency in the survey (Robinson & Firth Leonard,
2018; Salkind, 2017). I looked for questions expected to be answered in a certain way or
questions that confused the focus group. I assessed internal consistency by correlating each
individual score with the total score and interrater reliability by examining the percentage of
agreement between raters (Salkind, 2017) and examining Cronbach’s alphas. If the focus group
of experts was confused or inconsistent in responses, I reworked my questions.
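As an illustration of those checks, the sketch below computes Cronbach’s alpha and a simple percentage of interrater agreement; the data and rater codes are hypothetical and are not the study’s pilot results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def percent_agreement(rater_a, rater_b) -> float:
    """Interrater reliability as the simple percentage of agreement between two coders."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical pilot data: 10 focus-group members answering four 4-point Likert items.
pilot = np.array([
    [3, 4, 3, 4], [2, 3, 3, 3], [4, 4, 4, 3], [3, 3, 2, 3], [4, 3, 4, 4],
    [2, 2, 3, 2], [3, 4, 3, 3], [4, 4, 4, 4], [3, 3, 3, 2], [2, 3, 2, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
print(f"Percent agreement: {percent_agreement(list('KMOK'), list('KMKK')):.0%}")
```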
Ethics
The responsibility of the primary researcher is to ensure the study’s trustworthiness by
carrying out the study as ethically as possible (Merriam & Tisdell, 2016). The primary tools used
in this study were the relevant items from Patton’s (2015) ethical issues checklist, including (a)
an explanation of the purpose of the study and methods used, (b) ensuring reciprocity, (c)
avoiding unfillable promises, (d) a risk assessment of data gathered to the participant, (e)
ensuring confidentiality, (f) obtaining informed consent, and (g) explaining data assets and
ownership. The introduction to the survey and interview included an explanation of the study,
including the purpose and methods used. The reciprocity of the results from the study to the
participants and the homeland security enterprise was explained in the introduction of the survey
and interview. I avoided making promises of results during the interview. Pseudonyms for
participants and their organizations were used to reduce the risk to the participants. Informed
consent was obtained in the introduction to the survey and interview, as well as an explicit
request to record the interviews. Finally, the assets and ownership of the data were
explained in the introduction to the survey and interview. I am a peer of the stakeholder group of
focus. I have no formal influence over the participants and am not a supervisor or subordinate to
any of them.
Role of Investigator
My positionality as a senior leader in the fire service and emergency medical service in
the capital of California, a well-funded and affluent state, creates assumptions and bias toward
the questions I ask and what answers I expect. The structure of the questions may have omitted
impactful issues relevant to the study. In addition, my status as an alum may have influenced
participants to tell me what they thought I wanted to hear as opposed to candid answers. My best
strategy to avoid these pitfalls was to remind the participant that they are the expert, and I sought
their insight on how they see the problem. Humility and a form of appreciative inquiry help
overcome positions of power and bias (Whitney & Trosten-Bloom, 2010).
Limitations and Delimitations
Merriam and Tisdell (2016) explained that every study suffers from limitations.
Limitations include factors such as partial access or restricted resources. The goal of this study
was to conduct a gap analysis of risk assessments performed in the homeland security domain
using a quantitative survey and a qualitative interview instrument through the lens of Clark &
Estes's (2008) KMO framework. The design of the study and limited resources create three main
limitations. First, the study was limited to one graduate program. A focus on one program creates
a pointed study. Second, alumni’s time post-graduation varied, creating memory gaps for some.
Third, restrictions on the number of questions that can be asked and the time available for a
survey limit the depth and breadth of information elicited from the instrument.
In response, the survey questions were focused on specific influences related to the KMO
framework identified in the published literature to infer as much information as possible. In
addition, all survey responses were subject to the participants’ context and understanding of
the question. To ensure reliability and validity, survey questions were adapted from Pintrich’s
(1991) MSLQ tool.
Interviews carry many limitations surrounding the researcher’s subjective interpretations,
including selective memory and relevancy bias. To assist with objectivity, interviews were
recorded, transcribed, coded, and analyzed, along with copious notetaking. In addition,
interviewees might have provided answers they deemed acceptable, as opposed to candid and
accurate answers. To mitigate this effect, participants’ anonymity was ensured, coupled with an
intent of reciprocity that their honest answers would improve the homeland security domain in
general.
The study is bound to alumni of a master’s program. However, the methodological
application can create generalizable results that the entire homeland security enterprise can use.
use of Clark and Estes’s (2008) framework provided guidance for the educational system for
future leaders. Finally, the alumni are influential leaders across the nation, and engaging them in
a study of risk assessment will ensure the topic is pursued in other forms.
Chapter Four: Results and Findings
This study is a mixed-methods analysis of IHSR graduates using a sequential,
explanatory design to identify gaps in risk assessments. The survey and interview questions
derive from assumed influences indicated in the literature review. Three categories based on
KMO challenges organize the assumed influences. The collection of both quantitative and
qualitative data validated the assumed influences. Specifically, a survey followed by interview
data identified the KMO challenges homeland security professionals face when performing risk
assessments. The collection and analysis of data occurred in two consecutive phases, starting
with the quantitative survey. The results from the quantitative phase determined the participants and
focus of the qualitative interviews. The results from the qualitative interviews help explain,
validate, and understand the survey results.
Participating Stakeholders
The participating stakeholders are graduates of the IHSR program who are leaders in the
homeland security domain at various government levels. Of the roughly 1,400 graduates, 488
participated in the survey, and 135 agreed to interviews. I interviewed 10 homeland security
professionals of diverse backgrounds and experience. The study includes all interviewees who
completed the interview.
Of the survey respondents, 49% are local leaders, 27% are federal leaders, 14% are state
leaders, and 10% are county leaders. The participants have a mix of backgrounds, including the
fire service at 31%, law enforcement at 23%, emergency management at 16%, DHS at 11%,
emergency services at 11%, public health at 3%, and individuals from other relevant
backgrounds at 12%. Participants had a mix of experience, with an average of 25 years, a
maximum of 60 years, and a minimum of 2 years. Ninety-one percent of the
participants are actively engaged in risk assessments. Sixty-nine percent of the participants
completed a master’s degree, 11% completed a doctorate, and the remaining 20% currently have
an undergraduate degree. Figure 2 shows the number of survey respondents who engage in risk
assessments. Table 6 shows the levels of government at which participants operate.
Figure 2
Survey Participants Actively Engaged in Risk Assessments
Table 6
Survey Respondents’ Level of Government
Level of Government
Local 49%
Federal 27%
State 14%
County 10%
Table 7 shows the domain in which survey respondents operate by percentage. Table 8
shows their level of education.
Table 7
Survey Respondents’ Domain
Domain
Fire service 31%
Law enforcement 23%
Emergency management 16%
Department of Homeland Security 11%
Emergency services 11%
Public health 3%
Other 12%
Table 8
Survey Respondents’ Level of Education
Level of education
Doctorate 11%
Master’s degree 69%
Undergraduate degree 20%
Of the interviewees, 30% are local leaders, 60% are federal leaders, and 10% are state
leaders. The participants have a mix of backgrounds, including the fire service at 10%, law
enforcement at 20%, emergency management at 20%, DHS at 30%, and emergency services at
20%. Participants had a mix of experience, with an average number of years at 24, a maximum
number of years at 35, and a minimum of 10. All participants are actively engaged in risk
assessments. Ninety percent of them completed a master’s degree, none completed a doctorate,
and the remaining 10% currently have an undergraduate degree. Table 9 shows the level of
government at which the interviewees operate, by percentage.
Table 9
Interviewees’ Level of Government
Level of government
Local leaders 30%
Federal leaders 60%
State leaders 10%
Determination of Assets and Needs
The sources of data for the study are quantitative surveys and qualitative interviews. The
data sources support one another through a sequential explanatory process. Survey results helped
guide interview questions to elaborate on trends and outliers. Responses from the interviews
triangulate the survey results with the literature review. Based on the population size, the survey
offered a 99% confidence level with a 5% margin of error. Correct answers to commonly missed
questions from the survey results helped select the interviewees. Ten interviews provided a
saturation of agreement on trends identified in both the survey and interview questions.
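As a minimal sketch of that confidence claim, assuming the conventional margin-of-error formula with a finite population correction (z ≈ 2.58 at 99% confidence, p = .5):

\[
e = 2.58\sqrt{\frac{.25}{488}}\sqrt{\frac{1{,}400-488}{1{,}400-1}} \approx .047
\]

so 488 respondents from a population of roughly 1,400 graduates yield a margin of error just under ±5% at the 99% confidence level.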
The quantitative determination of an asset based on survey data is a cumulative average
of above 3.2 on the 4-point Likert scale. The determination of a need based on survey data is a
cumulative average of 3.2 or below on the 4-point Likert scale. The rationale for 3.2 as the pivot
point between a need and an asset is a 20% need gap. In other words, a need exists if 20% or
more of the participants indicate a deficit in the assumed influence. The survey contains four
questions with multiple correct answers and multiple incorrect answers. At least one correct
selection and the exclusion of all of the incorrect selections on the multi-select questions
determine an asset. The determination of a need is the selection of any of the incorrect selections.
Qualitative assets or needs are based on agreement among participants in the interviews.
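To make the cutoff concrete, 3.2 is the 4-point maximum reduced by the 20% gap, 4 × (1 − .20) = 3.2. The sketch below illustrates the classification logic described above; the influence scores and option sets are hypothetical, not the study’s data.

```python
LIKERT_MAX = 4.0
NEED_GAP = 0.20                            # a need exists if 20% or more indicate a deficit
THRESHOLD = LIKERT_MAX * (1 - NEED_GAP)    # 4 x (1 - .20) = 3.2

def classify_likert(mean_score: float) -> str:
    """Asset if the cumulative average exceeds 3.2; otherwise a need."""
    return "asset" if mean_score > THRESHOLD else "need"

def classify_multiselect(selected: set, correct: set, incorrect: set) -> str:
    """Asset: at least one correct option and no incorrect options were selected."""
    is_asset = bool(selected & correct) and not (selected & incorrect)
    return "asset" if is_asset else "need"

# Illustrative values only.
print(classify_likert(3.4))                                              # asset
print(classify_likert(3.1))                                              # need
print(classify_multiselect({"A"}, correct={"A", "B"}, incorrect={"C"}))  # asset
```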
Results and Findings for Knowledge Causes
Results and findings are reported using the knowledge categories and assumed influences
for each category. The knowledge categories used in the study are factual, conceptual, and
metacognitive. The assumed influences for factual knowledge are the definition of risk and the
components of a risk assessment. The assumed influences for conceptual knowledge are the
ability to discern CEs from IEs and the ability to interpret results from statistics and
probability. The assumed influences for metacognitive knowledge are the need to reflect on
biases affecting risk assessments. The influences are validated and triangulated using the
combination of the survey, interviews, and document analysis. The instruments are weighted by
percentage based on their depth, richness, validity, and relevance. Interview findings are
weighted at 60% due to their rich, deep, and specifically relevant data. Survey results are weighted at
20% due to their validity and relevance. Document analysis is weighted at 20% due to its
validity and relevance.
Factual Knowledge
Factual knowledge refers to terminologies, details, and basic elements of a domain (Clark
et al., 2008). The following section covers the major influences on stakeholders’ factual
knowledge discovered in the study through the survey and interviews. The major influences are
the definition of risk and the components of a risk assessment.
Influence 1: Stakeholders Need to Know the Definition of Risk
The definition of risk is critical to performing a risk assessment (Hubbard, 2020). The
following section covers the study’s findings and results concerning the definitions of risk used
by homeland security professionals. The outcomes are organized by survey results and interview
findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the factual knowledge of the definition of risk. The literature suggests multiple
definitions of risk ranging from objective measures of the degree of uncertainty to subjective
frameworks of numerous elements (DHS Risk Steering Committee, 2008; Hanea et al., 2021;
Hopkin, 2018; Hubbard, 2020; Kaplan & Garrick, 1981; Weichselgartner & Pigeon, 2015). The
survey questions aim to determine if homeland security professionals are consistently using the
correct definition of risk. Empirical evidence from the survey suggests homeland security
professionals define risk as a subjective framework consisting of threat, consequences, and
vulnerability.
When given the question, “Based on your experience, please pick the closest definition of
risk used by your organization,” of the 408 respondents, 59% selected “a combination of threat,
consequences, and vulnerability,” 14% selected “a combination of consequences and
probability,” 10% selected “the probability of a negative event.” Five percent selected “a
measure of expected loss,” 2% selected “a measure of uncertainty,” and 8% selected “My
organization uses no definition of risk.” With 73% of survey respondents using a subjective
framework that includes consequences and vulnerability, empirical evidence from the survey
suggests a dominant definition of risk in the homeland security domain. The results from the
survey are displayed in Table 10 and Figure 3.
Table 10
Survey Results of the Definition of Risk
The closest definition of risk used by your organization
My organization uses no definition of risk 8.91%
A measure of uncertainty 2.29%
A measure of expected loss 4.83%
The probability of a negative event 10.69%
A combination of consequences and probability 14.25%
A combination of threat, consequences, and vulnerability 59.03%
Figure 3
Survey Results of the Definition of Risk
Interview Findings. Qualitative findings from the interviews indicate homeland security
professionals tend to use a consistent definition of risk. Ninety percent of the respondents mentioned
a rule of triplets involving threats, consequences, and vulnerabilities (TCV). BJ stated, “That
whole realm of how we decide how much risk we’re going to take, you know, threat, hazard,
vulnerability, all those concepts … are new.” Similarly, Anais commented, “Risk is the Delta, …
an equation of consequence vulnerability and threat.” These comments aligned with previous
research that risk can be a set of subjective triplets involving three factors (Hubbard, 2020).
Additionally, risk is formally defined in the realm of homeland security as TCV. In discussing
risk, Carrie noted that “the whole threat vulnerability and consequence kind of mantra around
thinking about, this is how we would characterize risk, but that’s broadly doctrinally how DHS
thinks about risk.” Collectively, the declarative knowledge of risk started with TCV. Although
TCV is one standard definition of risk, confusion arises with other meanings.
Qualitative findings from the interviews indicate homeland security professionals tend to
use additional definitions of risks and indicate a need. All interviewees mentioned an additional
definition of risk beyond the DHS doctrinal TCV. When simply talking about risk, Anais stated,
“and so the definition of risk is the thing that will ruin your day.” Carrie reinforced Anais’s blunt
statement concerning risk by saying, “The definition of risk is just something bad might happen.”
In a more complex account, BJ described risk as “a measure of the degree or a measure of the
probability of a loss to life or things that would lead to a potential … damage or loss to property
or life.” Nicole added another human dimension of risk by saying, “risk is driven by perception,
ultimately. What I perceive as a risk or understand as a risk is maybe radically different and
maybe an opportunity for somebody else.” These candid definitions of risk are also supported in
the literature (Hanea et al., 2021; Hubbard, 2020). The multiple variations of risk expand beyond
the TCV framework to incorporate all bad things, probability, and perception. Tobias summed
the definition of risk up as “risks are the bad things that could happen that you are able to
bound.” Collectively, all the participants had more than one definition of risk.
Summary. The quantitative results indicate an inconsistent definition of risk, indicating a
need. The dominant definition of risk used in homeland security is a mix of TCV. This outcome
is supported by less than 60% of the responses. The qualitative findings explain homeland
security professionals use the framework of TCV to subjectively analyze risk. The quotes from
the interviews support this conclusion. The literature supports a definition of risk as a rule of
triplets involving TCV (DHS Risk Steering Committee, 2008).
The quantitative correlation with other results indicates TCV is also a common set of
components used in risk assessments. The qualitative alignment with additional findings also
shows TCV is a common set of components used to assess risk. However, the qualitative finding
also explains that other definitions of risk are used, ranging from layman’s descriptions to
advanced probabilistic processes. The quotes indicate these findings by defining risk from a
range of something bad might happen to a measure of the probability. Inferences from the results
indicate risk is still tough to define consistently. In conclusion, risk as an objective metric or
subjective framework must be defined before applying any subsequent methodology to a risk
assessment.
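For reference, two common formulations from the cited literature are shown below as background rather than as the study’s own instrument: the doctrinal homeland security framing treats risk as a function of threat, vulnerability, and consequence, often operationalized multiplicatively, while Kaplan and Garrick’s (1981) rule of triplets defines risk as a set of scenario, likelihood, and consequence triplets.

\[
R = f(T, V, C) \approx T \times V \times C,
\qquad
R = \{\langle s_i, p_i, x_i \rangle\},\ i = 1, \ldots, N
\]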
Influence 2: Stakeholders Need to Know the Components of a Risk Assessment
The components of a risk assessment are critical to performing a risk assessment. The
following section covers the study’s findings and results concerning the components of a risk
assessment used by homeland security professionals. The outcomes are organized by survey
results and interview findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the factual knowledge of the common components used in a risk assessment. The
literature suggests many common components used in risk assessments, ranging from objective
probability to expert opinions (Aven et al., 2018; Hanea et al., 2021; Lewis, 2019). The survey
questions aimed to determine which, if any, common components of risk assessments are used
by homeland security professionals. Empirical evidence from the survey suggests homeland
security professionals lack common components of a risk assessment.
When given the question, “Please select the common components used by your
organization to assess risk,” of the 408 respondents, 34% selected a mix of “Threats,
Consequences, and Vulnerabilities,” 20% selected “Subjective probability – a number calculated
from human estimates,” 16% selected “The past, current, and future state of the system
(timing),” 10% selected “Objective probability – a number calculated from a mathematical
model,” 8% selected “Dollar amount calculated by an economic impact model (objective
consequences),” 6% selected “Panel of experts with training in statistics and probability
(calibrated expert elicitation),” and 4% selected “My organization uses no components to assess
risk.” With no single result equal to or greater than 50%, empirical evidence from the survey
suggests no consistent set of risk assessment components is used across the homeland security domain. The
results from the survey are displayed in Table 11 and Figure 4.
Table 11
Survey Results From the Common Components of a Risk Assessment
The common components used by your organization to assess risk
A mix of threats, consequences, and vulnerabilities 34%
Subjective probability: a number calculated from human estimates 21%
The past, current, and future state of the system (timing) 16%
Objective probability: a number calculated from a mathematical model 10%
Dollar amount calculated by an economic impact model (objective consequences) 8%
Panel of experts with training in statistics and probability 7%
My organization uses no components to assess risk 4%
Figure 4
Components of a Risk Assessment: Survey Results
Interview Findings. Homeland security professionals have many different components
of a risk assessment. All interviewees described different elements of risk assessment. Carrie
stated there is no consensus on the components of a risk assessment: “I will say I think there’s a lot
of different views on how exactly that should work. So, we have a lot of different tools and a lot
of different approaches for quantifying each of those components of risk.” BJ described real-time
tools such as “various mnemonics or acronyms that are used to assist in size up and in
prioritizing tasks that are performed. And that needs to be done with risk assessment, continually
ongoing.” Anais described exercises as a way to tease out risk components: “To have that
structured conversation in the form of … a tabletop exercise with the decision makers tends to …
turn on some light bulbs.” When describing the differences in each agency’s use of risk
components, Richard explained that “mission essential functions and what risks there are in
terms of geography determine an agencies tools used in risk.” The diversity of risk components
described in the interviews aligns with the recent literature outlining the elements of a risk
assessment (Hubbard, 2020; Lewis, 2019). Inconsistent definitions and features used in risk
assessments create the inability to validate or reproduce risk assessments, leaving them as best
guesses at the time. Tobias reinforced the gap in risk assessments by stating, “I would say each
organization has a different set of risk constructs.” Collectively, all the interviews produced
different components of risk assessments.
Summary. The quantitative results indicate the most common components homeland
security professionals use are a mix of TCV. This outcome is supported by 34% of the responses.
The qualitative findings explain homeland security professionals use a number of diverse
components, ranging from mnemonics to tabletop exercises when building a risk assessment.
The quotes from the interviews support this conclusion. The literature supports various
components, from expert elicitation to probabilistic calculations (Colson & Cooke, 2020; Lewis,
2019). The quantitative correlation with other results indicates a risk assessment’s components
are diverse based on the defined methodology. The qualitative alignment with additional findings
shows a risk assessment’s components depend on the definition of risk. The inferences are that
risk components are ill-defined, and there is a pervasive gap in factual knowledge. The
quantitative and qualitative data suggest a need for factual knowledge. The quantitative and
qualitative findings triangulate with document analysis.
Conceptual Knowledge
Anderson and Krathwohl (2001) defined conceptual knowledge as categories,
classifications, principles, generalizations, theories, or models. The following section covers the
major influences on stakeholders’ conceptual knowledge discovered in the study through the
survey and interviews. The major influences are discerning CEs from IEs and interpreting results
from statistics and probability.
Influence 1: Stakeholders Need to Be Able to Discern Connected Events From Independent
Events
The ability to discern CEs from IEs is critical to homeland security professionals. The
following section covers the study’s findings and results concerning the ability to discern CEs
from IEs by homeland security professionals. The outcomes are organized by survey results and
interview findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the conceptual knowledge of CEs that tend to produce power laws versus IEs that tend
to produce bell curves. The literature suggests people frequently confuse or mis-categorize
events that produce power law with events that produce bell curves, resulting in poor risk
assessments (Lewis, 2014a; Taleb, 2007). The survey questions aim to determine if homeland
security professionals can discern CEs from IEs. Empirical evidence from the survey suggests
homeland security professionals cannot discern CEs from IEs and indicates a significant need.
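As an illustrative sketch only, and not part of the study’s instrumentation, the distinction can be made concrete by sampling from a bell-curve process and a heavy-tailed, power-law process and comparing how much of the total outcome the single largest event represents; the distributions and parameters below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# Independent events: outcomes cluster around the mean and follow a bell curve.
independent = rng.normal(loc=100, scale=15, size=n)

# Connected events: cascades produce a heavy right tail approximating a power law.
connected = (rng.pareto(a=1.5, size=n) + 1) * 100

for label, sample in (("independent (bell curve)", independent),
                      ("connected (power law)", connected)):
    largest_share = sample.max() / sample.sum()
    print(f"{label}: mean = {sample.mean():,.0f}, max = {sample.max():,.0f}, "
          f"largest single event = {largest_share:.2%} of the total")
# For the bell curve the maximum stays near the mean; for the power law a single
# event can account for a large share of the total, which bell-curve methods miss.
```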
When given the question, “Based on your experience, mark the events below that best
represents your understanding of independent events or disasters that tend to follow a bell curve
(Pick all that apply from a list),” of the 408 respondents, 3% selected at least one independent
event while not including a connected event. Eighty-six percent included at least one connected
event in their mix of IEs. Eleven percent did not understand the question. With 96% of survey
respondents either selecting an event type the literature classifies as incorrect or not
understanding the question, empirical evidence from the survey suggests a significant gap in
conceptual knowledge of IEs, as displayed in Table 12.
Table 12
Survey Result of Discerning Connected Events From Independent Events
Selecting only independent events
Correctly discerned CEs from IEs 3%
Confused CEs and IEs 86%
Did not understand the question 11%
When given the question, “Based on your experience, mark the events below that best
represents your understanding of CEs or disasters that can produce a power law (Pick all that
apply from a list),” of the 408 respondents, 6% selected at least one connected event while not
including an independent event. Seventy-three percent included at least one independent event
in their mix of CEs. Lastly, 21% did not understand the question. With 94% of survey
respondents either selecting an event type the literature classifies as incorrect or not
understanding the question, empirical evidence from the survey suggests a significant gap in
conceptual knowledge of CEs, as displayed in Table 13.
Table 13
Survey Result of Discerning Connected Events From Independent Events
Selecting only connected events
Correctly discerned CEs from independent events 6%
Confused CEs and independent events 73%
Did not understand the question 21%
Interview Findings. Homeland security professionals struggle with discerning CEs from
IEs. This influence is a need. Ninety percent of the interviewees confirmed that CEs, which
produce non-linear results, are often confused with and classified as IEs that tend to
follow a bell curve. When discussing the challenges of discerning CEs from IEs, Anais stated, “I
think it’s been a challenge for all of us because the more we are interconnected and the more the
world changes, the more emergent risks are coming up.” Rocky clarified the gap when
describing the process of considering CEs from IEs during risk assessments: “Critical
infrastructure protection is such an awesome, … big data problem. Like if we started digging
down into how are things connected, and we don’t; like you might think we do; I’m telling you
we don’t.” These statements align with previous research that CEs are increasingly prevalent and
frequently overlooked (Haimes, 2009; Hubbard, 2020; Kaplan & Garrick, 1981). As
globalization increases, accounting for catastrophic cascading failures becomes essential to risk
assessments; however, the difference between power laws and bell curves is still not fully
understood. Masami clarified the challenge when discussing the traditional views of risk
assessments:
This is one of those things where so many people that do critical infrastructure protection
came from the military, and all they can think is gates, guards, and guns. Like, oh, we’ll
just put more guards, or we need bigger guns or whatever. But the critical infrastructure
is an organism; it’s all interconnected.
Collectively, the gap in discerning CEs that can self-replicate and spread across a network from IEs
bound by regression to the mean is pervasive.
Summary. The quantitative results indicate homeland security professionals cannot
discern CEs from IEs. The results and findings validate this influence as a need. This outcome is
supported by 97% of the responses. Of the 488 respondents, only 10 consistently discerned CEs
from IEs. The qualitative findings explain that homeland security professionals are trained to
assess IEs but face more emergent events every day. The quotes from the interviews support this
conclusion. The literature supports an increase in emergent and CEs contrasted with classical
methodologies of risk assessments designed for IEs (Hubbard, 2020; Lewis, 2014a; Taleb, 2007).
The quantitative correlation with other results indicates many definitions of risk, some
frameworks reliant on experts to understand the complexity of CEs, and some reliant on
probabilistic formulas that require events to behave independently and randomly. However, none
of the definitions of risk advise the user on what type of event the methodology is designed for.
It is no wonder, given the number of choices for definitions of risk and components to plug into a
risk assessment, that users need clarification on the type of events they are analyzing. The
qualitative alignment with additional findings shows that many professionals use the vocabulary of
risk assessments but need help understanding factually or conceptually what the terms mean. This is
cross-supported in the quantitative data, with 21% of the respondents admitting they do not
understand the question. These findings are clarified and explained by the 3% of the respondents
who did discern CEs from IEs. When interviewed, the respondents explained that most
professionals only think in bell curves and do not understand the difference a power law can
make in a risk assessment.
Inferences from the results indicate that when facing a connected event that will likely
produce a power law, such as the 2008 financial crisis, professionals will likely categorize the
event as independent, using a methodology based on a bell curve. This conclusion exposes a
need and gap in conceptual knowledge that will affect the remaining findings of the study. The
quantitative and qualitative data suggest a need pertaining to conceptual knowledge. The
quantitative and qualitative findings triangulate with the literature.
Influence 2: Stakeholders Need to Interpret Results From Statistics and Probability
The ability to interpret results from statistics and probability is critical to homeland
security professionals. The following section covers the study’s findings and results concerning
the ability to interpret results from statistics and probability used by homeland security
professionals. The outcomes are organized by survey results and interview findings and
summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the conceptual knowledge of the statistics and probability used in a risk assessment.
The literature suggests a common misunderstanding and application of the domains of statistics
and probability, leading to challenges in interpreting their results (Hubbard, 2020; Taleb, 2007).
Empirical evidence from the survey suggests homeland security professionals have a significant
gap in conceptual knowledge of statistics and probability and indicate a need.
When given a 4-point Likert scale item, “Statistics is a useful predictive tool to better
understand the future,” of the 378 respondents, 29% selected they strongly agree, 63% selected
they agree, 6% selected they disagree, and 2% selected they strongly disagree (Figure 5). The
literature commonly describes the uses of statistics as an explanatory tool used to organize,
analyze, and interpret large sets of historical data (Hubbard, 2020; Salkind, 2017; Taleb, 2007).
Statistics is not a predictive tool. With 92% of survey respondents agreeing with a false
statement, evidence suggests a significant gap in conceptual knowledge of statistics.
Figure 5
Survey Result: Statistics is a Useful Predictive Tool to Better Understand the Future
When given a 4-point Likert scale item, “Probability is a useful explanatory tool used to
better understand large sets of data,” of the 380 respondents, 27% selected they strongly agree,
65% selected they agree, 6% selected they disagree, and 1% selected they strongly disagree
(Figure 6). The literature commonly describes the use of probability as a predictive tool used to
better understand the future. With 93% of survey respondents agreeing with a false statement,
evidence suggests a significant gap in conceptual knowledge of probability.
Figure 6
Survey Results: Probability is a Useful Explanatory Tool
Interview Findings. Practitioners in the homeland security domain are challenged with
discerning statistics from probability. This influence is a need. All interviewees agreed that
homeland security professionals struggle conceptually with probability. When asked to explain
the survey results, Carrie stated, “Ninety percent of us have no clue what probability is or how
not to use it.” Clarifying the topic, Nicole commented, “I think like, so typically, we talk about
probability as likelihood is how I hear it phrased. But when it’s being played, it turns into like the
intent and the capability rather than any real probability.” Rocky added, “I think, well, first of all,
I’m not sure we do exactly use probability when thinking about threat. We use the word and the
statistics, but those are not necessarily probabilistic.” When discussing the struggle of confusing
statistics and probability, Richard stated, “During COVID, it was just impossible with some of
the statistics they were throwing at us … determining the risk; but they’re [statistics] are not
predictive, they’re only reactive.” These comments reflect the findings in the literature that,
conceptually, statistical trends get confused for a probabilistic outcome (Hubbard, 2020;
National Commission on Terrorist Attacks Upon the United States, 2004; Taleb, 2007).
Confusing the outputs and conceptual limitations of probabilities and statistics will lead to
flawed risk assessments. When describing the dangers of confusing statistical trends for
probability in risk assessments, Tobias stated,
It’s a process that’s very bounded by assumptions and prior knowledge. I would not
describe it as a robust process. I think it is a very efficient process, but it does not really
incorporate a lot of the realities of the real world.
Finally, Masami clarified the dangerous thinking derived from the idea that past outcomes from
statistics predict future probabilities by stating, “But ultimately, when you really get down to it
from a philosophical … kind of perspective, the future is not knowable from examining at the
past. Full stop. Period. End of discussion.” Collectively, the lack of conceptual knowledge of
statistics and probability is pervasive in the homeland security domain.
Summary. The quantitative results indicate homeland security professionals cannot
interpret results from statistics or probability. The results and findings validate this influence as a
need. This outcome is supported by 92% of the responses to statistics and 93% of the responses
to probability. The qualitative findings explain that homeland security professionals conflate the
two, confusing statistical trends for probabilistic outcomes. The quotes from the interviews
support this conclusion. The literature supports that most people conflate historical statistical
trends with probabilistic futures. The quantitative correlation with other
results indicates confidence in understanding theories; however, the conceptual knowledge
assessment suggests a state of not knowing what they do not know. The qualitative alignment
with additional findings shows that many professionals are overloaded with the amount of data
produced from automated metrics and start to miss the forest for the trees. This is cross-
supported in the literature describing NAT (Lewis, 2019), where the human is easily
overwhelmed with layers of complicated data.
These findings are further explained by the 3% and 6% of the respondents who did
discern CEs from IEs. When interviewed, the respondents explained that most professionals only
think in bell curves and do not understand the difference a power law can make in a risk
assessment. Inferences from the results indicate that when presented with statistical trends, many
users may interpret them as probabilistic outcomes, creating blind spots to very likely scenarios.
The challenge with statistical trends is they only contain what has happened in the known past
and, by definition, exclude novel events. Using statistics as a predictive tool in a complex
environment leaves one blind to catastrophic events that are very likely to emerge simply due to
the state of the system (Taleb, 2007). When risk assessments calculate the probability of an event
derived from a statistical data set, all they are indicating is the probability that one will find the
event in the data. That is not the same as the probability that the event will occur in the future.
This conclusion of the qualitative and quantitative study exposed a significant need and gap in
conceptual knowledge that affects the study’s remaining findings. The quantitative and
qualitative data suggest a significant need regarding conceptual knowledge. The quantitative and
qualitative findings triangulate with the literature.
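A minimal numerical sketch of that last point, using assumed parameters rather than any data from the study: an event can be absent from a historical record and still be quite likely over a comparable future horizon.

```python
# Assumed parameters for illustration only; not drawn from the study's data.
p_per_year = 0.05            # true annual probability of a rare, novel event
history_years = 10

# "Statistics": the frequency of the event observed in a 10-year historical record.
# A 0.05-per-year event is absent from such a record about 60% of the time,
# so the observed historical rate is often exactly zero.
p_absent_from_history = (1 - p_per_year) ** history_years        # ~0.60

# "Probability": the chance the event occurs at least once in the next 10 years.
p_at_least_once_future = 1 - (1 - p_per_year) ** history_years   # ~0.40

print(f"chance the event never appears in the historical record: {p_absent_from_history:.0%}")
print(f"chance the event occurs at least once in the next decade: {p_at_least_once_future:.0%}")
```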
Metacognitive Knowledge
Metacognitive knowledge is the awareness of one’s cognitive processes and a better
understanding of when and why people take action (Rueda, 2011). The following section covers
the major influences on stakeholders’ metacognitive knowledge discovered in the study through
the survey and interviews. The major influence is the need to reflect on bias influencing risk
assessments.
Influence 1: Stakeholders Need to Reflect on Biases Influencing Risk Assessments
The ability to reflect on bias influencing risk assessments is critical to homeland security
professionals. The following section covers the study’s findings and results concerning the
ability to reflect on bias influencing risk assessments used by homeland security professionals.
The outcomes are organized by survey results and interview findings and summarized in tables
for each.
Survey Results. When given a 4-point Likert scale item, “I often slow down and
question conclusions about risk to decide if the process used is valid,” of the 378 respondents,
33% selected they strongly agree, 55% selected they agree, 11% selected they disagree, and 1%
selected they strongly disagree. With 88% of survey respondents agreeing they slow down and
question conclusions, evidence suggests an asset in metacognitive knowledge, as shown in
Figure 7.
Figure 7
Metacognitive Knowledge
When given a 4-point Likert scale item, “When a theory or model is presented to describe
a risk assessment, I slow down to decide if theory or model fits the environment,” of the 380
respondents, 31% selected they strongly agree, 59% selected they agree, 9% selected they
disagree, and 1% selected they strongly disagree (Figure 8). With 90% of survey respondents
agreeing they slow down and question if the theory or model fits the environment, empirical
evidence suggests an asset in metacognitive knowledge.
Figure 8
Metacognitive Knowledge
When given a 4-point Likert scale item, “I treat someone else’s material as a starting
point and try to develop my own ideas about it,” of the 379 respondents, 21% selected they
strongly agree, 70% selected they agree, 9% selected they disagree, and none selected they
strongly disagree (Figure 9). With 91% of survey respondents agreeing they slow down and try
to develop their ideas, evidence suggests an asset in metacognitive knowledge.
Figure 9
Metacognitive Knowledge
When given a 4-point Likert scale item, “I try to play around with ideas of my own when
thinking about how to assess risk,” of the 378 respondents, 22% selected they strongly agree,
61% selected they agree, 16% selected they disagree, and 1% selected they strongly disagree
(Figure 10). With 83% of survey respondents agreeing they develop their own ideas about risk,
evidence suggests an asset in metacognitive knowledge.
Figure 10
Metacognitive Knowledge
When given a 4-point Likert scale item, “Whenever I read or hear an assertion or
conclusion on risk assessments, I slow down to think about possible alternatives to the
conclusion,” of the 379 respondents, 31% selected they strongly agree, 61% selected they agree,
7% selected they disagree, and 1% selected they strongly disagree (Figure 11). With 92% of
survey respondents agreeing they slow down to think about possible alternatives, evidence
suggests an asset in metacognitive knowledge.
Figure 11
Metacognitive Knowledge
The summary statistics from the metacognitive knowledge domain produced a mean
score of 3.18 out of a possible high of 4, with a standard deviation of .42 and a standard error of
the mean of .02. The results suggest a high degree of confidence that the true mean falls between
3.14 and 3.24, as shown in Table 14 and Figure 12. The evidence from the survey suggests homeland
security professionals have a need for metacognitive knowledge, as described in Table 14 and
Figure 12.
Table 14
Metacognitive Knowledge
Summary statistics of metacognitive knowledge
Mean 3.18
Std dev 0.42
Std Err mean 0.02
Upper 95% mean 3.24
Lower 95% mean 3.14
N 362
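For reference, the standard error reported in Table 14 follows from the usual formula; plugging in the reported values gives

\[
SE = \frac{s}{\sqrt{N}} = \frac{.42}{\sqrt{362}} \approx .02, \qquad
95\%\ \text{CI} = \bar{x} \pm t_{.975,\,N-1}\,SE
\]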
Figure 12
Summary Statistics of Metacognitive Knowledge
Interview Findings. Internal biases are a continual challenge for homeland security
professionals. This influence is a need. All interviewees acknowledged human bias impacts risk
assessments. When describing the challenge of working through the internal bias in a risk
assessment, Anais stated, “You can talk about complex risk concepts, but if they’re not in a
headspace to make the changes to mitigate it, you’re really kind of wasting your time. And so,
you back the truck up a little bit.” BJ acknowledged personal experience weighs heavily on the
perception of risk: “I think it’s acknowledged now that we’re using recognition-primed decision-
making, and it may be fallible because of biases. … Recency bias is certainly one, but just you’re
operating on your own personal experience.” Masami explained that bounded rationality leaves
us blind to risks we have not seen by stating that perceived risk is based on life experience: “only
able to bucket it in such a way that makes them feel all right, but they may be missing …
because they’re only able to kind of comprehend what their own biases can intake.” These
statements support the literature on how inherent bias affects risk assessments (Fischhoff et al.,
1977; Fischhoff, 2021; Slovic, 2020). The bounded rationality of experience shaping internal
bias may significantly impact risk assessment more than methodology. Richard explained,
“Partly there is a presumption of I know this, I’ve done this, and … not a willingness to reach out
and see what’s changed and see what’s new and see how things may have evolved in a certain
respect.” Collectively, all participants confirmed the need to overcome inherent bias in risk
assessments.
Summary. The quantitative results indicate homeland security professionals have a gap
in metacognitive knowledge. The results and findings validate this influence as a need. This
outcome is supported by 17% of the respondents agreeing that they need to slow down to think
about possible alternatives to conclusions. The qualitative findings explain that many
professionals are bound by their biases. The quotes from the interviews support this conclusion.
The literature supports that inherent biases and bounded rationality drive humans. The quantitative
correlation with other results indicates organizations enforce cultural modeling that supports
biases. The qualitative alignment with additional findings explains that some hierarchies have a
solid connection to paternalistic authority that also supports biases. The inference is that homeland
security professionals agree with metacognitive knowledge processes when taking a survey but
have significant gaps in conceptual knowledge. These conceptual knowledge gaps may explain
the interview findings indicating the need to overcome inherent biases during risk assessments.
The quantitative and qualitative data suggest a need for metacognitive knowledge. The
quantitative and qualitative findings triangulate with the literature.
Results and Findings for Motivation Causes
Results and findings are reported using the motivational categories and assumed
influences for each category. The motivational categories used in the study are value and
attribution. The assumed influence for motivational value is that stakeholders need to consider
expert judgment useful. The assumed influence for motivational attribution is that stakeholders
must believe they can adapt to complex environments. The influences are validated and
triangulated using the combination of the survey, interviews, and document analysis. The outcomes
of each instrument are weighted by depth, richness, validity, and relevance. Interview findings are
weighted at 60% due to their rich, deep, and specifically relevant data. Survey results are weighted at
20% due to their validity and relevance. Document analysis is weighted at 20% due to its validity
and relevance.
Value
Motivational value refers to the importance one assigns to tasks (Rueda, 2011). The
following section covers the major influences on stakeholders’ motivational value discovered in
the study through the survey and interviews. The major influences are the need to consider expert
judgment useful and the belief that stakeholders can adapt to complex environments.
Influence 1: Stakeholders Need to Consider Expert Judgment Useful
The need to consider expert judgment useful is critical to homeland security
professionals. The following section covers the findings and the study’s results concerning if
homeland security professionals consider expert judgment useful. The outcomes are organized
by survey results and interview findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the motivational value of expert judgment to homeland security professionals. The
literature suggests risk assessments must include expert elicitation and judgment (Pauleen, 2017;
Woo, 2021). The survey questions aim to determine if homeland security professionals value
expert judgment for themselves. The survey’s empirical evidence suggests that homeland
security professionals value expert judgment, indicating an asset.
When given a 4-point Likert scale item, “Experts from other domains have valuable
insights to teach me on how to think about risk,” of the 370 respondents, 66% selected they
strongly agree, 33% selected they agree, 1% selected they disagree, and none selected they
strongly disagree. With 99% of survey respondents agreeing they value insights from other
domains, evidence suggests an asset in motivational value, as indicated in Figure 13.
Figure 13
Motivation Value
When given a 4-point Likert scale item, “When faced with novel events, the viewpoint of
multiple experts is important to help assess risk,” of the 369 respondents, 56% selected they
strongly agree, 39% selected they agree, 4% selected they disagree, and none selected they
strongly disagree. With 96% of survey respondents agreeing they value the viewpoints of
multiple experts and 4% disagreeing, the evidence suggests an asset, as indicated in Figure 14.
Figure 14
Motivation Value
When given a 4-point Likert scale item, “I value human judgments over mathematical
models in risk assessments,” of the 368 respondents, 12% selected they strongly agree, 54%
selected they agree, 33% selected they disagree, and 1% selected they strongly disagree. With
68% of survey respondents agreeing they value human judgment over mathematical models and 33%
disagreeing, evidence suggests a need regarding motivational value, as indicated in Figure 15.
Figure 15
Motivation Value
When given a 4-point Likert scale item, “Risk assessments based on human judgment and
experience is important to me,” of the 368 respondents, 25% selected they strongly agree, 68%
selected they agree, 6% selected they disagree, and 1% selected they strongly disagree. With
94% of survey respondents agreeing they value human judgment and experience and 6%
disagreeing, evidence suggests an asset in motivational value, as indicated in Figure 16.
Figure 16
Motivation Value
The summary statistics from the motivational value domain produced a mean score of 3.3
out of a possible high of 4, with a standard deviation of .39 and a standard error mean of .02. The
results suggest a high degree of confidence that the score will fall between 3.25 and 3.33, as shown
in Table 15 and Figure 17. The evidence from the survey suggests homeland security
professionals place a high value on expert knowledge, as described in Table 15 and Figure 17.
Table 15
Summary of Stakeholders' Value of Expert Judgements
Summary statistics of motivational value
Mean 3.29
Std dev 0.39
Std err mean 0.02
Upper 95% mean 3.33
Lower 95% mean 3.25
N 370
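For readers less familiar with these statistics, the interval in Table 15 follows directly from the reported mean, standard deviation, and sample size; a brief worked check using the table's values (1.96 is the standard two-sided 95% critical value) is shown below.

\[
SE = \frac{s}{\sqrt{N}} = \frac{0.39}{\sqrt{370}} \approx 0.02, \qquad
\bar{x} \pm 1.96 \times SE = 3.29 \pm 0.04 \approx [3.25,\, 3.33]
\]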
Figure 17
Summary of Stakeholders' Value of Expert Judgements
Interview Findings. The evidence from the interviews suggests that stakeholders tend to
value the judgment of experts. This influence is an asset. Eighty percent of the respondents
acknowledged the value of expert opinions over mathematical models. Anais summed it up
quickly when discussing risk assessments in her organization, “It’s probably 75% to 25% the
expert and less on math; the math is usually useless.” Carrie supported the position by describing
how her agency uses expert opinions by saying, “There’s been a much more of a focus on expert
solicitation and kind of degrees of confidence in different outcomes.” Finally, Masami added that
experts outside of the normal domain should also be included: “We need to bring in the actual
experts. Maybe not the person that you would normally ask? But the person that actually knows
it. Bring them in, and we’re going to pivot off of what they say.” These statements support the
importance of expert elicitation described in the literature on risk assessments. Many events in a
tightly coupled society are emergent and beyond a mathematical model’s ability to see. When
talking about emergent events, Richard talked about the limits of traditional metrics that must be
balanced by experts: “I don’t believe metrics are the answer to everything. And they can throw
some numbers at me all day, but it still didn’t see 9-11.” Collectively, the participants valued
expert judgment.
Summary. The quantitative results indicate homeland security professionals value the
judgment of experts. The results and findings validate this influence as an asset. This outcome is
supported by 96% of the responses valuing expert input when facing novel events. However,
when asked about valuing experts over mathematical models, 68% agreed, and 33% disagreed.
The qualitative findings explain that homeland security professionals rely on human judgment
most of the time in the form of expert elicitation. The quotes from the interviews support this
conclusion.
The literature supports the common use of expert elicitation; however, selecting and
calibrating experts is often spurious (Hubbard, 2020). The quantitative correlation with other
results indicates that most homeland security professionals are unfamiliar with
statistical modeling and therefore rely frequently on expert judgment. The qualitative alignment with
additional findings shows that mathematical models are not valuable for assessing novel events,
so expert opinions are the best option. Inferences from the results indicate expert judgments are a
valuable tool for risk assessments. The mathematical models are rigorous, available, and easy to
access, but their underlying processes are transparent to only some users. In addition, the
mathematical models are bound by historical events and therefore miss novel and emergent
events produced by complexity. This blindness to novel events draws criticism toward
mathematical, quantitative methods in the literature. However, mathematical models appear appropriate in
simple and complicated environments, while human experts are better suited to complex
or chaotic environments.
In conclusion, the homeland security professional needs to identify the type of
environment before choosing the risk assessment methodology. Independent events producing
bell curves may be best assessed with mathematical models, while CEs creating power laws may
be best assessed with expert judgment. The confusion between the two methods and
environments leads to blind catastrophes. The quantitative and qualitative data suggest an asset
in motivational value.
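To make the contrast between bell-curve and power-law environments concrete, the short illustration below, which is not part of the study's instruments and uses parameters chosen only for demonstration, compares the tail probabilities of a standard normal distribution with those of a Pareto power law.

import math

def normal_tail(k):
    """P(Z > k) for a standard normal variable: the thin, bell-curve tail."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto (power-law) variable with tail index alpha."""
    return (x_min / x) ** alpha if x > x_min else 1.0

for k in (2, 5, 10):
    print(f"{k} units out: normal tail = {normal_tail(k):.1e}, "
          f"power-law tail = {power_law_tail(k):.1e}")
# The normal tail collapses toward zero while the power-law tail does not, which is
# why bell-curve models can drastically understate extreme, emergent events.

Under these illustrative parameters, an event 10 units out is effectively impossible under the normal model yet retains roughly a 1% probability under the power law, which is precisely the blindness described above.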
Attribution
Attributions include the reasons for completing tasks and a personal belief in control
(Weiner, 2005). The following section covers the major influences on stakeholders’ motivational
attributions discovered in the study through the survey and interviews. The major influence is
that stakeholders must believe they can adapt to complex environments.
Influence 1: Stakeholders Must Believe They Can Adapt to Complex Environments
The need to believe they can adapt to complex environments is critical to homeland
security professionals. The following section covers the findings and the study’s results
concerning if stakeholders believe they can adapt to complex environments. The outcomes are
organized by survey results and interview findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the motivational attribution of adapting to a complex environment of homeland
security professionals. The literature suggests the environment of homeland security
professionals is increasing in complexity (Cirillo & Taleb, 2020; Lewis, 2014a). The survey
questions aim to determine if homeland security professionals believe they can adjust to a
complex environment. The survey’s empirical evidence suggests that homeland security
professionals believe they can adapt to a complex environment.
When given a 4-point Likert scale item, “I am able to learn new theories quickly,” of the
370 respondents, 18% selected they strongly agree, 69% selected they agree, 13% selected they
disagree, and 1% selected they strongly disagree. With 86% of survey respondents agreeing they
learn new theories quickly, evidence suggests an asset in motivational attribution, as indicated in
Figure 18.
Figure 18
Motivation Attribution
When given a 4-point Likert scale item, “The amount of my effort I invest into new
concepts greatly impacts my outcomes,” of the 369 respondents, 25% selected they strongly
agree, 61% selected they agree, 14% selected they disagree, and none selected they strongly
disagree. With 86% of survey respondents agreeing that the amount of effort invested impacts
outcomes, evidence suggests an asset in motivational attribution, as indicated in Figure 19.
Figure 19
Motivation Attribution
When given a 4-point Likert scale item, “Even after I fail, I believe I can learn new tools
and techniques to function in complex environments,” of the 370 respondents, 48% selected they
strongly agree, 51% selected they agree, 1% selected they disagree, and none selected they
strongly disagree. With 99% of survey respondents agreeing they can learn new tools and
techniques for complex environments, evidence suggests an asset in motivational attribution, as
indicated in Figure 20.
Figure 20
Motivation Attribution
When given a 4-point Likert scale item, “Given enough time and effort, I can adapt to
most situations,” of the 369 respondents, 54% selected they strongly agree, 45% selected they
agree, 1% selected they disagree, and 1% selected they strongly disagree. With 98% of survey
respondents agreeing they can adapt to most situations, evidence suggests an asset in
motivational attribution, as indicated in Figure 21.
Figure 21
Motivation Attribution
The summary statistics from the motivational attribution domain produced a mean score
of 3.3 out of a possible high of 4, with a standard deviation of .39 and a standard error mean of
.02. The results suggest a high degree of confidence that the score will fall between 3.25 and 3.33,
as shown in Table 16 and Figure 22. The evidence from the survey suggests homeland security
professionals have a high belief in their ability to adapt to complex environments, as described in
Table 16 and Figure 22.
Table 16
Summary Statistics of Motivational Attribution Survey Results
Summary Statistics of Motivational Attribution
Mean 3.29
Std dev 0.38
Std err mean 0.02
Upper 95% mean 3.33
Lower 95% mean 3.25
N 370
Figure 22
Summary Statistics of Motivational Attribution Survey Results
Interview Findings. The interview evidence suggests that stakeholders do not believe
homeland security professionals are adapting to a complex environment. This influence is a
need. All respondents voiced concerns about the homeland security domain’s ability to adapt to
complex environments. When discussing the challenges of adapting to a complex environment,
Carrie stated, “The things that worry me is that they’re overwhelmed by the amount of
information that’s coming at them.” Tobias described the impact of too much information as
“literally being excluded or included from the understanding of what the world is. And that is the
piece that we’re not good at.” Anais reiterated the threat of far-reaching events evolving from
complexity: “There is a thing out there that will impact every single one of us, and we need to be
prepared for that, but many of us don’t believe it is possible.” Rocky stated, “There was a
differentiation when people talk about this. When talking about these novel, or new, events
because there are plenty of people out there that are like, that can’t happen.” These statements
align with the literature discussing organizations’ ability to navigate and adapt to complexity
(Hubbard, 2020; Taleb, 2007).
Individuals seem to understand complexity; however, many of their organizations
struggle with adapting to complex environments. When talking about the individual limits in a
fixed organization, Tobias stated, “to understand that this authoritarian structure that’s
hierarchical does, in fact, have limitations. And then being able to maneuver those to understand
what’s the possible field of action in which you can actually get something done is another
piece.” BJ summed up the challenges of our current organizations: “What we’re not good about
is just understanding how we’re blind to things.” Finally, Masami stated the reality of a complex
world when discussing emergent events: “But it doesn’t matter because the event doesn’t care
whether you have a lot of experience or not, or understand it or not; it is happening. Right now.
Right here. Live or die.” Collectively, interviewees agree there is a need to adapt to complex
environments.
Summary. The quantitative results indicate homeland security professionals believe they
can adapt to complex environments. The results and findings validate this influence as a need.
This outcome is supported by 98% of the responses agreeing that they can adapt to most
situations. The qualitative findings explain that homeland security professionals think they can
adapt to anything but are limited by other factors. The quotes from the interviews support this
conclusion. Tobias explained homeland security professionals are challenged with filtering and
categorizing information into digestible pieces. The literature supports the idea that homeland
security professionals can adapt to complex environments when trained to identify them
(Hubbard, 2020; Yeager & Dweck, 2020). Training on CEs that provides feedback loops assists
people in calibrating their beliefs about possible outcomes (Hanea et al., 2021; Hubbard, 2020).
The quantitative correlation with other results indicates confidence in abilities to adapt to
complex environments but a gap in abilities to discern IEs from CEs. The qualitative alignment
with additional findings explains that the difference between the survey results and interview findings
may be related to the person’s organizational environment. An individual’s attribution alone may not be
enough to overcome organizational barriers. Inferences from the results are that homeland
security professionals are highly motivated and believe they can adapt to most environments but
may have organizational barriers to adapting at the speed of change required to keep up with the
environment. The quantitative data suggest an asset, while the qualitative data suggest a need
regarding motivational attribution.
Results and Findings for Organizational Causes
Results and findings are reported using the organizational categories and assumed
influences for each category. The organizational categories used in the study are cultural
models and cultural settings. The assumed influence for cultural models is that stakeholders
need to create a culture that values systems thinking. The assumed influence for cultural settings
is that stakeholders need to create policies that address complexity. The influences are validated
and triangulated using the combination of the survey, interviews, and document analysis. The outcomes
of each instrument are weighted according to their depth, richness, validity, and relevance. Interview
findings are weighted at 60% due to their rich, deep, and specifically relevant data. Survey results are
weighted at 20% due to their validity and relevance. Document analysis is weighted at 20% due
to its validity and relevance.
Cultural Models
Cultural models are the invisible aspect of a group, including the shared worldview of
how the world currently works and how it should work; cultural settings are the visible aspects of
an organization, such as its structure, practices, and policies (Gallimore & Goldenberg,
2001). The following section covers the major influences on stakeholders’ cultural models
discovered in the study. The major influence is that stakeholders need to create a culture that
values systems thinking.
Influence: Stakeholders Need to Create a Culture That Values Systems Thinking
The need to create a culture that values systems thinking is critical to homeland security
professionals. The following section covers the study’s findings and results concerning the
current culture in homeland security. The outcomes are organized by survey results and
interview findings and summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing the cultural models of systems thinking in homeland security organizations. The
literature suggests the cultural models in homeland security organizations struggle with systems
thinking. The survey questions aim to determine if homeland security organizations model
systems thinking. The survey’s empirical evidence suggests that homeland security organizations
lack a consistent culture of modeling systems thinking, as indicated in Figure 23.
Figure 23
Cultural Modeling
When given a 4-point Likert scale item, “When assessing risk, my organization tries to
understand how one event can cause another event,” of the 365 respondents, 29% selected they
strongly agree, 50% selected they agree, 20% selected they disagree, and 1% selected they
strongly disagree. With 79% of survey respondents agreeing that their organizations consider
how one event can cause another, the evidence suggests a need pertaining to cultural modeling,
as indicated in Figure 23.
When given a 4-point Likert scale item, “My organization considers feedback loops when
assessing risk,” of the 364 respondents, 22% selected they strongly agree, 50% selected they
agree, 28% selected they disagree, and none selected they strongly disagree. With 72% of
survey respondents agreeing that their organizations consider feedback loops, the evidence
suggests a need regarding cultural modeling, as indicated in Figure 24.
Figure 24
Cultural Modeling
When given a 4-point Likert scale item, “When assessing risk, my organization considers
cascading failures,” of the 364 respondents, 26% selected they strongly agree, 47% selected they
agree, 27% selected they disagree, and none selected they strongly disagree. With 73% of
survey respondents agreeing that their organizations consider cascading failure, the evidence
suggests a need pertaining to cultural modeling, as indicated in Figure 25.
Figure 25
Cultural Modeling
When given a 4-point Likert scale item, “When assessing risk, my organization tries to
understand what can be known before accepting what is known,” of the 361 respondents, 11%
selected they strongly agree, 54% selected they agree, 30% selected they disagree, and 5%
selected they strongly disagree. With 65% of survey respondents agreeing that their
organizations consider what can be known vs. what is known, the evidence suggests a need
regarding cultural modeling, as indicated in Figure 26.
Figure 26
Cultural Modeling
When given a 4-point Likert scale item, “My organization treats events as stand-alone
occurrences,” of the 361 respondents, 6% selected they strongly agree, 33% selected they agree,
52% selected they disagree, and 10% selected they strongly disagree. With 39% of survey
respondents agreeing that their organizations treat events as stand-alone, the evidence suggests a
weak need regarding cultural modeling, as indicated in Figure 27.
Figure 27
Cultural Modeling
The summary statistics from the organizational modeling domain produced a mean score
of 2.93 out of a possible high of 4, with a standard deviation of 0.56 and a standard error mean of
.03. The results suggest a high degree of confidence the score will fall between 2.87 and 2.98, as
shown in Table 17 and Figure 28. The evidence from the survey suggests homeland security
organizations have a need to create a cultural model that supports systems thinking, as described
in Table 17 and Figure 28.
Table 17
Summary Statistics of Cultural Models Survey Results
Summary statistics of cultural models
Mean 2.93
Std Dev 0.56
Std Err Mean 0.03
Upper 95% Mean 2.98
Lower 95% Mean 2.87
N 370
Figure 28
Summary Statistics of Cultural Models Survey Results
Interview Findings. Stakeholders see a need for cultural models that support
systems thinking. This influence is a need. Ninety percent of the respondents stated the need for cultures
that encouraged systems thinking. When discussing the need for a culture of systems thinking
concerning risk assessments, Carrie stated, “I think we’ve gone down the rabbit hole of, uh,
breaking things up in a reductive process as far as we can go. And I think we need to back up and
start looking at the whole organism.” Tobias reinforced Carrie’s assessment by describing
cultures of ordinal thought,
It’s black and white. It cuts things and divides things, and that’s how, in particular
government, handle things; you put things into boxes, and there’s your box and my box
and the other stuff we’re not going to worry about.
When discussing the challenges of risk assessments without systems thinking, Rocky
said, “We’ve gotten so good at reductive logic that we’ve almost trapped ourselves in it. And it’s
so hard for some to look at the system as a whole, … whereas a reductive science is so easy to
define.” Masami described how some hierarchies and relationships work against system
thinking, “if you’re hardwired into this hierarchy, right, the reductive thinking, and there’s a
really strong connection with authority that’s patristic, that’s paternal, then you can’t do it.”
These statements support the literature on the dynamics of cultural models and the need for
system thinking (Arnold & Wade, 2015; Bellavita, 2019; Montuori, 2011; Yunkaporta, 2020).
The individuals in the organizations must create a new culture that supports and values systems
thinking in risk assessments. Anais talked about working in a culture of fixed, reductive thought
when assessing risk, “It’s virtually impossible to work with someone in that culture. They’re not
tethered to any kind of reality. And we have to change that. Us, the ones that build the culture.”
BJ advised shifting cultural modeling to systems thinking by stating, “Simply talking about
systems theory in every assessment is really about maintaining completeness and consistency in
the logical understanding of what the world is.” Collectively the respondents voice a need for
cultural modeling that values systems thinking.
Summary. The quantitative results indicate homeland security professionals believe they
need a cultural model that supports systems thinking. The results and findings validate this
influence as a need. This outcome is supported by 28% of the respondents disagreeing that their
organization considers feedback loops. In addition, 39% of the responses indicate that their
agency treats all events as independent or stand-alone. The qualitative findings explain that many
organizations model reductive thinking. The quotes from the interviews support this conclusion.
Rocky explained that organizations are so good at reductive processes that they have difficulty
thinking differently.
The literature supports that reductive thinking is still prominent in government and risk
assessments (French, 2015; Hanea et al., 2021; Hubbard, 2020; Meadows, 2008; Taleb, 2007).
The quantitative correlation with other results suggests that even if survey results indicate
organizations model systems thinking, previous survey questions show that less than 6% of the
professionals can discern between CEs and IEs. The qualitative alignment
with additional findings explains that most organizations have barriers to adapting to complex
environments, and complex environments require systems thinking instead of reductive analysis.
The inference is that homeland security professionals may desire a culture that supports
systems thinking more than such a culture actually exists. The wording of the survey questions involving systems
thinking may have pulled participants toward affirmative answers. All interviewees indicated a gap in cultural
modeling toward systems thinking. The quantitative and qualitative data suggest a need regarding
organizational modeling. The quantitative and qualitative findings triangulate with the literature.
Cultural Settings
Cultural settings are the visible aspects of an organization (Gallimore & Goldenberg,
2001). The following section covers the major influences on stakeholders’ cultural settings
discovered in the study through the survey and interviews. The major influence is the need to
create policies that address complexity.
Influence: Stakeholders Need to Create Policies That Address Complexity
The need to create policies that address complexity is critical to homeland security
professionals. The following section covers the findings and the study’s results concerning the
cultural setting that creates policies that address the complexity used by homeland security
professionals. The outcomes are organized by survey results and interview findings and
summarized in tables for each.
Survey Results. The following are descriptive statistics from the quantitative survey
assessing whether cultural settings address complexity in homeland security organizations. The
literature suggests the cultural settings in homeland security organizations struggle with
addressing complexity (Hubbard, 2020; Lewis, 2014b; Slovic, 2020). The survey questions aim
to determine if homeland security organizations have policies that address complexity. The
survey’s empirical evidence suggests that homeland security organizations have some cultural
settings that address complexity.
When given a 4-point Likert scale item, “My organization’s policies provide me with the
time to conduct risk assessments,” of the 360 respondents, 9% selected they strongly agree, 52%
selected they agree, 33% selected they disagree, and 7% selected they strongly disagree. With
61% of survey respondents agreeing that their organizations give them time to conduct risk
assessments, the evidence suggests an asset in cultural settings, as indicated in Figure 29.
Figure 29
Cultural Settings
When given a 4-point Likert scale item, “My organization’s policies suggest a method to
perform risk assessments,” of the 363 respondents, 14% selected they strongly agree, 42%
selected they agree, 36% selected they disagree, and 8% selected they strongly disagree. With
56% of survey respondents agreeing that their organizations have policies suggesting methods
for risk assessments, the evidence indicates a need in cultural settings, as indicated in Figure 30.
Figure 30
Cultural Settings
When given a 4-point Likert scale item, “My organization regularly prepares for negative
events outside of its normal domain,” of the 362 respondents, 15% selected they strongly agree,
45% selected they agree, 35% selected they disagree, and 5% selected they strongly disagree.
With 60% of survey respondents agreeing that their organizations consider events outside of
their domain, the evidence indicates a need in cultural settings, as indicated in Figure 31.
Figure 31
Cultural Settings
When given a 4-point Likert scale item, “My organization’s policies encourage
interaction with other organizations,” of the 363 respondents, 44% selected they strongly agree,
41% selected they agree, 13% selected they disagree, and 2% selected they strongly disagree.
With 85% of survey respondents agreeing that their organizations encourage interaction with
outside agencies, the evidence indicates an asset in cultural settings, as indicated in Figure 32.
Figure 32
Cultural Settings
The summary statistics from the organizational setting domain produced a mean score of
2.7 out of a possible high of 4, with a standard deviation of .45 and a standard error mean of .02.
The results suggest a high degree of confidence that the score will fall between 2.67 and 2.77, as
shown in Table 18 and Figure 33. The evidence from the survey suggests homeland security
organizations have a cultural setting that addresses complexity, as described in Table 18 and
Figure 33.
Table 18
Summary Statistics of Cultural Settings Survey Results
Summary statistics of cultural settings
Mean 2.72
Std dev 0.45
Std err mean 0.02
Upper 95% mean 2.77
Lower 95% mean 2.67
N 370
Figure 33
Summary Statistics of Cultural Settings Survey Results
Interview Findings. Homeland security professionals see the need for creating policies
that address complexity. This influence is a need. Anais discusses the need for policies to address
complexity in a connected system: it is “really hard because you have regulation working
contrary to mitigating risk. The regulation’s intent was to mitigate some risk, but the legislative
conclusion was negotiated and shifted risk to another area.” Rocky described how a small subset
of power holders influence most governmental policies instead of considering the whole system:
“The lobbyists get involved and actually get doctrine to say, take more risk because that’s how
we make money.” Finally, Nicole talked about why policies look the way they do: “I’ll start by
saying a camel is a horse made by committee. … There’s a lot of hands in the pie, and some of
them just want to have pie on their hands and not really contribute.” The statements support the
literature on policy creation surrounding risk assessments. Risk assessment policies must address
complexity in a tightly coupled, interdependent society. Richard talked about the result of
policies that don’t address complexity: “So that there’s this cumulative effect that these people
who really could have needed this assistance and accessed it were denied access to it.”
Collectively, the need for policies to address complexity is pervasive in the homeland security
domain.
Summary. The quantitative results indicate homeland security professionals need better
policies to address complexity. The results and findings validate this influence as a need. This
outcome is supported by 44% of the respondents indicating that their organizations lack policies
suggesting a method for risk assessment. The qualitative findings explain that creating policies in
the homeland security domain frequently involves outside stakeholders with different objectives.
The quotes from the interviews support this conclusion. Rocky indicates this process sometimes
results in higher levels of risk due to some dogmatic policies that cannot cope with complexity.
The literature supports that politics frequently impact policy creation, and politics rarely consider
outcomes that address complexity (Alhakami & Slovic, 1994; Slovic, 2020). The quantitative
correlation with other results indicates that professionals are challenged with discerning CEs
from IEs, suggesting their policies must address complexity. The qualitative alignment with
additional findings explains that hierarchies support reductive thinking, creating barriers to
systems thinking. The inference is that homeland security professionals have some policies that
address a complex, interconnected world but also have many barriers. The quantitative and
qualitative data suggest a need in organizational settings. The quantitative and qualitative
findings triangulate with the literature.
Summary of Validated Influences
Tables 19, 20, and 21 show the knowledge, motivation, and organizational influences for
this study and their determination as an asset or a need.
Table 19
Knowledge Assets or Needs As Determined by the Data
Assumed knowledge influences | Asset or need
Factual
  Stakeholders need to know the definition of risk. | Need
  Stakeholders need to know the components of a risk assessment. | Need
Conceptual
  Stakeholders need to be able to discern CEs from IEs. | Need
  Stakeholders need to be able to interpret results from statistics and probability. | Need
Metacognitive
  Stakeholders need to reflect on bias influencing risk assessment. | Need
Table 20
Motivation Assets or Needs As Determined by the Data
Assumed motivation influences | Asset or need
Value
  Stakeholders need to consider expert judgment useful for themselves. | Asset
Attribution
  Stakeholders must believe they can adapt to complex environments. | Need
Table 21
Organizational Assets or Needs As Determined by the Data
Assumed organizational influences | Asset or need
Cultural models
  Stakeholders need to create a culture that values systems thinking. | Need
Cultural settings
  Stakeholders need to create policies that address complexity. | Need
Next, Chapter Five will present recommendations for solutions aligned with these
influences based on empirical data.
Chapter Five: Conclusions
The mission of IHSR is to decrease risk and increase security by providing graduate-level
education to homeland security professionals. A graduate school located in the United States
since 2003, IHSR offers numerous programs focused on assisting leaders in the homeland
security domains to develop policies, strategies, programs, and organizational elements to
prepare for and respond to public safety threats across the nation. The IHSR now offers a
master’s program (MA) in homeland security. Graduates of the programs are military members,
first responders, and DHS professionals (U.S. Coast Guard, U.S. Customs and Border Protection,
U.S. Citizenship and Immigration Services, the Cybersecurity and Infrastructure Security
Agency, FEMA, the Federal Law Enforcement Training Center, United States Immigration and
Customs Enforcement, the United States Secret Service, the Transportation Security
Administration, the Science and Technology Directorate, the Office of Intelligence and Analysis,
the Office of Operations Coordination, the DHS Countering Weapons of Mass Destruction
Office, and the Management Directorate). As of 2021, the alumni consist of roughly 1,430
graduates in leadership positions.
As a practitioner’s institution of higher education, 100% of IHSR members are
committed to managing risk in an increasingly complex and connected world. Managing risk in a
modern world requires bridging gaps between intergovernmental, interagency, and civil-military
organizations. Risk viewed through a systems theory lens helps different agencies understand
how each is affected by, prepares for, and responds to negative events (Lewis, 2019; Montuori,
2011; Yunkaporta, 2020).
Organizational Performance Goal
The organizational performance problem at the root of this study is homeland security
professionals’ failure to properly assess risk. The IHSR’s main focus is training and educating
the nation's homeland security professionals. Per IHSR’s agreement with its sponsor, graduates
of its program will better protect the nation from manmade and natural risks. Since 2003, IHSR
has educated 1,430 leaders in homeland security, including multiple FEMA directors. The
IHSR’s master’s program offers a top-ranked education and is free to government-employed
students, attracting the nation’s top leaders. However, despite its mission of reducing risk in the
nation, IHSR does not study or report its progress using standards and benchmarks for teaching
risk based on KMO gaps in the field. Failure to establish standards or benchmarks based on
empirical results can result in a loss of funds from its sponsor and a failure of government
institutions to protect their citizens.
Description of Stakeholder Groups
The study consists of three stakeholders required to meet performance goals: (a) IHSR’s
sponsor, (b) faculty and staff, and (c) alumni. One of IHSR’s directing stakeholders is FEMA, acting through the NPD.
The IHSR develops all programs in partnership with FEMA to meet NPD goals. FEMA is the
governing stakeholder of IHSR and its programs.
IHSR faculty and staff accomplish the organization’s mission by developing and
providing meaningful curriculum to meet the sponsor’s and organizational mission. The IHSR
alumni are the stakeholders that implement the curriculum to ultimately fulfill FEMA’s and IHSR’s
mission.
Goal of the Stakeholder Group for the Study
While all three stakeholders contribute to the mission and the organizational goal of 100%
compliance, the alumni represent the group most responsible for implementing the
mission and closing performance gaps in the field. Therefore, IHSR alumni are the focus of this
study. The stakeholders’ goal, supported by the IHSR, is that by May of 2024, 100% of IHSR
alumni will accurately assess the probability and consequences of events (risk). The
organization’s sponsor and director established and approved this new stakeholder SMART goal
in the 2024 curriculum review meeting. A failure to effectively discern risk based on a
methodological process established for alumni of IHSR creates inconsistencies and gaps in
assessing, analyzing, and responding to risk, leaving the nation less secure. The performance gap
is 100%. This is an innovation study. Table 1 summarizes the organization’s missions and goals.
Purpose of the Project and Questions
The purpose of this innovation study was to conduct a needs analysis in the areas of
KMO resources necessary for IHSR master’s degree students to achieve their stakeholder goal of
assessing the probability and consequences of events in alignment with systems theory by May
2024. I generated a list of possible needs for IHSR master’s degree students to accomplish their
goal and examined them to ascertain which were validated. The stakeholders of focus in this
analysis were IHSR master’s degree students. Two research questions guided this study:
1. What are IHSR master’s degree alumni’s knowledge, motivation, and organization
needs related to assessing the probability and consequences of events (risk)?
2. What are the knowledge, motivation, and organizational recommendations for
improving IHSR master’s degree student abilities to assess the probability and
consequences of events (risk)?
Introduction and Overview
At its foundation, risk assessment is a human endeavor resulting in the survival or
death of humans and all other living species on Earth. Humans are not computers tabulating
statistics, calculating probabilities, and optimizing choices. Rather, they are risk-averse,
heuristics-driven organisms bounded by personal experiences. The best hope for survival from
potential existential crises such as climate disasters, mass extinction, runaway artificial
intelligence, or nuclear warfare is to expand humans’ knowledge, motivation, and institutional
support in assessing risks and allocating resources. The following contains the recommendations
resulting from the mixed-methods, explanatory, sequential study of homeland security
professionals’ ability to make sense of risk assessments. The recommendations summarize KMO
support gaps, followed by an integrated implementation and evaluation plan. The
implementation and evaluation plan includes desired results, behaviors, learning, and reactions of
critical stakeholders. Next, the chapter explores the strengths and weaknesses of the prescribed
approach coupled with limitations and delimitations. Finally, the study recommends topics for
future research.
Recommendations for Practice to Address KMO Influences
Knowledge Recommendations
The knowledge influences in Table 2 identify the assumed knowledge influences critical
to achieving the stakeholder’s goals. The analysis of the results and findings in Chapter Four
suggests needs in all dimensions of knowledge studied. The assumed or possible knowledge
influences from the case study include the factual, conceptual, and metacognitive elements of
risk assessments. Specifically, the factual influences include the definition of risk and the
components of a risk assessment. The conceptual influences included the ability to discern CEs
from IEs and interpret results from statistics and probability. Finally, the metacognitive
influences include the reflection on bias affecting risk assessments.
The study validates gaps in all knowledge influences through the literature, survey, and
interviews. The influences are prioritized based on the number of stakeholders affected. The gaps
in conceptual knowledge are a priority due to the overall survey results and the impact on other
dimensions of the KMO analysis. Ninety-six percent of the survey responses indicated a gap in
the ability to discern CEs from IEs and interpret results from statistics and probability. All
interviews validated the gaps in conceptual knowledge. In addition, much of the recent literature
supports the gap.
Krathwohl and Anderson (2010) provided the framework for educational objectives to
guide a discussion on knowledge influences based on Bloom’s original taxonomy of major
categories in the cognitive domain. Krathwohl and Anderson’s revised taxonomy includes
factual, conceptual, procedural, and metacognitive knowledge. For this analysis, only conceptual
knowledge categories are addressed. Table 22 shows the recommendations for the influences
based on theoretical principles.
Table 22
Summary of Knowledge Influences and Recommendations
Assumed knowledge influence | Asset or need | Priority | Principle | Context-specific recommendation
Stakeholders need to know the definition of risk (F). | Need | No | N/A | N/A
Stakeholders need to know the components of a risk assessment (F). | Need | No | N/A | N/A
Stakeholders need to be able to discern CEs from IEs (C). | Need | Yes | Information learned meaningfully and connected with prior knowledge is stored more quickly and remembered more accurately because it is elaborated with prior learning (Schraw & McCrudden, 2006). | Provide instruction on different types of events using familiar case studies.
Stakeholders need to be able to interpret results from statistics and probability (C). | Need | Yes | How individuals organize knowledge influences how they learn and apply what they know (Schraw & McCrudden, 2006). | Provide instruction on statistics and probability using familiar metaphors tied to familiar experiences.
Stakeholders need to reflect on bias influencing risk assessment (M). | Need | No | N/A | N/A
Conceptual Knowledge Solutions
The results and findings of this study indicated that 96% of homeland security
professionals need more in-depth conceptual knowledge to interpret results from statistics and
probability. The recommended solution is grounded in information processing system theory.
The primary principles of the theory state information learned meaningfully and connected with
prior knowledge is stored more quickly and remembered more accurately because it is elaborated
with prior learning (Schraw & McCrudden, 2006). The principle allows individuals to organize
knowledge about the concepts of statistics and probability without courses in advanced math.
IHSR should provide instruction on statistics and probability using familiar metaphors tied to
familiar experiences.
Lakoff (2008) described how people think and learn via metaphors. Novel concepts are
linked to others, known as the adjacent possible. If novel concepts are too far from the students’
adjacent possible, understanding the idea is not possible. Oakley (2014) supported Lakoff’s
findings and described how students chunk new concepts with familiar ones, specifically in math.
Oakley offers practical solutions to develop students’ understanding of mathematical concepts
beyond memorization of formulas. The evidence supports providing instruction on statistics and
probability using familiar metaphors tied to familiar experiences.
In addition, the results and findings of this study indicated that 98% of homeland security
professionals need more in-depth conceptual knowledge to be able to discern CEs from IEs. The
recommended solution is grounded in information processing system theory. The primary
principles of the theory state that to develop mastery, individuals must acquire component skills,
practice integrating them, and know when to apply what they have learned (Schraw &
McCrudden, 2006). The principle allows individuals to practice automaticity in recognizing CEs
from IEs, reducing the demand on working memory. IHSR should provide instruction on
CEs and IEs using familiar case studies.
Colson and Cook (2020) described using real case studies to help professional risk
managers identify and respond to CEs that have a tendency to cascade. There are many case
studies for homeland security professionals to use during instruction that demonstrate CEs and
IEs. As a professor of risk analysis, Taleb (2020) used the example of contagious diseases as an
example of case studies demonstrating the impact of CEs beyond IEs. The evidence suggests
providing repetitive case studies as an instructional tool increases the student’s ability to discern
CEs from IEs.
Motivation Recommendations
The motivational influences in Table 3 identify the assumed influences critical to
achieving the stakeholder’s goals. The analysis of the results and findings in Chapter Four suggests
an asset in value and a need in attribution. The assumed or possible motivational influences from the case
study include the attributional and value elements of risk assessments. Specifically, the
attributional influences include the belief that they can adapt to complex environments. The
value influences included considering expert judgment useful. The study suggests gaps in
attribution through the interviews. The influences are prioritized based on the number of
stakeholders affected. The gaps in attribution are a priority due to the overall interview results.
All interviewees voiced concerns about the homeland security domain’s ability to adapt to
complex environments.
The recommendations to close the gap originate from Clark and Estes’s (2008) gap
analysis. The framework suggests ways to reinforce potential motivational needs to meet the
stakeholders’ goals. Table 23 shows the recommendations for the influence based on theoretical
principles.
Table 23
Summary of Motivation Influences and Recommendations
Assumed motivational influence | Asset or need | Priority | Principle | Recommendation
Value
  Stakeholders need to consider expert judgment useful for themselves (V). | Asset | No
Attribution
  Stakeholders must believe they can adapt to complex environments (A). | Need | Yes | Adaptive attributions and control beliefs motivate [individuals] (Pintrich, 2003). | Provide instruction on ways to control beliefs and adapt to a rigid work environment.
The findings from the interviews of this study indicated that 100% of the interviewed homeland security
professionals showed a gap in their belief in their ability to adapt to complex environments. The
recommended solution is grounded in attribution theory (Weiner, 2005). The primary principles
of the theory state that adaptive attributions and control beliefs motivate individuals (Rueda,
2011). The principle allows individuals opportunities to exercise choice and control over
behavior, cognition, and emotions. The IHSR should provide instruction on ways to control
beliefs and adapt to a rigid work environment.
Clark and Estes (2008) stated that performance at work emanates from beliefs about
ourselves, our coworkers, and our prospects for being effective. Beliefs are grounded in a
mindset of personal control and adaptation. Ozduran and Tanova (2017) studied the impact of
leaders’ mindsets on the beliefs and behaviors of organizations. In a study of 176 employees and
40 leaders, the incremental mindsets of leaders showed empirical evidence of a positive
influence on organizational effectiveness and adaptability. In addition, Hanson et al. (2016)
demonstrated that a leader’s mindset in organizations has a positive collective outcome. In a
study of five leaders and 64 subordinates, a quantitative survey showed a positive relationship
between a leader’s mindset of belief in adaptability and control with positive organizational
results. The empirical evidence of attribution theory suggests that instruction in ways to increase
control and choice would improve the beliefs and attribution of leaders and employees in rigid
environments such as a government bureaucracy.
Organizational Recommendations
The organizational influences in Table 4 represent the assumed organizational influences
critical to achieving the stakeholder’s goals. The influence analysis showed varying degrees of
needs in each dimension. The critical influences identified through the literature, survey, and
interviews are the bases of the recommendations summarized in the table. The recommendations
to close the gaps originate from Clark and Estes’s (2008) gap analysis. The framework suggests
ways to close organizational gaps to meet the stakeholders’ goals. Table 24 shows the
recommendations for the influence based on theoretical principles.
Table 24
Recommendations for the Influence Based on Theoretical Principles
Assumed organizational influence | Asset or need | Priority | Principle | Recommendation
Cultural models
  Stakeholders need to create a culture that values systems thinking. | Need | Yes | Effective change efforts use evidence-based solutions and adapt them, where necessary, to the organization’s culture (Clark & Estes, 2008). | Provide instruction on systems theory and the empirical evidence of increasing connectivity.
Cultural settings
  Stakeholders need to create policies that address complexity. | Need | Yes | Effective organizations ensure that organizational messages, rewards, policies, and procedures that govern the organization’s work are aligned with or support organizational goals and values (Clark & Estes, 2008). | Provide instruction on ways to draft policies and procedures that address complexity.
Clark and Estes (2008) suggested that change efforts to cultural models are effective
when they use evidence-based solutions. This study’s results and findings validated that 90% of
the homeland security professionals interviewed showed a gap in cultural models that value systems
thinking. The recommended solution is grounded in organizational theory. The primary
principles of the theory state that effective change efforts use evidence-based solutions and adapt
them, where necessary, to the organization’s culture (Clark & Estes, 2008). The principle allows
individuals to build a culture that values systems thinking by presenting empirical evidence of
the increased connectedness in risk assessments. IHSR should provide instruction on systems
theory and the empirical evidence of increasing connectivity.
Lewis and Taquechel (2017) described how strategies and theories are evolving in the
homeland security domain using systems theory. They provide empirical evidence of the
increasing connectedness of our systems via transfer pathways and provide tools in network
science to model risk. Bellavita (2019) suggested a theory on increasing learning in homeland
security using constructs such as the Cynefin framework to implement systems thinking into risk
assessments. The evidence affirms the recommendation that providing instruction on systems
theory and evidence of increasing connectivity will increase the cultural value of systems
thinking in the homeland security domain.
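As a concrete, simplified illustration of that connectivity argument, the sketch below uses a hypothetical dependency graph, not data from Lewis and Taquechel, to show how a single failure can propagate along transfer pathways in a connected system.

# Illustrative only: a toy cascade over a hypothetical dependency graph.
DEPENDS_ON = {
    "hospital": ["power", "water"],
    "water": ["power"],
    "911 dispatch": ["power", "telecom"],
    "telecom": ["power"],
    "power": [],
}

def cascade(initial_failure):
    """Return every node that eventually fails once the initial node goes down."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for node, dependencies in DEPENDS_ON.items():
            if node not in failed and any(dep in failed for dep in dependencies):
                failed.add(node)
                changed = True
    return failed

print(sorted(cascade("power")))  # a single failure cascades through the whole system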
In addition, Clark and Estes (2008) stated that effective organizations ensure messages,
rewards, policies, and procedures are aligned with goals and values. This study’s results and
findings validated that 100% of the homeland security professionals interviewed showed a need for
policies that address complexity. The recommended solution is grounded in organizational
theory. The primary principle of the theory states that effective organizations ensure that
organizational messages, rewards, policies, and procedures that govern the organization’s work
are aligned with or support organizational goals and values (Clark & Estes, 2008). The principle
allows individuals to build cultural settings of policies, procedures, and messages that align with
the complex environment of homeland security. The recommendation is that IHSR provide
instruction on ways to draft policies that address complexity.
Hubbard (2020) described possible policies concerning risk assessments in the homeland
security domain that can address complexity. He suggests policies and procedures to use expert
elicitation to assess the risk of novel or emergent events in a complex environment when
standard objective processes based on mathematical modeling break down. Cook and Goossens
(2014) demonstrated procedures to calibrate an expert’s risk assessment in critical infrastructure
and prove outcomes are more accurate than non-calibrated experts or computer models. Finally,
Slovic (2020) demonstrated the need for homeland security policies to address complexity to
overcome flawed perceptions of risk derived from a hyperpartisan and virtually active world.
The evidence affirms that providing instruction on ways to draft policies and procedures to
address complexity will close the gaps in cultural settings.
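To illustrate the calibration idea referenced above in its simplest form, the sketch below pools expert probability estimates using performance-based weights. The estimates and weights are hypothetical, and published methods derive their weights from seed questions rather than the fixed values shown here.

# Illustrative sketch of calibration-weighted expert aggregation (hypothetical values).
def pooled_estimate(estimates, weights):
    """Combine expert probability estimates using normalized calibration weights."""
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical example: three experts estimate the annual probability of an event,
# each weighted by a score reflecting past calibration performance.
estimates = [0.02, 0.10, 0.05]
weights = [0.7, 0.1, 0.2]
print(f"Calibration-weighted estimate: {pooled_estimate(estimates, weights):.3f}")  # 0.034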
Integrated Implementation and Evaluation Plan
Training must be designed effectively from the start with implementation and evaluation
in mind (Kirkpatrick & Kirkpatrick, 2016). The following section outlines the implementation
and evaluation framework, the organization's expectations, and the four levels of the new world
Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2016).
Implementation and Evaluation Framework
The new world Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2016) is the foundation of
the implementation and evaluation framework. The model uses four levels of training evaluation,
updated from the original and adjusted to the modern economy. The new world model runs in
reverse order, starting with the ultimate goal of the stakeholder first as Level 4. Working
backward, the model identifies critical behaviors leading to the ultimate goal as Level 3. Level 2
identifies the training required to change behaviors, while Level 1 assesses the reaction to the
training delivered. The new world model focuses on the results of the process.
Organizational Purpose, Need, and Expectations
The IHSR’s purpose is to decrease risk and increase security by providing graduate-level
education to homeland security professionals. A graduate school located in the United States of
America since 2003, IHSR offers numerous programs focused on assisting leaders in the
homeland security domains in developing policies, strategies, programs, and organizational
elements to prepare for and respond to public safety threats across the nation. The main goal of
IHSR members is to manage risk in an increasingly complex and connected world by bridging
gaps in intergovernmental, interagency, and civil-military cooperation. However, the IHSR has
no curriculum dedicated to risk preparation or management. When alumni graduate from the
master’s program at IHSR, they lack a fundamental tool to meet the organization’s mission.
This study examined gaps in KMO factors influencing risk assessments. Solutions are
proposed for each dimension of the gap analysis based on literature, quantitative results, and
qualitative findings. Theoretical and empirical evidence suggest the proposed solutions will
produce the desired outcome of preparing 100% of IHSR students to assess risks accurately.
Level 4: Results and Leading Indicators
Kirkpatrick and Kirkpatrick (2016) defined Level 4 as the reason training is created:
to help stakeholders reach the results outlined in the recommendations. Leading indicators are short-
term observations and measurements that indicate critical behaviors are on track to produce
results. Table 25 shows the proposed Level 4 results and leading indicators in the form of
outcomes, metrics, and methods for external and internal outcomes for IHSR alumni. The
outcomes are the lead indicators of the application of systems thinking at the individual,
educational institutions, and external organizations. The external observations and measurements
refer to information that originates from outside the alumni’s organization. The internal
observations and measurements refer to information that originates from inside the alumni’s
organization. Table 25 summarizes the external and internal outcomes of the study.
Table 25
Outcomes, Metrics, and Methods for External and Internal Outcomes
Outcome | Metric | Method

External outcomes
Decreased growth of urban search and rescue (USAR) deployments | Number of annual USAR deployments | Review of FEMA annual reports and the national preparedness report
Increased sharing of the curriculum with other institutions | Number of annual Red Cross deployments | Review of the World Disaster Report
Increased published policies addressing a systematic approach for characterizing the nature and state of systems during risk assessments | Number of updates to the DHS risk lexicon with procedures for systems thinking | Review of the DHS risk lexicon
Increased number of risks categorized by type of system and state of system | Number of risks categorized by type of system and state of system | Review of FEMA annual reports and the national preparedness report

Internal outcomes
Increased use of developed standards and benchmarks for assessing risk | Number of published guidelines with procedures for categorizing the system, discerning the state of the system, and selecting appropriate strategies | Review of community and national threat and hazard identification and risk assessments (THIRA)
Increased number of practitioners able to apply systems thinking during risk assessments to categorize systems as ordered or unordered | Number of practitioners applying systems thinking strategies and number of systems categorized as ordered or unordered | Review of community and national THIRAs
Increased number of practitioners able to recognize the current state of the system as ordered (simple or complicated) or unordered (complex or chaotic) | Number of practitioners using the system and number of system states identified as ordered (simple or complicated) or unordered (complex or chaotic) | Review of community and national THIRAs
Increased number of practitioners able to apply strategies appropriate to the state of the system to find the correct practice | Number of practitioners applying strategies and number of strategies appropriately applied to the state of the system | Review of community and national THIRAs
147
Level 3: Behavior
Behavior is the degree to which participants apply what they learned when they are on
the job (Kirkpatrick & Kirkpatrick, 2016). Behavior is the most important level because it
impacts the organizational outcomes. Level 3 is a comprehensive, continuous monitoring system
that includes critical behaviors, required drivers, and organizational support. The following
section describes the elements of Level 3 recommended to achieve stakeholder success.
Critical Behaviors
The stakeholder group of focus in this study is students and alums of the IHSR. Critical
behaviors depend on categorizing systems into ordered or unordered. Ordered systems are
knowable in time and space, while unordered systems are unknowable in time and space. Once
the system is categorized, the stakeholder must realize the state of the system as simple or
complicated as an ordered system or complex or chaotic as an unordered system. Simple states of
systems are known; cause-and-effect relationships are perceivable, predictable, and repeatable.
Complicated states of systems are knowable; cause-and-effect relationships are separated in time
and space. Complex states of systems are retrospectively coherent; cause-and-effect relationships
are not repeatable. Chaotic states of systems are incoherent; cause-and-effect relationships are
not perceivable.
Finally, the stakeholder must use the appropriate strategy to align with the state of the
system. Ordered simple systems require a process of sense, categorize, and respond to find best
practices through standard operating procedures. Ordered complicated systems require a process
of sense, analyze, and respond to find good practices by applying expert judgement. Unordered
complex systems require a process of probe, sense, and respond to find emergent practices by
pattern management via multi-experimentation. Unordered chaotic systems require a process of
act, sense, and respond to find novel practices via stability-focused interventions.
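To make the decision logic above concrete, the following minimal Python sketch maps each state of the system to its ordering, response process, and the type of practice that process is expected to surface. The sketch is illustrative only: the enumeration names, dictionary structure, and function are assumptions made for this example, not part of the Cynefin framework’s formal specification or the IHSR curriculum.

```python
from enum import Enum, auto

class SystemState(Enum):
    """States of a system as described above."""
    SIMPLE = auto()       # ordered: cause and effect perceivable, predictable, repeatable
    COMPLICATED = auto()  # ordered: cause and effect separated in time and space
    COMPLEX = auto()      # unordered: retrospectively coherent, not repeatable
    CHAOTIC = auto()      # unordered: cause and effect not perceivable

# Illustrative mapping of each state to its ordering, response process, and
# the kind of practice that process is expected to surface.
STRATEGY = {
    SystemState.SIMPLE:      ("ordered",   "sense -> categorize -> respond", "best practice"),
    SystemState.COMPLICATED: ("ordered",   "sense -> analyze -> respond",    "good practice"),
    SystemState.COMPLEX:     ("unordered", "probe -> sense -> respond",      "emergent practice"),
    SystemState.CHAOTIC:     ("unordered", "act -> sense -> respond",        "novel practice"),
}

def recommend_strategy(state: SystemState) -> str:
    """Return a one-line summary of the strategy aligned with the system state."""
    ordering, process, practice = STRATEGY[state]
    return f"{state.name.lower()} ({ordering}): {process} to find {practice}"

if __name__ == "__main__":
    for state in SystemState:
        print(recommend_strategy(state))
```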
The timing of the critical behavior assessment assumes the students of the IHSR are all
working professionals, as they are now. The length of the program and the arrangement of the
units provide the opportunity for professors in later units to assess the practitioner’s ability to
apply the skills, procedures, and strategies on real-time risk assessments. These evaluation tools
can assess Level 1, Level 2, and Level 3 during the program. In addition, commitment from the
alumni’s supervisor and sponsor to provide feedback on critical behaviors 6 months and 1 year
after the program concludes signals the staying power of the program in the work environment
outside of school. Table 26 specifies the metrics, methods, and timing for the evaluation of each
of these critical behaviors. The performance of the critical behaviors, measured as indicated in
Table 26, should result in the internal Level 4 outcomes, which, in turn, should result in Level 4
external outcomes.
Table 26

Critical Behaviors, Metrics, Methods, and Timing for Evaluation of Students and Alums of the IHSR

Critical behavior | Metrics | Methods | Timing

Applies systems thinking during risk assessments to categorize systems into ordered or unordered | Number of risk assessments categorized into ordered systems and unordered systems | Professor feedback on projects; supervisor survey; self-assessment survey | During the critical infrastructure and unconventional threat units; six months after graduation; one year after graduation
Records the current state of the system as ordered (simple or complicated) or unordered (complex or chaotic) | Number of risk assessments using the state of the system as simple, complicated, complex, or chaotic | Professor feedback on projects; supervisor survey; self-assessment survey | During the critical infrastructure and unconventional threat units; six months after graduation; one year after graduation
Applies appropriate strategies to the state of the system to find the correct practice for each risk assessment and records it in the community and national THIRA | Number of strategies used to find best practices, good practices, emergent practices, and novel practices | Professor feedback on projects; supervisor survey; self-assessment survey | During the critical infrastructure and unconventional threat units; six months after graduation; one year after graduation
Required Drivers
To achieve the desired outcomes, students need supportive classroom environments led
by professors with the knowledge and skills to cultivate the critical behaviors described in Table
26. The professors and their assessment options will drive the development of students’ critical
behaviors. Kirkpatrick and Kirkpatrick (2016) categorized drivers as reinforcing, encouraging,
rewarding, or monitoring. Many knowledge-based recommendations align with reinforcing by
incorporating classroom training and education solutions. Motivational recommendations are
primarily encouraging, as classroom practices help students initiate and sustain goal-oriented
behaviors. In addition, motivational solutions involve incentives that reward students for their
successes. Finally, monitoring is an organizational-level solution, applying accountability
measures and data-driven decision-making. Table 27 identifies and categorizes the required
drivers identified in this study, outlines the time interval for enacting each strategy, and
demonstrates the alignment of each driver to particular critical behaviors.
Table 27

Required Drivers to Support Critical Behaviors

Method | Timing | Critical behaviors supported

Reinforcing
Provide instruction on different types of events using familiar case studies. | Daily | 1, 2
Provide instruction on statistics and probability using familiar metaphors tied to familiar experiences. | Daily | 1, 2
Training in the classroom about the required skills and habits of mind for systems thinking | Daily | 1, 2
Repeated practice with real-life case studies | Weekly | 1, 2, 3
Education on cultural change and group dynamics | At the beginning and end of semester | 3

Encouraging
Provide instruction on ways to control beliefs and adapt to a rigid environment. | Daily | 1, 2, 3
Frequent, specific feedback that is aligned with systems thinking and includes conceptual guidance | Daily | 1, 2
A positive emotional environment that shows all students and organizations are capable of successful application of systems thinking | Daily | 1, 2, 3
Access to examples of successful case studies of systems thinking applied in real life | Daily | 1, 2, 3
Attributional retraining, particularly around changing and functioning within hierarchical, bureaucratic organizations | At the beginning and end of semester | 3
Introduction to the theory of hierarchical complexity | At the beginning and end of semester | 1, 2, 3
Introduction to the theory of relevance realization | At the beginning and end of semester | 1, 2

Rewarding
Celebration of successes on group projects | Weekly | 1, 2, 3
Assessments that measure and reward individual progress and growth | At the beginning and end of semester | 1, 2, 3

Monitoring
Provide instruction on systems theory and the empirical evidence of increasing connectivity. | At the beginning and end of semester | 3
Provide instruction on ways to draft policies and procedures that address complexity. | At the beginning and end of semester | 3
Annual assessment of project scores | Yearly | 1, 2, 3
Number of theses related to systems thinking | Yearly | 1, 2, 3
Organizational Support
The critical behaviors outlined in Table 26 and the drivers described in Table 27 require
organizational support for implementation. The IHSR must support the recommendations to
close the gaps in KMO barriers. The IHSR has a history of thinking and acting progressively
toward its mission, with a track record of innovating new curricula to meet its students’ needs. A
novel course in risk that emphasizes the use of case studies and metaphors to help students
discern statistics from probability and CEs from IEs, and to think in terms of systems, is a task
the IHSR is capable of. In addition, part of IHSR’s charter is to share what is learned with other
educational institutions while progressing the topic of homeland security as a domain. A course
on risk developed by the IHSR will be shared with other institutions via numerous academic
and security conferences. Finally, IHSR encourages students to change their organizational
culture positively. A risk curriculum that includes topics on advancing cultural models and
settings toward systems thinking meets the mission of IHSR and students’ needs. The curriculum
should include instruction, demonstrations, practice, and feedback on using systems theory and
how to draft policies that address complexity using the Cynefin framework. The intersection of
IHSR’s mission, IHSR’s robust capacity, and the students’ needs is crucial in ensuring internal
and external outcomes are attainable through the development of critical behaviors.
Level 2: Learning
Learning is the degree to which participants acquire the intended knowledge, skills,
attitude, confidence, and commitment from the program (Kirkpatrick & Kirkpatrick, 2016).
Success depends on a purposeful and deliberate evaluation to ensure there are enough resources to
impact Levels 3 and 4. The following section outlines the learning goals, program description,
and evaluation components.
Learning Goals
The following learning goals derived from the Level 3 critical behaviors support the
stakeholder’s behavior change. The behavior change guides stakeholders to achieve the leading
internal indicators at Level 4, aligning with the stakeholder and organizational goals.
1. Summarize the definitions of risk in the context of homeland security. (K-C)
2. Describe the components of a risk assessment. (K-F)
3. Discern statistical results from probability results. (K-C)
4. Identify the difference between objective probability and subjective probabilistic
estimates. (K-C)
5. Identify potential feedback loops. (K-F)
6. Integrate systems thinking into risk assessments. (K-P)
7. Utilize the Cynefin framework to categorize the state of the system. (K-P)
8. Reflect on the impact of inherent bias. (K-M)
9. Indicate confidence in executing systems thinking in their home organization. (Value)
10. Commit to changing cultural models and settings to value systems thinking. (Value)
Program
The program outlines five broad goals, each involving instruction with demonstration,
practice, and feedback on assessing risk in relation to:
1. Different types of events, using familiar case studies.
2. Statistics and probability, using familiar metaphors tied to familiar experiences.
3. Ways to control beliefs and adapt to rigid environments.
4. Systems theory and the empirical evidence of increasing connectivity.
5. Ways to draft policies and procedures that address complexity.
The program will start as a one-semester pilot in the master’s program and use case studies as
a springboard for the goals. Over the following years, the small-scale pilots will continue to run
in specific homeland security departments through new transdisciplinary courses designed to
create unique thinking for students to assess risk. An evaluation will methodically inform each
iteration of the pilot course from the previous year. As such, the program described in this
section represents suggestions for the next iteration of pilot courses, incorporating the KMO
needs analysis from this dissertation study. The recommended program includes evidence-based
suggestions for designing and improving subsequent homeland security curricula. The
identified assets and needs from the study should be incorporated into the next round of pilot
courses at IHSR while correcting for the validated influences with the context-specific
recommendations highlighted in Table 22.
The pilot courses should focus on classroom instruction that presents case studies
requiring the skills and mindsets of systems thinking, followed by repeated practice in risk
assessments with frequent formative feedback. Specific training should be added in the form of
discerning statistics from probabilities, categorization of environments, and recognition of the
system’s state.
Evidence suggests that examining case studies increases students’ comprehension of the
differences between types of events. The study showed a gap in understanding CEs versus IEs.
Case studies in class can compare and contrast events that created positive feedback loops via
their connectivity with other systems with events that were independent and self-encapsulating.
Specially curated case studies can become the foundation for comprehension, procedural
application, and reflection in risk assessments.
Evidence suggests students chunk novel and complex topics into concepts they already
know. The difference between statistics as an explanatory tool of the known past and probability
as a predictive tool of the unknown future can be taught using familiar metaphors without
advanced math. Familiar statistical metaphors are typically sports numbers, such as baseball’s
runs batted in (RBI), whereas familiar metaphors for probability are the odds of dice-rolling
outcomes or poker hands. Sports numbers such as RBI represent the known past and do not
predict the future, whereas the odds in dice rolling are a bounded prediction of the future and
have no relation to the past. The conceptual knowledge of statistics and probability will tease out
their limitations outside of games and bring clarity to the need for additional tools to assess real-life
case studies. Instruction on subjective methodologies based on probabilistic expert elicitation
will assist students in understanding the case studies. Expert elicitation relies on systems thinking
to categorize events based on their environment and the relevant state of the system using tools
such as the Cynefin framework.
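A brief Python sketch, offered only as an illustration of the classroom metaphor above, contrasts a descriptive statistic (a hypothetical runs-batted-in average summarizing games already played) with a probability (the odds that the next roll of two fair dice sums to seven). The figures are invented for illustration and do not refer to any real player or dataset.

```python
from fractions import Fraction

# Statistics: a descriptive summary of games already played (hypothetical log).
runs_batted_in = [2, 0, 1, 3, 0, 1]
rbi_per_game = sum(runs_batted_in) / len(runs_batted_in)
print(f"RBI per game so far (describes the known past): {rbi_per_game:.2f}")

# Probability: a bounded prediction about the next roll of two fair dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
p_seven = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
print(f"P(next roll of two dice sums to 7): {p_seven} (about {float(p_seven):.3f})")
```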
Students need to have confidence that their new skills in systems thinking will positively
impact their environment. Instruction on the empirical evidence of increasing connectivity
impacting risk assessments will ensure students value the skills. In addition, instruction and
examples on how to author policies that include systems thinking will ensure students can impact
cultural settings and models when they return to their organizations.
Evaluation of the Components of Learning
The program’s effectiveness derives from checks on students’ acquisition of declarative
and conceptual knowledge performed throughout instruction. Beyond knowledge, assessing
confidence is essential to ensure this motivational influence does not hinder students’
understanding of the topics. In addition, the evaluation requires monitoring students’ attitudes
and commitment, fostering self-direction and ownership among students. Finally, the critical
behaviors require the monitoring of commitment to ensure there are no gaps in students’
perceptions of the value of the topic. Table 28 highlights the methods and timing for evaluating
these knowledge-based and motivational components of learning.
Table 28

Evaluation of the Components of Learning for the Program

Methods or activities | Timing

Declarative knowledge: “I know it.”
Knowledge checks through formative quizzes | The beginning of each project to ensure concepts are internalized
Knowledge checks in real time through “pair, think, share” and other individual/group activities | Periodically throughout instruction, documented through field notes

Procedural skills: “I can do it right now.”
Observations of students’ application of conceptual knowledge of statistics and probability | Periodically throughout instruction, documented through field notes
Observations of students’ application of conceptual knowledge of CEs and IEs | Periodically throughout instruction, documented through field notes
Scenario questions during case studies | Beginning, middle, and end of course

Attitude: “I believe this is worthwhile.”
Discussions with students about value, rationale, and issues with the topic of risk | Ongoing informally during the course, with targeted focus groups at the halfway point of each semester
Pre- and post-project assessment | Beginning, middle, and end of course

Confidence: “I think I can do it on the job.”
Likert-scaled survey items related to confidence | Ongoing to monitor progress during each project (beginning and end of each project)
Discussions with students while executing tasks in case studies | Ongoing, informal, recorded in field notes
Pre- and post-project assessment | Beginning, middle, and end of course

Commitment: “I will do it on the job.”
Goal setting: quality of individual action plans at work or in thesis | Ongoing to monitor progress during each project (beginning, middle, and end of each project)
Observations by instructor during class | Ongoing, informal, recorded in field notes
Likert-scaled survey items related to commitment | Ongoing to monitor progress during each project (beginning, middle, and end of each project)
Level 1: Reaction
Level 1 evaluation measures the reactions to the program in the categories of
engagement, relevance, and customer satisfaction. Table 29 conveys the methods or tools for
evaluating these reactions and suggests the frequency and timing of each evaluation.
Table 29

Components to Measure Reactions to the Program

Methods or tools | Timing

Engagement
Completion of assignments | Ongoing during the course
Observations by instructor during class indicating comprehension and engagement | Ongoing during the course
Observation of out-of-class discussions relevant to thesis or work | Ongoing during the course
Course evaluation | At the conclusion of each semester

Relevance
Student survey | Halfway point of the semester
Discussions with students | Ongoing during the course, with a targeted focus group at the halfway point of the semester
Course evaluation | At the conclusion of each semester

Customer satisfaction
Student survey | Halfway point of the semester
Discussions with students | Ongoing during the course, with a targeted focus group at the halfway point of the semester
Course evaluation | At the conclusion of each semester
Evaluation Tools
Immediately Following the Program Implementation. Appendix A describes the
course evaluations and specific survey items as examples of ways to measure Level 1 and Level
2 outcomes immediately following program implementation. In this evaluation plan, Level 1
questions reflect just the post-course reactions, measuring students’ perceptions of their
engagement while learning, their satisfaction with the experience, and the relevance of what they
learned using a 4-point Likert scale.
Level 2 evaluations incorporate declarative and procedural knowledge, commitment,
confidence, and attitude measurements. Level 2 rating items in this evaluation plan include post-
course assessments and pre-course reflections using a five-point scale. These items measure the
program’s effectiveness at achieving the intended learning goals while assessing students’
perceptions of their opportunities for growth in knowledge, confidence, commitment, and
attitude. Appendix A provides examples of Level 1 and Level 2 rating items on a course
evaluation at the end of each semester of a pilot course.
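As an illustration of how the retrospective pre- and post-course ratings described above might be summarized, the short Python sketch below computes mean pre-course, post-course, and gain scores for a single Level 2 item. The response values are hypothetical; an actual evaluation would use data exported from whatever survey platform the pilot course adopts.

```python
from statistics import mean

# Each tuple is (pre-course rating, post-course rating) on the 5-point scale
# for one student on a single Level 2 item. Values are invented for illustration.
responses = [(2, 4), (1, 3), (3, 5), (2, 5), (3, 4)]

pre_mean = mean(pre for pre, _ in responses)
post_mean = mean(post for _, post in responses)
print(f"Mean pre-course rating:  {pre_mean:.2f}")
print(f"Mean post-course rating: {post_mean:.2f}")
print(f"Mean self-reported gain: {post_mean - pre_mean:.2f}")
```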
Delayed For a Period After the Program Implementation. Kirkpatrick and
Kirkpatrick (2016) recommended additional evaluation after activation of required drivers when
students have used the knowledge and skills they cultivated in the program. The recommended
timeframe for a post-assessment varies between organizations depending on the length of time
activation takes for both the drivers and the stakeholders’ critical behaviors (Kirkpatrick &
Kirkpatrick, 2016).
The drivers in the implementation and evaluation plan describe actions for professors in
the proposed pilot risk courses to reinforce, encourage, and reward students as they develop
critical behaviors. The program’s one-semester term allows professors to adjust the
implementation of the Level 3 drivers based on feedback to better support students’ needs.
Appendix B shows Level 3 rating items for drivers using a 4-point, forced-choice Likert scale for
a survey administered midway through a pilot course.
The program measures Level 3 drivers mid-semester to allow for adjustments during the
course and Level 3 critical behaviors after completion to evaluate the transfer of behaviors
developed during the pilot course. Level 4 outcomes measure the internal and external outcomes
of the stakeholders and are assessed 3 months after completion of the pilot course. In addition,
Kirkpatrick and Kirkpatrick (2016) encouraged reviewing Level 1 metrics of relevance and
customer satisfaction and assessing the retention of Level 2 knowledge and skill-based learning
in a delayed survey. Appendix B has sample Levels 1, 2, 3, and 4 assessment items for delayed
surveys administered 3 months after completion of the pilot risk course.
Data Analysis and Reporting
The data analysis and reporting plan design intends to avoid common pitfalls in data
collection and analysis outlined by Kirkpatrick and Kirkpatrick (2016). The common pitfalls
include (a) spending too much time and energy on Level 1 and Level 2 feedback; (b) asking
questions that do not generate usable data; (c) making presentations of data analysis too
complicated; and (d) simply not using the data that has been collected. Creating a web-based
dashboard for professors and administrators can guide the development of the pilot course by
continuously collecting, analyzing, and displaying results. The visual display shows internal
outcomes resulting from achieved critical behaviors, which can guide professors during the
semester to improve the learning environment and develop the next pilot. Figure 34 displays the
expected results from a pilot course on risk. The graph includes the current study’s results, the
minimum results required to accept the success of the program, and the expected result of 100%
of IHSR master’s degree students being able to assess the probability and consequences of
events in alignment with systems theory. Appendix C outlines the information displayed on the
dashboard.
Figure 34
Results of the Pilot Risk Curriculum
[Bar chart comparing the current study’s results, the minimum acceptable results, and the expected results for three measures: interpreting results, discerning connected events, and application of systems theory.]
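As one possible illustration of the aggregation such a dashboard could perform, the short Python sketch below computes the share of students demonstrating each measure shown in Figure 34 and compares it against a minimum acceptance threshold. All values, including the 80% threshold, are hypothetical placeholders rather than results from this study.

```python
# Hypothetical acceptance threshold and target; not values from the study.
MINIMUM_ACCEPTABLE = 0.80
TARGET = 1.00  # the stated goal: 100% of IHSR master's degree students

# Placeholder proportions of students demonstrating each measure.
pilot_results = {
    "Interpreting results": 0.72,
    "Discerning connected events": 0.65,
    "Application of systems theory": 0.58,
}

for measure, share in pilot_results.items():
    status = "meets minimum" if share >= MINIMUM_ACCEPTABLE else "below minimum"
    print(f"{measure}: {share:.0%} of students ({status}; target {TARGET:.0%})")
```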
Summary
The new world Kirkpatrick model is an integrated implementation and evaluation plan to
apply recommended solutions to the problem of practice (Kirkpatrick & Kirkpatrick, 2016). The
model is developed backward, starting from the Level 4 leading indicators of goal attainment
followed by Level 3 stakeholder critical behaviors. Finally, Level 2 learning goals and Level 1
reactions are developed.
The program offers guidance on adjusting and improving future curricula for the IHSR
master’s program. Successful pilot program implementation requires instruction on defining risk,
applying statistics and probability, recognizing connected systems, influencing attributional
control, building cultural models, and updating cultural settings. The recommended tools to
achieve the goals include using case studies, metaphors, empirical evidence, and frequent
practice.
The Level 4 external and internal outcomes are the required results of a pilot course in risk
at the IHSR. The critical behaviors and required drivers supported by the professors in Level 3
determine the successful execution of a pilot course, resulting in the students’ attainment of the
learning goals from Level 2. Kirkpatrick and Kirkpatrick (2016) recommended gathering data
continuously to monitor the program’s efficacy. The evaluation plan describes an interactive
design for frequent data collection and feedback with a web-based dashboard for professors and
administrators. Users of the dashboard can adjust practices during the semester while planning
for the next iteration of the pilot. The constant system of monitoring assists in answering the
following questions about outcomes for Levels 1, 2, 3, and 4: (a) Does this outcome meet
expectations? (b) If so, why? (c) If not, why not? (Kirkpatrick & Kirkpatrick, 2016). These Level
3 critical behaviors and Level 2 learning measures will help identify any reasons students are not
meeting expectations. Student expectations become the ultimate measure of success.
Kirkpatrick and Kirkpatrick (2016) defined return on expectations (ROE) as the highest
indicator of value for a training program. The model defines expectations at the beginning and
clarifies that the attainment of the leading indicators is the measure of success for the program.
Using ROE establishes a metric for the success and value of a change initiative. Metrics of value
and success increase stakeholder acceptance and willingness to actively support and participate
in the program.
Strengths and Weaknesses of the Approach
This study combined two methodological procedures to examine and recommend
solutions for the problem of practice. Clark and Estes’s (2008) KMO gap analysis framework
guided the organization of literature in Chapter Two, the design of the survey instrument and
interview protocol in Chapter Three, the analysis of data in Chapter Four, and the investigation
of research-aligned recommended solutions in Chapter Five. In addition, the new world
Kirkpatrick model functioned as the framework for converting recommendations into an
implementation and evaluation plan to close the gaps identified with Clark and Estes’s (2008) KMO
framework.
Clark and Estes’s (2008) framework and the new world Kirkpatrick model (Kirkpatrick &
Kirkpatrick, 2016) have strengths and weaknesses pertaining to the study. Clark and Estes’s gap analysis
framework is an ideal approach for root cause analysis when applied to a problem of practice in
the context of a graduate-level institution. The strength of Clark and Estes’s KMO gap analysis
is the application of the framework to an educational institution focused on instructing practicing
institutional leaders. The theories of learning and motivation embedded in the K and M of the
framework aligned with the goals and practices of the IHSR. In addition, all students work in
government organizations that offer both supports and barriers. The O dimension of the framework is
practical when assessing bureaucratic organizations. One of the missions of the IHSR is to
influence the government institutions its students serve, and the organization theory present in
the KMO framework provides practical and empirically supported recommendations.
The weakness of Clark and Estes’s (2008) KMO gap analysis is the application of a
knowledge and motivation gap framework to established leaders in their field. Professionals may
be inclined to hide gaps in knowledge or motivation. The results in Chapter Four, based on
conceptual knowledge, demonstrated this reluctance to expose gaps in knowledge. On the
survey, two knowledge questions had an option of I don’t understand the question. Ninety
percent of respondents answered the two questions incorrectly, and only 8% chose I don’t
understand the question, indicating professionals may choose to guess rather than reveal a
knowledge gap. In addition, the motivational survey results deviated from the interview findings,
indicating that professionals may respond positively to surveys rather than reveal a motivational
gap.
The new world Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2016) aligns as a powerful
method to design a program addressing K and M influences for a student stakeholder group. The
backward planning of the model toward an outcome is clear and compelling. In addition, the
robust methodological process leaves nothing to chance. The model guides the user through
every iteration of a continuing evaluation. The establishment of a goal with a metric of ROE
increases stakeholder acceptance and willingness to support and participate in the program
actively. However, the robustness of the model can become a weakness when it comes to
implementation. The IHSR is an established, top-rated institution with internal metrics based on
success and innovation. The detailed steps in the new world Kirkpatrick model may not align
with established practices and could be disregarded because of the effort required to implement
the evaluation process.
Limitations and Delimitations
The initial potential limitations and delimitations of this study are outlined in the
methodology’s design, as seen at the conclusion of Chapter Three. Completing the results and
findings, coupled with recommendations, allows for a deeper discussion of the limitations and
delimitations. The limitations are discussed through the lens of the survey and interviews.
The survey results produced a high level of confidence based on the number of responses
relative to the population. However, the nature of the survey has limitations with the application
of the Likert scale. The most significant finding in the survey was the gap in conceptual
knowledge related to statistics and probability. However, the questions related to statistics and
probability were reverse-keyed in relation to the rest. If respondents did not stop and think about
the questions, they could have followed the trend in the survey and selected agree or strongly
agree. With the momentum inherent to Likert scale questions, it is hard to estimate the number of
respondents who did not know the correct answer or simply did not pay attention. In addition, the
survey asked the respondents to select events that could follow a power law vs. a bell curve. It is
possible that many respondents were confused by the question but still failed to choose the
option “I don’t understand the question.” With this limitation, the survey result cannot discern if
the respondents do not know what a bell curve is or if they do not know how a bell curve
behaves. Either way, there is a gap in conceptual knowledge of CEs and IEs. The interviews
produced consistent and reliable results. However, the nature of the gap analysis may create
social desirability to answer questions in one direction. I am confident the interviewees were
truthful and candid; however, the lens through which one views a problem can influence the
observation. Much of the validation of gaps came from questions looking for gaps, and this will
always be a limitation of interviews.
The study was bound to IHSR alumni, which is a thin slice of the homeland security
ecosystem. Their selection was based on the consistency of their education, access, and their
tendency toward executive leadership; however, their homogeneity limits the breadth of the
study. The study would have yielded more credible results if broadened to alumni from other
institutions. In addition, the study is one pass across a diverse group of people. Given more time,
a revised survey, improved after the first administration, may have yielded different results.
These limitations and delimitations may guide future research.
Future Research
This study explored the KMO influences on homeland security professionals who
attended the IHSR through a mixed-methods, sequential explanatory process. Future research
should include multiple variations of this study. First, the stakeholders of focus could include
alumni from other institutions, and the results of the KMO dimensions may vary when the
diversity of education is increased. Second, future studies may use a different framework, such
as Bronfenbrenner’s ecological systems theory, to understand research questions regarding risk
assessment. Finally, the gaps in conceptual knowledge should be explored further to understand
their origins.
Conclusion
At its base, risk assessment is a human endeavor; it is fundamental to human survival and
to how we perceive and understand the possibility of negative consequences. This study explored
the ability of homeland security professionals to assess risk, bounded by two research questions:
1. What are IHSR master’s degree alumni’s knowledge, motivation, and organization
needs related to assessing the probability and consequences of events (risk)?
2. What are the knowledge, motivation, and organizational recommendations for
improving IHSR master’s degree student abilities to assess the probability and
consequences of events (risk)?
The stakeholders of focus were the alumni of the IHSR because they represent leaders in
homeland security and because IHSR’s mission is relevant to the research questions.
The research methodology derived its dimensions of examination from Clark and Estes’s
(2008) KMO gap analysis framework. The study process followed sequential explanatory mixed
methods based on assumed influences from the literature review of the problem. The results from
the quantitative survey indicated significant gaps in conceptual knowledge and minor gaps in
motivation and organizational support. The findings from the qualitative interviews explained
depth and reason for the gaps found in the survey. The gaps found in the survey were validated
and triangulated through the interviews and literature.
Principles grounded in information processing theory, motivational theory, and
organization theory suggested recommendations for the gaps with the highest priority. The
recommendations are organized into the new world Kirkpatrick model (Kirkpatrick &
Kirkpatrick, 2016) for implementation and evaluation. The study provides milestones and
processes for implementing and evaluating a novel pilot class on risk at the IHSR. Example
methodologies based on theory and empirical evidence for instructing risk are provided along
with learning objectives.
Risk is inequitable; one event can impact people differently based on their stage in life,
status in society, and health. An event that creates a minor inconvenience for one person or
family can be a catastrophe for another. Although we are affected differently, we are still
connected to each other. One person’s tragedy has the potential to snowball into increased
homelessness, diseases, and crime.
The world is tightly connected as well. A hurricane smashing into the island of Puerto
Rico can create a worldwide shortage of critical medical supplies. And the people in the position
to sense, categorize, and respond to the risk inherent in a tightly coupled society are homeland
security professionals. The gaps in KMO support of our security professionals affect us all, and
the quality and effectiveness of their education are paramount to our survival. This study hopes
to serve as a guiding compass for educators at institutions that teach homeland security, for the betterment of
all humanity. At its foundation, risk assessment is a human endeavor, as are the consequences of
failure.
References
Abernethy, M. A., Anderson, S. W., Nair, S., & Jiang, Y. (2021). Manager ‘growth mindset’ and
resource management practices. Accounting, Organizations and Society, 91, Article
101200. https://doi.org/10.1016/j.aos.2020.101200
Alhakami, A. S., & Slovic, P. (1994). A psychological study of the inverse relationship between
perceived risk and perceived benefit. Risk Analysis, 14(6), 1085–1096.
https://doi.org/10.1111/j.1539-6924.1994.tb00080.x
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing:
A revision of Bloom’s taxonomy of educational objectives. Longman.
Arici, G., Dalai, M., Leonardi, R., & Spalvieri, A. (2018). A communication theoretic
interpretation of modern portfolio theory including short sales, leverage and transaction
costs. Journal of Risk and Financial Management, 12(1), 4.
https://doi.org/10.3390/jrfm12010004
Arnold, R. D., & Wade, J. P. (2015). A definition of systems thinking: A systems approach.
Procedia Computer Science, 44, 669–678. https://doi.org/10.1016/j.procs.2015.03.050
Babbie, E. R. (2020). The practice of social research. Cengage Learning.
Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A:
General Physics, 38(1), 364–374. https://doi.org/10.1103/PhysRevA.38.364
Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in
Psychological Science, 9(3), 75–78. https://doi.org/10.1111/1467-8721.00064
Bearden, J. N., Murphy, R. O., & Rapoport, A. (2008). Decision biases in revenue management:
Some behavioral evidence. Manufacturing & Service Operations Management, 10(4),
625–636. https://doi.org/10.1287/msom.1080.0229
Beeson, M. (2020). A plague on both your houses: European and Asian responses to
Coronavirus. Asia Europe Journal, 18, 245–249. https://doi.org/10.1007/s10308-020-
00581-4
Bellavita, C. (2019). How to learn about homeland security. Homeland Security Affairs, 15,
Article 15. www.hsaj.org/articles/15395
Bellavita, C., & Gordon, E. M. (2006). Changing homeland security: Teaching the core.
Homeland Security Affairs.
Bonanno, P., Colson, A., & French, S. (2021). Developing a training course in structured expert
judgement. In A. M. Hanea, G. F. Nane, T. Bedford, & S. French (Eds.), Expert
judgement in risk and decision analysis (pp. 319–343). Springer International Publishing.
https://doi.org/10.1007/978-3-030-46474-5_14
Brody, M. (2020). Enhancing the organization of the United States Department of Homeland
Security to account for national risk. Homeland Security Affairs, XVI.
https://search.proquest.com/docview/2522295241
Chen, J., & Zhu, Q. (2019). Interdependent strategic security risk management with bounded
rationality in the internet of things. IEEE Transactions on Information Forensics and
Security, 14(11), 2958–2971. https://doi.org/10.1109/TIFS.2019.2911112
Cheng, M. W. T., Leung, M. L., & Lau, J. C. (2021). A review of growth mindset intervention in
higher education: The case for infographics in cultivating mindset behaviors. Social
Psychology of Education, 24(5), 1335–1362. https://doi.org/10.1007/s11218-021-09660-9
Cirillo, P., & Taleb, N. N. (2020). Tail risk of contagious diseases. Nature Physics, 16(6), 606–
613. https://doi.org/10.1038/s41567-020-0921-x
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. Information Age.
Clark, R. E., Estes, F., Middlebrook, R. H., & Palchesko, A. (2008). Turning research into
results: A guide to selecting the right performance solutions. Wiley Online Library.
Clement, K. E. (2011). The essentials of emergency management and homeland security
graduate education programs: Design, development, and future. Journal of Homeland
Security and Emergency Management, 8(2), 1. https://doi.org/10.2202/1547-7355.1902
Colson, A. R., & Cooke, R. M. (2020). Expert elicitation: Using the classical model to validate
experts’ judgments. Review of Environmental Economics and Policy, 12(1), 113–132.
Comiskey, J. (2018). Theory for homeland security. Journal of Homeland Security Education, 7,
29–45.
Cooke, R. M., & Goossens, L. H. (2004). Expert judgement elicitation for risk assessments of
critical infrastructures. Journal of Risk Research, 7(6), 643–656.
Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed-methods
approaches. Sage.
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed
methods approaches. Sage Publications.
Čular, M., Slapničar, S., & Vuko, T. (2020). The effect of internal auditors’ engagement in risk
management consulting on external auditors’ reliance decision. European Accounting
Review, 29(5), 999–1020. https://doi.org/10.1080/09638180.2020.1723667
Danko, T. T. (2020). Perceptions of gains through experiential learning in homeland security and
emergency management education. Journal of Homeland Security Education, 9(2), 1–31.
Eccles, J. S., & Wigfield, A. (2020). From expectancy-value theory to situated expectancy-value
theory: A developmental, social cognitive, and sociocultural perspective on motivation.
Contemporary Educational Psychology, 61, Article 101859.
https://doi.org/10.1016/j.cedpsych.2020.101859
Fabozzi, F. J., Gupta, F., & Markowitz, H. M. (2002). The legacy of modern portfolio theory.
Journal of Investing, 11(3), 7–22. https://doi.org/10.3905/joi.2002.319510
Fischhoff, B. (2021). Regulation of risk. Regulatory Policy and the Social Sciences, 5, 241.
Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness
of extreme confidence. Journal of Experimental Psychology. Human Perception and
Performance, 3(4), 552–564. https://doi.org/10.1037/0096-1523.3.4.552
French, S. (2013). Cynefin, statistics and decision analysis. The Journal of the Operational
Research Society, 64(4), 547–561. https://doi.org/10.1057/jors.2012.23
French, S. (2015). Cynefin: Uncertainty, small worlds and scenarios. The Journal of the
Operational Research Society, 66(10), 1635–1645. https://doi.org/10.1057/jors.2015.21
French, S., Bedford, T., Pollard, S. J., & Soane, E. (2011). Human reliability analysis: A critique
and review for managers. Safety Science, 49(6), 753–763.
https://doi.org/10.1016/j.ssci.2011.02.008
Gaillard, J. C., & Mercer, J. (2013). From knowledge to action. Progress in Human Geography,
37(1), 93–114. https://doi.org/10.1177/0309132512446717
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist,
36(1), 45–56. https://doi.org/10.1207/S15326985EP3601_5
Ghosh, A., & Boyd, E. (2019). Unlocking knowledge-policy action gaps in disaster-recovery-
risk governance cycle: A governmentality approach. International Journal of Disaster
Risk Reduction, 39, 101236. https://doi.org/10.1016/j.ijdrr.2019.101236
Gordji, M. E., Askari, G., & Park, C. (2018). A new behavioral model of rational choice in social
dilemma game. Journal of Neurodevelopmental Cognition, 1(1), 40–49.
Gutiérrez, K. D., & Rogoff, B. (2003). Cultural ways of learning: individual traits or repertoires
of practice. Educational Researcher, 32(5), 19–25.
https://doi.org/10.3102/0013189X032005019
Haimes, Y. Y. (2009). On the complex definition of risk: A systems‐based approach. Risk
Analysis: An International Journal, 29(12), 1647–1654.
Han, S. J., & Stieha, V. (2020). Growth mindset for human resource development: A scoping
review of the literature with recommended interventions. SAGE Publications.
https://doi.org/10.1177/1534484320939739
Hanea, A. M., Nane, G. F., Bedford, T., & French, S. (Eds). (2021). Expert judgement in risk and
decision analysis. Springer. https://doi.org/10.1007/978-3-030-46474-5
Hanson, J., Bangert, A., & Ruff, W. (2016). Exploring the relationship between school growth
mindset and organizational learning variables: Implications for multicultural education.
Journal of Educational Issues, 2(2), 222–243. https://doi.org/10.5296/jei.v2i2.10075
Hoffmann, H., & Payton, D. W. (2018). Optimization by self-organized criticality. Scientific
Reports, 8(1), 1–9. https://doi.org/10.1038/s41598-018-20275-7
Hopkin, P. (2018). Fundamentals of risk management: Understanding, evaluating and
implementing effective risk management. Kogan Page Publishers.
Hsieh, D. A., & Peters, E. E. (1993). Chaos and order in the capital markets: A new view of
cycles, prices, and market volatility. The Journal of Finance, 48(5), 2041–2044.
https://doi.org/10.2307/2329084
Hubbard, D. W. (2014). How to measure anything: Finding the value of intangibles in business.
John Wiley & Sons.
Hubbard, D. W. (2020). The failure of risk management: Why it’s broken and how to fix it. John
Wiley & Sons. https://doi.org/10.1002/9781119521914
Irving, Z. C., & Vervaeke, J. (2016). Relevance realization: An emerging framework of
intelligence in Garlick, Van der Maas, Mercado and Hawkins. The relevance realization
framework of intelligence. University of Toronto.
Jenkin, C. M. (2006). Risk perception and terrorism: Applying the psychometric paradigm.
Homeland Security Affairs, 2(2).
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in
intuitive judgment. Heuristics and Biases: The Psychology of Intuitive Judgment, 49, 81.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation.
Association for Talent Development.
Knight, F. H. (1921). Risk, uncertainty and profit. Houghton Mifflin.
Krathwohl, D. R., & Anderson, L. W. (2010). Merlin C. Wittrock and the revision of Bloom’s
taxonomy. Educational Psychologist, 45(1), 64–65.
https://doi.org/10.1080/00461520903433562
Lakoff, G., & Johnson, M. (2008). Metaphors we live by. University of Chicago Press.
Lewis, T. (2014). Bak's sandpile: Strategies for a catastrophic world. Agile Research and
Technology, Inc.
Lewis, T. (2019). Critical infrastructure protection in homeland security: defending a networked
nation. John Wiley & Sons.
Locke, E. A., & Latham, G. P. (2012). New developments in goal setting and task performance.
Routledge. https://doi.org/10.4324/9780203082744
Lundberg, R., & Willis, H. (2015). Assessing homeland security risks: A comparative risk
assessment of 10 hazards. Homeland Security Affairs, 11, Article 10.
https://www.hsaj.org/articles/7707
Maes, J., Poesen, J., Parra, C., Kabaseke, C., Bwambale, B., Mertens, K., Jacobs, L., Dewitte, O.,
Vranken, L., De Hontheim, A., & Kervyn, M., (2017, May). The persisting gap between
knowledge and action in disaster risk reduction: Evidence from landslides in Uganda and
Cameroon. In M. Mikoš, Ž. Arbanas, Y. Yin, & K. Sassa (Eds.), Advancing culture of
living with landslides. WLF 2017. Springer, Cham. https://doi.org/10.1007/978-3-319-
53487-9_46
Mandelbrot, B. B. (2021). The many faces of scaling: Fractals, geometry of nature, and
economics. In W. C. Shieve & P. M. Allen (Eds.), Self-organization and dissipative
structures (pp. 91–109). University of Texas Press.
Mandelbrot, B. B., & Hudson, R. L. (2010). The (mis)behaviour of markets: A fractal view of
risk, ruin and reward. Profile Books.
Marti, D., Mazzuchi, T. A., & Cooke, R. M. (2021). Are performance weights beneficial?
Investigating the random expert hypothesis. In A. M. Hanea, G. F. Nane, T. Bedford, &
S. French (Eds.), Expert judgement in risk and decision analysis (pp. 53–82). Springer
International Publishing. https://doi.org/10.1007/978-3-030-46474-5_3
Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
implementation (4th ed.). Jossey-Bass.
Michel-Kerjan, E. (2015). Effective risk response needs a prepared mindset. Nature, 517(7535),
413. https://doi.org/10.1038/517413a
Miettinen, R., Punamaki, R., & Engestrom, Y. (2012). Perspectives on activity theory.
Cambridge University Press.
Montuori, A. (2011). Systems approach. In M. A. Runco & S. R. Pritzker (Eds.), Encyclopedia
of creativity (2nd ed., vol. 2, pp. 414-421). Academic Press.
Moreto, W. D., Piza, E. L., & Caplan, J. M. (2014). “A plague on both your houses?”: Risks,
repeats and reconsiderations of urban residential burglary. Justice Quarterly, 31(6),
1102–1126. https://doi.org/10.1080/07418825.2012.754921
National Commission on Terrorist Attacks Upon the United States. (2004). The 9/11 Commission
report: Final report of the national commission on terrorist attacks upon the United
States. W. W. Norton & Company.
Nonaka, I. (2008). The knowledge-creating company (Reprint ed.). Harvard Business Review
Press.
Nunes Amaral, L. A., & Meyer, M. (1999). Environmental changes, coextinction, and patterns in
the fossil record. Physical Review Letters, 82(3), 652–655.
https://doi.org/10.1103/PhysRevLett.82.652
Oakley, B. A. (2014). A mind for numbers: How to excel at math and science (even if you
flunked algebra). TarcherPerigee.
Özduran, A., & Tanova, C. (2017). Manager mindsets and employee organizational citizenship
behaviours. International Journal of Contemporary Hospitality Management, 29(1), 589–
606. https://doi.org/10.1108/IJCHM-03-2016-0141
Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.
Pauleen, D. J. (2017). Dave Snowden on KM and big data/analytics: Interview with David J.
Pauleen. Journal of Knowledge Management, 21(1), 12–17.
Pelfrey, W. V. S., & Kelley, W. D. J. (2013). Homeland security education: A way forward.
Homeland Security Affairs, 9(1) Article 3. https://www.hsaj.org/articles/235
Perrow, C. (1999). Normal accidents: Living with high risk technologies. Princeton University
Press.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in
learning and teaching contexts. Journal of Educational Psychology, 95(4), 667–686.
https://doi.org/10.1037/0022-0663.95.4.667
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of
the motivated strategies for learning questionnaire (MSLQ). University of Michigan.
Plant, J. F., Arminio, T., & Thompson, P. (2011). A matrix approach to homeland security
professional education. Journal of Homeland Security and Emergency Management, 8(2),
1. https://doi.org/10.2202/1547-7355.1883
Pope, D. G., & Schweitzer, M. E. (2011). Is Tiger Woods loss averse? Persistent bias in the face
of experience, competition, and high stakes. The American Economic Review, 101(1),
129–157. https://doi.org/10.1257/aer.101.1.129
Post, T., Baltussen, G., & Van den Assem, M. (2006). Deal or no deal? Decision-making under
risk in a large-payoff game show. American Economic Review, 98(1), 38–71
Robinson, S. B., & Firth Leonard, K. (2018). Designing quality survey questions (1st ed.). SAGE
Publications.
Rozell, D. J. (2015). A cautionary note on qualitative risk ranking of homeland security threats.
Homeland Security Affairs, 11, 238–249.
Rueda, R. (2011). The 3 dimensions of improving student performance. Teachers College Press.
Salkind, N. J. (2017). Statistics for people who (think they) hate statistics (6th ed.). SAGE.
Schein, E. H., & Schein, P. (2017). Organizational culture and leadership (5th ed.). Wiley.
Schraw, G., & McCrudden, M. (2006). Information processing theory.
http://www.education.com/reference/article/information-processingtheory/
Schunk, D. H., Meece, J. R., & Pintrich, P. R. (2012). Motivation in education: Theory,
research, and applications. Pearson Higher Ed.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of
Economics, 69(1), 99–118. https://doi.org/10.2307/1884852
Skoko, H. (2013). Systems theory application to risk management in environmental and human
health areas. Journal of Applied Business and Economics, 14(2), 93–111.
Snowden, D. (2005). Strategy in the context of uncertainty. Handbook of Business Strategy, 6(1),
47–54. https://doi.org/10.1108/08944310510556955
Sornette, D., & Johansen, A. (1997). Large financial crashes. Physica A, 245(3-4), 411–422.
https://doi.org/10.1016/S0378-4371(97)00318-X
Torres-Barrán, A., Redondo, A., Rios Insua, D., Domingo, J., & Ruggeri, F. (2021). Structured
expert judgement issues in a supply chain cyber risk management system. In A. M.
Hanea, G. F. Nane, T. Bedford, & S. French (Eds.), Expert judgement in risk and
decision analysis (pp. 441–458). Springer International Publishing.
https://doi.org/10.1007/978-3-030-46474-5_20
Tranchard, S. (2018). Risk management: The new ISO 31000 keeps risk management simple.
Governance Directions, 70(4), 180–182.
Weichselgartner, J., & Pigeon, P. (2015). The role of knowledge in disaster risk reduction.
International Journal of Disaster Risk Science, 6(2), 107–116.
https://doi.org/10.1007/s13753-015-0052-7
U.S. Department of Homeland Security Risk Steering Committee. (2008). DHS risk lexicon.
Department of Homeland Security.
Weick, K. E. (2004). Normal accident theory as frame, link, and provocation. Organization &
Environment, 17(1), 27–31. https://doi.org/10.1177/1086026603262031
Weiner, B. (2005). Motivation from an attribution perspective and the social psychology of
perceived competence. In A. J. Elliot & C. S. Dweck (Eds.), Handbook of competence
and motivation (pp. 73–84). Guilford.
Werner, C., & Ismail, R. (2021). Structured expert judgement in adversarial risk assessment: An
application of the classical model for assessing geo-political risk in the insurance
underwriting industry. In A. M. Hanea, G. F. Nane, T. Bedford, & S. French (Eds.),
Expert judgement in risk and decision analysis (pp. 459–484). Springer International
Publishing. https://doi.org/10.1007/978-3-030-46474-5_21
Woo, G. (2021). Expert judgement in terrorism risk assessment. In A. M. Hanea, G. F. Nane, T.
Bedford, & S. French (Eds.), Expert judgement in risk and decision analysis (pp. 485–
501). Springer International Publishing. https://doi.org/10.1007/978-3-030-46474-5_22
Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies?
The American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794
Yunkaporta, T. (2020). Sand talk. The Text Publishing Company.
Zdziarski, M., Nane, G. F., Król, G., Kowalczyk, K., & Kuźmińska, A. O. (2021). Decision-
making in early internationalization: A structured expert judgement approach. In A. M.
Hanea, G. F. Nane, T. Bedford, & S. French (Eds.), Expert judgement in risk and
decision analysis (pp. 503–520). Springer International Publishing.
https://doi.org/10.1007/978-3-030-46474-5_23
Appendix A: Sample Survey Items Measuring Kirkpatrick Levels 1 and 2
For questions 1–4, answer with Strongly disagree, disagree, agree, or strongly agree.
1. This course consistently held my interest. (Level 1: engagement)
2. I constantly learned and grew in this course. (Level 1: engagement)
3. The competencies that were the focus of this course will have relevance in my
professional life. (Level 1: relevance)
4. I enjoyed the systems thinking approach to assessing risk. (Level 1: customer satisfaction)
Questions 5–10. Use the 5-point scale articulated below to respond to the prompts. Each
question asks you to consider the way you would have responded before participating in this
course compared to how you respond now at the conclusion of the course.
1 = Not at all, 2 = Barely, 3 = Somewhat, 4 = Quite a bit, 5 = Extremely well
5. I am committed to applying systems thinking to risk assessments. (Level 2: commitment)
1. Before this course
2. After this course
6. I can interpret results from statistics and probability. (Level 2: conceptual knowledge)
1. Before this course
2. After this course
7. I can use the Cynefin Framework to categorize different environments during a risk
assessment. (Level 2: procedural knowledge)
1. Before this course
2. After this course
8. I feel confident that I can discern connected systems from independent systems. (Level 2:
confidence)
1. Before this course
2. After this course
9. I feel confident that I can master the competencies in a risk assessment in a complex
environment. (Level 2: confidence)
1. Before this course
2. After this course
10. I see value in the systems thinking process for risk assessments. (Level 2: attitude)
1. Before this course
2. After this course
Appendix B: Sample Blended Evaluation Items Measuring Kirkpatrick Levels 1–4.
The New World Kirkpatrick Model recommends revisiting Level 1 (relevance and
satisfaction) and Level 2 (knowledge and skills) in a delayed survey sent to pilot course
participants. In addition, Level 3 critical behaviors and Level 4 indicators and results should be
assessed in this measure, administered 3 months after completion of the pilot course. Sample
items are shown below.
Open-Ended Questions for Revisiting Level 1 and Level 2
1. What projects and/or competencies from the pilot risk course continue to feel relevant
to you now? (Level 1 Relevance)
2. I would recommend this training to my co-workers. (Level 1: customer satisfaction)
3. Scenario question: You are asked to assess the relative risk from a potential collapse
of the Shasta dam using the TCV framework. Explain the questions you would ask to
determine the state of the system. Discuss which environment, based on knowability,
you would place the event in using the Cynefin framework. (Level 2: procedural
knowledge)
Five-Point Scale Questions for Evaluating Level 3 Critical Behaviors
For questions 4–6 below, identify the degree to which you have continued to practice the
behaviors that were cultivated in your pilot course on risk. (Level 3: critical behaviors).
1. Little or no application
2. Mild degree of application
3. Moderate degree of application
4. Strong degree of application
5. Very strong degree of application and desire to help others do the same
4. I cultivate the mindset of a systems thinker when I assess risk.
1 2 3 4 5
5. I regularly consider and recognize the state of the system when assessing risk.
1 2 3 4 5
6. I am building a culture of systems thinking at my organization.
1 2 3 4 5
7. If you circled 3 or below on any of the questions above, please indicate the reason(s) why
you are not continuing to practice and apply these behaviors in your organization.
• My organization is not involved in risk assessments, so these behaviors don’t apply.
• My organization is not regularly involved in risk assessments; systems thinking still
seems relevant, but I don’t have support to incorporate it.
• My organization regularly assesses risks, but I don’t understand exactly how to apply
these behaviors in this context.
• My organization regularly assesses risks, but I don’t have the confidence to apply what
I learned.
• Other, please specify: _______________________________
Level 4 Indicators and Results Sample Metrics
8. I have noticed the following continued outcomes from my participation in the pilot
risk course. Check all that apply.
• I naturally apply systems thinking to risk assessments.
• I recognize when statistics and probability are relevant.
• I am supporting or building a culture of systems thinking.
• I try to update policies or procedures to incorporate systems thinking when
assessing risk.
• I am confident I can use the Cynefin framework to categorize environments
and select an appropriate approach.
• I am confident I can discern connected events from independent events.
• I value feedback from experts in complex environments.
• I am better at recognizing when mathematical models are relevant.
• None of the above. I don’t feel any continued positive outcomes.
9. To what degree do you feel that the risk course had an impact on your professional life?
Have you changed your approach to assessing risk? Explain your thinking.
Appendix C: Outline of the Information Displayed on the Dashboard
Survey responses are confidential to ensure students are candid in their responses. Table C1
outlines the information displayed for each level of the New World Kirkpatrick Model; a brief
aggregation sketch follows the table.
Table C1
Dashboard Information
New world model level | Data displayed | Visual presentation
Level 1: Reactions | Four-point Likert scale data from survey items evaluating engagement, relevance, and satisfaction | Pie charts that illustrate the percentage of respondents that strongly disagree, disagree, agree, and strongly agree for each metric
Level 2: Learning | Pre- and post-responses on a five-point scale for items aligned to declarative and procedural knowledge, commitment, confidence, and attitude | Bar charts that illustrate the means of pre- and post-responses for each metric
Level 3: Drivers | Four-point Likert scale data from survey items evaluating students’ perceptions of the professor’s effectiveness at reinforcing, encouraging, and rewarding their learning | Pie charts that illustrate the percentage of respondents that strongly disagree, disagree, agree, and strongly agree from the most recent administration of each metric, and the mean responses to each metric compared to means from previous pilots
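To make the dashboard computations in Table C1 concrete, the sketch below shows, under the assumption of simple tallying, how the percentage breakdown for a single four-point Likert item (the Level 1 and Level 3 pie charts) could be derived. The item and responses are hypothetical placeholders.

```python
# Minimal sketch (hypothetical responses): computing the percentage of
# respondents selecting each option on a four-point Likert item, as shown
# in the Level 1 and Level 3 pie charts on the dashboard.
from collections import Counter

LIKERT_OPTIONS = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]

# Hypothetical responses to a single engagement item.
item_responses = ["Agree", "Strongly agree", "Agree", "Disagree", "Strongly agree"]

counts = Counter(item_responses)
total = len(item_responses)
for option in LIKERT_OPTIONS:
    share = 100 * counts.get(option, 0) / total
    print(f"{option}: {share:.0f}%")
```

The same tallied percentages would feed each pie chart segment, while the Level 2 bar charts would draw on pre/post means as illustrated in Appendix A.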