A SYSTEM FRAMEWORK FOR EVIDENCE BASED IMPLEMENTATIONS
IN A HEALTH CARE ORGANIZATION
by
Caitlin Hawkins
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(INDUSTRIAL AND SYSTEMS ENGINEERING)
May 2013
Copyright 2013 Caitlin Hawkins
Acknowledgements
The process of developing this research began in my junior year of undergraduate study at Cornell, when I became curious about research, and specifically about research applications in health care, and it has evolved into the work presented here. Throughout the process, there have
been ups and downs, anxiety and joy, as I completed my journey. I owe extreme
gratitude to many people who have guided and supported me along the way.
First, my undergraduate advisor, Dr. Jack Muckstadt, for showing me what it really
means to do research, for instilling a sense of excitement for the methods that can be
used through his teaching, and for his continued support throughout my time as a
graduate student. It was wonderful to have such a strong mentor leading me into
graduate school and supporting me through my transition from the University of California, Los Angeles, to the University of Southern California.
Second, my graduate advisor, Dr. Shinyi Wu, for helping me find a black cat in a dark
room and even more importantly a room that had a cat in it at all (also known as a
narrowed dissertation topic that was interesting not only to me, but also to the scientific
community). Without her guidance through the research process, I might not have completed it at all.
Gratitude also goes to my dissertation committee members, Dr. Stan Settles and Dr. Paul Adler, as well as to those who served on my qualifying committee, Dr. Najmedin Meshkati and Dr. Robert Myrtle, for challenging me to refine my arguments and to expand my validation methodology to include methods not traditionally used by industrial and systems engineers but that nonetheless contributed to my work in developing a systems framework for evidence based implementations in health care. I would also like to acknowledge Dr. Peer Fiss, Dr. Chih-Ping Chou, and Dr. Sandra Eckel for their aid in strengthening my understanding of the methods I utilized throughout my validation.
Extreme gratitude also goes to my support network: my friends in the PhD program alongside me, for listening to my complaints about work, sharing in the sentiment, and helping me overcome hurdles in my research by being a sounding board for ideas; my friends outside of this network, for understanding and for forcing me to work by joining me in study parties; Travis, for not letting me quit even when I thought I wanted to and for always reminding me to do my work no matter how much I hated hearing it; Cindy, for acting as a second mother to me, especially during the early years; and, last but not least, my family, who cheered me on from Texas, listened to all my ups and downs, and provided constant support through it all from the very beginning.
Table of Contents
Acknowledgements ......................................................................................................... ii
List of Tables .................................................................................................................. vi
List of Figures ................................................................................................................ vii
Abstract ........................................................................................................................ viii
Chapter One: Introduction .............................................................................................. 1
Phase 1: Constructing the ABCD Implementation Framework .................................. 3
Phase 2: Validating the ABCD Implementation Framework ...................................... 6
Organization of the Dissertation ................................................................................. 9
Chapter Two: Literature Review .................................................................................. 11
2.1 Implementation Science ..................................................................................... 11
2.2 Implementing Evidence Based Practice .............................................................. 15
2.3 Justification for Framework Components .......................................................... 17
2.3.1 Assistive Systems ......................................................................................... 18
2.3.2 Behavior Activation ...................................................................................... 21
2.3.3 Culture Building............................................................................................ 25
2.3.4 Data Focus .................................................................................................... 28
2.3.5 Alternative Strategies .................................................................................. 31
2.4 System Design Theory ......................................................................................... 32
2.4.1 What is a system? ........................................................................................ 32
2.4.2 Complex Systems Theory ............................................................................. 33
2.4.3 Implementing System Change ..................................................................... 35
Chapter Three: Developing the Conceptual Framework .............................................. 40
3.1 Review of the Central Line Implementation Bundle........................................... 41
3.1.1 The Development of the Central Line Check List Implementation.............. 42
3.1.2 Large Scale Implementation Efforts on the Central Line Bundle................. 45
3.2 Comparison to Current Implementation Frameworks ....................................... 46
3.3 Finalized ABCD Implementation Framework ...................................................... 52
Chapter Four: Methods ................................................................................................ 66
4.1 Research Design .................................................................................................. 67
4.2 ICICE CCM Data Sets ............................................................................................ 69
4.2.1 Qualitative Data Set ..................................................................................... 70
4.2.2 Implementation Outcome Data ................................................... 71
4.2.3 Patient Health Outcomes Data for Patients with Asthma ........................... 72
4.2.4 Patient Health Outcomes Data for Patients with Diabetes ......................... 74
4.3 Research Methods .............................................................................................. 76
4.3.1 Qualitative Data Analysis ............................................................................. 76
4.3.2 Qualitative Comparative Analysis Methods ................................................ 82
4.3.3 Statistical Analysis Methods ........................................................................ 93
4.5 Limitations ......................................................................................................... 100
Chapter Five: Results .................................................................................................. 104
5.1 Qualitative Analysis Results .............................................................................. 104
5.2 Qualitative Comparative Analysis Results ........................................................ 116
5.2.1 Necessary Conditions of Successful Implementation ................................ 116
5.2.2 Sufficient Conditions of Successful Implementation ................................. 118
5.3 Statistical Analysis Results ................................................................................ 122
5.3.1 Effects on Implementation Outcomes ....................................................... 122
5.3.2 Effects on Patient Health Outcomes .......................................................... 124
5.4 Summary of Results .......................................................................................... 131
Chapter Six: Discussion ............................................................................................... 136
6.1 Evidence of Framework Elements .................................................................... 137
6.2 Validation of the ABCD Implementation Framework Elements ....................... 144
6.2.1 The Assistive System Component .............................................................. 145
6.2.2 The Behavior Activation Component ......................................................... 145
6.2.3 The Culture Building Component .............................................................. 146
6.2.4 The Data Focus Component ....................................................................... 147
6.2.5 The Interactions Element ........................................................................... 149
6.3 Validation of the ABCD Implementation Framework as a Whole .................... 151
6.4 Study Limitations .............................................................................................. 154
6.5 Future Work ...................................................................................................... 155
Chapter Seven: Conclusions ........................................................................................ 157
References .................................................................................................................. 161
Appendix ..................................................................................................................... 176
A. Details of Existing Implementation Frameworks ............................................... 176
B. ACIC Survey Instrument ..................................................................................... 180
C. Final Coding Tree for Qualitative Analysis ......................................................... 191
D. Examples of ABCD Implementation Framework Elements ............................... 194
E. Detailed Results of Qualitative Coding By Site ................................................... 208
List of Tables
Table 1 Levels of a system ............................................................................................ 36
Table 2 Comparison of Implementation Frameworks .................................................. 50
Table 3 Breakdown of Framework Components and Attributes .................................. 53
Table 4 Descriptive Statistics for Counts of Change Activities ..................................... 70
Table 5 ACIC Score Descriptive Statistics ...................................................................... 71
Table 6 Sample Characteristics for Asthma Sites .......................................................... 73
Table 7 Sample Characteristics of Diabetes Sites ......................................................... 75
Table 8 Component Frequency Results of Single Site Coding .................................... 105
Table 9 Count of Actions Relating to Interaction Elements ....................................... 109
Table 10 ABCD Implementation Framework Elements Addressed Over Time .......... 115
Table 11 Results of Necessity Tests ............................................................................ 117
Table 12 Configurations for Achieving ........................................................................ 118
Table 13 Configurations for Failed Implementations ................................................. 120
Table 14 Correlations between ABCD Scores and ACIC Change Scores ..................... 123
Table 15 Parameter Estimates for ABCD score Effects on Asthma Outcomes ........... 126
Table 16 Relationships between ER Visits and Framework Components .................. 128
Table 17 Parameter Estimates for ABCD Score Effect on Diabetes Outcomes .......... 130
Table 18 Details of Existing Implementation Frameworks ......................................... 177
Table 19 Complete Results of Qualitative Coding ...................................................... 208
List of Figures
Figure 1 NIH Roadmap .................................................................................................. 11
Figure 2 Abstracted Implementation Model ................................................................ 44
Figure 3 ABCD Framework ............................................................................................ 60
Figure 4 Missing Component Attributes by Site ......................................................... 107
Figure 5 ABCD Scores Resulting From Qualitative Coding .......................................... 112
Abstract
Background: The literature has shown mixed results for the ability of implementation
efforts to actually improve patient care. Implementations that employ a
multicomponent strategy are more effective, an example of employing systems thinking in designing implementations. A systems perspective requires an understanding of how interconnected elements are organized to achieve a specific purpose. Failing to understand the systems view of an organization’s implementation
context contributes to this lack of success. Therefore, the purpose of this work was to
answer the following: How do we characterize the context of a healthcare organization
that affects the likelihood of success in implementing evidence based practice (EBP)?
Research design: In an effort to answer the above question, the objective of this
research was to develop a validated framework that characterizes the internal
contextual and behavioral factors of a health care organization that affect
implementation. Phase 1 of this work consisted of an in-depth case study of the central
line checklist implementation, a successful implementation of an EBP, which was
complemented by a comparative analysis of existing implementation frameworks.
The finalized implementation framework was validated in Phase 2 through a mixed
methods analysis of the implementation of another EBP, the Chronic Care Model (CCM)
across 34 sites. First, qualitative evaluation used systematic coding to determine if the
framework categorized change efforts. The qualitative comparative analysis allowed us
to make causal inferences regarding the necessity and sufficiency of the elements of the
framework in causing a successful implementation. The statistical analysis determined
whether or not elements of the framework affect the success of an implementation
based on (1) implementation outcomes using Pearson correlation analysis and (2)
patient health outcomes using multilevel modeling techniques.
Phase 1 Results: Through review of the central line checklist implementations, I
identified four components critical to the implementation success – assistive systems
(A), behavior activation (B), culture building (C), and data focus (D). The comparative
analysis of existing frameworks uncovered that no existing framework addressed the four components together, and that existing frameworks often included elements not critical at the organization level. Many also failed to identify the interactions between the components they included. The finalized ABCD implementation framework includes the four components identified above as well as the interactions between them.
Phase 2 Results: There was evidence that the framework sufficiently described the
change activities conducted by the sites. At the 0.75 consistency level, the assistive
system, behavior activation, culture, and interactions were found to be necessary for a
successful implementation while failing to address the data focus component was
necessary for an unsuccessful implementation. At the 0.90 consistency level, cases that
addressed all five elements together led to successful implementation. It was found
that an implementation that involves more activities related to the elements of the
framework results in better improvement in implementation and patient health
outcomes.
Discussion and Conclusion: The ABCD implementation framework characterizes the
context of a healthcare organization that affects the likelihood of success in
implementing EBP. Each of the elements is critical as none of the validity tests
suggested the elimination (or addition) of an element. The framework recognizes the
complexity of designing an implementation, but correctly identifies system boundaries
that allow focused efforts on factors that an organization can change. The resulting
framework provides a guide for researchers as well as leaders in health care
organizations to design future implementations. By understanding the framework, it is
possible to tailor the implementation of an EBP to the internal context found within the
specific organization.
Chapter One: Introduction
In today’s healthcare industry, there is a large divide between the knowledge of a best
practice or clinical evidence and the integration of such knowledge into routine practice
in healthcare institutions. Unfortunately, even the most important of these advances
can take up to 20 years to be widely implemented [1]. In the past decade, there has
been a significant effort to push for broad adoption of best practices to improve quality
of patient care through the use of clinical guidelines, standardizing the care given to
each patient. However, the literature has shown mixed results for the ability of such efforts to improve patient care [2–5]. Successful implementations have been found to apply a combination of approaches, including using literature reviews to educate staff on the clinical guidelines, empowering leaders to step forward, and using feedback and reminder systems [6–10]. This combination tactic is a direct example of using a systems thinking approach to design and implement clinical guidelines, although it is rarely recognized as such.
The field of implementation research is blossoming in order to address the needs of
health care organizations as they attempt to translate evidence-based health care from
the clinical knowledge base into routine practice [11]. Awareness of the necessity for
research to understand the success of such implementations is growing among the
major health-related funding agencies, including the National Institutes of Health (NIH),
the Centers for Disease Control and Prevention (CDC), and the World Health
Organization (WHO) [12]. The field of implementation research is tasked with producing
insights and generalizable knowledge regarding implementation processes, barriers,
facilitators, and strategies for success. A prominent area of interest is the
implementation of evidence based practices (EBP) [13]. Another task of researchers in
this field is to develop implementation theories and frameworks that will serve to
eliminate the gap between research evidence and practice [14].
The implementation of a well-studied EBP requires a model to guide implementation
that is different from the traditional quality improvement methods that are trial-and-
error based to find out what works, such as the plan-do-study-act (PDSA) cycle.
Implementations that have utilized conventional methods of change management are
often unsuccessful [2–5]. The PDSA structure only provides a mechanism for change,
but provides no specific direction for health care organizations regarding what types of
changes to focus on during the design of an implementation effort. It is imperative that
a framework that guides implementation of known EBPs is developed as it would allow
for the clear definition of the issues that must be addressed by an organization and in
turn the organization can abstract a robust plan for implementation based on the
properties of the known EBP and their organization.
A systems thinking approach to implementation research allows a researcher to
acknowledge the complexity faced, and to design solutions that specifically address this
complexity. The 2010 AHRQ report entitled Industrial and Systems Engineering and
Health Care: Critical Areas of Research called for a focus on knowledge innovation to
align the tools and principles of industrial engineering with the needs of the complex
health care system [15]. In a step towards this knowledge innovation, I will answer the
following question: How do we characterize the context of a healthcare organization
that affects the likelihood of success in implementing evidence-based practice? The
objective of this research is to apply a systems perspective to develop a framework that
reduces the gap between clinical evidence and routine practice by
1. constructing a framework that characterizes the elements of an organization’s
internal context that drives the success of an EBP implementation (Phase 1)
2. validating the framework by assessing its ability to describe another
implementation project’s success (Phase 2)
Phase 1: Constructing the ABCD Implementation Framework
In an effort to develop a framework that serves to eliminate the gap between evidence
and practice, it is useful to identify successful implementation attempts in order to learn
from their experiences. Although there have been many implementation efforts that
have been successful in a single organization, fewer have been successful on the large
scale as well. Examples of EBP implementations that have been successful nationwide
include the IMPACT model for evidence based depression care [16], the evidence based
quality improvement for depression care in the VA [17], the IHI strategic initiative –
Transforming Care at the Bedside (TCAB) [18], and the central line checklist
implementations [7]. There are lessons to be learned from each of the
implementations.
I sought to study an implementation of a unique EBP that occurred within a single
organization or previously un-networked set of organizations with an implementation
setting that is contained within the current staffing and practices of that organization.
When the EBP implemented varies between organizations, like TCAB [18], it becomes
difficult to understand the over-arching implementation components as they may vary
in each implementation. The second inclusion criterion centered on the type
of organizations to study. There are certain advantages to implementations within a
network, such as the depression care improvements implemented within the Veterans
Health Administration [17], due to the organizational structure that is already in place.
However, implementation efforts that are successful in networked organizations are not
always translatable to the individual organization. It is my goal to develop a
generalizable implementation framework for any organization, so it was imperative to
study implementations in organizations without this advantage. The final inclusion
criterion, containment of the implementation setting, was developed in an effort to
reduce the variability associated with potential communication issues and the additional implementation components needed to develop an external relationship, as with the IMPACT model [16]. Removing this outside relationship reduces the complexity and makes it
easier to understand the factors that led to the implementation’s success. Based on the
above inclusion criteria, the central line checklist implementation was the best choice
for my analysis.
The central line checklist implementation was developed by Peter Pronovost, M.D., and
his team at Johns Hopkins University. They created a successful implementation design
by using a checklist that reduced central line associated bloodstream infections (CLABSI)
in hospitals between 70% and 100% [7], [19]. His design was successful through an
iterative change process that addressed barriers to change as they became apparent.
Although not viewed in terms of a systems approach, each step in his process addressed
a single part of the overall system itself, mimicking the practice of an industrial
engineer. After experiencing success at Johns Hopkins, an initiative was undertaken to
implement the iterative process across the state of Michigan. The success stories from
the Michigan Keystone Intensive Care Unit (ICU) [20–27] project fueled the creation of a
national program sponsored by the Agency for Healthcare Research and Quality known
as “On the CUSP: Stop Bloodstream Infections” (CUSP stands for Comprehensive Unit-
based Safety Program). The project has expanded to 45 states, but the results and
commitment to the implementation are mixed [7], [28], [29]. The success of this
implementation attempt can serve as a guide for future theoretical development in
implementation science.
In an effort to answer my research question, I completed a critical evaluation of the
central line checklist implementation literature from the systems perspective, which
resulted in a four-component framework of the implementation system under which
CLABSI rates were reduced. The first component, data focus, is tied to knowledge and
accessibility of CLABSI rate data and checklist compliance. These data triggered changes
in the other three components, assistive system, behavior activation, and culture
building. Behavior activation occurred with the implementation of the checklist to
follow when placing central lines and was followed by a change in the assistive systems
with the creation of a central line cart. The push toward complete compliance came from addressing communication issues between the nurses and physicians in the ICU.
Changes were iterative and driven by feedback highlighting improvement in CLABSI
rates and checklist compliance. Through analysis of the change process that contributed
to the success of the central line checklist implementation, the ABCD Implementation
framework began to define the general system within which implementation efforts are
undertaken. The analysis highlights the need to take a systems perspective in order to
successfully implement change. The framework was enhanced through a comparative
review of the literature on existing implementation frameworks.
Through this comparison, I validated my hypothesis that all four components
determined from the grounded theory analysis of the central line checklist
implementation are not fully embraced in the literature of implementation research. I
also used the comparison of the frameworks to adjust and finalize the attributes and
relationships between the elements of the framework to ensure that a complete picture
of the implementation system was developed.
Phase 2: Validating the ABCD Implementation Framework
An important issue in the development of an implementation framework is the need to
test the construct validity of the framework in order to ensure its usefulness and
generalizability for planning future implementation projects. In order to provide a
complete validation of the ABCD implementation framework, the following three
questions must be answered:
1. Are the four components and the interactions of the ABCD implementation
framework employed in other successful implementations of EBP?
2. Are the components and interactions included in the ABCD implementation
framework either necessary or sufficient in bringing about a successful
implementation of EBP?
3. Does an implementation project that involves more activities related to the
ABCD implementation framework show more alignment with EBP and better
patient outcomes?
I utilized qualitative and quantitative data provided by 34 organizations throughout their process of implementing the Chronic Care Model (CCM). The qualitative data consisted of monthly reports of the change activities conducted as well as implementation team survey and interview data. The quantitative data included
baseline and final scores based on the Assessing Chronic Illness Care (ACIC) survey
instrument for 29 sites and patient health outcomes data for 11 sites focusing on
improving asthma care and 6 sites focusing on improving diabetes care.
To answer the first question, I analyzed the qualitative data by coding their
implementation efforts based on the above framework. This analysis demonstrated that the
ABCD implementation framework effectively described all of the activities involved in a
different implementation process. It also provided evidence that each of the
components and their interactions of the ABCD implementation framework were
addressed throughout the implementation of the CCM.
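As an illustration of how such coding results become element-level scores, the sketch below tallies a hypothetical log of coded change activities; the activity tags and counts are invented for illustration and are not drawn from the study's coding tree or data.

```python
from collections import Counter

# Hypothetical coded activity log: each monthly change activity is tagged
# with the ABCD element (or interaction) it addresses. Tags are invented.
coded_activities = [
    "assistive_system", "behavior_activation", "data_focus",
    "culture_building", "assistive_system", "interactions",
    "data_focus", "behavior_activation",
]

# The score for each element is simply the count of activities coded to it.
abcd_score = Counter(coded_activities)
print(abcd_score["assistive_system"], abcd_score["data_focus"])
```

A site that logs more activities against a component receives a higher score for that component, which is the form of evidence the later analyses build on.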
The second question was answered through the use of qualitative comparative analysis
(QCA), a method which allows us to make causal inferences regarding an outcome in
instances with a small number of cases. In studying the data using QCA, I was able to
determine whether or not the elements of the ABCD implementation framework were
causal conditions that result in a successful implementation of the CCM (outcome).
Tests of necessity provided an interesting insight. Cases that showed a successful
implementation shared the following causal conditions (at the 0.75 consistency level):
assistive system, behavior activation, culture, and the interactions. The data focus
component was not found to be necessary for a successful implementation. However,
the lack of a data focus component was found to be necessary for an unsuccessful
implementation. This result justifies the inclusion of the data focus component in the
ABCD implementation framework. Tests of sufficiency were also conducted to
determine if cases that share the same causal conditions share the same outcome. The
results of the sufficiency test showed that the four components and their interactions
are sufficient causal conditions for a successful implementation.
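The consistency measures underlying these necessity and sufficiency tests can be sketched for crisp-set QCA as follows; the eight-site data and the pass/fail results here are invented for illustration, not taken from the study's 34 cases.

```python
# Crisp-set QCA consistency measures, sketched with invented data.

def necessity_consistency(condition, outcome):
    """Of the cases showing the outcome, the share that also show the condition."""
    with_outcome = [c for c, o in zip(condition, outcome) if o == 1]
    return sum(with_outcome) / len(with_outcome)

def sufficiency_consistency(condition, outcome):
    """Of the cases showing the condition, the share that also show the outcome."""
    with_condition = [o for c, o in zip(condition, outcome) if c == 1]
    return sum(with_condition) / len(with_condition)

# 1 = element addressed / implementation succeeded, 0 = not (made up)
assistive = [1, 1, 1, 0, 1, 1, 0, 1]
success   = [1, 1, 1, 0, 1, 0, 0, 1]

nec = necessity_consistency(assistive, success)
suf = sufficiency_consistency(assistive, success)
# A condition passes a test when its consistency meets the chosen
# threshold (0.75 in this study; 0.90 is a stricter common choice).
print(nec >= 0.75, suf >= 0.90)
```

In fuzzy-set QCA the same ratios are computed over degrees of membership rather than 0/1 indicators, but the logic of comparing consistency against a threshold is unchanged.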
The final question was answered through the use of statistical analysis. I utilized the
results of the qualitative coding analysis to assign ABCD scores, which reflected the
number of change activities relating to each of the framework elements. First, Pearson
correlation analysis was conducted between the ABCD scores and the ACIC change
score. The results of this analysis showed positive correlations, not only overall but also
between the ACIC change score and each of the elements of the ABCD implementation
framework individually. Secondly, I used multilevel modeling techniques to study the
relationship between the ABCD scores and the patient health outcomes for the asthma
and diabetes sites separately. The results of this analysis showed that an
implementation project that involves more activities related to the ABCD
implementation framework showed better patient outcomes.
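As a sketch of the first of these analyses, a Pearson correlation between per-site ABCD activity counts and ACIC change scores can be computed as below; the six site values are invented purely for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented per-site values: total ABCD change activities and the
# corresponding ACIC change score (final minus baseline).
abcd_total  = [12, 30, 18, 45, 22, 38]
acic_change = [0.5, 2.1, 1.0, 3.2, 1.4, 2.6]

r = pearson_r(abcd_total, acic_change)
print(round(r, 2))  # values near +1 indicate a strong positive correlation
```

The multilevel models for patient outcomes additionally account for patients being nested within sites, which a simple correlation of site-level aggregates cannot do.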
Organization of the Dissertation
This dissertation includes a thorough review of the literature in the next chapter. The
literature review covers the field of implementation science, critical factors in
implementing evidence based practice, a justification for each of the framework
components, and a review of system design theory. The third chapter details the
development of the ABCD implementation framework. This chapter includes a
thorough review of the central line case study, the results of the comparative analysis
of existing implementation frameworks, and the finalized ABCD implementation
framework as it was abstracted from a critical evaluation of the literature using a
systems lens. Chapter 4 describes the research methods used to validate the
objectives of this research. The methods section provides a detailed description of the
research design, CCM data sets, methods used to validate the framework, and
limitations. Chapter 5 provides the results of the validation. The results are organized
by research method (qualitative analysis, qualitative comparative analysis, and
statistical analysis), as each method was tailored to answer specific validation
questions. Chapter 6 provides a discussion of the results and concluding remarks.
Based on the results of the three-step validation process, I conclude that the ABCD
implementation framework provides a generalizable implementation framework for
any organization implementing a well-studied EBP. By adopting the systems view, I was
able to narrow the system boundaries to focus only on the elements of an
implementation that an organization has the ability to change.
Chapter Two: Literature Review
In order to gain an understanding of the components and their interactions for the
development of an organizational implementation framework, a comprehensive
literature review was conducted. First, we will review the current state of implementation
science literature, which will be followed by a discussion of implementation of evidence
based practice (EBP). Then, we will provide a justification for each of the four
framework components. Finally, we will discuss the system design theory that provided
the backbone for the creation of the ABCD implementation framework.
2.1 Implementation Science
A commonly cited statistic is that it takes 17 years to translate only 14% of original
research to benefit patient care at the bedside [1]. The primary reasons identified for
this delay have been termed “roadblocks” on the NIH roadmap (Figure 1), which defines
the path of clinical research [30].

Figure 1. NIH Roadmap: the bench-to-bedside translation pathway from basic science
research (preclinical studies, animal research) through human clinical research (case
series, phase 1 and 2 clinical trials, controlled observational studies), guideline
development (systematic reviews, meta-analyses), and dissemination and
implementation research (practice-based research, phase 3 and 4 clinical trials,
observational studies, survey research) to clinical practice (delivery of recommended
care to the right patient at the right time; identification of new clinical questions and
gaps in care), with translational roadblocks T1 (translation to humans), T2 (translation
to patients), and T3 (translation to practice).

Two roadblocks are identified as barriers to the
translation between basic science and improved health processes and outcomes. The
first roadblock, T1 – translation to humans, occurs in translating basic science into
effective clinical research and is not addressed in this research. The second roadblock
focuses on the translation of effective clinical research into routine practice in health
care. It has been divided into two types of translation. T2 – translation to patients –
focuses on the development of guidelines through meta-analyses and systematic
reviews. Although the guidelines become the driver for implementation projects, the
proper methods for the development of the evidence base to be implemented will not
be assessed in this research. T3 – translation to practice – is the focus of
implementation science research and assesses both the efficacy and the effectiveness of
practice change caused by implementation programs relating to evidence based
guidelines [30]. The translation of research to practice is the focus of this research.
The initial focus of implementation science research was on changing physician behavior
[31–33]. It was believed that the physician was the primary driver of health care
practice and resource utilization. Therefore, changing provider behavior would also
improve adherence to recommended practice and improve quality of patient care.
Strategies to achieve this initially included conventional educational strategies and
passive dissemination of information [32]. These passive approaches were recognized
to be ineffective and there was a push for the testing of more active behavior change
strategies including the use of opinion leaders or audit and feedback programs [9], [34–
36]. During the 1990s, it was recognized that physician behavior is not the only
driver that affects implementation strategies and the focus of implementation science
shifted to focus on the role of organizational structures and policies [32]. The strategies
employed also shifted to focus on theory from management science and quality
improvement [32], [37].
The current focus of the field of implementation science centers on identifying barriers
and facilitators to implementation in healthcare, developing strategies for success, and
in turn producing generalizable knowledge regarding the process of implementation in
an effort to reduce the T3 gap between research evidence and practice [11], [32], [38].
As the field has evolved, the strategies for successful implementation have likewise
shifted from single-component structures aimed at changing provider behavior to
hybrid approaches that combine strategies at different levels, tailored to specific
settings [33], [35], [39]. This method of implementation highlights the need for
a systems perspective [38], [40], [41]. It is now recognized that a true implementation
strategy must be selected based on “(1) identified causes of quality and implementation
gaps, (2) an assessment of barriers and facilitators to practice change, (3) guided by
appropriate behavior change theory and conceptual models, and (4) sensitive to
features of the context and settings in which the implementation effort will occur [32]”.
Implementation gaps are defined as gaps between existing knowledge of best practice
or evidence and actual practice. Since most implementation settings include a variety of
implementation gaps and barriers to change, the structure of the implementation must
include many components that are designed to address each of the issues discovered.
The components must be guided by relevant theory and their effects properly evaluated
after implementation [32]. The emphasis in evaluation is on measurement of changes in
adherence, clinical practice, adoption rates, and quality of care. Clinical outcomes serve
as a secondary measure in this type of research; because implementation programs
focus only on the adoption of proven clinical innovations, clinical outcomes should
improve whenever the implementation is successful [14]. The current taxonomy of implementation outcomes
[42] directly corresponds to the elements of intervention characteristics that have been
defined as key attributes determining a successful intervention design [33], [38], [43],
[44].
As evidence based medicine and practice have become more prominent, research in
implementation science has focused its efforts in this area. Studies have been
developed to understand successful implementations of EBP and to develop strategies
for future implementation efforts for this area of healthcare [32]. Implementation
research highlights the necessity for frameworks and theory to guide implementation
projects of EBP [11], [33], [40], [45–50], with some studies showing that
implementations that do not follow a model of this type are less successful than those
that do [51–53].
2.2 Implementing Evidence Based Practice
The purpose of evidence based medicine is to utilize the best available evidence gained
from systematic research to aid in clinical decision making and ensure that patients are
receiving the highest quality of care in routine practice [54], [55]. EBP is an extension of
evidence based medicine, as it applies beyond physician practice to other areas of
health care. Evidence-based guidelines are the use of evidence based practice at the
organizational level through the production of guidelines, policies, and regulations [8].
The development of evidence based guidelines is undertaken by individuals focusing on
T2 translational research [30].
The dissemination and implementation of EBP into routine clinical practice is influenced
by several factors, including information, clarity of contents, perceived values,
preferences, and beliefs, and the organizational context [13]. For example, it is
imperative that the EBP be appropriately developed and sponsored, fully endorsed and
supported, and not easily dismissed [56]. This requires that the EBP be developed from
a sound scientific basis and from a reputable source (i.e. professional organization or
government) [8], [57]. If this is not the case, physician acceptance will be low which will
cause adherence and implementation success to be low as well. These factors have also
been identified as significant to success in several of the implementation frameworks
[33], [38], [43], [44] that will be discussed in 3.2. Other facilitating factors to the
implementation include dedicated time to learn and practice EBP, management
support, the integration of EBP across all disciplines of patient care, easily accessible
sources of EBP, and promotion through spread of successful EBP interventions [58].
Several barriers to implementation of EBP have been identified. The first centers on the
current state of professional norms and culture balanced with the high levels of
uncertainty that define medical practice. Professional norms are extremely stable, and
are therefore difficult to alter. For example, guidelines developed by a professional
community or a physician’s peers are seen as more credible than those developed by
insurance companies or governmental agencies [59]. The norms common in healthcare
include the belief in individual judgment and patient-by-patient decisions. These norms
are intensified by uncertainty which permeates decision making related to treatment
and in the cause and effect relationship between treatment and patient health
outcomes. EBP is implemented in a range of health care practices – each defined by its
own context. Each practice has different constraints and influences, which can become
a barrier to implementation if the strategy is not appropriately tailored to the
implementation site [32]. Other barriers that have been identified in actual
implementation projects include time constraints, knowledge and awareness gaps, and
poor availability or applicability of evidence. Each of these barriers clearly traces back
the facilitators previously defined [55], [58].
It is believed that the focus on EBP can aid in minimizing the gap between research and
practice [60], but the development of legitimate EBP is clearly only the first step to
developing an implementation program. Many barriers still exist to successfully
implement and sustain such evidence in practice and the developments of
implementation science described in 2.1 directly apply in this area. Programs to
implement EBP must be well designed, well prepared, and pilot tested before use [8].
Current implementation research has focused on developing tailored implementation
strategies based on the evidence itself and the organizational context through the
application of implementation frameworks. Although many frameworks have been
developed [33], [35], [38], [39], [43], [44], [49], [52], [58], [61–63], the literature still
lacks a practical model for organizations at the level of implementation [38], [64], and a
gap remains between implementation and sustainability [13].
2.3 Justification for Framework Components
The implementation context can be defined as the characteristics of an organization and
its environment that influence the implementation and its effectiveness [65].
Understanding this context is critical to the success of an implementation [47], [64–67]
and often is the cause of the variability in an implementation’s success [43], [61], [68].
It is clearly evident from the implementation science literature that the implementation
of evidence based practice (EBP) requires a multidimensional approach that
appropriately addresses context. Strategies focusing on a single aspect of change have
been proven to be ineffective [32], [35]. The facilitators and barriers to implementation
are typically associated with one of the following: the individual clinician, the social
context of care provision, or the organizational context [9]. For this reason, the
strategies for implementation also fall into one of these categories. It is also well
understood that a tension for change must exist for an implementation to be successful
[32], [33], [35], [43]. These four areas combine to define the internal context of an
organization undergoing an implementation and are shown to influence implementation
success [64]. As introduced previously, the framework developed in this research
includes four key components: assistive system, behavior activation, culture, and data,
each of which maps directly to the context defined above. I will provide the theories
and strategies that influence these areas, as it is imperative that theory be the guiding
factor in the development of an implementation framework [46].
2.3.1 Assistive Systems
The need for a systems perspective in implementation science is clearly cited. Yet the
literature that addresses system changes is underdeveloped with very few
implementation studies highlighting the need to understand how process and assistive
systems affect the success of the implementation [35], [39]. However, studies are
emerging that recognize the need to focus on how work is organized [66], [69], [70] and
the equipment and technology that support the work process [41]. The assistive
systems include the process redesign, the supply of materials, the data collection
system, and other aspects of the hospital or clinic as an environment that may become a
barrier to implementation. The significance and approaches for modifying this
component stem from organizational, scientific management, and power theories.
Organizational theories focus on creating the necessary conditions for change [9]. The
belief is that lack of adherence to an EBP is not an individual doctor problem, but a
system failure attributable to inadequately organized care processes [39]. As expected,
managers tend to see the system as the problem and propose structural or
organizational improvement to make implementation possible [9]. Under this theory,
health care delivery is seen as a series of interrelated processes and providers are
people who depend on each other to achieve the desired results – connecting the
system component to the cultural component. The benefit of this approach is that fault
is placed on the system, and providers can focus on improvement without becoming
defensive [36], [71]. Complexity of care is a major barrier to translating evidence into
practice and sustaining implementation efforts [55], [72]. Each step in a process has an
independent probability of failure. Therefore, each additional step increases the
likelihood of failure. It is imperative that providers understand the system and work to
reduce its complexity by appropriately analyzing the steps of a process [55]. Cited
strategies in the healthcare literature include the substitution of tasks to other members
of the care team and strategies to implement total quality management [39]. Total
quality management emphasizes that the greatest opportunity for improvement is
found by focusing on organizational characteristics and pushes for change in the care
system rather than in provider behavior [55].
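The earlier point about step-wise failure can be made concrete: if steps fail independently, overall process reliability is the product of the per-step success probabilities, so every added step lowers the chance the whole process completes correctly. The probabilities below are illustrative, not drawn from any study data.

```python
def process_reliability(step_success_probs):
    """Overall probability that a multi-step care process completes without
    failure, assuming each step succeeds or fails independently."""
    p = 1.0
    for prob in step_success_probs:
        p *= prob
    return p

# Hypothetical: every step succeeds 99% of the time.
print(round(process_reliability([0.99] * 10), 3))  # 0.904
print(round(process_reliability([0.99] * 50), 3))  # 0.605
```

Even with 99% per-step reliability, a 50-step process fails roughly two times in five, which is why reducing complexity is itself a system-level intervention.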
Scientific management is a theory of management developed by Frederick Taylor that
analyzed and synthesized workflows. He highlighted the need for the system to come
before the man. The theory centers on the fact that inefficiency exists in almost all
processes and that the remedy lies in systematic management. The application of
scientific management has been proven to show astounding results [73]. From this
field, the tool of workflow modeling was developed. This approach allows an
organization to understand its workflow prior to implementation in order to assess how
the implementation will affect the process of care. In doing so, barriers to
implementation are uncovered, the future workflow can be modeled, and providers can
be trained in
not only the EBP, but also the new workflow so they are more equipped to succeed with
the implementation [74]. Although this method has been proven successful in other
fields, it has not been fully adopted in the implementation science literature [35], [39].
Taylor believed in standardization of best practices, which directly relates to the
theories behind EBP guideline development and his tools of management can be used to
aid in successful implementation design [73].
Power theory provides an alternative approach to system change through coercive
means which use organizational policies to force change [75]. Organizations utilize
simple policy changes to create or reduce barriers to undesired behavior. For example,
forcing approvals for certain tests can create a barrier by preventing physicians from
needlessly ordering, but simplifying the form would make ordering easier and reduce a
barrier to change [9], [71], [76]. In order for these simple changes to be effective, the
organization must understand and capitalize on the nature of the current process and
understand the barriers within the process of care. If not, there is the potential to
create adverse effects and increase the burden of the providers as they try to “work
around” the system [72]. The studies of the effectiveness of these policies show mixed
results [76]. More intensive coercive approaches are used as well, but will be discussed
in 2.3.5.
Alternative systems strategies include changing physical structure and equipment,
facilitating relay of data to providers, and changing records systems. These approaches
highlight the need to assess the environment in which care takes place as well as the
system and tie to human factors engineering [63]. The extent to which a system exists
to collect, manage, and facilitate the use of data needed to support performance
improvement is of particular importance to the success of the implementation [33],
[38], [61], [77], [78]. As previously highlighted, performance measure data is a key
component to the initiation and sustainment of an implementation, but to obtain these
measures a good method to collect the data must be in place. Quality management
theory also highlights the need for a data collection system in order to evaluate the
impact of system changes [55]. The data system should serve as a surveillance
mechanism to identify events (numerator) and those at risk for events (denominator),
which will contribute to clear performance measurement. Neglecting the denominator
and addressing only the numerator may penalize hospitals that look harder for events,
causing their measured performance to appear worse.
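The numerator/denominator surveillance measure described above can be sketched as a simple rate computation. CLABSIs per 1,000 catheter-days is a commonly used formulation of such a measure, though the counts below are hypothetical and purely illustrative.

```python
def event_rate(events, at_risk):
    """Performance measure as a rate: events observed (numerator)
    over the population at risk for the event (denominator)."""
    if at_risk <= 0:
        raise ValueError("denominator (population at risk) must be positive")
    return events / at_risk

# Hypothetical surveillance counts: CLABSIs per 1,000 catheter-days.
infections = 4
catheter_days = 2500
rate_per_1000 = event_rate(infections, catheter_days) * 1000
print(round(rate_per_1000, 2))  # 1.6
```

Tracking the denominator explicitly is what keeps a unit that surveils more catheter-days from looking artificially worse than one that surveils fewer.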
2.3.2 Behavior Activation
Implementation science originated with a primary focus on determining how to best
activate change in provider behavior [31], [32]. The theories and literature in this area
are well developed, but have been shown to lack effectiveness when implemented
alone [9], [39], [76]. Cognitive, adult learning, and behavioral theories provide the
theoretical basis for studying behavior change. Each approach emphasizes different
aspects of behavior, but does not provide a complete perspective to understand
successful implementation or provider behavior on its own [9]. Social influence theory
and organizational theory have also been highlighted as a means to motivate provider
behavior change, but in actuality better address culture and system change,
respectively, and are discussed in those sections.
Cognitive theories suggest that the ability to change provider behavior centers on a
provider’s knowledge (or lack thereof) of the EBP. The theory capitalizes on the rational
information seeking and decision making of providers [75]. It is believed that providers
will weigh the evidence and come to a reasonable conclusion [36]. Therefore, the
assumption is that lack of adherence to EBP stems from doctors’ poor knowledge of
the consequences of that nonadherence. Epidemiological strategies in this area focus on
providing better information about the evidence base in an effort to increase
compliance [39]. The primary strategy is to increase awareness through the
development of guidelines that are credible [75]. Educational materials are also
distributed to professionals through courses, mailed pamphlets, and journal articles.
The strategies in this area are passive, and the effects are limited [36], [39], but due to
their low cost, they warrant consideration as part of a more comprehensive
approach [39].
Adult learning theories focus on the intrinsic motivation of professionals. The theory is
that change is driven by an internal striving for professional competence or an inherent
motivation to grow [9], [75]. Strategies based on these theories include the promotion
of learning from experience, problem-based learning, interactive group learning, and
creating a process to develop local consensus. This type of behavioral change strategy
highlights the belief that clinicians must experience a problem in practice before they
are truly motivated to change. Using local consensus also gives those involved a feeling
of ownership and further drives commitment to change [9]. The solutions determined
through local consensus may extend beyond provider behavior activation, but this
strategy speaks to the provider motivation. There is little evidence of the effectiveness
of this approach on its own [39].
Behavioral theories focus on the external influences for behavior change [75]. The
theory suggests that performance is primarily influenced through conditioning and
controlling behaviors. Behavior is primarily influenced by external stimuli before or
after a specific action. Implementation strategies focus on behavior activation through
feedback, incentives, modeling, and external reinforcement [36], [39]. These methods
have been proven to be effective for test ordering and prevention, but the size of the
effect is limited by the type of feedback, its source and format, and the frequency or
intensity of presentation [39]. A relatively new approach to provider behavior change is
the use of independent redundancy. Application of this strategy utilizes checklists to
remind providers of the critical steps of a process. Independent redundancy was
adapted from the field of aviation safety and capitalizes on the principles of behavior
theory [55]. The checklist protects against failures due to lapses in provider memory
and attention. It reminds providers of the minimum necessary steps by making them
explicit, which instills discipline and creates higher performance [79]. It is important to
note that a culture that supports teamwork must exist in order for independent
redundancy to be successful. A culture of teamwork is required because quality patient
care often relies on the work of an interdisciplinary team. The team must be willing to
not only implement the checklist, but also have members in place to enforce the use of
the checklist in the early stages. Without a culture of teamwork, these actions can
create resistance from team members [55].
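An independent-redundancy checklist of this kind can be sketched as a simple completion check, where a second team member confirms each step and any unconfirmed step is flagged before the procedure continues. The step names below follow the commonly cited central line insertion checklist but are illustrative here, not taken from the dissertation's case study materials.

```python
# Hypothetical central line insertion checklist (step names are illustrative).
CHECKLIST = [
    "wash hands",
    "full barrier precautions",
    "skin antisepsis with chlorhexidine",
    "avoid femoral site when possible",
    "remove unnecessary lines",
]

def missing_steps(completed):
    """Independent redundancy: a second team member records completed
    steps; any checklist step not confirmed is flagged before proceeding."""
    done = set(completed)
    return [step for step in CHECKLIST if step not in done]

print(missing_steps(["wash hands", "skin antisepsis with chlorhexidine"]))
```

The enforcement role belongs to a team member other than the proceduralist, which is why, as noted above, the technique presupposes a culture of teamwork.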
It is clear that many of the strategies defined above highlight more than one component
of the framework developed in this research. This is due to their inability to successfully
maintain implementation change without addressing other aspects of the
implementation context. For example, cognitive theories fail to address the other
components, such as culture and system, and in turn are relatively ineffective. Educational
strategies work only if the physician is unaware of the evidence base, but are ineffective
if the tools are not in place to support change. The strength of adult learning theory
capitalizes on developing ownership, which is an attribute of an organization’s culture.
The results of the local consensus may require more than provider behavior activation
and may also prompt system changes if such barriers are identified. Behavioral theories
highlight the necessity of adequate feedback, but to do so data must be readily available
that correctly highlights the effects of implementation changes. The process of
delivering care may change depending on the format of the reminder. The intersection
of the application of behavioral theories with the other components of the
implementation context supports the necessity of a system framework that correctly
defines the interconnections between components.
2.3.3 Culture Building
Most change efforts target the more visible aspects of an organization. An explanation
for why these efforts are unsuccessful centers on the failure to address the culture of
the organization, a less tangible component of the implementation context [43], [52],
[80]. An organization’s culture is defined by the norms, values, and basic assumptions
inherent in the organization [43]. A supportive culture advocates for the
implementation project and requires continued commitment that serves to enhance its
chances of success [33]. The emphasis for cultural change efforts stem primarily from
social influence theory [39], [75].
Social influence theory highlights the influence of significant peers or role models
on provider behavior [75]. The pressure from significant peers can have a substantial
impact on whether new scientific findings are adopted [9]. Lack of adherence to an EBP
is perceived to be due to the absence of social norms promoting adherence or a lack of
leadership in management that prevents the following of guidelines. The use of local
opinion leaders is an example of a strategy from this theory [33], [39], [63]. Expert
opinion leaders exert influence through their authority and status within the culture,
while peer opinion leaders exert influence through their representativeness and
credibility. Opinion leaders can have a positive or negative influence on the adoption of
an implementation [33]. This has been shown to have mixed effects as the feasibility of
identifying opinion leaders in different settings is uncertain [33], [39]. Another strategy
based on this theory is multi-professional collaboration, which has been proven
effective in shortening hospital stay, reducing costs, and increasing patient satisfaction
[39]. This collaboration can include team care for patients or group rounds to ensure
the entire care team is aware of the current patient status [7]. The benefits of this
approach lie in the emphasis on professional communication and need for support and
approval from peers [9].
Communication failure is the most common factor that contributes to major errors in
care delivery [69]. The nurse-physician relationship is strained by its hierarchical nature,
which directly affects communication [81], [82]. In order to improve communication,
team training techniques are often employed. These strategies focus on creating high-
reliability practice through team adaptation and coordination and team self-correction.
The techniques teach teamwork communication skills oriented to healthcare safety and
how to create a shared mental model for improved performance [72], [83]. Another
effective team training technique is the use of task groups to improve inter-group
dynamics through meetings with an impartial group facilitator [81]. These exercises
work to build not only communication, but also trust between providers and their
nurse subordinates through empowerment [82]. Doctors who create a relationship of trust
with their nurses are more likely to create a safe working environment and build the
perceptions of a strong safety culture among subordinates [84].
Quality and safety culture is important, measurable, and improvable. It measures how
caregivers communicate and interact [69], [85]. The Safety Attitudes Questionnaire
(SAQ) is a validated tool to measure safety culture. It evaluates caregivers’ perceptions
of teamwork and safety climate across six domains: teamwork climate, safety climate,
perceptions of management, job satisfaction, working conditions, and stress recognition
[86]. Through the use of the survey, providers can obtain a valid and feasible
assessment of teamwork and safety culture in their clinical area. Higher scores are
associated with lower rates of nurse turnover, CLABSI, and in-hospital mortality [69].
The survey provides an evaluation measure for organizations attempting to assess their
safety culture prior to an implementation. Using this as a tool to highlight areas that
need improvement prior to implementation can aid in identifying barriers and
succeeding in implementation [87]. Safety culture can vary across the units in a single
institution, so it is imperative to measure and understand its current state in any area
undergoing an implementation [88].
Organizational and adult learning theories defined previously also address the cultural
component. Organizational theory highlights the need for a culture oriented toward
collaboration and improvement of care in addition to addressing inadequately organized
care processes [39]. Strategies from adult learning theories create ownership, an element
of organizational culture, to develop a commitment to change from providers [9].
Strategies focus on communication and teamwork and work to address the safety
culture of the organization. Their success hinges on a commitment to improve, and the
motivation to change [61].
2.3.4 Data Focus
An essential element to the initiation of an implementation project is creating a tension
for change [33], [35], [43], [61]. Tension for change is defined as the degree to which
stakeholders perceive the current situation as intolerable or requiring change. In EBP
implementation, tension for change can be developed by highlighting a discrepancy
between current practice and the evidence base [12], [89] or through firsthand
experience with a given problem [39]. The discrepancy may be found in the process of
care or the use of outdated materials, but providers require proof that the discrepancy
is affecting care to patients in a negative way [89]. Health services researchers collect
data on variations in patient care in order to identify such discrepancies [9]. A common
strategy to understand where discrepancies exist and in what magnitude is the use of
performance measures, which indicates that data provides the trigger that stimulates
change and commitment to an implementation project [33], [38], [61], [63], [68], [69].
The data focus component is therefore broken down into three attributes: performance
measurement, accessibility of data for EBP, and completeness of information.
Performance measures provide the mechanism to understand discrepancies and
identify areas for improvement, but the underlying data must be both accessible, so
that the organization can use it to develop measures, and complete, so that the
measures developed provide a clear understanding of the organization’s performance.
The benefits of performance measures and data accessibility are twofold as they also
provide the means to evaluate the implementation’s performance [33]. Providing
adequate feedback is an essential element to sustaining an implementation [9], [33],
[39], [61], [90]. Feedback is defined as accurate and timely information about the
impact of the implementation process [33]. The theoretical basis for the benefits of
feedback stems from behavioral theory, which holds that behavior will change with
control and conditioning from external stimuli; behavioral theory is discussed in more
detail in Section 2.3.2. Feedback of early measures of program effectiveness guides
implementation adoption and maintenance in a critical feedback loop, which is vital to
implementation success [38]. Therefore, data provides both the trigger and the
assessment for an implementation project, and proper measurement is imperative [90].
The need to develop and organize data into meaningful performance measures stems
from quality management theory [63]. Performance measures are used to examine the
outcomes of implementations and hold providers accountable for the quality of care
being provided. They are utilized to draw conclusions regarding the quality of care a
patient receives [91]. It is difficult to distinguish performance measures that can be
validly measured as rates from those that cannot [69], [92]. Therefore, it is imperative
that measures be transparent and their intention well understood. Without this, the
validity of the data for the specific purpose becomes questionable [91]. The first step in
identifying performance measures is to develop a standardized definition for the events
to be measured and then to identify those at risk for the event. Completing these two
steps is crucial to maintain clarity for performance measures [92]. One method to
avoid issues with valid and invalid rate measures is to stratify performance measures for
implementation into one of four domains: (1) How often do we harm patients? (2) How
often do we provide the interventions that patients should receive? (3) How do we
know we have learned from defects? and (4) How well have we created a culture of safety?
The first two domains can be defined in terms of rates and relate to patient health
outcomes and the process of care (assistive system). The third is a structural measure
based on the evidence base. The fourth relates to the culture of the environment in
which the employees work [69]. Another framework for evaluating factors that
contribute to an incident focuses on patient factors, task factors, provider factors, and
team factors [93]. Both methods for defining implementation performance measures
highlight measures in each of the areas of the framework developed in this research.
Three strategies have been proposed to guide policy for developing performance
measures within the four domains previously defined. One, assume all harm is
preventable. This approach is effective when nearly all harm is in fact preventable, such
as the case of CLABSI. Two, adjust measures for preventability. This approach is
appropriate in addressing cases such as mortality in the intensive care unit or due to
specific conditions and utilizes risk-adjustment models. Three, link the care received to
the outcome measure. This approach determines preventability based on the process
of care. For example, it is known that administering antibiotics before surgery reduces
the risk of infection. If the patient does not receive the antibiotic and develops an
infection, then the infection could be preventable. Each strategy has risks and benefits
and selection for developing performance measures for an implementation project
should be done carefully and be tailored to the implementation [92].
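As a hedged illustration of strategies two and three, the sketch below operationalizes strategy two as an observed-to-expected ratio from a risk-adjustment model and strategy three as a check linking a missed process step to the outcome. The function names, figures, and simplifications are assumptions for this sketch, not definitions from the source:

```python
def standardized_ratio(observed, expected):
    """Strategy two: adjust for preventability by comparing observed events
    to the number expected under a risk-adjustment model."""
    if expected <= 0:
        raise ValueError("expected must be positive")
    return observed / expected

def preventable_by_process(received_prophylaxis, developed_infection):
    """Strategy three: link the care received to the outcome. An infection is
    flagged as potentially preventable when the evidence-based process step
    (here, pre-surgical antibiotics) was missed."""
    return developed_infection and not received_prophylaxis

# Hypothetical unit: 8 observed infections vs. 10 expected from the model
print(standardized_ratio(8, 10))            # 0.8 (better than expected)
print(preventable_by_process(False, True))  # True
```

Each function embodies the trade-off noted in the text: the ratio depends entirely on the quality of the risk-adjustment model, while the process-linked check depends on the strength of the evidence tying the process step to the outcome.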
2.3.5 Alternative Strategies
The previous sections discussed theoretical change strategies that target the inner
implementation context. Alternative approaches attempt to force provider behavior
change without specifically addressing the motivation of the provider, the system itself,
or the culture of the organization. These coercive approaches are administrative,
economic, and political. Simple administrative approaches were mentioned in relation
to system changes, but the approaches discussed here are broader and made without
taking the internal implementation context into account. These interventions take the
form of laws, regulations, or financial incentives [63], [76]. The theory behind coercive
approaches focuses on pressure and control as a method for change [75]. The
effectiveness of these approaches capitalizes on perceived negative consequences from
learning theory or perceived power and authority. Their value lies in the ability to use
power to break fixed habits and routines of providers. The pressure from an outside
source may be the deciding factor to implement and maintain desired change [9]. The
research evidence for the use of these approaches is scarce and not straightforward
[39]. These approaches fail to take the implementation context into consideration and
providers forced to change may face barriers created by the inability of the system or
culture to nurture and support the desired change, which can increase the burden on
the physician. The employment of these strategies is beyond the capabilities of a single
organization. Therefore, they are excluded from our analysis.
2.4 System Design Theory
Issues faced in health care are increasingly being described as complex problems and
the health care system itself has expanded in complexity over the last 30 years [94],
[95]. Therefore, it is imperative to study these problems through a systems lens,
which allows researchers to acknowledge complexity and design solutions that address it
[95–98]. We will define a system, introduce complex systems theory, and identify issues
encountered in creating system change.
2.4.1 What is a system?
A system is defined as a set of interconnected elements that are coherently organized in
order to achieve a specified purpose or function. A system is defined by its elements,
interconnections, and function or purpose [99]. For example, the elements of a single
healthcare organization can include physicians, patients, administrators, etc. It is
interconnected through hospital policy, individual physicians’ personal strategies and
expertise in patient care, guideline-based care practices, the progression of a disease,
etc. The purpose of the hospital is to create the best outcomes for each individual
patient. This system can be viewed as a subsystem of the overall complex health care
system or can be further broken down into smaller subsystems such as the individual
unit within the hospital. As we can infer from this simplistic overview of healthcare as a
system, the system is more than the sum of its parts. The system can only be
understood as an integrated whole – analysis must reflect the elements and the
relationships among these components. A system can exhibit adaptive, goal-seeking,
dynamic and evolutionary behavior [99].
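The element/interconnection/purpose definition can be made concrete with a small data sketch. The names below are illustrative choices, not drawn from the dissertation:

```python
# A minimal encoding of a system as elements, interconnections, and purpose.
hospital_system = {
    "elements": ["physicians", "nurses", "patients", "administrators"],
    "interconnections": [
        ("administrators", "physicians", "hospital policy"),
        ("physicians", "patients", "guideline-based care"),
        ("nurses", "physicians", "communication during procedures"),
    ],
    "purpose": "best outcomes for each individual patient",
}

# Analysis must reflect both the elements and the relationships among them:
# every interconnection should link elements that exist in the system.
for source, target, _label in hospital_system["interconnections"]:
    assert source in hospital_system["elements"]
    assert target in hospital_system["elements"]
```

The point of the encoding is that the system cannot be described by the element list alone; removing the interconnections or the purpose leaves only a collection of parts.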
2.4.2 Complex Systems Theory
A system can be simple or complex. A simple system is predictable and can be understood
by analyzing its parts. Simple systems typically exhibit the following
characteristics: they are homogeneous, linear, deterministic, static, and independent; lack feedback;
do not adapt or self-organize; and do not connect to other subsystems [96]. A
complicated system is simply a simple system with many elements [100]. Complex
systems are not predictable and must be studied to determine how the elements are
interconnected to serve the purpose of the system. Change in complex systems requires
a multi-level, multi-actor response grounded in an understanding of the
interconnections of elements within the system [96]. The development of solutions that
center on the ability to understand the behavior of systems is characteristic of
complex systems theory [101].
The organic view of complex system theory centers on the idea that the study of the
parts of the system will fail to accurately predict the responses of the system as a whole
[101]. A common analogy for complex systems theory stems from the study of a flock of
birds. The behavior of a flock of birds is generated through the interconnections of all of
the birds in the flock, but an individual bird cannot produce the emergent patterns that
are observable as the flock changes paths for a variety of reasons [100].
Complex systems are heterogeneous, nonlinear, stochastic, dynamic, and interdependent;
they contain feedback, self-organize, and exhibit emergent and chaotic behavior [96], [101].
Heterogeneity implies that the systems have a large number of structural variations.
The nonlinear property indicates that a cause does not produce a proportional effect
and the interconnections do not have a linear relationship. The stochastic and dynamic
properties suggest that there is uncertainty about the outcome of an event and the
system changes over time, with the past impacting the future. The subsystems are
interconnected and affect each other, which gives the system an interdependent
property. Both positive and negative feedback loops exist in a complex system, which
causes an element itself to be altered as an effect of its behavior. Self-organization
centers on the order that results from the internal dynamics of the system. This is often
an emergent behavior. Emergent behavior, in terms of complex systems, relates to the
development of new behaviors based on the collective behaviors of the parts of the
system and the system’s response to its environment. The chaotic property of the
system implies that the system is sensitive to the current state of the system [96], [101].
Complex adaptive systems (CAS) are a special case of complex systems: they
exhibit all of the characteristics of complex systems but are also able to adapt by
changing and learning from experience. Agents in an adaptive system react to both the
system and to each other [100]. It is important to note that a CAS operates far from
equilibrium, as it is in constant flux in response to changes within the system and its
environment [102]. Healthcare systems have been shown to exhibit the adaptive
property of complex systems, and addressing this property directly has proven
successful in improving patient outcomes [97], [102–105].
It is important to note that even though the complex systems theory perspective
suggests that everything within the system is connected, it is necessary to identify
boundaries in order to be able to successfully study the system [96]. This can be a very
difficult task. One solution is to identify the critical interconnections within the system
that affect a specific function and focus on the elements and purpose from this
perspective [35], [99].
2.4.3 Implementing System Change
As mentioned, solutions to system issues require a more comprehensive approach.
Focusing on linear thinking and single “one-best” solutions can destabilize a system
by making too many assumptions regarding the interconnections between
elements [98], [106]. It is clear that to solve the problems of a complex system, change
is required at a variety of levels. Meadows proposed a list of 12 potential places to
intervene within a system known as leverage points. These are defined as places where
a small shift in one thing can produce big changes in everything else. The highest level
of leverage points centers on changing and transcending the paradigm, which is defined
as the mindset out of which the system arises. Change at this level is the most difficult
to cause and sustain, but also would cause the most impact to the system. The middle
range of leverage points affects the goals and rules of the system and the system-level
structures. The lowest level of the spectrum of leverage points is “constants, numbers
and parameters”. These affect change in the structural elements of a system, such as the
actors and physical elements of the system. Changes in this area are often effective, but
only produce a local impact [107].
The 12 leverage points proposed by Meadows were adapted by Malhi et al. to define an
intervention level framework that was used to sort qualitative data on systems change
related to obesity [106]. Their five-level framework, shown in Table 1, provides a
breakdown of the levels of the system upon which change can occur and consolidates
the 12 leverage points into five categories. Although this is a very effective method to
classify the levels of a system at which an implementation may occur, it does not
provide a comprehensive method to classify an implementation approach.
Table 1 Levels of a system

Paradigm: The system’s “mindset”; the deepest held, often unspoken beliefs about
the way the system works. Goals, rules, and structures that govern the system arise
out of the paradigm. Actions and ideas at this level propose to either shift or
reinforce the existing paradigm. Intervention at this level is very difficult.

Goals: Targets that need to be achieved for the paradigm to shift. Actions at this
level focus or change the aim of the system.

System structure: Elements that make up the system as a whole, including the
subsystems, actors, and interconnections among these elements. The structure
conforms to the system’s goals and paradigms. Actions at this level can change the
system structure by changing linkages within the system or incorporating new types
of structural elements.

Feedback and delays: Feedback allows the system to regulate itself by providing
information about the outcome of different actions back to the source of the actions.
Feedback occurs when actions by one element of the system affect the flows into or
out of that same element. Actions at this level attempt to create new, or increase the
gain around existing, feedback loops. Adding new feedback loops or changing
feedback delays can restructure the system.

Structural elements: The smaller subsystems, actors, and physical elements of the
system, connected through feedback loops and information flows. Actions at this
level affect specific subsystems, actors, or elements of the system.
In addition to the fact that most issues in health care occur on a range of the levels
found above, many solutions run into other issues that result from the structure of the
system itself. Meadows labeled these as “system traps” [99]. Several are
especially applicable to the healthcare industry: policy resistance, drift to low
performance, success to the successful, and the shift of the burden to the intervener.
Policy resistance occurs due to the bounded rationality of the actors in a system. Each
actor has his or her own goals and takes action if there is a discrepancy between those goals
and the state of the system. Resistance to change occurs when the goals of subsystems are both
different and inconsistent. Therefore, if a policy is effective in one subsystem, it will pull
the system in a direction that is farther from the goals of actors in another subsystem,
which produces resistance. In order to avoid this “trap”, it is imperative to remove the
policy and re-focus efforts to seek out a mutually satisfactory way for the goals of all
actors to be realized [99]. We see this in health care as doctors and nurses both work
towards their common goal of improving patient health, but are hindered by the
communication relationship between the two actors within the system, which causes
them to resist changes.
The drift to low performance occurs when a system allows past performance to
influence performance standards. This creates a reinforcing feedback loop of eroding
goals causing the system to drift towards low performance. Avoiding this is simple –
maintain absolute performance standards that are developed based on the best
performance not the worst [99]. In healthcare, the lack of feedback relating to
performance measures of patient health outcomes and compliance causes physicians to
have an “if it ain’t broke, don’t fix it” mentality. This perpetuates an environment that
resists guideline-based care because there is no clear evidence that the guidelines are not being
followed.
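The eroding-goals loop behind the drift to low performance can be sketched numerically. The weighting scheme and compliance figures below are arbitrary assumptions for illustration, not values from the source:

```python
def simulate_goal_erosion(initial_goal, performances, weight=0.5):
    """Drift to low performance: each period, the standard drifts toward
    recent (worse) performance instead of staying absolute, creating a
    reinforcing loop of eroding goals."""
    goal = initial_goal
    history = [goal]
    for perf in performances:
        # The standard is pulled toward the worse of past goal and performance.
        goal = (1 - weight) * goal + weight * min(perf, goal)
        history.append(goal)
    return history

# A 95% compliance standard eroding as observed compliance falls short
print(simulate_goal_erosion(95, [90, 88, 85]))  # [95, 92.5, 90.25, 87.625]
```

Holding the goal fixed at 95 regardless of past performance, as the absolute-standard remedy prescribes, removes the reinforcing loop entirely.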
The “trap” known as success to the successful occurs when winners are rewarded with
the means to win again. This creates a reinforcing feedback loop in which the winners
eventually eliminate the losers. Policies that level the playing field are imperative in
avoiding this issue [99]. Within the healthcare system, hospitals face this problem
because those hospitals that are successful in quality improvement projects are
rewarded with government funding, but those that are unable to meet the
improvement objectives are neither punished nor rewarded, nor are they aided
in determining a path to be successful in the future.
Shift of burden to the intervener occurs when a solution to a system problem reduces
the symptoms, but does not actually solve the underlying problem. This will eventually
cause the system to deteriorate and become dependent on the intervention and less
able to maintain its desired state. Practices that are symptom-relieving should be
avoided by focusing on the long-term changes to solve the problem [99]. Within the
healthcare system, this often occurs in the implementation of quality improvement
projects. A simple fix is implemented in order to meet the goals of the initiative, when
full system analysis should be completed in order to ensure that the improvement is
effective in the long run.
The literature review detailed in this chapter highlights the current state of
implementation science research and current facilitators and barriers to implement EBP.
In order to more successfully implement EBP, it is clear that a system framework is
needed to guide implementations. I studied the implementation context to gain an
understanding of components and their interactions for the development of an
organizational implementation framework. The systems perspective studied here
provides an understanding of the importance of defining the elements and interactions
of a system in order to successfully create change. The next chapter discusses the
development of the systems framework for implementation in a health care
organization and presents the finalized framework.
Chapter Three: Developing the Conceptual Framework
In addition to the AHRQ report calling for knowledge innovation, there has been a push
for the development of multidimensional systems approaches to overcome the barriers
of translating evidence into practice [55], [92], [108]. There has also been a call for a
framework to aid in the designing of future implementations that focuses on the use of
theory and logic models [65]. In an effort to develop a system implementation
framework that meets both of these needs, I critically evaluated the literature with a
grounded theory qualitative approach. Through this approach, a coding scheme that
identifies the main themes of the data, in this case the literature, is developed
inductively by searching the data for the answers to a specific set of questions. Codes
are defined for the answers, and then recorded for all of the data available. The codes
are then grouped into similar concepts to form categories that are the basis for the
creation of a theory, or reverse-engineered hypothesis, that provides an
explanation for the subject of the research [109].
In this instance, we analyzed the literature relating to the central line implementation
and the existing implementation frameworks to answer the following questions: What
aspects of the change process led to the success of the central line checklist implementation
in eliminating CLABSI? Are these elements sufficient to provide a complex system
perspective of an implementation process? How does the conceptual framework
adapted from the review of the single implementation case of the central line checklist
compare with other implementation frameworks developed? How should the
framework be modified based on the comparison to other frameworks? In this chapter,
we will first provide the results of the review of the central line implementation bundle,
show the comparison of other implementation frameworks, and then present our
finalized conceptual framework developed through this analysis.
3.1 Review of the Central Line Implementation Bundle
There are infectious complications in 5-26% of patients with a central line inserted [110].
The line breaks the skin barrier and causes an elevated risk by exposing the body to
infection – either during line placement or in the subsequent maintenance of the central
line. A central line associated bloodstream infection (CLABSI) is defined as a blood
stream infection that occurs from viruses and bacteria entering the bloodstream at the
site of the central line. The CDC estimates that 200,000 CLABSI occur each year in the
United States [111]. It is estimated that between 14,000 and 28,000 deaths occur
annually in the US due to CLABSI, and for survivors, length of stay is prolonged by 7
days on average, resulting in between $3,700 and $29,000 in additional costs in private
sector hospitals [112].
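Combining the cited figures gives a rough sense of national scale. This is a back-of-envelope calculation not made in the source, and it ignores overlap and uncertainty in the underlying estimates:

```python
# Rough national cost range implied by the estimates quoted above:
# ~200,000 CLABSI per year, each adding $3,700-$29,000 in costs.
annual_clabsi = 200_000
low_total = 3_700 * annual_clabsi
high_total = 29_000 * annual_clabsi
print(low_total, high_total)  # 740000000 5800000000 (about $0.7B-$5.8B per year)
```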
It is important to note that in order to officially determine that a bloodstream infection
is indeed a CLABSI a laboratory test must be done to prove that the bloodstream
infection did originate from the site of the line. This is problematic because the line is
not always removed due to clinical needs of the patient, the availability of proper
methods to determine the sources of infection, and procedural compliance issues
related to labeling and sending the correct portion of the removed line. Therefore, the
identification of a CLABSI typically uses the rule of thumb that if a central line had been
placed within 48 hours of the bloodstream infection, it is classified as a CLABSI. Although
this rule of thumb may overestimate the true incidence of CLABSI, the high infection rate is still a clear
patient safety issue within hospital organizations [113].
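The 48-hour rule of thumb amounts to a simple surveillance classification, sketched below. The function and field names are hypothetical, and real surveillance definitions are considerably more involved than this two-condition check:

```python
from datetime import datetime, timedelta

def classify_as_clabsi(line_placed_at, infection_onset):
    """Rule of thumb from the text: attribute a bloodstream infection to the
    central line when a line was placed within the 48 hours before onset."""
    delta = infection_onset - line_placed_at
    return timedelta(0) <= delta <= timedelta(hours=48)

# Line placed 36 hours before onset -> classified as CLABSI
print(classify_as_clabsi(datetime(2013, 5, 1, 8, 0),
                         datetime(2013, 5, 2, 20, 0)))  # True
```

Because the check never confirms that the infection actually originated at the line site, it can only overcount, which is the source of the overestimation noted above.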
3.1.1 The Development of the Central Line Check List Implementation
After analysis of infection control data, Dr. Peter Pronovost noticed a high level of
CLABSI in the Surgical Intensive Care Unit (SICU) at Johns Hopkins. To tackle this
problem, he began by compiling the literature on central line insertion
and synthesized the information into a checklist with five simple, easy to remember
steps. His initial implementation of the checklist involved simple notification of the
doctors in the SICU of the new procedure. The checklist called for simple process
redesign so it was thought that behavioral activation through notification of the benefits
of using the checklist would be sufficient to reach 100% compliance. However,
behavioral activation resulted in only 38% compliance with the checklist. In order to
increase compliance, the doctors in the SICU were approached to uncover the reasons
why they were not complying with the checklist. It was found that there was an assistive
system issue relating to finding and accessing the supplies needed. In emergent
conditions in the SICU, time for searching was limited. Thus, doctors often made a time
and benefit trade-off decision and proceeded with placement despite missing supplies. A
central line cart to hold all supplies related to central line placement was created to
remediate the issues relating to supply location. After addressing the assistive system
issue, compliance increased to 70%, but still did not reach 100% [7], [19].
The first two steps towards implementation success are part of a process that was
coined as “translating research into practice (TRIP)”. The TRIP model highlighted four
key themes: 1. Summarize the evidence, 2. Identify local barriers to implementation, 3.
Measure performance, and 4. Ensure all patients receive the interventions [66]. The
model highlighted two key elements for success: ownership and measurement [7].
Without ownership in the design of the checklist and in the adaptation of its use to daily
routine, compliance is traditionally low. By incorporating step 2 into the model, the
team of physicians implementing the checklist is not only able to understand the
necessity for such an implementation, but also contribute to the success of the
implementation. Measurement provides the proof that fuels continued commitment to
an implementation. Without knowledge of actual improvements to patient safety, there
is no justification to continue an implementation, especially if it increases physicians’
workload [7], [66].
The final push to 100% compliance came with the development of the comprehensive
unit-based safety program (CUSP). This program highlights the necessity for cultural
change focusing on the unit level [7], [20], [26]. The CUSP program educates the staff on
the science of safety and the process of identifying defects (learning from at least one
defect per quarter), involves a senior leader in change, and implements teamwork tools [114].
Without a culture of commitment to change, implementations often fail or face
significant barriers to success. One common cultural barrier to change revolves around
doctor-nurse communication [115]. Through a focus on teamwork, the communication
between the nurses and the doctors was re-aligned. This created an environment in
which nurses felt comfortable correcting doctors during a procedure and doctors
accepted this correction [7].
The implementation bundle created by Dr. Pronovost highlights the need for behavioral
activation, assistive system, and cultural changes with a focus on clear data collection
and analysis in order for an implementation of this kind to be successful. Pronovost’s
approach only tackles the issues of a single element at a time, such as the supply of
materials or the safety culture of the unit. The implementation framework abstracted
from Pronovost’s iterative change process to reduce CLABSI rates is shown in Figure 2.

[Figure 2: Abstracted Implementation Model – Data Focus, Behavior Activation,
Assistive System, and Culture Building, linked in a numbered iterative cycle]

It is clear that the success of this implementation can be adapted to develop a
systems approach that takes into account all aspects of such an implementation. A
true system perspective defines how each of these elements is interconnected in order
to achieve a purpose – in this case the reduction of CLABSI rates. Therefore, it is
beneficial to analyze this implementation from the overall system perspective because
each of the elements he addressed is interconnected within the system in which the
implementation was undertaken. Changes to one will affect the others and also affect
the system’s ability to serve its purpose. This is clearly evidenced by Pronovost’s
iterative implementation procedure – as each change highlighted issues in other areas.
Consequently, data focus, behavior activation, assistive systems, and culture building
are the four components that serve as the base of the system implementation
framework developed in this research.
3.1.2 Large Scale Implementation Efforts on the Central Line Bundle
The implementation bundle created by Pronovost was successfully translated from small
success within a single institution to statewide success in the state of Michigan through
the Keystone ICU project. The statewide implementation included 103 ICUs and data
analysis exhibited reduced rates of CLABSI [25], [27] that were sustained during and
after the implementation period [22], a significant decrease in hospital mortality [21],
and significant improvements in safety culture and climate [20], [26]. The project’s
success was attributed to four key themes: “1) the interventions were driven by
evidence, 2) data that was important to teams were presented to them in a feedback
loop, 3) efforts were made to improve culture and teamwork, and 4) ICU teams were all
in it together as a “state”.” [25]. Tied to the solidarity of the state was the additional
support of a chief executive officer at each hospital who was committed to the project
and worked to ensure the resources necessary for success were available to the
physicians and nurses implementing the changes to their ICUs.
Following the success of the Keystone ICU Project, a nationwide program was developed
sponsored by AHRQ and other states followed suit. In total, 45 states have attempted to
implement the central line checklist and associated system and cultural changes. The
results are mixed with some states showing the same success as Michigan, for example
Rhode Island and Hawaii [29], [114], others lacking the data to prove success [7], and
still others remain stalled in the planning stages due to a failure to gain support from the
organizations within the state [28]. The largest barrier identified in the state-wide
implementations was the lack of data – both to fuel the support for the implementation
across the state and to prove the success of the implementation. A prime example is
New Jersey – the state felt that its implementation had been successful, but 60 percent
of the required data was missing, which made it impossible to draw conclusions about
the true improvement in patient safety and lowered infection rates [7].
Although there were mixed results due to the issues described, the core process of the
implementation efforts remained the same as in the implementation in the SICU of Johns
Hopkins. The basis of the implementation focused on four components – data, behavior
activation, assistive system, and culture in an iterative fashion. Each ICU required a
different level of intervention on each of the components, but when each was
successfully addressed the CLABSI rates were lowered and sustained. This affirms the
use of those components as the basis of the conceptual framework, but questions the
relevance of following a strictly iterative implementation process.
3.2 Comparison to Current Implementation Frameworks
There are an increasing number of published studies that develop conceptual
frameworks or theories to aid in the reduction of the gap between research and routine
practice in healthcare. In order to add to this body of knowledge, it is important to
understand its current state. Therefore, I conducted a comparative review of the
implementation frameworks that have been developed in order to test the hypothesis
that the four components identified above are not fully embraced in implementation
research literature. Through this analysis, I was also able to identify gaps in the
literature in order to tailor the framework in development to reduce these gaps.
To determine relevant frameworks, a comprehensive search of the Medline literature
from 2000 to present was performed using the key words implementation science,
implementation research, implementation framework, or a combination of the three.
The first 400 titles were reviewed, as search results beyond that point were
irrelevant. Additional frameworks were identified using a “snowball” approach –
reading the abstracts of references in selected publications to identify additional
sources. Frameworks were included that at least partially focused on the
implementation phase of the diffusion-dissemination-implementation continuum and
also addressed interventions that were evidenced based. Although varying definitions
of the phases of the continuum exist, I classified the frameworks based on the following
definitions. Diffusion is the passive and unplanned spread of new interventions.
Dissemination is a more active approach of spreading interventions using planned
strategies. Implementation is the process of putting to use or integrating evidence-
based interventions within a setting. Adoption is the decision of an organization or
community to commit to and initiate an evidence-based intervention. Sustainability
focuses on the ability of an intervention to succeed in delivering its intended result over
an extended period of time [12]. The intervention types identified ranged broadly across
programs, practices, policies, and guidelines. Eleven frameworks were identified for
comparison. Appendix A provides the details of each framework, including its
purpose, primary elements, type of intervention targeted, and the phase of the
diffusion-dissemination-implementation continuum that is addressed.
The frameworks were developed through either a systematic review of the literature
[33], [35], [38], [43], [49], [52], [58], [62], [63], [90], expert opinion [61], or the
combination of the two [44]. Nine of the eleven frameworks intended to aid in the
understanding of the key aspects of an implementation process and serve as a guiding
theory or memory aid in the development of tailored plans for implementation [33],
[38], [43], [44], [49], [52], [61–63]. Of the two remaining frameworks, one served to
develop a model for implementation within a large scale organization [35]. The other
worked to incorporate policies from the national level to the local level of the health
care system in order to support future intervention implementation success [58]. The
majority of the implementation frameworks attempt to describe not only the context in
which the implementation is undertaken, but also expand to include elements of the
intervention, the adopters, and the outer setting [33], [35], [38], [43], [58], [62], [63].
Although this is beneficial for overall understanding, the frameworks do not provide a
practical aid for the design of implementation projects. The abstraction level is too high
and the boundaries of the system are in turn too broad for an organization to study and
create change within the system. It is important to note that the characteristics of the intervention and of the adopters have been clearly defined and vary little between frameworks [33], [96], so they will not be addressed in this research. The characteristics of the intervention do, however, support the necessity of the data focus component, which defines the perceived needs of the organization by creating tension for change and visibly proves the benefits of adoption [33], [35], [43]. Many
of the frameworks fail to provide an understanding of the interaction between the
elements [35], [38], [43], [44], [52], [58], [62], [63], [90], which is a key component of a
system definition. The frameworks also fail to address the process of care delivery in
which the implementation is taking place, with only two taking the process redesign into
account beyond recognizing processes in order to identify discrepancy [35], [38]. The
perspectives of systems engineering have yet to be fully embraced in the literature in
this area. In addition, the validity of the frameworks is rarely shown, with only three of
the frameworks providing any type of validation [35], [44], [52].
Table 2 provides a comparison of the eleven frameworks identified across the four
elements of the ABCD implementation framework – assistive system, behavior
activation, culture building, and data focus – and the interaction element, which
characterizes the framework within the systems perspective of this research. The table
shows whether the framework includes each component and the sub-elements related
to the components. The final column of the table shows additional components
included in the framework not covered by the ABCD implementation framework.
Table 2 Comparison of Implementation Frameworks

Greenhalgh, T, et al. Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations. The Milbank Quarterly 2004, 82(4):581-629. [33]
  Assistive System: data collection only. Behavior Activation: user orientation. Culture Building: leadership and vision, risk-taking climate. Data Focus: tension for change, feedback. Interaction: yes. Other: intervention, adopters, outer context.

Damschroder, L, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Impl. Sci 2009, 4:50. [43]
  Assistive System: (none). Behavior Activation: knowledge and beliefs about EBP. Culture Building: culture, capacity for change, leadership engagement. Data Focus: tension for change, feedback. Interaction: (none). Other: intervention, adopters, outer context.

Cretin, et al. Evaluating an Integrated Approach to Clinical Quality Improvement. Medical Care 2001, 39(8) Supp II:70-84. [35]
  Assistive System: process, data collection. Behavior Activation: physician education materials. Culture Building: provide social support. Data Focus: tension for change, metrics and monitoring. Interaction: (none). Other: intervention, adopters, outer context.

Wandersman, A, et al. Bridging the Gap between Prevention Research and Practice: The Interactive Systems Framework for Dissemination and Implementation. Am J Community Psychol 2008, 41:171-181. [49]
  Assistive System: (none). Behavior Activation: capacity building primary element. Culture Building: leadership and commitment. Data Focus: (none). Interaction: yes. Other: intervention, adopters.

Kaplan, H, et al. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf 2012, 21(1):13-20. [61]
  Assistive System: (none). Behavior Activation: physician involved. Culture Building: culture supportive of QI, readiness to change, leadership. Data Focus: tension for change. Interaction: context influences only. Other: outer context.

Kitson, A, et al. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implementation Science 2008, 3:1. [44], [50], [80]
  Assistive System: facilitation. Behavior Activation: (none). Culture Building: culture, leadership. Data Focus: evaluation. Interaction: examples of addressing varied levels of elements. Other: intervention.
Table 2 Continued

Feldstein, A, Glasgow, R. A Practical, Robust Implementation and Sustainability Model (PRISM) for Integrating Research Findings into Practice. Jt Comm J Qual Patient Saf 2008, 34(4):228-43. [38], [116]
  Assistive System: adaptable procedure. Behavior Activation: adopter training and support. Culture Building: readiness to change, dedicated team, leadership support, coordination. Data Focus: provide tracking data. Interaction: (none). Other: intervention, adopters, outer context.

Stetler, C, McQueen, L, et al. An organizational framework and strategic implementation for system-level change to enhance research-based practice: QUERI Series. Implementation Science 2008, 3:30. [52], [53], [90], [117]
  Assistive System: recognize practice patterns. Behavior Activation: clinical reminder content. Culture Building: readiness to change, coordination. Data Focus: metrics and evaluation. Interaction: (none). Other: intervention.

Dzewaltowski, D, Glasgow, R, et al. RE-AIM: Evidence-Based Standards and a Web Resource to Improve Translation of Research Into Practice. Ann Behav Med 2004, 28(2):75-80. [62], [118], [119]
  Assistive System: (none). Behavior Activation: protocol training. Culture Building: (none). Data Focus: evaluated and maintained. Interaction: (none). Other: intervention, adopters.

Powell, B, McMillen, J, et al. A Compilation of Strategies for Implementing Clinical Innovations in Health and Mental Health. Med Care Res Rev 2012, 69(2):123-57. [63]
  Assistive System: develop materials, data collection systems. Behavior Activation: inform and educate providers. Culture Building: build buy-in, initiate leadership, develop relationships. Data Focus: monitoring systems, audit and provide feedback. Interaction: (none). Other: intervention, adopters, outer context.

Ubbink, D, Vermeulen, H, et al. Implementation of evidence-based practice: outside the box, throughout the hospital. Neth J Med 2011, 69(2):87-94. [58]
  Assistive System: (none). Behavior Activation: EBP education. Culture Building: atmosphere that embraces EBP, leadership support. Data Focus: access to data. Interaction: (none). Other: intervention, adopters, outer context.

From this comparison analysis, I uncovered that the four components – assistive system, behavior activation, culture, and data – exist as elements in the framework literature, but not consistently, and only as sub-elements. I also identified two key gaps in the
current body of knowledge – the lack of identification of interactions and the high
abstraction level. The framework developed in this research defines the interaction relationships between each of the four components in order to provide a complete system perspective. I also developed the framework from a lower abstraction level by
focusing only on the components of organizational change that must be addressed in
order to create a successful implementation, rather than expanding to include elements
of the intervention design and the outer context. From the framework literature, I also expanded and validated the attributes of each component, which are detailed in Section 3.3, to develop a complete description of the system. The framework developed will
provide a functional tool for an autonomous hospital or clinic to use to guide the
development of future implementation projects of evidence based interventions.
3.3 Finalized ABCD Implementation Framework
Through the combination of a grounded theory qualitative approach to the literature
review of central line checklist implementations and the comparison review of existing
frameworks, I synthesized the success factors of the central line implementation bundle
to create a conceptual framework that characterizes the organizational context of an implementation in terms of a complex adaptive system. I defined the elements of the
framework as the components of change identified in the central line implementation
literature review – assistive system, behavioral activation, culture building, and data
focus. Table 3 shows a breakdown of each of these components to the specific
attributes that define them, which were extracted from combining knowledge of the
central line implementation case study and the comparison literature review. To
provide a better understanding, each component is discussed below in terms of the central line case and each attribute is clearly defined. Attribute definitions
presented below were refined during the qualitative analysis validation phase.
Table 3 Breakdown of Framework Components and Attributes

Assistive System (A): creating facilitating or removing impeding tools, processes, and policies to facilitate implementation
  Supply of Materials for EBP: what, how, and where materials required to perform work related to the EBP are stored in order to increase access and proper use of said materials
  Process Redesign for EBP: alterations to the delivery of care that increase compliance with the EBP
  EBP Data Collection System: the method by which data elemental to the implementation of the EBP is collected and stored (paper or electronic)

Behavior Activation (B): increasing awareness and understanding for performance of the EBP
  Awareness of EBP: notification of the elements of the EBP and the reason for implementation
  Understanding of EBP: providing training and feedback to ensure the benefits of the EBP are understood and the required changes in care delivery are performed accordingly

Culture Building (C): developing a culture that supports and increases coordination and quality of care through readiness and commitment to change
  Culture of Coordination: characteristics of communication and coordination between varying member groups of the organization that affect the implementation of the EBP
  Quality Culture: focusing on core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize quality over competing goals
  Readiness to Change: the degree to which the organization recognizes the benefits and effectiveness of the change, the discrepancy, and the principal support provided in undertaking the implementation
  Commitment to Change: a dedication by the organization and its employees to the project that connects personal success with project success and a desire to spread the adoption of the EBP

Data Focus (D): emphasizing the use of data to create tension for change and an understanding of the effects of change efforts
  Performance Measurement: identifying and measuring the degree of performance for pre-defined implementation and health outcomes that are affected by the implementation of the EBP
  Accessibility of Data for EBP: the degree to which data relating to the implementation is available for review and analysis
  Completeness of Information: the degree to which the data depicts the full picture of factors relating to the EBP that affect outcomes
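The component/attribute breakdown in Table 3 lends itself to a simple machine-readable checklist for auditing how much of the framework a planned implementation addresses. The sketch below is illustrative only; the data structure, function name, and scoring rule are my own assumptions, not part of the dissertation's method.

```python
# Hypothetical encoding of the Table 3 breakdown: each ABCD component
# maps to the attributes that define it.
ABCD_FRAMEWORK = {
    "Assistive System": [
        "Supply of Materials for EBP",
        "Process Redesign for EBP",
        "EBP Data Collection System",
    ],
    "Behavior Activation": [
        "Awareness of EBP",
        "Understanding of EBP",
    ],
    "Culture Building": [
        "Culture of Coordination",
        "Quality Culture",
        "Readiness to Change",
        "Commitment to Change",
    ],
    "Data Focus": [
        "Performance Measurement",
        "Accessibility of Data for EBP",
        "Completeness of Information",
    ],
}

def coverage(addressed):
    """Fraction of the twelve framework attributes addressed by a plan.

    `addressed` is a set of attribute names the implementation plan covers.
    """
    all_attrs = [a for attrs in ABCD_FRAMEWORK.values() for a in attrs]
    return len(addressed.intersection(all_attrs)) / len(all_attrs)

plan = {"Awareness of EBP", "Performance Measurement", "Process Redesign for EBP"}
print(round(coverage(plan), 2))  # 3 of 12 attributes -> 0.25
```

A plan touching only three attributes scores 0.25, flagging that most of the system is unaddressed; a real audit would weight attributes by context rather than counting them equally.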
The first component, assistive systems, relates to any major system issue that would impede an implementation, such as issues with the supply of materials or the IT system itself, but also includes the redesign of the process by which patient care is provided. The use of systems interventions and ongoing coaching relating to the new process has been clearly identified in the literature as a component of other successful implementation
designs [35], [36]. In the central line example, the assistive system component involved
the creation of a centralized supply storage cart – addressing the materials attribute of
the assistive systems component. The materials can relate to major items required to
perform work, such as the sterile drapes and gloves needed to perform a central line
insertion or to informational documents supplied to the patients upon discharge or
during patient education. The need for a central line cart was identified by analyzing the
second attribute of the assistive systems component – process redesign. Process
redesign centers on the actual workflow of physicians, nurses, or support staff that will
be affected by the implementation effort. The final attribute of the assistive systems
component is the EBP data collection system. Although the EBP data collection system
initially appears to be an attribute of the data itself, I define the data strictly in terms of
available information. In the case of the central line example, limitations in making alterations to the electronic medical record required the use of a paper form to collect
data. A system solution was required in order to obtain the correct data via the most
appropriate method based on the processes within the unit. These system attributes
are the most easily identified as solvable through the traditional tools of the industrial
engineer. Their identification and solution clearly requires process flow modeling of the
initial process and of the proposed changes caused by the intervention. By working
through the process redesign, system barriers can be addressed prior to the
implementation and therefore potential setbacks can be avoided.
Behavior activation is characterized by the willingness of a provider to make the required shift to guideline-based care based on their awareness and understanding of
the EBP. In the central line example, the behavioral activation component included
simple notification of the central line checklist (the EBP in this case) and the direct
changes to the provider behavior that were required by the checklist, such as use of full
sterile precautions whenever placing a central line. Notification ensures provider
awareness, the first attribute of the behavior activation component. In order for providers to be willing to make a behavior change, the implementation must be driven by
clear and conclusive evidence of improvement. For this reason, provider education and
training relating to the material of the evidence base being implemented is a common
component of successful implementation designs and was the focus of the initial
research in implementation science [32], [105]. In the central line example, the
providers were also informed of the benefits of using the checklist. Therefore, provider
understanding of the EBP through training was included as a key attribute of the
behavior activation component of the framework. Provider behavior activation also
depends on the individual provider’s knowledge, skills, attitudes, personality, habits, and
routines. These characteristics have been proven to affect a provider’s willingness to
change as well as their ability to learn, which is shown to increase the effectiveness of
an implementation [9], [105]. Although these are leverage points for the development
of strategies to ensure behavior activation, they are not included as attributes of the
framework because they are intrinsic to each provider and successfully characterized
through awareness and understanding.
The culture building component is defined by the interactions among the team of caretakers involved in a patient’s case. In the central line example, the cultural changes
required an analysis of the communications issues that are associated with the
hierarchical relationship between the doctors and nurses and the general safety culture
of the ICU. The hierarchical relationship between doctors and nurses often causes
communication and coordination issues between the two groups during care. Nurses
do not feel comfortable policing doctors and doctors do not accept criticism well from
the nursing staff. Therefore, these communication issues can cause critical knowledge
regarding patient care to remain unshared [7]. Communication and coordination issues
also occur between the providers at the unit level and upper management or between
the organization and an outside party, such as the pharmacy or primary care doctors
[7], [104]. The communication issues may be related to actual patient care, but can often relate to the understanding and spread of an implementation.
Communication and coordination can be used as a tool to facilitate intervention success
by enabling sense-making and learning regarding the effectiveness and necessity of the
implementation [104]. A culture of coordination is imperative to the successful
implementation of an EBP.
According to the Final Safety Culture Policy Statement released by the Nuclear
Regulatory Commission, a positive safety culture is associated with nine key traits: (1)
commitment to safety is demonstrated in the decisions and behaviors of leaders in the
organization, (2) potential safety issues are promptly identified, fully evaluated, then
addressed and corrected based on their significance, (3) all individuals take personal
responsibility for safety, (4) safety is maintained through planning and controlling of
work processes, (5) an environment of continuous improvement pushes individuals to
learn and implement methods to ensure safety, (6) personnel should feel free to raise
safety concerns without a fear of retaliation creating a safety conscious work
environment, (7) communications maintain a focus on safety, (8) all individuals treat
each other with trust and respect, and (9) individuals hold a questioning attitude
continuously challenging existing conditions and activities to identify areas that might
result in error or inappropriate action [120]. The use of the CUSP program ensured that
a safety culture existed in the ICUs in which the central line checklist was implemented.
Safety culture is closely related to quality culture as having a strong quality culture
equates to also having a strong safety culture [7]. Therefore, the second attribute of the
culture building component is defined as quality culture. This expands the
generalizability of the framework as safety culture is not often addressed in the primary
care setting, but a culture of quality encompasses the need for safety culture in the
hospital setting. Quality culture reflects the beliefs of staff members regarding the quality of care that patients within the unit or organization experience. This aspect of
organizational culture can be a detriment to implementation projects as often providers
believe in the quality of their unit even when that is not the case [7]. A lack of
knowledge of performance measures is the reason for this, highlighting the need for accessible and complete data to uncover flaws in the culture [7].
The third attribute of the culture building component is readiness to change. Although it was not addressed in the literature relating to checklist implementation, gathering data and determining the organization’s climate relating to readiness to change is paramount to the success of an implementation [9], [12] and is therefore included in the framework.
The factors that are associated with readiness for change include change valence or
attractiveness, change efficacy, discrepancy, and principal support. Discrepancy is
defined as the belief in the necessity of change to bridge the gap between the
organization’s current and desired state. Signs of discrepancy include low performance
measures and gaps between the evidence base and actual practice. The existence and
belief in discrepancy is identified as a critical factor associated with readiness as it
provides a motivating push for change [12], [89]. Principal support is associated with
the commitment of leadership to ensure the success of the implementation project and
is often cited as a critical component for implementation success [12]. Leadership
support is often cited as a cultural component. Although this is proven to be an
important aspect for implementation success, this framework associates the need for
involved leadership with the principal support factor of readiness to change.
The final culture building attribute revolves around ownership and commitment. Commitment to change creates a sense of ownership of the project that is necessary in order
to ensure follow-through with the implementation. This feeling of ownership is often developed through the use of opinion leaders in the community, a common component of successful implementations [9], [36], [75]. Commitment to change
relates to both the organizational and unit levels. It is imperative to have ownership at
the unit level as it drives the implementation itself, but commitment to change at the
organizational level ensures ongoing financial support for the project and instills
confidence and commitment at the unit level.
The final component of the implementation framework is data focus. The data focus
component of the central line checklist example highlights the necessity of performance measurement: the CLABSI rate as a measure of performance in terms of
patient health outcomes and checklist compliance as a measure of performance in
terms of implementation outcomes. The accessibility of data for EBP and completeness
of information are necessary to fuel the implementation process. By having accessible
and complete data the organization can determine that the changes made were in fact
successful based on review and analysis of the data, which will increase the chances of
sustaining the implementation. Evaluation of the staff and program through
performance measurement determined from complete and accessible data is a common
component of a successful implementation [8], [39]. The data focus component
provides the necessary tension for change by highlighting problem areas, and provides the necessary feedback to ensure commitment to the implementation project by offering tangible proof of the results of the staff’s labor toward implementation.
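The two performance measures named for the central line case can be computed directly. A minimal numerical sketch follows; the figures are made up and the helper names are my own, but the convention of reporting CLABSI per 1,000 central-line days is the standard one.

```python
def clabsi_rate(infections, line_days):
    """CLABSI rate per 1,000 central-line days (patient health outcome)."""
    return 1000.0 * infections / line_days

def checklist_compliance(completed, insertions):
    """Share of central line insertions performed with a completed
    checklist (implementation outcome)."""
    return completed / insertions

# Hypothetical quarterly data for one ICU.
print(round(clabsi_rate(3, 1800), 2))            # 1.67 per 1,000 line-days
print(round(checklist_compliance(171, 190), 2))  # 0.9 compliance
```

Tracking both measures together is the point of the data focus component: compliance shows whether the implementation took hold, while the infection rate shows whether it delivered the intended health outcome.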
Figure 3 ABCD Framework: the four components (Data Focus, Behavior Activation, Assistive System, Culture Building) connected by directional arrows.

The relationship between the components is shown in Figure 3. First, the relationship between the data focus component (D) and the remaining components will be described because the existence of these relationships was clearly proven in the central line checklist example. Data and measurement are the key components to the success of any implementation as they not only provide the motivation for commitment to a project through clear proof that improvement needs to be made (i.e. identification of a discrepancy), but also highlight the results of the improvement through tangible values (i.e. performance measurement). Therefore, before any implementation procedure can be put into place, a strong centralized data management system must be created that has the capability to collect complete, accurate data on the proper measures for said implementation. This data must also be accessible for analysis. The directional arrows between the data focus component and each of the remaining three components – assistive system (A), behavior activation (B), and culture building (C) – represent the link between data analysis and the change process. Data information fuels change in one of the three components, but that change in turn affects the data. After the selection of an initial component to change, the data may highlight the necessity to change another aspect. This was the
case in the central line implementation as first behavioral changes were completed,
followed by assistive system and culture building changes as determined by the
measurement data on checklist compliance. The framework developed highlights the
interaction of the components rather than the order in which they are addressed as in
the central line case study. The results of the large scale attempts at the central line
implementation revealed that order is not sufficient to ensure a successful
implementation.
It is important to note that the A, B, and C components are also interconnected. A
change in any one component may trigger changes in either of the other two
components, as indicated by the arrows in the diagram. A change in the assistive system may drive changes in either the behavior activation component or the culture building component. For example, a change in the process of care will involve notification of providers, as they must be aware of the change in order to adapt to the new process. An assistive system trigger may also be created through changes to the supply of materials required to perform a physician’s work, which in turn activate provider behavior change by reminding providers of the elements of the EBP. A process redesign can also affect the dynamic between the doctors and nurses, as a change in process can change the level of interaction required for patient care. This can be a positive or negative effect, as it may exacerbate issues between the two groups if interaction increases. Behavior activation may necessitate a redesign of the process (assistive system) by which a provider conducts patient care or cause alterations in the typical interactions with the
supply of materials or EBP data collection system. Provider behavior activation may also
affect the culture building component. For example, the provider may come to believe that the quality culture has improved as a result of the behavior change, or the behavior change may require more communication and coordination among staff in
order to provide care according to the guidelines.
Altering the attributes of the culture building component can also effect change in the other two components, although this interconnection is less studied and therefore less clear. An example of a potential interconnection is that an increase in the organization’s
readiness to change can cause an implementation to push forward and in turn cause
providers to be more open to activation. Attempts to increase the quality culture of the
unit may necessitate a process redesign or uncover a materials issue and therefore
process will change in order to respond to the need for better quality culture. A final
example is the relationship between communication and behavior activation. The way
the implementation is communicated to the providers may cause them to be more or
less willing to participate as they assess the necessity of the implementation based on
the information provided through communication. These cascading changes may not be sufficient, or even in the positive direction, to achieve a successful implementation, which will be evident in the data; the full set of interactions is therefore necessary for a complete system implementation framework.
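The fully interconnected structure described above can be sketched as a small directed graph in which a change in any component can ripple to every other. This encoding and the traversal are illustrative assumptions of mine, not part of the framework itself.

```python
from collections import deque

# Fully interconnected ABCD interaction structure: each component can
# trigger change in each of the others (hypothetical encoding).
COMPONENTS = ["Assistive System", "Behavior Activation",
              "Culture Building", "Data Focus"]
EDGES = {c: [d for d in COMPONENTS if d != c] for c in COMPONENTS}

def ripple(start):
    """Components potentially affected, directly or indirectly, by a
    change that starts in `start` (breadth-first traversal)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in EDGES[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(ripple("Data Focus")))
# ['Assistive System', 'Behavior Activation', 'Culture Building']
```

Because every component links to every other, a single change can reach the whole system; the traversal simply makes the cascading reachability explicit.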
The ABCD implementation framework provides a mechanism to design implementation
projects that avoid the common system “traps” such as policy resistance, drift to low
performance, success to the successful, and a shift of burden to the intervener. If
utilized, the ABCD implementation framework directs policy efforts toward an implementation project that factors in the goals of all of the elements of the system in a mutually satisfactory way by attending to this balance from the initial design
phase. The framework focuses on the availability and completeness of data relating to
an implementation project, which will serve as a trigger to avoid a drift to low
performance by having increased knowledge of clear performance standards. The
framework also provides a mechanism to level the playing field by providing a guide for
ANY organization to develop an implementation project that will be successful. By
keeping each of the components in mind when designing an implementation project,
organizations will avoid solutions that simply reduce the symptoms of the problem
because they only address a single aspect of change. Avoidance of this issue is
reinforced by the back and forth connection between the three change components and
data analysis.
The ABCD implementation framework expands beyond the workings of each subsystem
of change to portray the overall system view of the implementation setting, whether
that is a unit in a hospital or an outpatient clinic. In terms of the five levels of a system,
the framework defines an implementation paradigm that describes how the system of
implementing EBP operates in practice. It defines what factors affect this system and its
successful achievement of its goal of implementing an EBP. The framework provides the
structure that makes up the system as a whole in the form of the four components and
defines the feedback in terms of the interactions of the components. The structural
elements are specified by the definition of the component attributes. The framework
provides a complete definition of the implementation system.
The system under which implementation projects are undertaken can be described in a
much more complex manner involving several additional interconnections and elements
as shown in the existing implementation frameworks. For example, it is imperative that
the evidence based practice (EBP) embody certain attributes that have been clearly
defined in the literature [9], [33], [121] and implementation-system fit must be
addressed prior to implementation [33]. The framework assumes that the
appropriateness of the EBP has been addressed prior to implementation, but still aids in
the determination of implementation-system fit. Because the components of the
framework define the context under which an implementation is taking place, an
organization can assess the current state of the implementation setting and in turn
develop the EBP to have strong implementation-system fit. The ABCD framework
identifies the critical interconnections of the context of an implementation by defining
clear boundaries to the system. This is an important and necessary step in the analysis
of a system that allows researchers to be able to study it. The boundaries of the ABCD
implementation framework were set with the intention to develop a practical model for
use by a healthcare organization. The framework contextualizes the organizational characteristics that affect the success of an implementation and, when applied, will aid an organization in identifying facilitators and barriers to change within its particular
implementation context. By addressing each of the components, an organization will utilize a combination of the current strategies for successful implementation unique to its setting and obtain the multidimensional approach required to successfully create
change.
Chapter Four: Methods
The successful translation of the principles of industrial engineering to address the needs of the complex healthcare environment is paramount to the success of future implementation projects. The current pressure to innovate care delivery and
implement best practices within the health care industry is stimulating the study of not
only what but also how to be successful in these attempts. The previous chapters
reflect a systems thinking approach to the issues faced in implementation science in
health care. The critical analysis of the single example of the success of the central line
checklist implementation provided the background for the development of a systems
framework for implementations in health care. The results of this analysis were then
compared to other existing frameworks. This allowed me to refine the framework while
also demonstrating the contribution the ABCD implementation framework provides to
the literature. However, this framework is limited by its theoretical basis and has yet to
be tested in a real world context or validated through its application to another
implementation attempt. The remainder of this work focused on the validation of the
ABCD implementation framework to prove that it does in fact aid in the planning and
development of future implementation projects and in defining how to be successful in
an implementation attempt. In order to validate the ABCD implementation framework,
the following questions were addressed:
1. Are the four components and the interactions of the ABCD implementation
framework employed in other successful implementations of EBP?
Hypothesis 1: All four components and their interactions will be represented in
other EBP implementation efforts.
2. Are the components and interactions included in the ABCD implementation
framework either necessary or sufficient in bringing about a successful
implementation of EBP?
Hypothesis 2a: The components and their interactions will be necessary in
bringing about a successful implementation of EBP.
Hypothesis 2b: The components and their interactions will be sufficient when
considered as a combination of conditions, but will not be sufficient individually.
3. Does an implementation project that involves more activities related to the
ABCD implementation framework also show higher improvement in
implementation and better patient outcomes?
Hypothesis 3a: An implementation that involves more activities related to the
ABCD implementation framework will show higher improvement in
implementation outcomes.
Hypothesis 3b: An implementation that involves more activities related to the
ABCD implementation framework will show better patient health outcomes.
4.1 Research Design
A mixed method evaluation of multiple retrospective case studies was conducted to
answer the remaining questions addressed by this research. In order to
address hypothesis 1, a qualitative evaluation was completed through systematic coding
based on a coding rubric developed prior to analysis. This allowed me to determine whether
the ABCD implementation framework could categorize change efforts for alternative
implementations. The qualitative evaluation resulted in a score for each organization
studied. In order to address hypotheses 2a and 2b, a qualitative comparative analysis
was completed using fuzzy sets. The four components and the interactions of the
ABCD implementation framework were considered causal conditions that may or may
not lead to the outcome in question: successful implementation of an EBP.
In order to assess hypotheses 3a and 3b, statistical analysis methods were employed
using the ABCD score developed in the qualitative analysis as the independent variable.
This statistical analysis determined the ability of the components and interactions of the
ABCD implementation framework to predict the success of an implementation based on
implementation outcomes (hypothesis 3a) and patient health outcomes (hypothesis 3b).
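The document does not specify the regression model used; as a minimal sketch of this kind of analysis, a simple least-squares fit of an implustration outcome on the ABCD score might look as follows. The data values are illustrative, not drawn from the ICICE study.

```python
def ols_fit(x, y):
    """Ordinary least squares for a single predictor: returns (slope, intercept).

    Here x would be each site's ABCD score and y an implementation or
    patient outcome measure (for example, the change in ACIC score).
    """
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
             / sum((a - mean_x) ** 2 for a in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative data: higher ABCD scores paired with larger ACIC gains.
abcd = [10, 15, 20, 25]
acic_gain = [1.0, 1.8, 2.9, 4.1]
slope, intercept = ols_fit(abcd, acic_gain)
```

A positive, significant slope would indicate that sites undertaking more ABCD-related activities tend to show greater improvement.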
In addition to addressing the primary hypotheses, other exploratory analyses were
conducted that compare the effects of the components versus the interactions, the
significance of each of the components individually, and the significance of the
interactions from one component to all others. The exploratory analyses provide
insight into which components or interactions should be focused on more closely when
developing an implementation strategy. The evaluation used a secondary data set
from the Improving Chronic Illness Care Evaluation (ICICE) of the implementation of the
Chronic Care Model (CCM) that includes both qualitative and quantitative data.
4.2 ICICE CCM Data Sets
The secondary data set used for the remainder of this research was collected as part of
the ICICE study highlighting the effects of implementation of the CCM through three
quality improvement collaboratives. The CCM summarizes the basic elements for
improving chronic illness care at the organizational, practice, community and patient
levels. It suggests six areas in which to focus organizational change: delivery system
redesign, patient self-management support, decision support, information support,
community linkages, and health system support. Quality improvement collaboratives
provide a set of comprehensive strategies for restructuring the care delivery system.
The collaboratives studied in this research combined rapid-cycle change methods with
strategies suggested by the CCM to improve care for patients with chronic illnesses.
One of the collaboratives focused on improving care for diabetes and congestive heart
failure, another on depression and asthma care, and the third focused exclusively on
diabetes. Because each site involved in the study selected only one of the four chronic
conditions, analysis of patient health outcomes can only be conducted across sites that
addressed the same condition.
The data available included qualitative data for 34 sites, and complete implementation
outcomes data for 29 of those sites. Patient health outcomes data was available for
sites that focused on improvement to asthma and diabetes care. Although due
diligence was performed to locate missing patient health outcomes data for the
remaining sites, data was unavailable for the sites that addressed depression and
congestive heart failure. As mentioned, this research utilized a secondary data set for
the validation methodology, which restricted the analysis due to data availability and
quality. The patient health outcomes data for the depression sites was not collected
and the data for the sites studying CCM on congestive heart failure was not cleaned to
be analyzable.
4.2.1 Qualitative Data Set
The qualitative data included in the analysis was collected throughout the
implementation of the Chronic Care Model in 34 clinics across three chronic care
collaboratives conducted between 1999 and 2002. The qualitative data includes
statements regarding activities undertaken by each of the organizations in an effort to
implement the elements of the Chronic Care Model. Change activities were reported
monthly to the researchers. The qualitative data also included implementation team
survey and interview data. The researchers that conducted the ICICE study evaluated
the intensity of the CCM implementation activities by counting each of the activities that
respondent organizations completed relating to each element of the CCM during the
course of the implementation. Descriptive statistics are shown in Table 4; the data
show sufficient variation for this validation analysis.
Table 4 Descriptive Statistics for Counts of Change Activities
Count of Activities Mean Std Dev Minimum Maximum
Delivery system redesign 7.29 4.34 0.00 24.00
Self-management support 8.40 4.55 1.00 23.00
Decision support 7.20 4.60 0.00 19.00
Information support 8.63 6.15 1.00 34.00
Community linkages 3.09 2.02 0.00 7.00
Healthcare systems support 6.06 5.38 0.00 30.00
CCM as a whole 40.66 21.62 8.00 130.00
4.2.2 Implementation Outcome Data
During the ICICE study, an Assessment of Chronic Illness Care (ACIC) score was
developed based on each site’s responses to the ACIC survey instrument (Appendix B)
[122], [123]. The ACIC score provides an evaluation of the strengths and weaknesses of
an organization’s chronic illness care delivery across the six areas of the CCM. The score
for each of the six subscales can range from 0 to 11, and the overall CCM score is
calculated as an average of the six subscales. The sites provided survey responses both
before and after the implementation. The pre-implementation survey was intended to
identify areas for improvement and the post-implementation survey was completed to
assess progress towards complete implementation of the CCM. Twenty-nine of the 42
sites included in the three collaboratives completed both pre- and post-implementation
ACIC surveys. The descriptive statistics related to the pre- and post-implementation scores
are shown in Table 5.
Table 5 ACIC Score Descriptive Statistics
CCM Element
Pre-Implementation ACIC Post-Implementation ACIC
Mean SD Min Max Mean SD Min Max
Delivery system redesign 5.64 2.23 1.50 9.67 8.06 1.83 5.00 11.50
Self-management support 5.21 2.00 1.20 9.00 7.88 1.55 4.17 11.00
Decision support 5.00 1.83 2.33 10.33 7.94 1.97 4.33 12.00
Information support 4.62 2.18 1.00 9.00 7.98 1.94 4.00 12.00
Community linkages 5.60 2.34 1.00 9.33 7.40 2.20 3.00 11.00
Healthcare systems support 6.28 1.77 2.80 10.17 8.35 1.51 4.00 11.00
CCM as a whole 5.47 1.72 1.95 9.25 8.02 1.33 5.32 11.00
The mean ACIC pre-intervention score for the CCM as a whole is 5.47 (standard
deviation of 1.72) and the mean post-intervention score is 8.02 (standard deviation of
1.33). On average, the sites' scores increased by 2.08 with a standard deviation of 1.99.
Not all organizations improved their ACIC score. Three of the 29 sites with pre- and
post- ACIC scores showed a decrease in their score, with the worst decreasing by 1.70.
The organization with the most improvement improved their ACIC score by 7.38.
4.2.3 Patient Health Outcomes Data for Patients with Asthma
Patient health outcomes data was available for 11 evaluation sites that implemented
the CCM for asthma care improvement. Both pediatric and adult patients were involved
in the asthma intervention sites. In order to analyze across sites, only outcome
variables common to both patient sets were studied. The data includes demographic
and site variables: age, gender, race, education level, type of insurance, income level,
and intervention site location. Asthma severity and number of comorbidities are also
provided for each of the patients. Sample characteristics based on
the above variables for each of the 11 asthma sites are shown in Table 6. The
demographic variables show considerable variation across sites. It is interesting to note
that four sites have no patients without insurance. The majority of patients at each of the
sites have fewer than two comorbidities and intermittent asthma severity.
The data also includes outcome variables relating to patient involvement in care and
self-management, use of long term controller medication, quality of life, satisfaction
with care, resource utilization, comorbidities, and lost productivity. The only variable
category that lacked common variables for adults and children was lost productivity, so
this outcome was not studied. The common variables of potential interest to study
include the following process level outcomes which are binary variables: use of written
action plan, education sessions attended, goals set, use of peak flow monitoring, use of
medication, and satisfaction with provider communication. It also includes the
following patient level outcomes which are continuous variables: patient self-efficacy,
asthma specific quality of life, and overall quality of life.
Table 6 Sample Characteristics for Asthma Sites
In the ICICE analysis of this data, three process level outcomes (having a written action
plan, monitoring peak flow rate, and attending educational sessions) showed significant
improvement in sites involved in the collaborative versus control sites. Overall quality
of life, a patient level outcome, was also found to be significantly higher in the
intervention group [124], [125]. Descriptive statistics for these variables are included in
Table 6. The results for each of the four variables vary considerably between sites.
4.2.4 Patient Health Outcomes Data for Patients with Diabetes
Patient health outcomes data was available for 6 evaluation sites that implemented the
CCM for improvement in diabetes management. The data relating to improvements in
diabetes care includes the same demographic variables as the asthma sites: age, gender,
education, race, insurance, income, and site location. Age is not a continuous variable,
but instead stored in terms of ranges. General patient health variables including
number of comorbidities and severity of diabetes as determined from duration of
diagnosis are also included in the data. Table 7 shows the sample characteristics for
each of the six sites. The median age group for four of the six sites was 55-64 and the
majority of patients at all but site 5 were white, non-Hispanic. The majority of the
patients at all but site 3 had fee for service insurance. Education level and income
varied amongst the sites. The number of comorbidities also varied amongst the sites,
but more than half of the patients at every site had been diagnosed with diabetes more
than five years earlier.
Diabetes specific variables include the physical and mental SF-12 scales, a sum of
self-care actions, the mean of diabetes symptoms, satisfaction with access to care and
with provider communication, an overall provider rating, and patient education levels
relating to diabetes care. Outcome variables are
related to adherence, patient involvement with care, knowledge and self-efficacy,
resource utilization, duration of diabetes as a method to determine severity, medication
usage, and number of comorbidities. These are all stored as continuous variables.
No previous analysis had been conducted for the diabetes sites, so it was unclear which
outcome variables would be significant.
Table 7 Sample Characteristics of Diabetes Sites
Site 1 Site 2 Site 3 Site 4 Site 5 Site 6
Number of Patients 51 108 86 53 22 122
Median Age Group 55-64 65-74 65-74 55-64 55-64 55-64
Male, % 39.22% 50.93% 100.00% 67.92% 27.27% 97.54%
Race/Ethnicity, %
White (non-Hispanic) 76.47% 92.59% 86.05% 94.34% 27.27% 82.79%
Hispanic - 3.70% 1.16% 3.77% - 2.46%
Black (non-Hispanic) 15.69% 0.93% 12.79% - 72.73% 7.38%
Other 7.84% 2.78% - 1.89% - 7.38%
Respondent's education, %
< High School 21.67% 25.00% 29.07% 7.55% 40.91% 19.67%
High School 27.45% 28.70% 37.21% 32.08% 45.45% 17.21%
Some College 25.49% 32.41% 24.42% 47.17% 4.55% 40.16%
College Grad or More 25.49% 13.89% 9.30% 13.21% 9.10% 22.96%
Income >= $30,000, % 47.06% 54.63% 33.73% 75.47% 9.09% 44.26%
Insurance
Fee for Service (FFS) 56.86% 75.00% 37.21% 54.72% 63.64% 41.80%
PPO 9.80% 11.11% 5.81% 11.32% 13.64% 15.57%
HMO 31.37% 9.26% 39.53% 30.19% 0.00% 14.75%
No Insurance 1.96% 4.63% 17.44% 3.77% 22.73% 27.87%
Comorbidities
0 23.53% 19.44% 18.60% 16.98% 9.09% 13.11%
1-2 39.22% 13.94% 16.28% 3.77% - 22.13%
3-4 23.53% 25.00% 31.40% 1.89% 22.73% 20.49%
>= 5 9.80% 22.22% 32.56% 9.43% 4.55% 19.67%
Unreported 3.92% 6.48% 1.16% 67.92% 63.64% 16.39%
Disease Severity
>5 years with diabetes 50.98% 52.78% 60.47% 49.06% 59.09% 62.30%
4.3 Research Methods
The methods associated with the study of complex systems theory typically revolve
around simulation, including system dynamics and agent based modeling, or network
analysis [95]. In order to use these techniques, the system must be clearly defined and
developed. In the case of implementation science, the system within which
implementations take place is poorly defined and underdeveloped [32].
Therefore, to validate this framework it was necessary to use more traditional methods
– qualitative data analysis, qualitative comparative analysis (QCA), and statistical
analysis.
4.3.1 Qualitative Data Analysis
The first question was answered using qualitative data analysis methods. Analysis of
qualitative data provides the opportunity to gain insight regarding activities that are not
necessarily quantifiable, such as the specific activities related to the stages of an
implementation project. The goal of qualitative data analysis is to understand how
people see the world, the contexts in which they act, the actions and activities they
undertake, and why they behave the way they do. Qualitative data can include written
field notes, responses to open ended survey questions, interview transcriptions,
descriptive notes from study participants, or audio or video recordings of conversations
or activities [126], [127].
Qualitative analysis differs from quantitative analysis in five primary areas. First,
qualitative analysis uses words to explain a phenomenon, whereas quantitative
analysis uses numerical evidence. Second, qualitative analysis is subjective, in contrast
to the objective nature of quantitative analysis; the analysis performed in qualitative
research is influenced by the personal opinion and creativity of the researcher in
developing and asking questions. Third, qualitative analysis requires strong inductive
reasoning skills, developing hypotheses after the observation of small samples of data,
whereas quantitative analysis uses a deductive approach, beginning with a hypothesis
and then experimenting to prove or disprove it. Fourth, qualitative research follows a
general process of data collection, then data analysis to determine the variables to
study, then refinement of the research method and questions; a qualitative researcher
repeats this process until all phenomena in the data are explained. The process of
quantitative data analysis involves collecting data first and then performing analysis
based on predefined variables and measurements. Fifth, qualitative data provides
meanings and descriptions of phenomena, versus the cause and effect relationships
determined from a quantitative analysis [128].
In order to analyze this type of data, a coding schema is typically developed. The
schema can be developed inductively by allowing the codes to be developed by the
researcher during the examination of the data, as was done throughout the analysis of
the literature detailed in Chapter 3. The coding schema can also be developed a priori,
by creating it prior to examining the data; this is also referred to as systematic coding.
In this case, codes for all data are recorded based on coding categories created ahead
of time from the existing literature or a previous open coding of similar data.
Systematic coding can be used to test and validate a hypothesis derived through the
grounded theory method by reusing the coding schema developed in that analysis.
Coding can be done manually or through the use of a qualitative coding
software program such as Atlas.ti, NUDIST, or NVivo [126], [127].
To validate the results of a systematic coding, inter-rater reliability is calculated on a
portion of the data. A high inter-rater reliability demonstrates that the coding scheme is
appropriately defined for each category. Quasi-statistics, or enumerations, are used to
summarize the data numerically with descriptive statistics. Through the development of
a rough estimate of frequency, the quasi-statistics justify the dominant themes
uncovered by the researcher in the coding process. Other methods to demonstrate the
validity of the results include using more data (repeated observations and interviews
with rich, detailed descriptions), validating the results with the respondents (which can
introduce another form of bias), searching for negative cases, collecting data from a
variety of settings and methods, and comparing cases and studies [127], [128].
For the analysis of the qualitative data set described above, each statement was
systematically coded in Atlas.ti as relating to one of the attributes of the four
components. If an activity did not reflect any of the components, it was not coded; we
chose not to code these statements because few statements in the data did not apply
to the ABCD implementation framework. The interrelationships between the
components were also coded if the activities indicated such a relationship, as were any
barriers to implementation relating to any of the component attributes. From this
coding, an ABCD score was calculated by summing the coding responses for each
attribute and interaction and subtracting any barriers found at the site.
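The score tally described above can be sketched as follows. The code categories and counts are hypothetical, used only to illustrate the arithmetic; the actual coding tree is given in Appendix C.

```python
def abcd_score(attribute_counts, interaction_counts, barrier_counts):
    """ABCD score: sum of coded attribute and interaction occurrences
    minus any coded barriers, as described above."""
    return (sum(attribute_counts.values())
            + sum(interaction_counts.values())
            - sum(barrier_counts.values()))

# Hypothetical site: counts of coded statements per category.
attributes = {"A": 4, "B": 6, "C": 3, "D": 5}
interactions = {"A-B": 2, "C-D": 1}
barriers = {"B": 2}
score = abcd_score(attributes, interactions, barriers)  # 18 + 3 - 2 = 19
```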
When coding the data, it is important to note the differences between intensity
(quantity) of improvement actions and the depth of the action. In previous analysis of
this data, researchers defined intensity as the total count of the organization’s change
activities in a CCM category and depth as a qualitative rating of the depth of the change
on a 0-2 scale. The implementation depth variables were created by rating the site’s
change activities as one of three depth levels: 0 was assigned when no change activity
was made in that area of the CCM, 1 was assigned when the change was determined not
likely to have an impact, and 2 was assigned to change activities that were likely to have
an impact based on CCM theory. The researchers then analyzed the relationship
between the two measures. The results showed significant positive correlation
between the intensity and depth ratings – sites that tested the most changes also
tended to have exerted the greatest depth of effort. All in all, the quantity of
improvement activities is as indicative of implementation success as the depth of the
activity [129]. Therefore, I did not differentiate in my coding schema, focusing only on
intensity as measured by a count of activities relating to the ABCD implementation
framework.
A formal coding protocol was developed based on Table 3 in section 3.3, pilot-tested,
and iteratively revised during the analysis of the remaining data. Two reviewers
independently coded the qualitative data for all 34 sites. The second reviewer was a
novice researcher completing her Master's degree in Industrial and Systems
Engineering at the University of Southern California. I provided her with a copy of my
proposal and we discussed at length the ABCD implementation framework, its
components and attributes, and how they were defined. The data was coded in
batches to ensure we maintained a shared interpretation of the coding tree.
Further validation of the coding schema was completed at each iteration. Definitions of
each attribute were refined and section 3.3 reflects the changes made during the
qualitative analysis process. The final coding tree, which reflects coding decisions made,
is included in Appendix C. The coding for each site was reviewed jointly by the two
reviewers; any misaligned codes were discussed, and a final set of codes for each site
was agreed upon jointly. Inter-rater reliability of the coding results was calculated
using the Kappa statistic at each iteration. This initial analysis addressed the
hypothesis that the components of the ABCD implementation framework describe the
change activities undertaken in the implementation of evidence based practice beyond
the case of the central line implementation.
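For two raters assigning categorical codes, the Kappa statistic mentioned above can be computed as follows; the codes shown are illustrative, not drawn from the actual coding results.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical codes to the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] / n * counts2[c] / n
                   for c in set(counts1) | set(counts2))
    return (observed - expected) / (1 - expected)

# Illustrative codes assigned by two reviewers to four statements.
kappa = cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "A"])  # 0.5
```

Kappa corrects the raw agreement rate for the agreement expected by chance, so it is a more conservative measure than simple percent agreement.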
In addition, two exploratory analyses were conducted by carefully examining the data
from the most and least successful sites. The purpose of the case study analyses was to
determine the validity of the assumptions of the ABCD implementation framework: (1)
it is not necessary to address the components in any particular order, and (2) an
implementation can be successful regardless of the outer setting in which it takes place.
The third assumption, that the EBP being implemented is well studied and accepted as
beneficial by providers, was controlled for since each implementation in this study was
of the CCM. The most successful sites were identified as the top 10% of sites
based on ABCD score (4 sites) as well as on ACIC score (3 sites). The analysis of
successful sites focused on addressing the first assumption, order. It was
my hypothesis that the implementation process in the most successful sites varied the
order in which components were addressed and often addressed multiple components
within one change process.
To determine the validity of the second assumption, both the most and least successful
sites were reviewed. The least successful sites were identified in a similar manner to
the most successful sites, instead taking the bottom 10% of sites based on their ABCD
and ACIC scores. I first determined if there were common factors relating to the outer
setting among the least successful sites that may have contributed to their lack of
implementation success beyond the lack of application of the elements of the ABCD
implementation framework. Factors related to the outer setting were those identified in
Greenhalgh's Conceptual Model for Considering the Determinants of Diffusion,
Dissemination and Implementation of Innovations: inter-organizational networks,
intentional spread strategies, environmental stability, and political directives [33]. I also
considered the size of the organization as a potential factor that could have been
associated with difficulties in implementation. The second step was to compare these
factors to successful sites if such factors existed. It was my belief that if common factors
relating to the outer setting did in fact exist, some of the successful sites would also
exhibit these characteristics. Therefore, the outer setting would not be necessary to
include in this implementation framework as was assumed.
4.3.2 Qualitative Comparative Analysis Methods
The second question was answered by employing QCA with fuzzy sets. QCA is a method
that bridges the gap between case study and variable oriented analysis when the
number of cases of interest expands beyond what can be successfully understood
through case study analysis, but fails to reach the level of cases necessary for
statistically significant results when applying variable oriented statistical methods. QCA
focuses on a configuration analysis of cases, where the result is considered the outcome
of interest and the variables that may lead to that outcome are considered conditions.
The result and the conditions that may lead to that result are viewed as sets. The
method was originally developed based on the binary logic of Boolean algebra and
required that each set considered in the analysis be dichotomous, with a designation
of 0 for full non-membership in the set and 1 for full membership in the set. By
contrast, a fuzzy set permits membership in the interval between 0 and 1 while
retaining the two qualitative states of full membership and full non-membership. Fuzzy
sets combine qualitative and quantitative assessment in a single instrument [130].
In defining a continuous fuzzy set, it is important to have a clear definition of three
qualitative breakpoints: full non-membership in the fuzzy set (membership value of 0),
those who are not fully in or out of the set (0.5), and those with full membership in the
fuzzy set (membership value of 1). The qualitative break points root the fuzzy set in the
core set theoretic principles and operations of qualitative comparative analysis and
allow for qualitative assessments to be made [130], [131]. After defining the qualitative
breakpoints for the fuzzy set analysis, it is necessary to calibrate the fuzzy set using the
breakpoints as external criteria. The direct method of calibration employed in this
research converts an interval scale predictor into a fuzzy set using an estimate of the log
of the odds of full membership in a set as an intermediate step. The fuzzy set
membership scores that result from these calculations assign a truth value to a
statement. This calibration technique begins by assigning the three qualitative
breakpoints and then calculating the deviation of the interval scale score for each case
from the value of the crossover breakpoint. The deviations are scaled by multiplying
those above the crossover point by the log odds of full membership (5.0) divided by the
deviation between the full membership breakpoint and the crossover point. The
deviations below the crossover point are scaled by multiplying by the negative log odds
of full non-membership (-5.0) divided by the deviation of the full non-membership
breakpoint from the crossover point. These scaled values are then converted to degree of membership by
applying the standard formula for converting log odds to scores that range from 0.0 to
1.0:
degree of membership = exp(log odds) / (1 + exp(log odds))
Using this method of calibration, it is possible to develop a fuzzy set that reflects the
asymmetric properties of the set-theoretic relationships while retaining an
understanding of both the qualitative and quantitative nature of the fuzzy set [132].
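The direct method of calibration described above can be sketched as follows, using the ±5.0 log-odds anchors given in the text. The breakpoint values in the example are illustrative, not taken from the study data.

```python
import math

def calibrate(score, full_non, crossover, full):
    """Direct-method calibration: scale the deviation from the crossover
    breakpoint into log odds using the qualitative breakpoints, then apply
    the logistic transform to obtain a fuzzy membership score in [0, 1]."""
    deviation = score - crossover
    if deviation >= 0:
        log_odds = 5.0 * deviation / (full - crossover)
    else:
        log_odds = -5.0 * deviation / (full_non - crossover)
    return math.exp(log_odds) / (1 + math.exp(log_odds))

# Illustrative breakpoints: 2 = full non-membership, 5 = crossover,
# 8 = full membership.
m_low = calibrate(2, 2, 5, 8)    # ~0.007 (effectively fully out)
m_mid = calibrate(5, 2, 5, 8)    # 0.5    (neither in nor out)
m_high = calibrate(8, 2, 5, 8)   # ~0.993 (effectively fully in)
```

Scores at the crossover map to 0.5, while scores at the outer breakpoints map to the near-limits of the membership interval, as the logistic transform requires.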
Fuzzy set analysis focuses on tests of the set-theoretic relationships between a causal
condition and an outcome. To understand the set-theoretic relationship, it is also
necessary to understand how operations on fuzzy sets occur. To calculate the negation
of a fuzzy set A, simply subtract the membership score in A from 1. Negation is indicated with
the use of a tilde, “~”. When two conditions are joined together with the “logical and”,
it is equivalent to taking the minimum fuzzy set membership score between the
conditions in question. This relationship stems from the logic of everyday experience.
For example, consider the sets short and blonde. A person who is short with brown hair
would have a high membership in the set short, but low membership in the set blonde.
Therefore, she would have low membership in the set short and blonde. An excess in
membership in one set does not compensate for the low membership in the second.
Therefore, the membership in the set using the “logical and” is ruled by the minimum.
“Logical and” is represented by a midlevel dot. A similar argument is made to describe
the relationship determined with the “logical or”. This relationship is determined by the
maximum and is represented by a plus sign. Using the same example above, if the
question was membership in the set short or blonde, the short, brown-haired person
would have high membership in the set because of her high membership in the set short
despite the fact that she has brown hair [131].
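These operations reduce to simple arithmetic on membership scores. A minimal sketch using the short/blonde example, with illustrative membership values:

```python
# Memberships for a short person with brown hair (illustrative values).
short, blonde = 0.9, 0.1

not_short = 1 - short                  # negation (~short)
short_and_blonde = min(short, blonde)  # "logical and" takes the minimum
short_or_blonde = max(short, blonde)   # "logical or" takes the maximum

# short_and_blonde is low (0.1): excess membership in one set cannot
# compensate for low membership in the other.
# short_or_blonde is high (0.9): high membership in "short" suffices.
```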
A subset relationship is indicated when membership scores in one set are consistently
less than or equal to membership scores in another set [131]. Perfectly consistent set
relations are relatively rare in social research, so it was necessary to develop a
descriptive measure for the degree to which a set relationship has been approximated.
Consistency assesses the degree to which the empirical evidence is consistent with the
set theoretic relation in question. Consistency is determined using the following
equation:

Consistency(X_i ≤ Y_i) = Σ min(X_i, Y_i) / Σ X_i

where X_i represents membership scores in a combination of conditions, and Y_i
represents membership scores in the outcome. This method of calculating consistency
gives credit for near misses and penalties for causal membership scores that differ
from the outcome membership score by a wide margin.
For any relationship that is found to be consistent, coverage must be calculated.
Coverage is defined by the percentage of all cases that are included in the solution and
is determined using the following equation:
Coverage(X_i ≤ Y_i) = Σ min(X_i, Y_i) / Σ Y_i

Again, X_i represents membership scores in a combination of conditions, and Y_i
represents membership scores in the outcome. Coverage provides a gauge of empirical
importance of the solution found and allows a researcher to evaluate the importance of
different causal paths. There is often a trade-off between consistency and coverage as
high consistency may lead to low coverage due to the narrow formulation of conditions
required to reach this consistency [133].
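The two measures above follow directly from their definitions. A sketch, with illustrative membership scores:

```python
def consistency(x, y):
    """Consistency of X as a subset of Y: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage of the outcome Y by X: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Illustrative memberships: every condition score lies below the
# corresponding outcome score, so the subset relation holds perfectly.
x = [0.2, 0.6, 0.8]   # causal condition (or combination) memberships
y = [0.4, 0.7, 0.9]   # outcome memberships
c = consistency(x, y)  # 1.0 (perfectly consistent)
v = coverage(x, y)     # 0.8 (X covers 80% of the outcome set)
```

The example also shows the trade-off noted above: a narrowly defined condition can be perfectly consistent yet cover only part of the outcome.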
The study of necessity focuses on examining cases sharing a given outcome and
attempting to identify their shared causal conditions. A condition is necessary when all
instances of the outcome are preceded by that condition. In the analysis of fuzzy sets, a
condition that is necessary, but not sufficient, requires that all instances of the outcome
form a subset of instances of the causal condition. This analysis tests a set-theoretic
relationship between a causal condition and an outcome. The relationship can be
visualized as a lower triangular plot of the membership in the outcome (Y) versus the
membership in the causal condition (X). All points plotted fall below the diagonal line
from 0 with a slope of 1 for causal conditions that are necessary, but not sufficient, for
an outcome [134].
To test for necessity, we examine the reverse of the inequality relationship described
previously. Therefore, to measure the consistency of a subset relationship for a
necessary condition we calculate the opposite:

Consistency(Y_i ≤ X_i) = Σ min(X_i, Y_i) / Σ Y_i
If the relationship is found to be consistent, it is then necessary to calculate the
coverage of the necessary condition by calculating the following:

Coverage(Y_i ≤ X_i) = Σ min(X_i, Y_i) / Σ X_i

Again, the denominator is opposite the denominator in the previous discussion of
coverage because we are assessing the opposite relationship [133].
In order to account for imprecision in the measurement of fuzzy membership scores, an
adjustment factor of 0.10 fuzzy-membership units is often included to implement a
more lenient test of necessity. The adjustment shifts the line separating consistent and
inconsistent cases upward (the slope of the line remains 1). Any case whose score on
the outcome exceeds the score on the causal condition by more than 0.10 fuzzy-
membership units is considered inconsistent. Tests of necessity should be conducted
on both the causal condition and its negation. If multiple conditions are found to be
necessary, the resulting solution joins those causal conditions with the “logical and”
[134].
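The necessity calculations above, including the 0.10 adjustment factor, can be sketched as follows. The function names and example values are illustrative, not part of the study:

```python
def necessity_consistency(X, Y):
    """Consistency of Y as a subset of X (test of necessity): sum min(x, y) / sum y."""
    return sum(min(x, y) for x, y in zip(X, Y)) / sum(Y)

def necessity_coverage(X, Y):
    """Coverage (relevance) of a necessary condition: sum min(x, y) / sum x."""
    return sum(min(x, y) for x, y in zip(X, Y)) / sum(X)

def case_is_consistent(x, y, adjustment=0.10):
    """With the lenient test, a case is inconsistent only if its outcome score
    exceeds its condition score by more than the adjustment factor."""
    return y <= x + adjustment

print(case_is_consistent(0.4, 0.45))  # True: within the 0.10 adjustment
print(case_is_consistent(0.4, 0.55))  # False: outcome exceeds condition by > 0.10
```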
The test of sufficiency in fuzzy set QCA is more complex and requires the introduction of
the term property space. In QCA, the property space is determined by all possible
combinations of the causal conditions of interest. A property space includes 2^k logically
possible combinations, where k is the number of causal conditions being considered.
In fuzzy set analysis, the property space becomes more than a list of possible
combinations of categories. It is viewed as a multidimensional vector space with as
many dimensions as fuzzy sets. The fuzzy membership scores position cases along each
dimension of the vector space. The corners of the vector space are equivalent to the
crisp-set property space. Membership in each corner of the vector space is defined
through the “logical and” relationship. Therefore, the fuzzy set property space can be
reduced into a crisp set property space, or truth table, for the purpose of analysis. A
case is considered an instance of a crisply defined location if its fuzzy membership in
that location is greater than 0.5, the crossover point. Using this definition, each case
will be an instance of only 1 crisply defined location, assuming it has no scores of exactly
0.5 in any of the component sets that comprise the space [131].
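A minimal sketch of this assignment rule follows; the case scores and condition names are hypothetical:

```python
from itertools import product

def crisp_location(case, conditions):
    """Assign a case (a dict of fuzzy scores) to the single corner of the 2**k
    vector space where its 'logical and' membership exceeds 0.5. Membership in
    a corner is the minimum over conditions, using 1 - score for absence."""
    for corner in product((0, 1), repeat=len(conditions)):
        membership = min(
            case[c] if present else 1 - case[c]
            for c, present in zip(conditions, corner)
        )
        if membership > 0.5:
            return corner
    return None  # a score of exactly 0.5 leaves the case unassigned

print(crisp_location({"A": 0.8, "B": 0.3}, ["A", "B"]))  # (1, 0): A present, B absent
```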
After determining the crisp-set location of each case, frequency scores can be calculated
for each crisp-set location in the property space. Assessing these frequencies allows a
researcher to determine which regions of the property space are essentially vacant and
are in turn not relevant for analysis. The frequency threshold is set based on the
number of cases included in the study, the number of causal conditions, the familiarity
of the researcher with each case, the degree of measurement error in defining the fuzzy
sets, etc. Those locations that do not meet the frequency threshold are treated as
remainders in the analysis since there are no solid empirical instances of them [135].
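The frequency screen can be sketched as below; the corner tuples and threshold are illustrative:

```python
from collections import Counter

def empirically_relevant(locations, threshold=1):
    """Keep only truth-table corners with at least `threshold` empirical cases;
    corners below the threshold become remainders. `locations` holds each
    case's crisp-set corner (None for unassigned cases)."""
    counts = Counter(loc for loc in locations if loc is not None)
    return {loc for loc, n in counts.items() if n >= threshold}

print(empirically_relevant([(1, 0), (1, 0), (0, 1)], threshold=2))  # {(1, 0)}
```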
The remainder of the analysis is conducted using the empirically relevant causal
combinations. The subset relationship for sufficient conditions is represented by points
plotted in the upper triangular portion of a plot of membership values in the outcome
(Y) versus membership values in the combination of conditions in question (X) because a
sufficient condition (or combination of conditions) must be a subset of the outcome being
analyzed. This is the exact opposite of the relationship found with necessary conditions.
Sufficiency tests assess whether cases with the same causal conditions share the same
outcome. In order to assess sufficiency, we evaluate the consistency of all combinations
with the outcome itself using the equation for consistency mentioned previously. The
researcher sets a consistency threshold and then assigns a value of 1 to the rows of the
truth table that have a consistency above this value, indicating that they are in fact fuzzy
subsets of the outcome. Those below the consistency threshold are coded as 0 because
they are not considered fuzzy subsets. The consistency threshold is either set at a
specific value such as 0.75 or identified by determining a gap in consistency scores
between the causal combinations [135].
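The coding of truth-table rows against a consistency threshold can be sketched as a one-line rule; the row labels and consistency values are hypothetical:

```python
def code_truth_table(row_consistency, threshold=0.75):
    """Code each truth-table row 1 if its consistency meets the threshold
    (i.e., it is treated as a fuzzy subset of the outcome), else 0."""
    return {row: int(cons >= threshold) for row, cons in row_consistency.items()}

print(code_truth_table({"ABcD": 0.82, "AbcD": 0.61}))  # {'ABcD': 1, 'AbcD': 0}
```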
Each location for which a value of 1 is assigned is considered a sufficient combination of
the outcome in question. The combinations found to be sufficient can be reduced
through minimization, a mathematical approach to reducing solutions to the fewest and
simplest combinations possible. This is done by applying the containment
rule: the membership of a case in a combination of conditions is always less than or
equal to its membership in any one of the component conditions. If multiple causal
expressions are found to be sufficient, they are joined by the “logical or” after
reduction. In assessing a solution, it is important to balance consistency with coverage
so coverage for all solutions will be calculated. As mentioned previously, larger
coverage increases the importance of a result [135]. This algorithm is known as the
truth table algorithm and ensures that all 3^k − 1 logically possible combinations of causal
expressions given k conditions are tested [135].
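The count of 3^k − 1 arises because each condition can be present, absent, or omitted from an expression, with the empty expression excluded. A small enumeration sketch (condition labels illustrative):

```python
from itertools import product

def causal_expressions(conditions):
    """Enumerate the 3**k - 1 logically possible causal expressions: each
    condition is present, absent, or omitted; the empty expression is excluded."""
    exprs = []
    for states in product(("present", "absent", "omitted"), repeat=len(conditions)):
        expr = {c: s for c, s in zip(conditions, states) if s != "omitted"}
        if expr:
            exprs.append(expr)
    return exprs

print(len(causal_expressions(["A", "B", "C", "D", "I"])))  # 242, i.e. 3**5 - 1
```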
4.3.2.1 Data for QCA Analysis
The QCA analysis was conducted by converting the coding results from the qualitative
analysis and the ACIC scores into fuzzy sets. The total number of codes for each of the
component attributes and each interaction was first determined for each site based on
the results of the qualitative analysis. Each value represented the score for the
corresponding attribute. A fuzzy set membership score was calculated for each
component (A, B, C, and D) and the interactions (I) based on these scores. Each
attribute of each component was weighted equally when calculating the fuzzy set
membership value for that component. For the interaction fuzzy set, each of the 12
interactions was treated as an attribute of that component. Because the development
of the ABCD implementation framework is still in its infancy, little is known about how
to interpret the scale of the scores, so I turned to statistical properties to define and
calibrate the fuzzy sets. Full membership in the fuzzy sets was determined
by a score for each component attribute that was equal to at least 3 times the standard
deviation of the scores for that attribute across all sites. Assuming a normal
distribution, this would allow 0.3% of the population to have full membership. Full non-
membership in the fuzzy sets was assigned if the score for each attribute of that component was
less than or equal to zero. I considered any activity that was coded to one of the
attributes as effort on the part of the site and therefore they would have some value of
membership in the fuzzy set. The crossover point was defined as the mean of the scores
across all sites for each component. Assigning qualitative breakpoints allowed me to
calibrate the fuzzy sets for the five causal conditions following the method described in
the previous section.
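The calibration breakpoints described above can be sketched as a membership function. Linear interpolation between breakpoints is an assumption of this sketch (not necessarily the study's calibration method), and the scores below are hypothetical; it also assumes the scores vary, so the standard deviation is nonzero:

```python
import statistics

def calibrate(scores):
    """Build a fuzzy membership function from the breakpoints described above:
    full membership at 3 standard deviations of the attribute's scores, full
    non-membership at or below 0, crossover at the mean. Linear interpolation
    between breakpoints is an assumption of this sketch."""
    full = 3 * statistics.stdev(scores)
    crossover = statistics.mean(scores)
    def membership(score):
        if score <= 0:
            return 0.0
        if score >= full:
            return 1.0
        if score <= crossover:
            return 0.5 * score / crossover
        return 0.5 + 0.5 * (score - crossover) / (full - crossover)
    return membership

m = calibrate([2, 4, 6, 8, 10])  # hypothetical attribute scores across sites
print(m(0), m(6), m(10))         # 0.0 0.5 1.0
```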
The outcome result in question for this analysis was the successful implementation of
the CCM. The ACIC score determines the success of the implementation from the
perspective of the site and was therefore utilized to determine fuzzy-set membership
values in this set. For this fuzzy-set, a site reached full membership in the set
“successful implementation” if their change score was greater than three times the
standard deviation of the baseline ACIC scores. This allowed us to account for the
initial status of the ACIC score. Full non-membership in the “successful
implementation” set included sites that failed to make any changes or whose final score
was less than or equal to their baseline ACIC score. The mean ACIC change score was
used as the crossover point.
The analysis for necessary and sufficient conditions was conducted using the five causal
conditions: A, B, C, D, and I to describe the outcome of interest “successful
implementation”. Each of the five conditions was assessed for necessity followed by
testing the sufficiency of the 242 (3^5 − 1) possible causal expressions. I used a
consistency threshold of 0.75 and an adjustment factor of 0.10 for the tests of necessity
due to the possible imprecision in measurement of the fuzzy sets for the causal
conditions. In accordance with the QCA protocol, the necessary tests were conducted
on both the presence and absence of the causal conditions.
To assess the sufficiency, the fs/QCA software was used, which applies the truth table
algorithm described in the previous section [136]. By utilizing this method, it is only
necessary to run a single model including the five causal conditions and the outcome of
interest in order to test all possible combinations of causal expressions. Therefore, two
models were assessed in accordance with QCA protocol: one to determine the
sufficiency relationship between the conditions and the positive outcome and the
second to determine the relationship with the negation of the outcome. Due to the
relatively small n, I used a frequency threshold of 1 for eliminating vacant areas of the
vector space. Model results were assessed using a variety of consistency thresholds
from 0.75 to 0.90. For each model, the fsQCA software generated three solutions:
complex, parsimonious, and intermediate. The varying solutions incorporate
counterfactual analysis of what causal conditions lead to the outcome. Counterfactual
cases are those for which a causal combination lacks empirical instances and therefore
must be imagined. “Easy” counterfactuals occur when a redundant causal condition is
added to a set of causal conditions that already leads to the outcome. “Difficult”
counterfactuals on the other hand occur when a condition is removed from a set of
causal conditions leading to the outcome on the assumption that it is a redundant
condition. The parsimonious solution is developed by including both “easy” and
“difficult” counterfactuals. The intermediate solution includes only “easy”
counterfactuals. The complex solution does not include counterfactual cases and
provides the most detailed result. By comparing the solutions, a researcher can
determine core conditions and peripheral conditions. Core conditions are part of both
the parsimonious and complex solutions, whereas peripheral conditions are only included
in the more complex solutions [137]. The software also provides a value for the
coverage for each solution. In analyzing the results of the models using a variety of
consistency levels, coverage was used as a balancing factor to determine the best choice
of consistency level. A final solution is presented that reflects the best tradeoff
between consistency and coverage after close examination of the effects of increased
model consistency on the coverage values [138].
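The core/peripheral distinction between solution types can be sketched with set operations; the solution contents are hypothetical:

```python
def classify_conditions(parsimonious, complex_solution):
    """Core conditions appear in both the parsimonious and complex solutions;
    peripheral conditions appear only in the more complex solutions."""
    core = set(parsimonious) & set(complex_solution)
    peripheral = set(complex_solution) - set(parsimonious)
    return core, peripheral

core, peripheral = classify_conditions({"A", "C"}, {"A", "B", "C"})
print(sorted(core), sorted(peripheral))  # ['A', 'C'] ['B']
```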
4.3.3 Statistical Analysis Methods
The third question was answered through statistical analysis to determine the
relationships between the ABCD scores determined from the above qualitative coding
analysis and implementation and patient outcome data. The independent variable,
ABCD score, was calculated as a sum of coding responses that weighted each attribute
and interaction equally. This score weights the components with more attributes more
heavily. Although this weights behavior change lower than the other three
components, I believe this is the best form of the score to use for analysis as behavior
change strategies have been shown to be ineffective on their own [9], [34–36]. This
analysis assesses the correctness of hypothesis 3a and 3b.
As mentioned in the research design, I conducted additional exploratory analysis to
expand the findings beyond the primary hypothesis given that the overall ABCD score
was significantly associated with the outcome in the initial analysis. First, I compared
the effects of the components versus those of the interaction elements separately. This
allowed us to determine whether the components or the interactions are more
significant to the success of the ABCD implementation framework in predicting
outcomes. It was my prediction that the components themselves are more significant
while both aspects have a positive relationship with outcomes data. Second, I assessed
the relationship between each individual component score and the outcome data.
Studying the effects of the individual components allowed us to determine which
component best determines outcome results at the clinics and which outcomes may be
specifically affected by different components. I hypothesize that the behavior
component will be least effective in affecting outcome results. Third, I studied the
effects of the sum of interactions from one component to all others. I did not study the
effects of each of the interaction components separately as they may be very minimal in
many organizations. Understanding the significance of each grouping of interactions
highlighted the more important interactions and expands our understanding of the
implementation context. Each of the above investigations was applied to the analysis of
both implementation and patient health outcomes data in order to gain additional
insights beyond the primary hypothesis.
The first set of analyses related to the ACIC implementation score data. I utilized the
ACIC change score as the dependent variable in question for this analysis. I focused my
analysis on the change score because a nonlinear relationship exists between the
organization’s initial assessment and their subsequent performance relating to the
implementation effort [139]. For example, teams that began with middle level scores
typically performed the most change activities. Higher scores often reflected limited
expectations on the part of the responder to continue to improve and make changes,
while those that ranked their organization lower initially believed there were too many
hurdles to be faced and felt the implementation would be extremely difficult [139].
Therefore, I could not base my assessment exclusively on the sites’ final ACIC scores.
Instead, I compared the ABCD score to the ACIC change scores for the 29 sites with
available pre- and post-implementation scores using Pearson correlation coefficients to
determine the type of relationship between the two scores. The Pearson correlation
coefficient determines the direction and the strength of the association between the
two variables of interest. Based on hypothesis 3a, it was my belief that the ABCD score
would be positively correlated with the ACIC change score – the higher the ABCD score,
the greater the change in ACIC score.
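The Pearson correlation coefficient can be computed from first principles as below. The ABCD and ACIC change scores shown are hypothetical values for illustration, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists:
    covariance of the scores divided by the product of their spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ABCD scores and ACIC change scores for five sites:
abcd = [40, 55, 62, 70, 85]
acic_change = [0.5, 1.1, 1.4, 1.6, 2.3]
print(round(pearson_r(abcd, acic_change), 3))
```

A value near +1 would support the hypothesized positive relationship; a value near 0 or below would not.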
The second set of analyses focused only on patient health outcomes for the diabetes and
asthma clinics. As mentioned in section 4.2, data was unavailable for the two remaining
disease types studied in the collaboratives, which is a limitation associated with the use
of a secondary data set. The number of sites for each disease group is small, but we
could not perform a combined analysis due to a lack of common outcome variables.
This analysis was included in this research to increase generalizability by showing that
better compliance with the elements of ABCD implementation framework has an effect
on patient health outcomes as well as implementation outcomes.
For the analyses, which were performed separately on data from the two disease groups,
multilevel models were developed using regression analysis of generalized linear mixed
models. Regression analysis is executed through an analysis of variance focusing on the
relationship between a dependent variable and one or more independent variables. It is
a statistical method that helps in understanding how the value of the dependent
variable changes when one independent variable varies while the others are held fixed. In this way,
regression analysis helps to identify relationships between the independent and
dependent variables. By clearly identifying such relationships, researchers can infer
causal relationships as well. It is important to be vigilant when analyzing and defining
causal relationships because correlation identified by regression analysis does not
always imply true causation. Multilevel regression analysis assumes that there exists a
hierarchy of different populations within the data and any differences relate to that
hierarchy [140], as is the case with the ICICE patient health outcomes data. In this case,
we have patient individuals nested within a naturally occurring hierarchy, the site at
which they were treated.
This modeling technique allows us to examine the behavior of a level-1 outcome
(patient health outcomes) as a function of level-1 covariates (patient characteristics)
and a level-2 predictor (ABCD score) [141]. Each model controlled for patient level
characteristics including age, gender, education, income, insurance type, disease
severity, and number of comorbidities. Only one level-2 predictor, ABCD score, was
included due to the restricted degrees of freedom at the second level caused by the
small number of sites available for analysis for the different disease types. Controlling
for the patient level covariates results in the following Level 1 model structure:

Y_ij = β_0j + β_1(Age_ij) + β_2(Gender_ij) + β_3(Education_ij) + β_4(Income_ij)
       + β_5(Insurance_ij) + β_6(Severity_ij) + β_7(Comorbidities_ij) + r_ij
The level 1 equation expresses the patient health outcome as the sum of a random
intercept for the patient’s treatment site, the patient level covariates, and a random
error associated with the i-th patient at the j-th site. For the purpose of this analysis, all
continuous variables were centered on their grand mean in order to make the
interpretation of the coefficients more meaningful [141]. The initial analysis assumed a
random intercept for site only (Level-2), which implies that the effects of the patient
level covariates are the same regardless of the site at which the patients received
treatment. Based on this, the level 2 model structure is as follows:
β_0j = γ_00 + γ_01(ABCD_j) + u_0j

The level 2 model expresses the site level intercept, β_0j, as the sum of an overall mean,
γ_00, a fixed effect for the ABCD score, γ_01, and a series of random deviations from that mean,
u_0j, which represents the net impact of all the unknown sources of variation in the sites’
intercepts beyond the variation explained by the ABCD score. By only allowing the
intercepts to vary, the remaining level 2 equations impose a constraint that the slopes
depicting the effect of patient level covariates remain the same across sites. Combining
the Level 1 and Level 2 models results in the combined model:

Y_ij = γ_00 + γ_01(ABCD_j) + β_1(Age_ij) + β_2(Gender_ij) + β_3(Education_ij)
       + β_4(Income_ij) + β_5(Insurance_ij) + β_6(Severity_ij)
       + β_7(Comorbidities_ij) + u_0j + r_ij
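To make the structure of the combined model concrete, a prediction under it can be sketched as straightforward arithmetic. The coefficient values and covariates below are arbitrary illustrations, not estimates from this study:

```python
def combined_model(gamma00, gamma01, betas, abcd_j, covariates_ij, u_0j=0.0, r_ij=0.0):
    """Predicted outcome for patient i at site j under the combined model:
    overall mean + ABCD fixed effect + patient-level covariate effects
    + site-level random deviation + patient-level random error."""
    fixed = gamma00 + gamma01 * abcd_j
    fixed += sum(b * x for b, x in zip(betas, covariates_ij))
    return fixed + u_0j + r_ij

# With the random terms at zero, the prediction is the fixed-effects part only:
print(combined_model(1.0, 2.0, [0.5, -0.25], abcd_j=3.0, covariates_ij=[4.0, 2.0]))  # 8.5
```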
The applicability of varying the slopes as well as the intercepts to account for any
clustering that may occur at sites according to the patient level predictors was assessed
as well in order to validate the decision to assume this was not the case. Additional
analysis was also conducted to determine if it was necessary to control for the type of
site (public versus non-public), as research has shown that public sites tended to
overrate themselves and not be as successful in their implementation of CCM based on ACIC
scores [142]. The GLIMMIX procedure in SAS 9.2 was used to test all model
relationships. The quadrature approximation method was used for determining
parameter estimates as it allows a true log likelihood to be calculated in order to
complete model comparisons [143]. Model comparisons were conducted using the
likelihood ratio test and supplemented with comparisons of the AIC, AICC, BIC, CAIC, and
HQIC fit statistics.
If it was found that the relationship between the total ABCD score and the patient
health outcome in question was significant based on traditional p-values of 0.01 and
0.05, then further exploratory analyses were conducted as discussed at the beginning
of this section. Results were also reviewed using a relaxed p-value of 0.5 in an effort to
highlight the effects of the ABCD scores on the direction of the varying patient health
outcomes across sites. For understanding the direction of improvement, a relaxation of
p-value to 0.5 has been suggested for studies of quality improvement research [144].
Based on hypothesis 3b, it is predicted that there is a positive relationship between
better health outcomes and higher ABCD scores.
For asthma sites the dependent patient health outcome variables studied included:
patient involvement in care, patient knowledge of asthma care, access to care, patient
self-efficacy, asthma specific quality of life, and overall quality of life. Patient
involvement with care was equivalent to the sum of individual outcome measures for
using a written action plan, attending education sessions, setting goals, and using peak
flow monitoring. Ad hoc analysis included the study of utilization measures for ER visits
and outpatient visits in the last six months. I used normal regression for the patient
knowledge of asthma care, patient self-efficacy, and overall and asthma specific quality
of life variables. I used Poisson regression for the utilization measures, ER visits and
total outpatient visits in the last six months. The models for the remaining outcomes
were tested using logistic regression. Appropriate assumptions were assessed for each
of the model types. It was believed that ABCD score would have a significant effect on
overall quality of life and patient involvement with care based on previous research
[124], [125]. Additional analysis was also conducted to determine if it was necessary to
control for the patient group (child, adolescent, or adult) in addition to patient age as
utilization and outcome measures often vary greatly across these groups.
Diabetes specific outcomes of interest include a sum of self-care actions, overall rating
of providers, patient education regarding diabetes factors, adherence variables (physical
activity, diet, general), patient involvement with care (perceptions of doctors and nurses
and treatment plans and goals), knowledge of glycemic control, knowledge of eye and
foot care, patient self-efficacy, and resource utilization. Poisson regression was used for
the utilization measures including the number of ER visits, hospitalizations, and times
went for medical care in the last six months, as well as the sum of self-care actions.
Normal regression was used to test the association between the ABCD score and the
remaining patient health outcomes. No predictions have been made for diabetes sites.
4.5 Limitations
My validation methodology has several limitations. First, the qualitative data only
considers change actions taken place during the course of the collaborative. If an
organization had previously implemented an element of CCM in full prior to the
implementation, they may make no changes to this element during the intervention.
This could cause the organization to score poorly on one element of the ABCD as several
elements of the CCM align more significantly with specific ABCD components than
others. However, this was unusual for these organizations, as the baseline assessments
the ACIC suggest that the organizations perceived themselves as deficient across all
CCM areas, with no one area of advantage prior to implementation.
The second limitation stems from the self-reported nature of the qualitative data. Some
organizations were more detailed in reporting their change actions taken to implement
CCM in their organization, whereas others reported actions on a more aggregate level.
Since I am only measuring quantity of actions, this could bias the results to those
organizations with more detailed responses. The positive correlation found in a
previous study between quantity and depth of action suggests that this bias is minimal
[129]. In an effort to further minimize the effects of self-reporting bias, data was
collected from multiple sources including monthly reports, exit interviews, and
collaborative materials.
The qualitative comparative analysis conducted was based on fuzzy sets developed
using the results of the qualitative coding. However, the data set used for qualitative
analysis was a secondary data set, which results in limited knowledge of the sites and
the change activities that were undertaken beyond what was included in the qualitative
data. This made it difficult to set qualitative breakpoints for the development of fuzzy
sets. In order to combat this, the data was plotted and properties of the data itself were
used to define the fuzzy sets. An adjustment factor was used in the assessment of
necessary conditions as that is a stricter test than that of sufficiency using combinations.
The number of sites with data available to study the improvements to care for the
disease groups is also a limitation. Although there are 29 sites with complete ACIC data,
there are only 11 sites with data relating to asthma care and 6 relating to diabetes care.
This small number of sites may prevent finding any significant effects. Although this
may be the case, effects have been found between the ABCD implementation of the
CCM and the patient health outcomes for the asthma care sites [124], [125]. I also
employed a mixed model with only one predictor variable at the site level, which
increased the chance of finding significant effects. In the analysis of results, a relaxed p-
value of 0.5 was utilized in an effort to gain an understanding of the direction of the
effect of the ABCD score on the patient health outcomes. Regardless of the results, this
additional analysis is exploratory and may hint at a relationship and drive future
research.
The study is also limited by the lack of pre-intervention patient health measurements.
The researchers faced issues with local institutional review boards that prevented the
collection of the pre-intervention data. In addition, each site chose to participate in one
of the collaboratives due to the knowledge that they were deficient in their
management of chronic illness. Therefore, baseline data were not collected; only post-
collaborative data were collected. This limits the analyses conducted in that we cannot
comment on whether a more intensive implementation improves patient health
outcomes based on pre-post differences. Rather, we can only comment on the effect on
the levels of patient health outcomes across sites based on post-implementation data.
Although pre-intervention measures were not available, medical records were reviewed
for the pre- and post- intervention period. The data confirmed the results from the
patient surveys and showed no secular trend between the two periods [125]. The
analyses conducted for this study focus on the estimation of the effect of a site level
implementation variable on the varying patient health outcomes. This is a comparison
of patient health outcomes among peer organizations known to have deficient patient
health outcomes based on their willingness to participate in a collaborative to improve
care. Therefore, the before-and-after effects of the implementation on patient health
outcomes by site can be deemed similar to the post-intervention differences in
patient health outcomes between sites, which justifies using post-
intervention data in this analysis.
Chapter Five: Results
In this chapter I present the results of the varying methods of analysis utilized to
validate the ABCD implementation framework. First, I present the results of the
qualitative data analysis conducted to determine the existence of the four components
and their interactions in the varying implementation efforts of the CCM. Second, I
present the results of the qualitative comparative analysis using fuzzy sets to determine
which conditions of the ABCD implementation framework are necessary or sufficient to
bring about a successful implementation of the CCM. Finally, the results of the
statistical analyses to estimate the relationships between the ABCD implementation
score and the implementation and patient health outcomes are presented.
5.1 Qualitative Analysis Results
Qualitative data collected from 34 sites as part of their participation in one of three
chronic care collaboratives focusing on implementing CCM was coded using a formal
coding rubric with two reviewers. The Kappa-statistic for inter-rater reliability was 0.28
after the initial training set. This was to be expected given the second researcher’s
unfamiliarity with the framework and lack of coding experience. With each batch of
site data reviewed, our inter-rater reliability steadily increased. In the final iteration, the
Kappa statistic was calculated to be 0.69, an acceptable value for a coding tree of this
complexity. The final coding for each activity at each site was determined through joint
review with the second reviewer. The results of the qualitative analysis for the
component attributes are shown in Table 8. The table shows the minimum, maximum,
and mean values for the number of items coded for each attribute across all sites.
Table 8 Component Frequency Results of Single Site Coding

Assistive Systems (A)
  Supply of Materials for EBP (A1): Min 0, Max 15, Mean 6.94 (std dev 3.71)
    Sample response: “We also stocked our pilot sites with the videos so they can
    easily distribute to patients who are interested”
  Process Redesign for EBP (A2): Min 3, Max 24, Mean 11.29 (std dev 5.25)
    Sample response: “Our asthma team refined the elements of the planned visit
    procedure including team roles and responsibilities”
  EBP Data Collection System (A3): Min 0, Max 16, Mean 7.21 (std dev 4.03)
    Sample response: “The data entry form has been completed. Patient information
    is being entered directly into the asthma registry.”

Behavior Activation (B)
  Awareness of EBP (B1): Min 1, Max 18, Mean 7.94 (std dev 4.44)
    Sample response: “Our organization authorized a subset of the NHLBI Asthma
    Guidelines to be distributed to the providers in our two pilot clinics”
  Understanding of EBP (B2): Min 0, Max 11, Mean 4.12 (std dev 2.23)
    Sample response: “Pilot Site #2 has held an Asthma educational in-service using
    the guidelines as a basis for introducing standardized Asthma care”

Culture Building (C)
  Culture of Coordination (C1): Min 1, Max 18, Mean 7.68 (std dev 4.47)
    Sample response: “Our new medical director has joined the team and will serve
    as a link between the asthma team and medical management”
  Quality Culture (C2): Min 0, Max 6, Mean 2.85 (std dev 1.74)
    Sample response: “Expanded the CCM into preventive health”
  Readiness to Change (C3): Min 1, Max 14, Mean 5.79 (std dev 3.36)
    Sample response: “The organization was willing to make a big commitment
    because it was the right thing to do”
  Commitment to Change (C4): Min 0, Max 15, Mean 5.00 (std dev 3.36)
    Sample response: “Our organization continues to support our asthma program by
    authorizing funding for preventative management”

Data Focus (D)
  Performance Measurement (D1): Min 1, Max 30, Mean 9.62 (std dev 6.08)
    Sample response: “Goal: Increase rates of standard diabetes monitoring tests
    and exams, increase rates of patient visits with providers”
  Accessibility of Data for EBP (D2): Min 0, Max 28, Mean 6.91 (std dev 5.05)
    Sample response: “The call center provides monthly reports detailing
    utilization of this service”
  Completeness of Information (D3): Min 0, Max 15, Mean 3.32 (std dev 3.16)
    Sample response: “We have incorporated an Asthma Severity Scale into our flow
    sheet to assist in consistent evaluation of severity at each encounter”
Efforts to address the process redesign attribute of the assistive system component
(11.29 on average) and the performance measurement attribute of the data focus
component (9.62 on average) were the strongest. Efforts to address quality culture
were the weakest with 2.85 activities on average. All of the change activities were
undertaken with the intention of increasing the quality of care provided to the patients
at the various sites as it was a cornerstone of the collaborative efforts, but only activities
that explicitly addressed quality culture were coded as such in this analysis to ensure
that this element was not over-represented. Efforts to address the completeness of
data were also weak, with only 3.32 activities focusing on this attribute on average.
Performance measurement values for each of the four diseases in the three
collaboratives were well defined. Therefore, the sites focused on collecting data for the
pre-defined set of performance measures. In doing this, they ensured completeness of
the data, but this was not reflected in the reporting documents as each site understood
what data must be collected when beginning implementation.
For each component attribute, an example change activity from one of the 34 sites is listed in Table 8 to illustrate activities that align with the varying attributes. Appendix D provides additional examples from the qualitative coding analysis, giving a sense of the overarching types of activities relating to each attribute that were conducted in order to implement the CCM in these sites. The
activities coded as each component attribute served to advance the implementation of
the CCM and are discussed further in the next chapter.
Only seven of the 34 sites lacked evidence of one or more component attributes. Figure 4 shows the number of attributes missing from these sites and identifies which attributes were missing. There was a lack of evidence for the completeness of
information attribute for the data focus component in six of the seven sites. As
discussed previously, this is believed to be associated with the well-defined
performance measures that exist for the diseases studied in the collaboratives. Only
two sites lacked evidence of more than one component of the ABCD implementation
framework. Based on the counts of change activities from the ICICE data, these sites
made the least effort to change as they had the two lowest counts. Therefore, it is not
surprising to find a lack of evidence of the ABCD implementation framework in the
efforts undertaken by these two organizations.
[Bar chart: for sites 31, 71, 77, 7, 55, 73, and 70, the number of missing component attributes (from one to six), with the specific missing attributes labeled (C2, D3, A1, A3, D2, B2, C4).]
Figure 4 Missing Component Attributes by Site
Not every change activity reported for each site could be coded as associated with one
of the twelve interactions due to a lack of detailed information in the reporting
processes, but there was still evidence found for each of the component relationships.
Eighteen of the 34 sites lacked evidence of at least one interaction, but of these, eleven lacked evidence of only a single interaction, most commonly the interaction between behavior activation and data (4 sites) or behavior activation and culture (4
sites). It was often difficult to code these interactions because the results of the behavior activation efforts were not included in the description of the change activities. Typically, the statements
just mentioned disseminating information or conducting a training session with no
further information. The remaining sites lacked evidence of two (two sites), three (four sites), or five interactions (one site). As with the sites lacking evidence of framework
component attributes, these seven sites underwent the smallest number of change
activities. Table 9 shows the minimum, maximum, and mean number of actions coded
as associated with the interactions between components. Table 9 also includes an
example change activity representing evidence of each of the interactions between
components. Appendix D provides additional examples of the varying types of
interactions. The overarching characteristics of the different interactions that resulted
from the implementations of the CCM are discussed further in the next chapter.
Table 9 Count of Actions Relating to Interaction Elements
Assistive Systems (A) to Behavior Activation: Min 0, Max 9, Mean 3.68 (SD 2.27). Sample: “Asthma guidelines are posted in the clinician rooms” [A1 B1]
Assistive Systems (A) to Culture Building: Min 1, Max 14, Mean 3.79 (SD 2.69). Sample: “Office distributing patient education packet and encouraging its use” [A1 C4]
Assistive Systems (A) to Data Focus: Min 1, Max 19, Mean 6.41 (SD 4.50). Sample: “Health Assistants use list to call clients to improve client appointment compliance.” [A2 D1]
Behavior Activation (B) to Assistive Systems: Min 0, Max 7, Mean 3.76 (SD 1.58). Sample: “Use of flag system for reminders of phone follow-up” [B A2]
Behavior Activation (B) to Culture Building: Min 0, Max 6, Mean 1.56 (SD 1.40). Sample: “The pilot team provided education to FCHC providers on community mental health services to better utilize specialized services available to our patients” [B2 C1]
Behavior Activation (B) to Data Focus: Min 0, Max 10, Mean 5.32 (SD 2.43). Sample: “Introduce ADA diabetes guideline to pilot MDs: Increase rates of foot exams, lipid testing, and HbA1c testing” [B1 D1]
Culture Building (C) to Assistive Systems: Min 0, Max 11, Mean 5.32 (SD 2.43). Sample: “RCSs set up back-to-back visits with the PCPs” [C1 A2]
Culture Building (C) to Behavior Activation: Min 0, Max 8, Mean 2.94 (SD 2.04). Sample: “Physician champions one-on-one with other physicians” [C3 B2]
Culture Building (C) to Data Focus: Min 0, Max 7, Mean 1.97 (SD 1.51). Sample: “Continue to work with the [PROVIDER] from CHIPS for further refinement of the cube data” [C1 D3]
Data Focus (D) to Assistive Systems: Min 0, Max 12, Mean 4.65 (SD 2.59). Sample: “They used the registry for scheduling follow-up visits” [D2 A2]
Data Focus (D) to Behavior Activation: Min 1, Max 12, Mean 3.68 (SD 2.18). Sample: “They did generate a report of screening rate etc. outcomes from patient registry and provided the feedback report to providers” [D2 B2]
Data Focus (D) to Culture Building: Min 0, Max 8, Mean 2.32 (SD 2.03). Sample: “Routine queries of the population for follow-up needs” [D2 C2]
The most common relationship was found in the link between the assistive systems and data focus components (mean of 6.41 activities). A primary attribute of the assistive systems component is the development of an EBP data collection system in order to provide information relating
to performance measures that are accessible and complete, which makes this
interaction a natural link. An organization that focuses too heavily on this link may miss
opportunities for improvement that can be found between the behavior activation and
data focus components or between the culture and data focus components. The
weakest relationships found were between behavior activation and culture building, and
both directions of the interconnection between culture building and data focus. All of
the relationships with the least amount of evidence on average were associated with
the culture building component. Often statements regarding culture building were
independent of other changes undertaken during implementation, which created an
inability to understand the relationship between activities focused on addressing the
culture of the organization and the other components of the ABCD implementation
framework. This effect was hypothesized during the development of the framework as
the relationship between culture and the other components is less studied than the
other interactions. The statistical analysis that follows explored the significance of the
interactions to determine the necessity of including each set in the framework.
Barriers to implementation associated with any one of the component attributes were
also coded. The evidence relating to barriers was scarce as that was not the purpose of
the monthly reports provided to the collaborative. Barriers were coded to ensure that
the ABCD scores for sites that mentioned barriers experienced were not inflated by
ignoring such barriers. Because barriers were not the direct focus of this analysis and
evidence was scarce, the coding results will not be discussed directly. Details of the
barriers coding can be found in Appendix E.
In this analysis, each change activity could have been coded as relating to more than
one component or interaction. Therefore, the total ABCD scores exceeded the counts of
change activities developed by the researchers in the ICICE analysis. The total ABCD
score was calculated for each site by summing the number of codes across all
component attributes and interaction elements at that site and then subtracting the
number of barriers coded for that site. The total ABCD score ranged from 25 to 221, with a mean of 115.65 (standard deviation of 48.42) and a median of 113.5. The
distribution of ABCD scores across the sites is shown in Figure 5. The bar chart is color
coded based on the ACIC change score that corresponds to each site to visually show
the relationship between the ABCD score and the ACIC change scores. The ACIC change
scores were partitioned based on quartiles – low scores were in the first quartile,
medium scores fell in the second or third quartile, and high scores fell in the fourth
quartile of the ACIC change scores. Low scores were represented by a red bar, medium
scores by an orange bar, and high scores by a yellow bar. The white bars represent
sites lacking pre-post ACIC data.
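The score arithmetic and the quartile partitioning described above can be expressed compactly. The following illustrative Python sketch uses function names and toy inputs of my own; the actual scores came from the qualitative coding process, not from software:

```python
def total_abcd_score(attribute_counts, interaction_counts, barrier_count):
    """Total ABCD score for one site: the sum of codes across all component
    attributes and interaction elements, minus the number of coded barriers."""
    return sum(attribute_counts) + sum(interaction_counts) - barrier_count


def acic_change_band(change_score, all_change_scores):
    """Partition an ACIC change score by quartile: first quartile -> low,
    second/third quartile -> medium, fourth quartile -> high.
    Uses a simple rank-based quartile cut for illustration."""
    ranked = sorted(all_change_scores)
    n = len(ranked)
    q1, q3 = ranked[n // 4], ranked[(3 * n) // 4]
    if change_score <= q1:
        return "low"
    if change_score >= q3:
        return "high"
    return "medium"


# Toy example: 12 attribute codes + 5 interaction codes - 2 barriers = 15
print(total_abcd_score([3, 4, 5], [2, 3], 2))  # 15
```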
Based on the results of the qualitative analysis, it is clear that hypothesis 1 presented in
Chapter 4 has been validated. There is evidence of all four components and their
interactions in the implementation of the CCM across the 34 sites evaluated in this
analysis. It is also important to note that aside from definition refinement, nothing was
identified to suggest that the component attributes did not provide a complete picture of the activities required by an implementation effort.
Figure 5 ABCD Scores Resulting From Qualitative Coding
Expanding beyond hypothesis 1, I also conducted exploratory analysis to assess the
validity of the assumptions made in the development of the ABCD implementation
framework. Six sites fit the criteria set for determining the most successful sites – there
was one site that was in the highest 10% of both the ACIC change score and ABCD
implementation score from the qualitative analysis. Notably, each of the three sites that were in the highest 10% based on their ACIC change score addressed all of the component attributes and all twelve interaction elements, even though they were not in the top 10% of ABCD scores.
[Bar chart: total ABCD score, from -25 to 225, for each of the 34 sites; bars are color coded as low, medium, or high ACIC change, or no pre-post ACIC data.]
The objective of this review was to assess the order in which the sites addressed the components of the ABCD implementation framework during their implementations. The data for one of the six sites did not have any
identifying time markers so I excluded this site from the analysis of order. Each of the
five sites addressed the four primary components of the ABCD implementation
framework during the first month of change activities. The sites also addressed a variety
of interactions, although no site addressed all 12 interactions within the first month.
The pattern continued with the sites addressing a variety of components each month –
showing no particular order in which the components must be addressed in order to be
successful in implementing the CCM. Table 10 provides documentation of which
components were addressed over time by the different successful sites.
In a few months, no change activities were completed. The successful sites typically
addressed three or more components of the ABCD implementation framework if they
provided change activity documents for a given month. There were only two instances
in which change activities were undertaken, but only one or two components were
addressed. There is clear justification for my first assumption that it is not necessary to address the components in any particular order, as was done in the central line checklist implementations. It is more important that all components are addressed throughout the implementation.
Table 10 ABCD Implementation Framework Elements Addressed Over Time
[Grid: months 1 through 11 (rows) by sites 1 through 5 (columns), marking which component attributes (A1-A3, B, C1-C4, D1-D3) and which interaction sets (A to B/C/D, B to A/C/D, C to A/B/D, D to A/B/C) each of the five successful sites addressed in each month.]
The second element of the exploratory analysis was conducted to determine if any
outside factors could have contributed to the lack of implementation success for those
sites with the lowest 10% of ACIC change scores and ABCD implementation scores.
Again, six sites were identified for this analysis with one site falling in the least
successful range on both measures. None of the sites were involved in an inter-organizational network that could have dissuaded the organization from adopting the innovation if it did not appear to be a norm across the network. The least successful sites were part of the same intentional spread strategies as the six most successful sites, so their lack of success cannot be attributed to a failure of the collaboratives. The implementations were undertaken at the same time as the
successful implementations so any environmental factors or political directives in place
would have affected all sites. The sites varied in size and location so there was no
common denominator relating to these factors that would require their inclusion in the
ABCD implementation framework. The assumption that an implementation can be
successful regardless of the outer setting in which it takes place can be justified. The
decision to exclude the outer setting allows this work to focus exclusively on the internal implementation context of a healthcare organization: those factors that an organization designing an implementation has the ability to change, and in turn use to create an environment in which it is more likely to succeed in the implementation of EBP.
5.2 Qualitative Comparative Analysis Results
Fuzzy set QCA was conducted to understand the set-theoretic relationships between the
elements of the ABCD Implementation framework (causal conditions) and the success of
the various implementation efforts (outcome). Two strategies were implemented to
discover these relationships. First, tests of necessity were conducted to identify causal
conditions shared by cases with the same outcome. Second, tests of sufficiency were
conducted to examine cases with the same causal condition to see if they share the
same outcome.
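Both tests reduce to comparisons of fuzzy set membership scores. The following illustrative Python sketch shows the standard consistency and coverage formulas used in such tests; the helper names and toy membership scores are my own and are not part of the original analysis:

```python
def necessity_consistency(condition, outcome):
    """How consistently outcome membership is a subset of condition
    membership (Y <= X for each site): sum(min(x, y)) / sum(y)."""
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(outcome)


def necessity_coverage(condition, outcome):
    """Relevance of a necessary condition: sum(min(x, y)) / sum(x)."""
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(condition)


def sufficiency_consistency(condition, outcome):
    """How consistently condition membership is a subset of outcome
    membership (X <= Y for each site): sum(min(x, y)) / sum(x)."""
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(condition)


# Toy fuzzy-set scores for three sites: the outcome score never exceeds
# the condition score, so the condition is perfectly consistent as necessary.
cond = [1.0, 0.8, 0.6]
out = [0.9, 0.7, 0.6]
print(necessity_consistency(cond, out))  # 1.0
```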
5.2.1 Necessary Conditions of Successful Implementation
As mentioned, tests of necessity were conducted by calculating how consistently a site
had an outcome fuzzy set score that was less than or equal to the score for each of the
five causal conditions (A, B, C, D, I) or their negation. Using a consistency threshold of
0.75 with an adjustment factor of 0.10, A, B, C, and I were found to be necessary for a
successful implementation of CCM. For each of the five conditions, the absence of the
condition as indicated by the negation of the fuzzy set was not found to be a necessary
condition. The coverage for each condition found to be necessary was calculated. As
expected, higher consistency yielded lower coverage. Complete results of necessity
tests including consistency and coverage calculations are found in Table 11.
It was my hypothesis that all five causal conditions would be necessary to bring about a
successful implementation (Hypothesis 2a). The causal condition that represented the
fuzzy set for the level to which interactions were addressed by the sites showed the
highest level of consistency and was found to be necessary for successful
implementation at the 0.85 consistency threshold. Three of the conditions (assistive system, behavior activation, and culture building) were also found to be necessary at the 0.75 consistency threshold. Although 0.75 consistency reflects weak evidence, there is still some evidence of the components in all instances of successful implementation, as the coverage level is relatively high, highlighting the importance of the findings. The data focus component failed to pass the test of necessity, but only by a very narrow margin.

Table 11 Results of Necessity Tests
Causal Condition             Consistency   Coverage
A: Assistive System          0.787         0.652
B: Behavior Activation       0.757         0.735
C: Culture Building          0.795         0.726
D: Data Focus                0.744*        0.777
I: Interactions              0.893         0.677
~A: No Assistive System      0.497
~B: No Behavior Activation   0.527
~C: No Culture Building      0.484
~D: No Data Focus            0.614
~I: No Interactions          0.443
*Coverage calculated because of proximity to threshold
It is important to note that additional tests of necessity were conducted to determine if
the causal conditions were necessary for the set of cases without evidence of the
outcome based on the negation of the fuzzy set for successful implementation (results
not shown). In these tests, the only condition found to be necessary was the negation
of the data focus component (consistency of 0.794 with 0.760 coverage). Therefore, the
finding holds that the absence of the data focus component will lead to an unsuccessful
implementation of the CCM – justifying the inclusion of this element in the ABCD
implementation framework.
5.2.2 Sufficient Conditions of Successful Implementation
Sufficiency allows the researcher to gain an understanding of which causal conditions lead to the outcome of interest. Two models were tested to determine the sufficiency of conditions at consistency thresholds from 0.75 to 1.00, as necessary. A frequency threshold of 1 case was used for both models, as discussed in the methods. A final solution was determined that represents the best tradeoff between consistency and coverage.
The first model served to uncover evidence of any combination of the five causal conditions (A, B, C, D, I) that results in a successful implementation of the CCM. The final solution was selected by estimating solutions at a series of consistency cutoff values from 0.75 to 0.90. A close examination of the results showed that increasing the consistency threshold to 0.90 did not result in significant decreases of model coverage for the substantive increase in consistency. Table 12 presents the results of the fuzzy set analysis and shows the configurations that are sufficient for achieving successful implementation; in this case there is only one. I used the notation introduced by Ragin and Fiss [138]. Large circles indicate core conditions, those present in the parsimonious solution. Small circles represent peripheral conditions, those only present in the complex solution. Full circles denote conditions that must be present, and crossed-out circles represent those conditions that must be absent. Blank spaces indicate that we do not care whether the condition is present or absent.

Table 12 Configurations for Achieving Successful Implementation
                               Configuration 1
Assistive System               present (core)
Behavior Activation            present (core)
Culture Building               present (core)
Data Focus                     present (core)
Interactions                   present (peripheral)
Consistency                    0.91
Raw Coverage                   0.53
Unique Coverage                0.53
Overall Solution Consistency   0.91
Overall Solution Coverage      0.53
As we can see in the above table, the four components of the ABCD implementation
framework (assistive system, behavior activation, culture building, and data) are core
causal conditions, while interaction is only a peripheral condition. The result finds that
there is only one path to a successful implementation, the presence of change activities
associated with all five elements of the ABCD implementation framework. This finding
is consistent with hypothesis 2b.
In accordance with QCA protocol, the second model assessed the potential empirical
relationship between the five elements of the ABCD implementation framework as
causal conditions and the absence of a successful implementation of CCM represented
by the negation of the successful implementation fuzzy set. The final solution was
determined by close examination of solutions at 0.75 and 0.80 consistency thresholds.
Higher consistency thresholds could not be tested since no results met the requirement.
In this case, results are reported based on a 0.75 consistency threshold. Raising the threshold to 0.80 led to a significant decrease in model coverage (a 38.6% decrease) without a substantive increase in consistency: the overall solution consistency at the 0.75 threshold is 0.78, while at the 0.80 threshold it is only 0.80. The overall coverage of the solution presented is 0.75, a relatively strong value highlighting the importance of this finding. Configurations are shown in Table 13. Again, I used the
notation introduced by Ragin and Fiss [138] – large circles indicate core conditions, small
circles indicate peripheral conditions, full circles represent the presence of the
condition, and crossed out circles represent the negation of the condition.
Table 13 Configurations for Failed Implementations
                               Config 1   Config 2   Config 3   Config 4   Config 5
Assistive System                          absent     present    present    absent
Behavior Activation            absent     absent     absent     present    present
Culture Building               absent     absent     absent     absent     present
Data Focus                                absent     present    absent     present
Interactions                   absent                                      present
Consistency                    0.82       0.76       0.77       0.77       0.77
Raw Coverage                   0.45       0.37       0.19       0.29       0.14
Unique Coverage                0.02       0.04       0.01       0.17       0.04
Overall Solution Consistency   0.78
Overall Solution Coverage      0.75
Condition entries follow the path descriptions in the text; blank cells indicate conditions that may be present or absent.
The results show that five possible paths lead to a failed implementation, as defined by the inverse of the fuzzy set for successful implementation. The first path shows that an organization that does not address the behavior activation, culture building, and interaction components will have an unsuccessful implementation. The second path shows
interaction component lead to an unsuccessful implementation. The second path shows
that the absence of the assistive system, behavior activation, culture building, and data
focus components results in an unsuccessful implementation. The third path shows that
the presence of assistive systems and data focus components combined with a lack of
activities relating to behavior activation and culture building will result in an
unsuccessful implementation. The core conditions of the first three paths include the
absence of behavior activation and the absence of the culture building component. The
fourth path includes three core conditions: presence of assistive system, presence of
behavior activation, and absence of the data focus component. The path shows that the
core conditions in addition to one peripheral condition, a lack of implementation of the
culture building component, leads to an unsuccessful implementation. The final possible path to an unsuccessful implementation requires that an organization fail to address the assistive system component while addressing the remaining elements (behavior activation, culture building, data focus, and interactions). The core conditions of this path are the absence of the assistive system component and the presence of the data focus component.
The absence of each of the five elements is represented in at least one of the
configurations. This result shows that eliminating any one element from the framework
would cause the framework to be incomplete. In addition, one or more of the elements
of the ABCD implementation framework must be absent in each of the possible paths
leading to an unsuccessful implementation. More importantly, the core conditions for
each configuration include the absence of at least one of the framework elements.
This supports my hypothesis that failing to address any one of the components will
cause an implementation to be unsuccessful.
5.3 Statistical Analysis Results
The statistical analysis conducted consisted of two methods of analysis. The first
employed correlational analysis to understand the effects of the ABCD implementation
score on the implementation outcome, ACIC change score. The second estimated the
effects of the ABCD score on a variety of patient health outcomes for asthma and
diabetes patients using random effects modeling.
5.3.1 Effects on Implementation Outcomes
The total ABCD score and the ACIC change score show significant positive correlation (r
= 0.48, p = 0.008). Because the result was significant, additional exploratory analyses
were conducted to understand the relationship between each of the elements of the
ABCD framework and the implementation outcome. Table 14 shows the correlation
between the ACIC change scores and the ABCD scores for the components taken
together, the components individually, the interactions taken together, and the set of
interactions from one component to all other components. P-values are provided in the
table. In general, the ABCD scores for the change efforts were positively correlated with
the site’s pre-post differences in the self-reported ACIC scores, with many of those
correlations significant.
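The correlations reported in this section are ordinary Pearson coefficients computed from site-level scores. As an illustrative sketch (the function name and toy data are my own, not drawn from the study data):

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists,
    e.g. sites' ABCD scores vs. their pre-post ACIC change scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Perfectly linear toy data gives r = 1.0
print(round(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]), 10))  # 1.0
```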
As hypothesized, the relationship between the ACIC change score and the components
taken together (r = 0.49, p = 0.0076) was stronger than the correlation between the
ACIC change score and the interactions taken together (r = 0.46, p = 0.0113) although
only slightly. The most significant correlation was found between the culture building component and the ACIC change scores (r = 0.47, p = 0.0098); the culture building component is often the hardest element of the ABCD implementation framework in which to effect change, as shown by the qualitative analysis results. The data focus component scores showed the weakest positive correlation (r = 0.29) and were the only component of the ABCD implementation framework not significantly correlated with the self-reported ACIC change scores (p = 0.1227).

Table 14 Correlations between ABCD Scores and ACIC Change Scores
ABCD Implementation Framework Element Score   Correlation with ACIC Change Score   p-value
ABCD as a whole                                0.48                                 0.008**
ABCD components as a whole                     0.49                                 0.0076**
Assistive System                               0.45                                 0.0138*
Behavior Activation                            0.46                                 0.0101*
Culture Building                               0.47                                 0.0098**
Data Focus                                     0.29                                 0.1227
ABCD Interactions as a whole                   0.46                                 0.0113*
Assistive System to Others                     0.47                                 0.0106*
Behavior Activation to Others                  0.37                                 0.0652
Culture Building to Others                     0.36                                 0.0499*
Data Focus to Others                           0.25                                 0.1845
*Correlation is significant at the 0.05 level (2-tailed).
**Correlation is significant at the 0.01 level (2-tailed).

There was a positive correlation for each of the four sets of interactions studied. There was a significant correlation between the interactions from the assistive systems component to all other components and the ACIC change score (r = 0.47, p = 0.0106) and between the interactions from the culture building component to all other components and the ACIC change score (r = 0.36, p = 0.0499). The correlation between
the interactions from the assistive system component to all other components was also
the highest among the correlations found in the breakdown of the interactions. The
weakest correlation was associated with the interactions stemming from the data focus
component and the ACIC change scores (r = 0.25, p = 0.1845).
Based on the results of the correlation analysis between the elements of the ABCD
implementation framework and the pre-post differences in the self-reported ACIC
scores, hypothesis 3a has been validated by the positive correlations found. There is
clear evidence that an implementation that involves more activities related to the ABCD
implementation framework shows better improvement in implementation outcomes for
the CCM implementation. This analysis provides justification for the inclusion of all
framework components and their interactions in the ABCD implementation framework
as each promotes success in implementation efforts.
5.3.2 Effects on Patient Health Outcomes
Patient health outcomes for patients treated at sites that participated in collaboratives
to improve asthma and diabetes care were treated as dependent variables in
generalized mixed linear models. Unadjusted models were developed first to
understand the basic relationship between the health outcome (level 1 variable) and the
level 2 predictor, ABCD score. Then, models were developed that adjusted for patient
level covariates including age, gender, education level, income, insurance type, disease
severity, and number of comorbidities. A second adjusted model was tested for each
outcome dependent variable that included the above covariates and an additional site
level covariate, organization type, which represented whether the organization was
publicly funded or not. Model comparisons were conducted between the two adjusted models using the likelihood ratio test, supplemented with comparisons of the fit statistics provided by the SAS output. In the majority of models, we could not reject the null model excluding the organization type covariate, and the fit statistics were more often than not smaller in the model excluding organization type. Therefore, this covariate was excluded from the analysis and the reported results.
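When the two adjusted models differ by a single covariate, the likelihood ratio comparison has a simple closed form. The following is an illustrative Python sketch rather than the SAS procedure actually used, and the log-likelihood values are hypothetical; it relies on the fact that the chi-square(1) survival function equals erfc(sqrt(x / 2)):

```python
import math


def lr_test_df1(loglik_null, loglik_full):
    """Likelihood ratio test for one added covariate (df = 1).
    Statistic: 2 * (llf - ll0); p-value from the chi-square(1)
    survival function, which equals erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_full - loglik_null)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value


# Hypothetical log-likelihoods: the added covariate barely improves fit,
# so we fail to reject the simpler model at the 0.05 level.
stat, p = lr_test_df1(-100.0, -99.5)
print(round(stat, 3), round(p, 3))
```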
I also assessed the benefits of applying a random slope parameter as well as a random
intercept to account for site level clustering according to certain patient level predictors.
The variance component for slopes was quite small in all models tested, and the null
hypothesis that the slopes do not differ across sites could not be rejected. The findings
suggested the use of the simple model, in which intercepts only varied across sites.
5.3.2.1 Effects on Asthma Patient Health Outcomes
As mentioned, the data for the asthma sites included 508 patients who received
treatment at 11 sites with varying degrees of success regarding the implementation of
the CCM. In addition to testing the addition of the organization type covariate, I
conducted model comparisons between the base adjusted model and an adjusted
model including all the standard covariates (age, gender, education level, income,
insurance type, disease severity, and number of comorbidities) and an additional
covariate representing the age group of the patient being treated. Based on the model
comparisons, we chose to include this covariate in the results, as it improved the fit of the model and provided a clearer understanding of the effects. The final results for
both the unadjusted and the adjusted model are shown in Table 15. The first column
displays the dependent variable upon which the effects of the ABCD score are being
studied. The second column indicates the model type that was fit using PROC GLIMMIX.
The remaining columns display the value of the coefficient for the ABCD predictor
variable on the 10 point scale.
The coefficients for the three model types should be interpreted as follows:
Logistic: For every 10 unit increase in the ABCD score, the odds of patient
involvement in care increases by 4.422% (exponentiation of the coefficient
estimate provided in Table 15) controlling for age, age group, sex, race, income,
insurance type, severity, and comorbidities.
Gaussian: For every 10-unit increase in the ABCD score, patient knowledge of asthma care increases by 0.02451, controlling for age, age group, sex, race, income, insurance type, severity, and comorbidities.
Poisson: For every 10-unit increase in the ABCD score, the number of ER visits in the last six months increases by 12.221% (exponentiation of the coefficient estimate provided in Table 15), controlling for age, age group, sex, race, income, insurance type, severity, and comorbidities.

Table 15 Parameter Estimates for ABCD Score Effects on Asthma Outcomes
Patient Health Outcome                Model      Unadjusted β(ABCD)   p-value    Adjusted(a) β(ABCD)   p-value
Patient involvement in care           Logistic    0.03214             0.5389      0.04327              0.338*
Patient knowledge of asthma care      Gaussian   -0.01560             0.7502      0.02451              0.2345*
Access to care                        Logistic   -0.02250             0.6998     -0.01990              0.7474
Patient self-efficacy                 Gaussian   -0.01900             0.4259*     0.01474              0.1814*
Quality of life: asthma specific      Gaussian    0.51130             0.5041      0.4741               0.1476*
Quality of life: overall              Gaussian    1.85000             0.1337*     0.1518               0.4382*
ER visits (last 6 mos)                Poisson     0.10010             0.0505*     0.1153               0.0029**
Outpatient visits (last 6 mos)        Poisson    -0.00890             0.4129*    -0.0083               0.4879*
(a) Adjusted for age, age group, sex, race, income, insurance type, severity, comorbidities
*Significant at the 0.50 level
**Significant at the 0.05 level
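The percentage interpretations for the logistic and Poisson models come from exponentiating the coefficient. An illustrative Python check (the helper name is my own):

```python
import math


def pct_change_per_10_units(beta_10pt):
    """Percent change in the odds (logistic) or expected count (Poisson)
    for a 10-unit increase in the ABCD score, given a coefficient already
    expressed on the 10-point scale: 100 * (exp(beta) - 1)."""
    return 100.0 * (math.exp(beta_10pt) - 1.0)


# Adjusted coefficients from Table 15: patient involvement in care
# (logistic) and ER visits in the last six months (Poisson).
print(round(pct_change_per_10_units(0.04327), 3))  # 4.422 (% increase in odds)
print(round(pct_change_per_10_units(0.1153), 3))   # 12.221 (% increase in ER visits)
```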
I focused my analysis on the results of the adjusted model. The only relationship found
to be statistically significant at the 0.01 level was between ABCD score and the number
of ER visits in the past six months. I found that ER visits actually increase by 12.221% (p-
value = 0.0029) with a ten-unit increase in the site’s ABCD score. This is the opposite of the expected effect, which will be discussed further in the next chapter.
Due to the significance of this relationship, I assessed the relationship between the
number of ER visits in the last six months and the varying components of the ABCD
score. Table 16 shows the results of this analysis.
Table 16 Relationships between ER Visits and Framework Components

Independent Variable     Unadjusted            Adjusted^a
                         β(Indep)   p-value    β(Indep)   p-value
Total ABCD Score         0.10010    0.0505     0.1153     0.0029*
Components               0.1517     0.0459*    0.1699     0.0071*
Interactions             0.2793     0.0679     0.3313     0.0065*
A                        0.5135     0.0272*    0.5042     0.0096*
B                        0.9556     0.0185*    0.9733     <0.0001*
C                        0.4254     0.1546     0.6552     0.0164*
D                        0.3513     0.1404     0.4118     0.0455*
A to ALL                 0.6192     0.0549     0.6481     0.0145*
B to ALL                 1.176      0.0412*    1.189      0.0092*
C to ALL                 -0.5265    0.5674     0.44       0.6443
D to ALL                 0.768      0.1035*    0.9799     0.0141*
a Adjusted for age, age group, sex, race, income, insurance type, disease severity, comorbidities
*Significant at the 0.05 level

The interactions, taken together, have a stronger and more significant relationship with the number of ER visits in the past six months (0.3313, p=0.0065) than the components as a group (0.1699, p=0.0071) after controlling for covariates. This was the opposite of my initial hypothesis. The behavior activation component alone had the strongest effect, which was also the opposite of my hypothesis. The significance and strength of the effect of the behavior activation component provide validation for the decision to weight each component equally in the analysis. The results also show that the sum of the interactions from the behavior activation component to all other components has the strongest and most significant effect (1.189, p=0.0092) on the number of ER visits in the past six months controlling for covariates. The only element not significantly associated with the number of ER visits in the last six months at the 0.05 level after adjusting for covariates is the sum of the interactions from the culture building component to all other components (p=0.6443).
Although the only significant relationship found was between the ABCD score and the
number of ER visits in the last six months, it has been suggested that for quality
improvement research p-values should be relaxed from 0.05 or 0.01 to a larger value to
highlight the direction of improvement in the results [144]. As mentioned in section 4.3.3, I relaxed the p-value to 0.5 in order to assess the direction of the relationship
between the dependent patient health outcome variables and the ABCD score. Based
on this relaxation, the direction of the relationship between all the outcome variables
and the ABCD score should be evaluated except for access to care. As shown in Table
15, the relationship between ABCD score and patient involvement in care, patient
knowledge of asthma care, patient self-efficacy, and both quality of life indicators is
positive when adjusted for covariates, as expected. The negative relationship
between ABCD score and number of outpatient visits in the last six months after
adjustment for covariates was also expected. As discussed, the relationship between
ABCD score and the number of ER visits in the past six months after adjusting for
covariates was positive and the only truly significant relationship. Although there is
much evidence in support of hypothesis 3b when applying a relaxed p-value, it is only partially
valid for patient health outcomes relating to asthma care.
5.3.2.1 Effects on Diabetes Patient Health Outcomes
As mentioned, the data for the sites that implemented the CCM focusing on improving
care for diabetes patients included 442 patients who received treatment at 6 sites with
varying degrees of success regarding the implementation of the CCM. The final results
for both the unadjusted and the adjusted model are shown in Table 17. The adjusted
model is adjusted for the following patient covariates: age, gender, education level,
income, insurance type, disease severity, and number of comorbidities. The first
column displays the dependent variables upon which the effects of the ABCD score were
studied. The second column indicates the model type that was fit using PROC GLIMMIX.
The remaining columns display the value of the coefficient for the ABCD predictor
variable on the 10 point scale.
Table 17 Parameter Estimates for ABCD Score Effect on Diabetes Outcomes

Patient Health Outcome            Model     Unadjusted            Adjusted^a
(dependent variable)                        β(ABCD)    p-value    β(ABCD)    p-value
Sum of self-care actions          Poisson   -0.0043    0.5257     -0.00160   0.8448
Overall rating of provider        Gaussian  0.00726    0.7908     -0.01300   0.6276
Patient Education of Diabetes     Gaussian  -0.0541    0.1385*    -0.04730   0.2531*
Adherence:
  Physical Activity               Gaussian  0.0095     0.5011     0.00864    0.3209*
  Diet                            Gaussian  -0.0011    0.88       0.00215    0.8059
  General                         Gaussian  -0.005     0.7092     -0.00130   0.9103
Patient involvement with care:
  Perceptions of Doctor/Nurse     Gaussian  0.01311    0.1143*    0.01028    0.1701*
  Treatment plans and goals       Gaussian  -0.004     0.7727     0.00153    0.9149
Knowledge of glycemic control     Gaussian  0.02481    0.1835*    0.01219    0.3562*
Knowledge of eye and foot care    Gaussian  0.00093    0.8753     0.00088    0.9021
Self-efficacy                     Gaussian  0.00581    0.3267*    0.00311    0.6178
Resource Utilization (Last 6 mos.)
  Number of ER visits             Poisson   -0.0626    0.2092*    -0.05460   0.1297*
  Number of hospitalizations      Poisson   -0.0548    0.0387**   -0.07100   0.0572*
  Number times got medical care   Poisson   -0.0079    0.3217*    -0.02150   0.0755*
a Adjusted for age, sex, race, income, severity, and comorbidities
*Significant at the 0.50 level
**Significant at the 0.05 level

The coefficients for the two model types should be interpreted as follows:
Poisson: For every ten unit change in the ABCD score, the number of ER visits in the last six months decreases by 5.31% (exponentiation of the coefficient estimate provided in Table 17) controlling for age, sex, race, income, insurance type, severity, and number of comorbidities.
Gaussian: For every ten unit increase in the ABCD score, patient adherence to care regarding physical activity increases by 0.00864 controlling for age, age group, sex, race, income, insurance type, severity, and comorbidities.
Again, I relaxed the p-value to 0.5 in this analysis in an effort to highlight the direction of
the relationships between the dependent patient health outcome variables and the
ABCD score for this quality improvement study. Although none of the adjusted models
are significant at the stricter p-values, patient education of diabetes factors, patient
adherence to care regarding physical activity, patient involvement with care as
perceived by the doctors and nurses, patient knowledge of glycemic control, and the
three resource utilization parameters are significant at the 0.5 level. As expected, there is a positive relationship between the ABCD score and patient adherence to care regarding physical activity, patient involvement with care as perceived by the doctors and nurses, and patient knowledge of glycemic control, and a negative relationship between the ABCD score and the resource utilization variables. The relationship between the ABCD score and patient education of diabetes factors, however, was negative. Again, there is mixed evidence for Hypothesis 3b for patient health outcomes relating to diabetes care.
5.4 Summary of Results
In completing the analyses, I answered the following research questions:
1. Are the four components and the interactions of the ABCD implementation
framework employed in other successful implementations of EBP?
Hypothesis 1: All four components and their interactions will be represented in
other EBP implementation efforts.
Hypothesis 1 was validated through qualitative coding analysis of the qualitative data, including monthly reports of change activities, team surveys, and team interview data from 34 implementations of the CCM. There was evidence of all four components of the ABCD implementation framework and their interactions in the qualitative data relating to change activities provided by the 34 sites that participated in the varying collaboratives to implement the CCM. Significantly, the framework sufficiently described all the change activities undertaken by the organizations. In addition, I found that the successful sites did not address the components in any particular order, justifying my relaxation of the initial conceptual model developed from the central line checklist implementation case study. I also found no underlying outside factors associated with the least successful implementations; an implementation can be successful regardless of the outer setting in which it takes place, consistent with my assumption.
2. Are the components and interactions included in the ABCD implementation
framework either necessary or sufficient in bringing about a successful
implementation of EBP?
Hypothesis 2a: The components and their interactions will be necessary in
bringing about a successful implementation of EBP.
The interactions component was found to be necessary with a consistency of
0.89 based on the QCA analysis. The assistive system, behavior activation, and
culture building were found to be necessary in bringing about a successful
implementation of EBP at a 0.75 consistency level, which is a weak level of
necessity. Although the data focus component itself was not found to be
necessary, the negation of the data focus component was found to be necessary in bringing about an unsuccessful implementation, implying that it remains a crucial element of the ABCD implementation framework.
Hypothesis 2b: The components and their interactions will be sufficient when
considered as a combination of conditions, but will not be sufficient individually.
At a 0.90 consistency level, the components and their interactions taken
together are sufficient to bring about a successful implementation based on the
QCA analysis. It is also important to note that all causal combinations leading to
the absence of a successful outcome required the negation of one or more of the
framework components. This supports my hypothesis that failing to address any
one of the components will cause an implementation to be unsuccessful.
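The consistency values cited here (0.89 for necessity of the interactions, 0.90 for sufficiency of the combined conditions) are the standard fuzzy-set QCA measures. A minimal sketch of the two formulas, using invented membership scores rather than the study's calibrated data:

```python
def necessity_consistency(condition, outcome):
    """Consistency of 'condition is necessary for outcome':
    sum of min(x_i, y_i) over the sum of outcome memberships."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

def sufficiency_consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum of min(x_i, y_i) over the sum of condition memberships."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical fuzzy memberships for four sites (not the study's data)
interactions = [0.9, 0.7, 0.4, 0.8]  # membership in "addressed interactions"
success      = [0.8, 0.6, 0.3, 0.9]  # membership in "successful implementation"

nec = necessity_consistency(interactions, success)
suf = sufficiency_consistency(interactions, success)
```

A condition is conventionally judged necessary or sufficient when its consistency meets a chosen threshold (0.90 is a common benchmark, with 0.75 treated as weak).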
3. Does an implementation project that involves more activities related to the ABCD
implementation framework also show better improvement in implementation
and patient outcomes?
Hypothesis 3a: An implementation that involves more activities related to the
ABCD implementation framework will show better improvement in
implementation outcomes.
The Pearson correlation analysis between the ABCD score and the ACIC change
score showed positive and significant correlation between the two variables –
validating hypothesis 3a. In addition, positive correlation was shown between
the individual components of the ABCD score and the ACIC change score. The
culture building component showed the largest significant positive correlation
with the ACIC change score when compared to the other three components.
When assessing the breakdown of the interaction component, I found that the
interactions between the assistive system component and all other components
showed the highest and most significant positive relationship with the ACIC
change score when comparing the four sets of interactions.
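The correlation test underlying hypothesis 3a can be written out directly. A minimal sketch with invented site-level scores (abcd_scores and acic_change are hypothetical placeholders, not the study's actual data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ABCD scores and ACIC change scores for five sites
abcd_scores = [42, 55, 61, 70, 88]
acic_change = [1.1, 1.8, 2.0, 2.6, 3.4]

r = pearson_r(abcd_scores, acic_change)  # close to +1 for these values
```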
Hypothesis 3b: An implementation that involves more activities related to the
ABCD implementation framework will show better improvement in patient health
outcomes.
Based on the multilevel modeling of sites that focused on implementing the CCM
in order to improve care for asthma and diabetes patients, there was mixed
evidence to suggest that an implementation that involves more activities relating
to the ABCD implementation framework results in better patient health
outcomes. There was minimal evidence based on traditional findings using strict
p-values of 0.01 or 0.05, but when applying a relaxed p-value the ABCD score
tended to have the appropriate effect on the patient health outcomes. Although
the majority of patient health outcomes were in the expected direction, one
health outcome for each of the diseases studied (number of ER visits in the last
six months for asthma patients and patient education levels for diabetes
patients) showed the opposite of the expected relationship.
Chapter Six: Discussion
Eliminating the gap between research evidence and clinical practice is a crucial area of interest in the growing field of implementation research. It has been said that it takes 17 years to translate only 14% of original research from basic science to widespread implementation as part of standard clinical care [1]. NIH identified three primary roadblocks that cause delays on the road to implementation; one of these is the translation to practice: the dissemination and implementation of EBP [30]. In order to overcome this roadblock, researchers must gain an understanding of what makes an implementation successful [11–13] and how to translate that success into a framework that provides widespread guidance for the development of future implementation strategies [14].
Through this research, I aimed to characterize the internal context of a healthcare
organization that affects the likelihood of success in implementing evidence based
practice. I began by conducting a critical evaluation of the literature relating to the
central line checklist implementation, a prominent and successful implementation [7],
[19–29]. The results of this evaluation provided the backbone of the framework
developed, which was then supplemented through a comparison to other
implementation frameworks [33], [35], [38], [43], [44], [49], [52], [58], [61–63], [90] to
ensure that a complete picture of the implementation system was developed. The
application of the systems perspective [95–99] transformed the ABCD implementation
framework from a simple understanding of what makes an implementation successful
to a cohesive view of the internal contextual factors and their relationships that drive
the success (or failure) of an implementation of an actionable EBP.
Other frameworks, identified through the comparative literature review described in section 3.2, have been developed to describe the factors critical for efforts focusing on the implementation of an EBP. Unlike these existing frameworks, the ABCD implementation framework provides an internal system perspective for implementations, complete with a comprehensive validation. The majority of existing frameworks include factors relating to the external setting under which an implementation is being conducted [33], [35], [38], [43], [58], [62], [63] and fail to address the interconnections of the individual elements included in the frameworks [35], [43], [44], [49], [52], [58], [62], [63], [90]. In addition, only one framework addressed the actual care process that is affected by an implementation [35]. Because of its focus on the internal implementation setting, the ABCD implementation framework is more generalizable, providing an organization with levers that it actually has the ability to affect when developing an implementation strategy.
6.1 Evidence of Framework Elements
For each component, clear attributes were developed based on knowledge of the
central line checklist implementation and supplemented through a review of existing
theoretical frameworks and literature relating to theories regarding each framework
component specifically. The qualitative validation highlighted the comprehensive
nature of the list of attributes developed as one or more component attributes
sufficiently described each of the change activities conducted by the 34 different
organizations implementing the CCM. The qualitative analysis also provided an
understanding of the overarching types of change activities required to address the
individual component attributes of the ABCD implementation framework and the
interactions between these components for the implementation of the CCM, one
example of a well-studied EBP.
The assistive system component of the ABCD implementation framework focuses on the
creation of enabling or removal of impeding tools, processes, and policies to facilitate
implementation. It is defined by three attributes: the supply of materials for EBP, the
process redesign for EBP, and the EBP data collection system. A deeper understanding
of the attributes can be gained through identifying the overarching types of change
activities that occurred throughout the implementations of the CCM. The activities associated with the materials attribute involved developing patient education materials and providing those materials, along with the items necessary for self-management, to the patients. The activities associated with the process redesign attribute varied the most, but often required the implementation of a planned visit and a protocol to guide providers in conducting such visits. Activities relating to the EBP data collection system attribute included the development of data entry forms, both paper and electronic depending on the type of medical record employed by the site, in an effort to collect data relating to the performance measurement associated with different disease types and the CCM elements. Activities relating to implementing data entry processes and assigning data entry tasks to varying employees were also associated with the EBP data collection system attribute. The types of activities implemented relating to the assistive system component will vary based on the nature of the known EBP being implemented. For example, in the case of the central line checklist implementation, the materials were not for patient education but rather the materials required for the actual central line placement. The process redesign and EBP data collection implemented were also different in that they focused on the use of a checklist as a protocol for both process redesign and EBP data collection.
The behavior activation component serves to increase awareness and understanding of
the performance of the EBP. The attributes associated with the behavior activation component, awareness and understanding of the EBP, required change activities relating, respectively, to the dissemination of guidelines on chronic illness care and the changes being made for the implementation of the CCM, and to educating the providers through in-services. In addition, the awareness of EBP attribute was
associated with activities that indicated the development of triggers or alerts to remind
providers of the new process of care required by the CCM. These change activities
represent the overarching types of activities conducted for the implementation of the
CCM. When other EBP are the focus of the implementation, healthcare organizations
should activate awareness and behavior stimuli for that particular EBP.
The culture building component highlights the need to develop a culture that supports
and increases coordination and quality of care through a readiness and commitment to
change efforts. The component is defined by four attributes: culture of coordination,
quality culture, readiness to change, and commitment to change. In the CCM
implementations, activities associated with the culture of coordination attribute
included actions to improve communication and coordination across groups, both
internal and external, such as developing teams to aid in implementation efforts. The most difficult activities to code were those relating to the quality culture attribute, as the majority of activities were associated with this attribute because the organizations were implementing the CCM to improve the quality of care provided to patients at the sites. Activities associated with this component were generally related to improving safety culture, participating in other collaborative efforts, increasing patient satisfaction with care, and increasing understanding of patient needs and follow-up with patients. Readiness to change was characterized by change activities relating to organizational commitment to change, including hiring new employees, the appointment of physician champions, and recognizing the problems with the care provided. Activities associated with the commitment to change attribute centered on continued support for the implementation through funding and efforts to spread the CCM throughout the organization. Efforts relating to the culture building component showed the least variation between the two EBP implementations that were studied in this research. Efforts vary primarily due to the varying groups that may be involved in the specific EBP implementation.
The data focus component emphasizes the use of data to create tension for change and
an understanding of the effects of the change efforts. This component is characterized
by performance measurement, accessibility of data for EBP, and the completeness of
information. In each of the three collaboratives in which the varying organizations
participated, clear performance measures were defined in order to track improvements
to patient care. Therefore, the activities associated with the performance measurement attribute centered on the maintenance and acknowledgement of the performance measures, which varied by the disease type being studied. Performance measurement was also conducted to track compliance with the process changes implemented, such as elements of the planned visits or patient education sessions. The accessibility of data for EBP attribute was characterized by change activities that included the development of reports and other methods to allow providers access to the data relating to the varying performance measures. The completeness of information attribute was associated with change activities that revised, modified, or edited the data being collected.
Efforts to address the data focus component should vary based on the nature of the EBP, as the performance measurement, accessibility of data, and completeness of information should be driven by the metrics and data tracking needs defined by the focus of the EBP.
Just as with the component attributes, the qualitative analysis of the CCM
implementation provided an understanding of overarching types of activities that relate
to the interactions between components. Although interactions will vary based on the
EBP being implemented, the analyses of the overarching constructs relating to the
interactions in this implementation provide insight into the interaction element of the
ABCD implementation framework. The change activities associated with the interaction
from the assistive system to the behavior activation component tended to be associated
with using elements of the supply of materials or the data collection forms to provide a
trigger to the provider that created increased awareness within the providers. The
interaction from the assistive system component to the culture building component was
typically characterized by change activities relating to process changes that increased
the care communication and coordination between providers. The change activities associated with the interaction from the assistive system to the data focus component were often a reflection of the natural link between the EBP data collection system and the accessibility of data for EBP required by an organization that is data driven. In addition, this type of interaction was associated with change activities in which changes to the process or supply of materials caused increased compliance with the CCM implementation performance measures.
The interaction from the behavior activation component to all other components (assistive system, culture building, and data focus) was driven by increased provider awareness and understanding of the EBP caused by change activities related to the behavior activation component. The behavior activation resulted in providers who were aware of process changes and were willing to make those changes due to increased readiness to change resulting from the recognition of the discrepancy between current care and best practice. In turn, the providers’ behavior activation affected performance measures relating to compliance.
The interaction from the culture building component to the assistive system component was typically a result of improvements to the process of care due to the increased coordination between providers and the support of senior leadership, through hiring, to make sure that staff or equipment was available for assistive system changes to occur. The interaction from the culture building component to the behavior activation component was typically associated with the actions of physician champions and leaders to ensure that providers were educated regarding the guidelines and implementation. The interaction from the culture building component to the data focus component was a result of coordination of providers to ensure accessibility and completeness of information and to reduce the time required of providers by shifting responsibilities.
Having access to data and the ability to quickly generate reports enables the tailoring of visits and appointment scheduling (process redesign) based on patient need, which indicates the existence of the interaction from the data focus component to the assistive system component. Addressing the varying attributes of the data focus
component also served as a feedback mechanism to providers to increase
understanding of the effect of the changes being undertaken. The availability of data
and the tracking and maintenance of performance measures also allowed the
identification of discrepancies that led to increased focus on quality, readiness to
change, and commitment to change – the interaction from the data focus component to
the culture building component. The components, their attributes, and the interactions provide an outline both for leaders in health care organizations planning an implementation strategy, who must understand and assess these elements to successfully address the components, and for researchers studying quality improvement and implementation science, to guide future research. For the implementation of the CCM, overarching activities relating to each component attribute were uncovered. This framework allows an organization to translate a well-studied EBP into actionable practice and provides the key components to ensure sustainment. By viewing the well-studied EBP through the lens of the ABCD implementation framework, an organization can abstract and adapt specific change activities related to that specific EBP.
In addition, the elements can be utilized to identify facilitators and barriers to an
implementation within an organization, as the qualitative evaluation confirmed that external settings did not contribute to the failure or success of any one implementation.
6.2 Validation of the ABCD Implementation Framework Elements
The ABCD implementation framework is composed of five elements: the four components (assistive system, behavior activation, culture building, and data focus) and their interactions as the fifth element. The three-part methodology for validating the
framework provided justification for each of the framework elements. The findings
across the three types of analyses echo each other and support the inclusion of each
element in the framework. This section discusses the findings across the three types of
analyses for each of the elements.
6.2.1 The Assistive System Component
The attributes of the assistive system component are the most familiar to industrial
systems engineers, but clearly apply in the case of EBP implementations. Change
activities conducted by the organizations were most commonly associated with the
process redesign attribute of the assistive system component. Implementation of
evidence based practice often requires changes to the current care delivery process
[66], [69], [70] so this result was expected. Overall, the assistive system component was
found to be necessary for a successful implementation, which implies that all instances
of successful implementation addressed this component. Echoing this result, the
statistical analyses that separately addressed each framework element showed that there was a significant positive correlation with the ACIC change score and a significant effect on the number of ER visits in the last six months at asthma sites. The results of the
validation strongly support the inclusion of the assistive system component in the ABCD
implementation framework.
6.2.2 The Behavior Activation Component
The behavior activation component was thought to be the weakest element of the
implementation framework as many studies have shown the failure of implementations
that focus exclusively on this type of didactic method [9], [39], [76]. However, the
results of the validation analysis repeatedly showed the evidence of this component as
it was found to be a necessary condition for successful implementation and had a
significant positive relationship with ACIC change scores. Even more importantly, the
behavior activation component scores alone had the largest significant effect on the ER
utilization in the last six months for the patients at the asthma sites when comparing the
four components. This was the opposite of my hypothesis. Although the literature
shows that this component alone can lead to implementation failure, it is still imperative
to include these types of techniques in any implementation strategy as it affects patient
outcomes in implementations that involve change activities relating to other
components as well. Behavior activation may involve changes to the first attribute of
the assistive system, supply of materials for EBP, by providing reminders to providers in
the patient treatment area of the EBP or key components of the disease. This not only
triggers the providers to address the EBP, but also provides information for the patient
while waiting to be seen that would also increase their awareness of their symptoms
and the importance of monitoring them, which has been associated with increased utilization in asthma patients [145]. The significance and strength of the effect of the
behavior activation component provides validation for the decision to weight each
component equally throughout the validation process regardless of the initial belief of
its lack of importance.
6.2.3 The Culture Building Component
The culture building component is often neglected in implementation efforts [7], but
nevertheless remains extremely important. This component is not only necessary for an
implementation to be successful, but also had the highest and most significant positive
relationship between the count of activities relating to this component and the ACIC
change score. Culture is often the hardest element of the ABCD implementation
framework in which to affect change [146], but at the same time change at this level
often leads to sustained change beyond the initial implementation effort [7]. The
results of the multilevel modeling analysis regarding the effect of the cultural
component on the number of ER visits in the past six months for patients being treated
at asthma sites also showed that the culture building component was significant. The
significance of these relationships highlights the importance of the inclusion of this
element and its interactions in the ABCD implementation framework.
6.2.4 The Data Focus Component
The data focus component was initially thought to be the most important component in
which to affect change as properly addressing this component provides the trigger that
stimulates change and commitment to an implementation project [33], [38], [61], [63],
[68], [69]. The qualitative analysis provided evidence to support this belief as change
efforts were commonly associated with the development and maintenance of
performance measurement, the first attribute of the data focus component. The tests for necessary conditions showed that the failure to address the data focus component would lead to an unsuccessful implementation, but did not show that the data focus component itself was necessary in creating a successful implementation, although it missed the necessity threshold by a very narrow margin. The failure of the data focus component to pass the necessity test
may be associated with the nature of participating in a collaborative similar to those in
this study. The collaboratives highlighted performance measures of interest, so there was no motivation for sites to continually assess whether the proper data was being collected
to ensure a complete picture of the implementation beyond those measures required
by the collaborative. Therefore, there was scarce evidence of the third data focus
component attribute, completeness of information, across sites that exhibited all levels
of success. Also, many of the more successful sites already had a strong focus on data in
place and therefore required less focus on this particular change element, which in turn
caused them to have low membership in this fuzzy set. It is important to note that the
lack of the data focus component was the only condition found to be necessary for
cases with unsuccessful implementation, which validates the inclusion of the
data focus component in order to create a successful implementation.
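The necessity and sufficiency tests discussed here follow standard fuzzy-set qualitative comparative analysis (fsQCA), in which consistency is computed from fuzzy membership scores. As a minimal sketch of Ragin's consistency measures, using hypothetical membership scores rather than data from this study:

```python
def necessity_consistency(condition, outcome):
    """Consistency of 'condition is necessary for outcome':
    sum(min(X_i, Y_i)) / sum(Y_i) over all cases (Ragin's measure)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

def sufficiency_consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum(min(X_i, Y_i)) / sum(X_i) over all cases (Ragin's measure)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical fuzzy membership scores for five sites (illustrative only)
data_focus = [0.9, 0.7, 0.2, 0.8, 0.4]  # membership in "data focus addressed"
success = [0.8, 0.9, 0.1, 0.7, 0.6]     # membership in "successful implementation"

print(round(necessity_consistency(data_focus, success), 2))   # 0.87
print(round(sufficiency_consistency(data_focus, success), 2)) # 0.9
```

Against a consistency threshold such as 0.85, a condition whose necessity consistency falls just below the cutoff would, as described above, fail the necessity test "by a very narrow margin."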
The data focus component also showed the weakest positive correlation with the
ACIC change score and was the only component of the ABCD implementation
framework that was not significantly correlated with the self-reported ACIC change
scores. Two factors are believed to have contributed to this. First, scarce evidence of
the third data focus component attribute, completeness of information, was found in
sites across all levels of change effort as mentioned above. Second, sites with low pre-
post differences in self-reported ACIC scores primarily focused their change efforts on
initiating performance measurement as this is a key element of the CCM. The focus of
implementation science literature on the importance of tracking data and using it as a
tool to trigger change and provide a validation for efforts [33], [35], [43], [61] may have
led to sub-optimized implementation efforts in these sites, placing too high a focus on
understanding and tracking the data when in reality a more
comprehensive implementation approach is necessary. Despite the lack of
significance, the direction of the relationship with the ACIC scores was as hypothesized
and the data focus component was shown to be significant in affecting the number of
ER visits in the past six months for patients at asthma sites. Although the results of the
validation were not as strong as for the other components, the results support the
inclusion of this component with an equal weighting to the other components.
6.2.5 The Interactions Element
The final element of the ABCD implementation framework, interactions, has been
neglected in current implementation frameworks. It was my hypothesis that the
systems perspective [95–99] would allow us to eliminate the assumption that the
specific order followed by a successful implementation must be replicated to ensure
success in a different organization. Rather, it is the interaction among
the four components of the ABCD implementation framework that must be
acknowledged and addressed in order to create success. The findings from the
validation analyses confirm this as there was no particular order in which the
components were addressed in successful sites. Even further, the four components and
their interactions were addressed throughout the implementation efforts.
The findings from the remaining validity tests echoed the importance of the interactions
element of the ABCD implementation framework. First, the interactions condition was
the only element found to be necessary at the 0.85 consistency threshold; the remaining
conditions were necessary only at a 0.75 threshold, which is a weak result. Second, overall
the interactions were shown to have a significant positive relationship with the ACIC
change score. In the analysis of the correlation between the sets of interaction
components and the ACIC change score, the interactions from the assistive system
component to all other components were found to have not only the highest but also
the most significant positive relationship with the ACIC change score. The assistive system
component has a natural link to the remaining components, as its attributes
represent the basic elements necessary to function as an
operating healthcare organization. Each attribute naturally affects the other three
because the other three must operate within the confines of the assistive system
component. The correlation between the interactions from the data focus component
to all others was the weakest positive correlation. This may be associated with the
nature of the reporting of the qualitative data. It was easy to see how the other
elements affected the data, but the second-hand effects of how the new data in turn
drove further change were not present in the reporting. The correlation between the
ACIC change score and the interactions stemming from the culture building component
was also positive and significant, despite the lack of evidence of many of these
interactions in the qualitative data. Although this may be the least studied set of
interactions, addressing these is important to ensuring a successful implementation as
shown by the significance of this relationship.
Third, in the breakdown analysis of the relationship between the ABCD components and
the number of ER visits in the past six months, the interactions, taken together, had a
stronger and more significant relationship with the number of ER visits in the past six
months than the components, as a group, after controlling for covariates. This was the
opposite of my initial hypothesis, but draws further support for the inclusion of
interactions in the framework in accordance with the systems perspective advocated by
this work. The results also show that the sum of the interactions from the behavior
component to all other components has the strongest and most significant effect on
the number of ER visits in the past six months at asthma sites after controlling for
covariates. The effects of behavior activation strategies ripple through the organization
when properly supported by change activities that address behavior and one or more of
the other components, a multidimensional approach [39]. The remaining sets of
interactions were all significant, with the exception of the interactions from the culture
building component to all others; that relationship was nevertheless shown to be significant in affecting
implementation improvement.
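The multilevel models used throughout these analyses account for patients being nested within sites. A minimal illustration of why such models are needed is the intraclass correlation (ICC), which measures how much of the variance in an outcome (e.g., ER visits) lies between sites rather than within them; the data below are simulated, not drawn from this study:

```python
def icc(groups):
    """One-way ANOVA estimate of the intraclass correlation for
    equal-sized groups (sites) of observations (patients)."""
    k = len(groups)            # number of sites
    n = len(groups[0])         # patients per site
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    # Mean squares between sites and within sites
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = (sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
                 / (k * (n - 1)))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Simulated ER-visit counts for patients at three sites (illustrative only)
sites = [[0, 1, 1, 2], [2, 3, 3, 4], [0, 0, 1, 1]]
print(round(icc(sites), 3))  # 0.744
```

A high ICC, as in this simulated example, indicates substantial site-level clustering, which is why site-level predictors such as component and interaction activity counts must be modeled at their own level rather than pooled across patients.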
6.3 Validation of the ABCD Implementation Framework as a Whole
Overall, the ABCD implementation framework was determined to characterize the
implementation context of a healthcare organization that affects the likelihood of
success in implementing evidence-based practice. The validation provided clear
evidence of this as the framework components and their interactions sufficiently
described the change activities conducted by each organization. In addition, sufficiency
tests found that the components and their interactions taken together are sufficient in
bringing about a successful implementation with very high consistency level (0.90),
implying that sites that exhibited this set of causal conditions also exhibited a successful
outcome (Hypothesis 2b). If any one of the components is not addressed, the
resulting implementation will not be successful. It was also interesting that in the study
of the paths leading to an unsuccessful implementation, each clause of the results
required that at least one of the components not be addressed in the implementation,
echoing the results of the initial sufficiency test. The final tests of validity showed that
overall an implementation that involves more activities related to the ABCD
implementation framework will show better improvement in implementation
outcomes. In addition, the majority of the results studying the relationship between
ABCD score and the patient health outcomes for asthma and diabetes sites were in the
expected direction. Sites that better addressed the elements of the ABCD
implementation framework showed better patient outcomes relating to patient
involvement in care, patient knowledge of asthma care, patients’ access to care, patient
self-efficacy, overall and asthma specific quality of life, and the number of outpatient
visits in the last six months for asthma sites. Sites that better addressed the elements of
the framework also showed better patient outcomes relating to patient adherence to
care relating to physical activity, patient involvement in care based on the perceptions
of doctors and nurses, patients’ knowledge of glycemic control, and all resource
utilization measures for diabetes sites.
However, the most unexpected results also stemmed from the multilevel modeling
analysis of patient health outcomes. For asthma patients, I found that the relationship
between higher ABCD scores and the number of ER visits in the last six months was
positive, the opposite of what was expected. A meta-analysis of asthma
quality improvement implementations showed that increased asthma care quality
has mixed effects on utilization, as people often become more aware of the
importance of monitoring symptoms closely and therefore seek treatment more
frequently, especially urgent visits [145]. Due to our population characteristics, it makes
sense that this manifested itself in ER visits rather than outpatient visits as the majority
of the patients are low-income and seek urgent care in the ER. Although this
relationship was unexpected, it is a common occurrence in implementations focusing on
improving the quality of asthma care according to the literature. This knowledge
combined with the fact that all the other relationships between the ABCD score and
asthma health outcomes were in the expected direction strengthens the validation of
Hypothesis 3b relating to patient health outcomes.
Just as in the analysis of asthma health outcomes, only one relationship was not in the
direction that was expected in the analysis conducted using diabetes health outcomes
as the dependent variable. The relationship between the ABCD score and the patient
education of diabetes factors was negative instead of positive. This initially seems
contradictory, but patient education is one of many self-management strategies
embedded in the CCM, with the majority of the strategies focusing on a more active role
of the provider in encouraging self-management. It has also been shown that
implementations that focus solely on didactic measures are typically less effective [36],
[39]. Therefore, it is not a surprise that the sites with higher ABCD scores showed a
decrease in patient education of diabetes factors, but still reflected an increase in
patient involvement with care and knowledge of glycemic control. The elements of the
CCM these sites chose to focus on instead of simple patient education proved to be
effective. Assessing the results in this manner changes our initial declaration of a lack of
support for Hypothesis 3b. By viewing the results of the analysis through the lens
provided by the literature, we can see that an implementation of the CCM that involves
more activities related to the components and interactions of the ABCD implementation
framework results in better patient health outcomes for patients with both diabetes and
asthma using a relaxed p-value evaluation approach suggested for quality improvement
studies [144].
6.4 Study Limitations
This study is limited by the fact that only one EBP implementation was used for
the framework development, and only one other EBP implementation was used in the
validation tests. Although these two implementations differ greatly in their setting and
the structure of the EBP, the generalizability of these findings is limited. Attempts were
made to gather additional data relating to other EBP implementations, but many
implementations, especially those on the large scale, do not collect the detailed
qualitative data necessary to conduct this type of validation analysis. In the future,
studying more EBP implementations will strengthen the generalizability of the
knowledge gained from the current research. The validation methodology also had
several limitations that resulted from the nature of the data sets utilized in the
validation, which were discussed in Chapter 4.
6.5 Future Work
Moving forward, it would be of interest to strengthen our finding from the qualitative
case study analysis that order is not important by conducting a pattern recognition
analysis to mathematically confirm the lack of a visual pattern. If a pattern were found
among the order of the sites’ implementation efforts relating to the ABCD
implementation framework, additional tests should be conducted to test if order is
correlated with the implementation outcomes. This analysis would further strengthen
the validation effort by mathematically proving the lack of an order in the change
activities of the sites.
Additionally, future research should focus on expanding beyond the well-defined EBP to
include an element to evaluate the EBP itself and tailor the innovation directly to the
organization. Moreover, the ABCD implementation framework should be translated and
tested to address other types of implementation, for example, the factors associated
with implementing electronic medical records. Implementations of health
information technology (HIT) have been associated with emergent problems, referred to
as unintended consequences [147]. Expansion is
possible by combining an understanding of these unintended consequences with the
knowledge and backbone of the ABCD implementation framework.
To further narrow the gap of implementing EBP, a toolkit could be developed to guide
organizations in handling the different component attributes of the ABCD
implementation framework. As medicine becomes more digitized, more data is
available to be used, not only to drive the tension for change, but also to understand the
exact effects of an implementation. The availability of this data could be incorporated
in the toolkit to quantify change based on the varying attributes. It would also be of
interest to see this framework in action and work with a healthcare organization to plan
an implementation by following and addressing the varying components of the ABCD
implementation framework.
Chapter Seven: Conclusions
This research develops and validates the ABCD implementation framework, which
provides a systems perspective on the internal contextual factors of a healthcare
organization that affect the likelihood of success in implementing EBP. It provides
several contributions to the existing literature. The first is found in the scientific
methods of studying an actual implementation to develop and validate a theoretical
framework. The methods used to develop the ABCD implementation framework were
similar to that of a realist synthesis [148], [149], which allows the review of complex
interventions by accounting for context as well as outcomes in the process of
synthesizing relevant literature. Only a handful of studies have been published using
this method within the health sciences literature that address the implementation of an
EBP [150–153]. In addition, only two of these studies provided a validation through the
use of multiple case studies [150], [152]. It is also important to note that of the other
implementation frameworks identified in this research, only three provided simple
validation of the framework through case study [35], [44], [52].
As mentioned, the ABCD implementation framework was validated through a novel
approach combining a variety of methods including qualitative analysis, qualitative
comparative analysis, and statistical analysis. The validation methodology also utilized a
very different set of cases than the initial development of the framework itself. The
framework was developed using the central line checklist implementation, which
addressed a very specific infection concern and was undertaken in ICU hospital settings.
The second portion of this research validated the ABCD implementation framework
using data from the implementation of the CCM mostly in primary care settings and
related to improving the overall quality of care for chronic diseases.
Second, this research delivers an opportunity for the evolution of the implementation
science literature through the gains in knowledge found in proposing and
validating a framework. These knowledge gains include the novel validation
methodology, both in its comprehensive nature and in the actual methods utilized, which
allowed the identification of necessary and sufficient organizational contexts that lead
to a successful EBP implementation as well as other key aspects of the framework itself.
In addition, the validation methodology allowed an understanding not only of conditions
leading to successful implementation, but also those leading to failed implementation
efforts. Understanding failed implementation efforts is critical to the future of
implementation science research by providing a guide to avoid those types of pitfalls in
the future. The validation showed that the outer setting can be ignored when designing an
implementation effort and that no particular sequence is necessary, as the interactions among
components are more important than the order in which the components are treated.
Future research in implementation science should pay specific attention to capturing
the interactions among components of the implementation, as without this systems
view, implementation success will not be achieved.
Third, the ABCD implementation framework serves as a practical guide for leaders in
health care organizations (primary care or hospital settings) and for quality
improvement and implementation researchers for the design of an implementation
strategy for a well-designed and well-studied EBP. By addressing each of the ABCD
components and their interactions, they can develop an implementation that will utilize
a combination of current strategies for successful implementation unique to their
setting and the well-studied EBP for which they are planning an implementation, thus
effectively creating the multidimensional approach required to successfully implement
and sustain change.
In conclusion, the ABCD implementation framework, developed in an effort to reduce
the gap between research evidence and clinical practice, serves as a validated mechanism
to describe the internal contextual factors of a healthcare organization that affect the
success of an implementation project. Each of the elements was repeatedly proven to
be a critical element in describing the success of implementing EBP through a variety of
methods including qualitative analysis, qualitative comparative analysis, and statistical
analysis. The validation process also uncovered no additional elements that should be
included in this framework.
It is equally important to note that the validation process uncovered evidence that
predicted the failure of implementation efforts. Failing to address any one of the
framework elements leads to a failed implementation effort. Therefore, it is imperative
that an organization addresses each element of the ABCD implementation framework in
order to be successful in their implementation of a well-studied EBP.
The systems perspective taken in this research provides a clear understanding of the
complexity faced by an organization implementing EBP by appropriately defining the
boundaries of the system as the four primary components of the framework and the
interactions between them, which are a crucial element of any system. Future studies
developing implementation projects should focus on addressing the components of the
ABCD implementation framework and acknowledging the importance of the
interactions between them.
References
[1] E. Balas and S. Boren, “Managing clinical knowledge for health care
improvement,” in Yearbook of medical informatics 2000: patient-centered
systems., J. Bemmel and A. McCray, Eds. Stuttgart, Germany: Schattauer, 2000,
pp. 65–70.
[2] E. Tello-Bernabe, T. Sanz-Cuesta, I. Del-Cura-Gonzalez, et al., “Effectiveness of
a clinical practice guideline implementation strategy for patients with anxiety
disorders in primary care: cluster randomized trial,” Implementation science : IS,
vol. 6, no. 1, p. 123, 2011.
[3] S. Woolf, R. Grol, A. Hutchinson, et al., “Potential benefits, limitations, and
harms of clinical guidelines,” BMJ, vol. 318, pp. 527–530, 1999.
[4] R. Grol, J. Dalhuijsen, S. Thomas, et al., “Attributes of clinical guidelines that
influence use of guidelines in general practice: observational study,” BMJ, vol.
317, pp. 858–861, 1998.
[5] G. Worrall, P. Chaulk, and D. Freake, “Effects of clinical practice guidelines on
patient outcomes in primary care: a systematic review,” Can Med Assoc J, vol.
156, pp. 1705–1712, 1997.
[6] S. Martin, I. Del-Cura-Gonzalez, T. Sanz-Cuesta, et al., “Effectiveness of an
implementation strategy for a breastfeeding guideline in Primary Care: cluster
randomized trial,” BMC Fam Pract, vol. 12, no. 1, p. 144, 2011.
[7] P. Pronovost and E. Vohr, Safe Patients, Smart Hospitals. Hudson Street Press,
2010.
[8] R. Grol, “Successes and failures in the implementation of evidence-based
guidelines for clinical practice.,” Medical care, vol. 39, no. 8 Suppl 2, pp. II46–54,
Aug. 2001.
[9] R. Grol and J. Grimshaw, “Evidence-based implementation of evidence-based
medicine,” The Joint Commission Journal on Quality, 1999.
[10] L. Bero, R. Grilli, J. Grimshaw, and E. Harvey, “Closing the gap between research
and practice: an overview of systematic reviews of interventions to promote the
implementation of research findings,” BMJ, vol. 317, pp. 465–468, 1998.
[11] L. V Rubenstein and J. Pugh, “Strategies for promoting organizational and practice
change by advancing implementation research,” J Gen Intern Med, vol. 21, no.
Supp 2, pp. S58–64, 2006.
[12] B. Rabin and R. C. Brownson, “Developing the Terminology for Dissemination and
Implementation Research,” in Dissemination and Implementation Research in
Health, R. C. Brownson, G. A. Colditz, and E. K. Proctor, Eds. New York, NY: Oxford
University Press, 2012, pp. 23–51.
[13] G. A. Colditz, “The Promise and Challenges of Dissemination and Implementation
Research,” in Dissemination and Implementation Research in Health, R. C.
Brownson, G. A. Colditz, and E. K. Proctor, Eds. New York, NY: Oxford University
Press, 2012, pp. 3–22.
[14] B. S. Mittman, “Introduction to Implementation Science in Health: Part 1 -
Overview of Implementation Science [PowerPoint Slides].” 2012.
[15] R. Valdez, E. Ramly, and P. Brennan, “Industrial and Systems Engineering and
Health Care: Critical Areas of Research - Final Report,” AHRQ Publication No 10-
0079, 2010.
[16] “IMPACT: Evidence-based depression care,” University of Washington,
Department of Psychiatry & Behavioral Sciences. [Online]. Available:
http://impact-uw.org/about/key.html. [Accessed: 04-Oct-2012].
[17] L. V Rubenstein, L. S. Meredith, L. E. Parker, N. P. Gordon, S. C. Hickey, C. Oken,
and M. L. Lee, “Impacts of Evidence-Based Quality Improvement on Depression in
Primary Care: A Randomized Experiment,” J Gen Intern Med, vol. 21, no. 10, pp.
1027–1035, 2006.
[18] “Transforming Care at the Bedside,” Institute for Healthcare Improvement, 2012.
[Online]. Available:
http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/TCAB/Pages/defa
ult.aspx. [Accessed: 04-Oct-2012].
[19] S. M. Berenholtz, P. J. Pronovost, P. a. Lipsett, D. Hobson, K. Earsing, J. E. Farley, S.
Milanovich, E. Garrett-Mayer, B. D. Winters, H. R. Rubin, T. Dorman, and T. M.
Perl, “Eliminating catheter-related bloodstream infections in the intensive care
unit*,” Critical Care Medicine, vol. 32, no. 10, pp. 2014–2020, Oct. 2004.
[20] J. B. Sexton, S. M. Berenholtz, C. a Goeschel, S. R. Watson, C. G. Holzmueller, D. a
Thompson, R. C. Hyzy, J. a Marsteller, K. Schumacher, and P. J. Pronovost,
“Assessing and improving safety climate in a large cohort of intensive care units.,”
Critical care medicine, vol. 39, no. 5, pp. 934–9, May 2011.
[21] a. Lipitz-Snyderman, D. Steinwachs, D. M. Needham, E. Colantuoni, L. L. Morlock,
and P. J. Pronovost, “Impact of a statewide intensive care unit quality
improvement initiative on hospital mortality and length of stay: retrospective
comparative analysis,” BMJ, vol. 342, p. d219, Jan. 2011.
[22] P. J. Pronovost, C. a Goeschel, E. Colantuoni, S. Watson, L. H. Lubomski, S. M.
Berenholtz, D. a Thompson, D. J. Sinopoli, S. Cosgrove, J. B. Sexton, J. a Marsteller,
R. C. Hyzy, R. Welsh, P. Posa, K. Schumacher, and D. Needham, “Sustaining
reductions in catheter related bloodstream infections in Michigan intensive care
units: observational study,” BMJ, vol. 340, p. c309, Feb. 2010.
[23] R. Hyzy, S. Flanders, and P. Pronovost, “Characteristics of intensive care units in
Michigan: Not an open and closed case,” J Hosp Med, vol. 5, no. 1, pp. 4–9, 2010.
[24] C. A. Goeschel and P. J. Pronovost, “Harnessing the Potential of Health Care
Collaboratives : Lessons from the Keystone ICU Project,” in Advances in Patient
Safety: New Directions and Alternative Approaches, K. Henriksen, J. Battles, M.
Keyes, and M. Grady, Eds. Rockvillle, MD: AHRQ, 2008.
[25] P. Pronovost, “Interventions to decrease catheter-related bloodstream infections
in the ICU: the Keystone Intensive Care Unit Project.,” American journal of
infection control, vol. 36, no. 10, pp. S171.e1–5, Dec. 2008.
[26] P. J. Pronovost, S. M. Berenholtz, C. Goeschel, I. Thom, S. R. Watson, C. G.
Holzmueller, J. S. Lyon, L. H. Lubomski, D. a Thompson, D. Needham, R. Hyzy, R.
Welsh, G. Roth, J. Bander, L. Morlock, and J. B. Sexton, “Improving patient safety
in intensive care units in Michigan.,” Journal of critical care, vol. 23, no. 2, pp.
207–21, Jun. 2008.
[27] P. Pronovost and D. Needham, “An intervention to decrease catheter-related
bloodstream infections in the ICU,” The New England Journal of Medicine, vol.
355, no. 26, pp. 2725–2732, 2006.
[28] D. J. Murphy, D. M. Needham, C. Goeschel, E. Fan, S. E. Cosgrove, and P. J.
Pronovost, “Monitoring and reducing central line-associated bloodstream
infections: a national survey of state hospital associations.,” American journal of
medical quality : the official journal of the American College of Medical Quality,
vol. 25, no. 4, pp. 255–60, 2010.
[29] P. Pronovost and J. Marsteller, “Preventing bloodstream infections: a measurable
national success story in quality improvement,” Health Affairs, vol. 30, no. 4, pp.
628–634, 2011.
[30] J. M. Westfall, J. Mold, and L. Fagnan, “Practice-Based Research - ‘Blue Highways’
on the NIH Roadmap,” JAMA: the journal of the American Medical Association,
vol. 297, no. 4, pp. 403–406, 2007.
[31] W. Smith, “Evidence for the effectiveness of techniques to change physician
behavior,” Chest, vol. 118, no. Suppl 2, p. 8S–17S, 2000.
[32] B. S. Mittman, “Implementation Science in Health Care,” in Dissemination and
Implementation Research in Health, R. C. Brownson, G. A. Colditz, and E. K.
Proctor, Eds. New York, NY: Oxford University Press, 2012, pp. 400–418.
[33] T. Greenhalgh, G. Robert, F. Macfarlane, P. Bate, and O. Kyriakidou, “Diffusion of
innovations in service organizations: systematic review and recommendations,”
The Milbank quarterly, vol. 82, no. 4, pp. 581–629, Jan. 2004.
[34] J. Lomas, M. Enkin, G. Anderson, W. Hannah, E. Vayda, and J. Singer, “Opinion
leaders vs audit and feedback to implement practice guidelines: Delivery after
previous cesarean section,” JAMA, vol. 265, no. 17, pp. 2202–2207, 1991.
[35] S. Cretin, D. O. Farley, K. J. Dolter, and W. Nicholas, “Evaluating an Integrated
Approach to clinical quality improvement: clinical guidelines,” Medical Care, vol.
39, no. 8, Supp II, pp. II70–II84, 2001.
[36] P. a Gross, S. Greenfield, S. Cretin, J. Ferguson, J. Grimshaw, R. Grol, N. Klazinga,
W. Lorenz, G. S. Meyer, C. Riccobono, S. C. Schoenbaum, P. Schyve, and C. Shaw,
“Optimal methods for guideline implementation: conclusions from Leeds Castle
meeting.,” Medical care, vol. 39, no. 8 Suppl 2, pp. II85–92, Aug. 2001.
[37] S. Kritchevsky and B. Simmons, “Continuous Quality Improvement: Concepts and
applications for physician care,” JAMA, vol. 266, no. 13, pp. 1817–1823, 1991.
[38] A. C. Feldstein and R. E. Glasgow, “A Practical, Robust Implementation and
Sustainability Model (PRISM) for Integrating Research Findings into Practice,” The
Joint Commission Journal on Quality and Patient Safety, vol. 34, no. 4, pp. 228–
243, 2008.
[39] R. Grol, “From best evidence to best practice: effective implementation of change
in patients’ care,” The Lancet, vol. 362, pp. 1225–1230, 2003.
[40] R. E. Glasgow and D. Chambers, “Developing robust, sustainable, implementation
systems using rigorous, rapid and relevant science.,” Clinical and translational
science, vol. 5, no. 1, pp. 48–55, Mar. 2012.
[41] S. C. Mathews and P. J. Pronovost, “The need for systems integration in health
care.,” JAMA : the journal of the American Medical Association, vol. 305, no. 9,
pp. 934–5, Mar. 2011.
[42] E. Proctor, H. Silmere, R. Raghavan, P. Hovmand, G. Aarons, A. Bunger, R. Griffey,
and M. Hensley, “Outcomes for implementation research: conceptual
distinctions, measurement challenges, and research agenda.,” Administration and
policy in mental health, vol. 38, no. 2, pp. 65–76, Mar. 2011.
[43] L. J. Damschroder, D. C. Aron, R. E. Keith, S. R. Kirsh, J. a Alexander, and J. C.
Lowery, “Fostering implementation of health services research findings into
practice: a consolidated framework for advancing implementation science.,”
Implementation science : IS, vol. 4, p. 50, Jan. 2009.
[44] A. L. Kitson, J. Rycroft-Malone, G. Harvey, B. McCormack, K. Seers, and A. Titchen,
“Evaluating the successful implementation of evidence into practice using the
PARiHS framework: theoretical and practical challenges.,” Implementation
science : IS, vol. 3, p. 1, Jan. 2008.
[45] G. Bammer, “Integration and Implementation Sciences: building a new
specialization,” Ecology and Society, vol. 10, p. 6, 2005.
[46] R. Foy, J. Ovretveit, and P. Shekelle, “The role of theory in research to develop
and evaluate the implementation of patient safety practices,” BMJ quality &
safety, vol. 20, pp. 453–459, 2011.
[47] J. C. Ovretveit, P. G. Shekelle, S. M. Dy, K. M. McDonald, S. Hempel, P. Pronovost,
L. Rubenstein, S. L. Taylor, R. Foy, and R. M. Wachter, “How does context affect
interventions to improve patient safety? An assessment of evidence from studies
of five patient safety practices and proposals for research.,” BMJ quality & safety,
vol. 20, no. 7, pp. 604–10, Jul. 2011.
[48] V. Ward, A. House, and S. Hamer, “Developing a framework for transferring
knowledge into action: a thematic analysis of the literature,” Journal of health
services, vol. 14, no. 3, pp. 156–164, 2009.
[49] A. Wandersman, J. Duffy, P. Flaspohler, R. Noonan, K. Lubell, L. Stillman, M.
Blachman, R. Dunville, and J. Saul, “Bridging the gap between prevention research
and practice: the interactive systems framework for dissemination and
implementation.,” American journal of community psychology, vol. 41, no. 3–4,
pp. 171–81, Jun. 2008.
[50] A. Kitson, G. Harvey, and B. McCormack, “Enabling the implementation of
evidence based practice: a conceptual framework.,” Quality in health care : QHC,
vol. 7, no. 3, pp. 149–58, Sep. 1998.
[51] T. E. Oxman, H. C. Schulberg, R. L. Greenberg, A. J. Dietrich, J. W. Williams, P. a
Nutting, and M. L. Bruce, “A fidelity measure for integrated management of
depression in primary care.,” Medical care, vol. 44, no. 11, pp. 1030–7, Nov. 2006.
[52] C. B. Stetler, L. McQueen, J. Demakis, and B. S. Mittman, “An organizational
framework and strategic implementation for system-level change to enhance
research-based practice: QUERI Series.,” Implementation science : IS, vol. 3, p. 30,
Jan. 2008.
[53] B. S. Mittman, “Introduction to Implementation Science in Health: Part 3 -
Evaluating Implementation Programs [PowerPoint Slides].” 2012.
[54] D. Sackett, W. Rosenberg, J. Gray, and R. Haynes, “Evidence based medicine: what
it is and what it isn’t,” BMJ, vol. 312, pp. 71–72, 1996.
[55] S. Berenholtz and P. J. Pronovost, “Barriers to translating evidence into practice.,”
Current opinion in critical care, vol. 9, no. 4, pp. 321–5, Aug. 2003.
[56] B. S. Mittman, “Creating the evidence base for quality improvement
collaboratives.,” Annals of internal medicine, vol. 140, no. 11, pp. 897–901, Jun.
2004.
[57] J. W. Dearing and K. F. Kee, “Historical Roots of Dissemination and
Implementation Science,” in Dissemination and Implementation Research in
Health, R. C. Brownson, G. A. Colditz, and E. K. Proctor, Eds. New York, NY: Oxford
University Press, 2012, pp. 55–71.
[58] D. T. Ubbink, H. Vermeulen, A. M. Knops, D. A. Legemate, K. Oude Rengerink, M. J.
Heineman, Y. B. Roos, C. J. Fijnvandraat, H. S. Heymans, R. Simons, and M. Levi,
“Implementation of evidence-based practice: outside the box, throughout the
hospital.,” The Netherlands journal of medicine, vol. 69, no. 2, pp. 87–94, Mar.
2011.
[59] S. Tunis, R. Hayward, M. Wilson, et al., “Internists’ attitudes about clinical
practice guidelines,” Annals of internal medicine, vol. 120, no. 11, pp. 956–963,
1994.
[60] E. Ginexi and T. Hilton, “What’s next for translation research?,” Eval Health Prof,
vol. 29, no. 3, pp. 334–347, 2006.
[61] H. Kaplan, L. Provost, and C. Froehle, “The Model for Understanding Success in
Quality (MUSIQ): building a theory of context in healthcare quality
improvement,” BMJ quality & safety, vol. 21, pp. 13–20, 2012.
[62] D. A. Dzewaltowski, R. E. Glasgow, L. M. Klesges, P. A. Estabrooks, and E. Brock,
“RE-AIM: evidence-based standards and a Web resource to improve translation of
research into practice.,” Annals of behavioral medicine : a publication of the
Society of Behavioral Medicine, vol. 28, no. 2, pp. 75–80, Oct. 2004.
[63] B. J. Powell, J. C. McMillen, E. K. Proctor, C. R. Carpenter, R. T. Griffey, A. C.
Bunger, J. E. Glass, and J. L. York, “A compilation of strategies for implementing
clinical innovations in health and mental health.,” Medical care research and
review : MCRR, vol. 69, no. 2, pp. 123–57, Apr. 2012.
[64] H. Kaplan, P. Brady, M. Dritz, D. Hooper, W. Linam, C. Froehle, and P. Margolis,
“The influence of context on quality improvement success in health care: a
systematic review of the literature,” The Milbank quarterly, vol. 88, no. 4, pp.
500–559, Dec. 2010.
[65] P. Shekelle and P. Pronovost, “Advancing the science of patient safety,” Annals of
internal medicine, vol. 154, no. 10, pp. 693–697, 2011.
[66] P. Pronovost and S. Berenholtz, “Translating evidence into practice: a model for
large scale knowledge translation,” BMJ, vol. 337, pp. 963–965, 2008.
[67] G. A. Aarons, D. H. Sommerfeld, and C. M. Walrath-Greene, “Evidence-based
practice implementation: the impact of public versus private sector organization
type on organizational support, provider attitudes, and adoption of evidence-
based practice.,” Implementation science : IS, vol. 4, p. 83, Jan. 2009.
[68] J. C. Ovretveit, L. Leviton, and G. Parry, “Increasing the generalisability of
improvement research with an improvement replication programme,” BMJ
quality & safety, vol. 20, no. Supp 1, pp. i87–i91, 2011.
[69] P. Pronovost, C. G. Holzmueller, D. M. Needham, J. B. Sexton, M. Miller, S.
Berenholtz, A. W. Wu, T. M. Perl, R. Davis, D. Baker, L. Winner, and L. Morlock,
“How will we know patients are safer? An organization-wide approach to
measuring and improving safety.,” Critical care medicine, vol. 34, no. 7, pp. 1988–
95, Jul. 2006.
[70] J. M. Rodriguez-Paz, L. J. Mark, K. R. Herzer, J. D. Michelson, K. L. Grogan, J.
Herman, D. Hunt, L. Wardlow, E. P. Armour, and P. J. Pronovost, “A novel process
for introducing a new intraoperative program: a multidisciplinary paradigm for
mitigating hazards and improving patient safety.,” Anesthesia and analgesia, vol.
108, no. 1, pp. 202–10, Jan. 2009.
[71] A. Donabedian, “Bringing About Behavior Change,” in An Introduction to Quality
Assurance in Health Care, New York, NY: Oxford University Press, 2003, pp. 124–
132.
[72] L. McKeon, P. Cunningham, and J. Detty Oswaks, “Improving Patient Safety:
Patient-Focused, High-Reliability Team Training,” J Nurs Care Qual, vol. 24, no. 1,
pp. 76–82, 2009.
[73] F. W. Taylor, The Principles of Scientific Management. Dover Publications, 1911, p.
80.
[74] A. Sharp and P. McDermott, Workflow Modeling: Tools for Process Improvement
and Application Development. Boston, MA: Artech House, 2001.
[75] R. Grol, “Beliefs and evidence in changing clinical practice,” BMJ, vol. 315, pp.
418–425, 1997.
[76] P. Greco and J. Eisenberg, “Changing Physicians’ Practices,” New England Journal
of Medicine, vol. 329, no. 17, pp. 1271–1274, 1993.
[77] R. Amarasingham and P. Pronovost, “Measuring clinical information technology in
the ICU setting: application in a quality improvement collaborative,” Journal of
the American Medical Informatics Association, vol. 14, no. 3, pp. 288–294, 2007.
[78] D. M. Needham, D. J. Sinopoli, D. A. Thompson, C. G. Holzmueller, T. Dorman, L.
H. Lubomski, A. W. Wu, L. L. Morlock, M. A. Makary, and P. J. Pronovost, “A
system factors analysis of ‘line, tube, and drain’ incidents in the intensive care
unit,” Critical Care Medicine, vol. 33, no. 8, pp. 1701–1707, Aug. 2005.
[79] A. Gawande, “The Checklist,” in The Checklist Manifesto: How to Get Things Right,
New York, NY: Picador, 2009, pp. 32–47.
[80] J. Rycroft-Malone, A. Kitson, G. Harvey, B. McCormack, K. Seers, A. Titchen, and C.
Estabrooks, “Ingredients for change: revisiting a conceptual framework.,” Quality
& safety in health care, vol. 11, no. 2, pp. 174–80, Jun. 2002.
[81] D. Tschannen, G. Keenan, M. Aebersold, M. J. Kocan, F. Lundy, and V. Averhart,
“Implications of nurse-physician relations: report of a successful intervention.,”
Nursing economic$, vol. 29, no. 3, pp. 127–35, 2011.
[82] L. Miller, “Patient safety and teamwork in perinatal care: resources for clinicians,”
The Journal of Perinatal & Neonatal Nursing, vol. 19, no. 1, pp. 46–51, 2005.
[83] L. M. Mckeon, J. Oswaks, and P. Cunningham, “Complexity Science, High
Reliability Organizations, and implications for team training in healthcare,”
Clinical Nurse Specialist, vol. 20, no. 6, pp. 298–304, 2008.
[84] G. Luria, “The social aspects of safety management: trust and safety climate.,”
Accident; analysis and prevention, vol. 42, no. 4, pp. 1288–95, Jul. 2010.
[85] P. Pronovost and B. Sexton, “Assessing safety culture: guidelines and
recommendations.,” Quality & safety in health care, vol. 14, no. 4, pp. 231–3,
Aug. 2005.
[86] J. B. Sexton, R. L. Helmreich, T. B. Neilands, K. Rowan, K. Vella, J. Boyden, P. R.
Roberts, and E. J. Thomas, “The Safety Attitudes Questionnaire: psychometric
properties, benchmarking data, and emerging research.,” BMC health services
research, vol. 6, p. 44, Jan. 2006.
[87] P. J. Pronovost, B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B.
Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin, “Evaluation of the culture of
safety: survey of clinicians and managers in an academic medical center.,” Quality
& safety in health care, vol. 12, no. 6, pp. 405–10, Dec. 2003.
[88] D. T. Huang, G. Clermont, J. B. Sexton, C. A. Karlo, R. G. Miller, L. A. Weissfeld, K. M.
Rowan, and D. C. Angus, “Perceptions of safety culture vary across the intensive
care units of a single institution.,” Critical care medicine, vol. 35, no. 1, pp. 165–
76, Jan. 2007.
[89] B. S. Mittman, “Introduction to Implementation Science in Health: Part 2 -
Implementing Effective Practices [PowerPoint Slides].” 2012.
[90] C. B. Stetler, B. S. Mittman, and J. Francis, “Overview of the VA Quality
Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI
Series.,” Implementation science : IS, vol. 3, p. 8, Jan. 2008.
[91] P. Pronovost and R. Lilford, “A road map for improving the performance of
performance measures,” Health Affairs, vol. 30, no. 4, pp. 569–573, 2011.
[92] P. J. Pronovost and C. A. Goeschel, “Viewing health care delivery as science:
challenges, benefits, and policy implications.,” Health services research, vol. 45,
no. 5 Pt 2, pp. 1508–22, Oct. 2010.
[93] P. J. Pronovost, D. A. Thompson, C. Holzmueller, L. H. Lubomski, and L. L. Morlock,
“Defining and measuring patient safety,” Critical Care Clinics, vol. 21, pp. 1–19,
2005.
[94] P. Plsek and T. Greenhalgh, “Complexity science: The challenge of complexity in
health care,” BMJ, vol. 323, no. 7313, pp. 625–628, 2001.
[95] D. A. Luke and K. A. Stamatakis, “Systems Science Methods in Public Health:
Dynamics, Networks, and Agents.,” Annual review of public health, Apr. 2011.
[96] B. Holmes, D. Finegood, B. Riley, and A. Best, “Systems Thinking in Dissemination
and Implementation Research,” in Dissemination and Implementation Research in
Health, R. C. Brownson, G. A. Colditz, and E. K. Proctor, Eds. New York, NY: Oxford
University Press, 2012, pp. 175–191.
[97] R. McDaniel, H. Lanham, and R. Anderson, “Implications of complex adaptive
systems theory for the design of research on health care organizations,” Health
Care Management REVIEW, no. April-June, pp. 191–199, 2009.
[98] C. A. Brown, “The application of complex adaptive systems theory to clinical
practice in rehabilitation.,” Disability and rehabilitation, vol. 28, no. 9, pp. 587–93,
May 2006.
[99] D. Meadows, Thinking in Systems - A Primer. Sustainability Institute, 2008.
[100] R. MacIntosh and D. MacLean, “Complex Adaptive Systems,” International
Encyclopedia of Organization Studies. SAGE, pp. 224–226, 2007.
[101] I. McCarthy and T. Rakotobe-Joel, “Complex systems theory: implications and
promises for manufacturing organisations,” Int. J. Manufacturing Technology and
Management, vol. 2, pp. 559–579, 2000.
[102] L. M. Holden, “Complex adaptive systems: concept analysis.,” Journal of advanced
nursing, vol. 52, no. 6, pp. 651–7, Dec. 2005.
[103] M. Jordon, H. J. Lanham, R. A. Anderson, and R. R. McDaniel, “Implications of
complex adaptive systems theory for interpreting research about health care
organizations.,” Journal of evaluation in clinical practice, vol. 16, no. 1, pp. 228–
31, Mar. 2010.
[104] M. E. Jordan, H. J. Lanham, B. F. Crabtree, P. A. Nutting, W. L. Miller, K. C. Stange,
and R. R. McDaniel, “The role of conversation in health care interventions:
enabling sensemaking and learning.,” Implementation science : IS, vol. 4, p. 15,
Jan. 2009.
[105] L. K. Leykum, R. Palmer, H. Lanham, M. Jordan, R. R. McDaniel, P. H. Noël, and M.
Parchman, “Reciprocal learning and chronic care model implementation in
primary care: results from a new scale of learning in primary care.,” BMC health
services research, vol. 11, no. 1, p. 44, Jan. 2011.
[106] L. Malhi, Ö. Karanfil, T. Merth, M. Acheson, A. Palmer, and D. T. Finegood, “Places
to Intervene to Make Complex Food Systems More Healthy, Green, Fair, and
Affordable,” Journal of Hunger & Environmental Nutrition, vol. 4, no. 3–4, pp.
466–476, Nov. 2009.
[107] D. Meadows, “Leverage Points: Places to Intervene in a System,” The
Sustainability Institute, 1999.
[108] S. L. Krein, R. N. Olmsted, T. P. Hofer, C. Kowalski, J. Forman, J. Banaszak-Holl, and
S. Saint, “Translating infection prevention evidence into practice using
quantitative and qualitative research.,” American journal of infection control, vol.
34, no. 8, pp. 507–12, Oct. 2006.
[109] K. Charmaz, “Grounded Theory,” in Rethinking Methods in Psychology, J. Smith, R.
Harre, and L. Van Langenhove, Eds. 1995.
[110] D. McGee and M. Gould, “Preventing Complications of Central Venous
Catheterization,” N Engl J Med, vol. 348, no. 12, pp. 1123–1133, 2003.
[111] D. Maki, D. Kluger, and C. Crnich, “The risk of bloodstream infection in adults with
different intravascular devices: a systematic review of 200 published prospective
studies,” Mayo Clin Proc, vol. 81, pp. 1159–1171, 2006.
[112] J. Marschall, L. Mermel, and D. Classen, “Strategies to prevent central line-
associated bloodstream infections in acute care hospitals,” Infect Control Hosp
Epidemiol, vol. 29, no. Suppl 1, pp. S22–S30, 2008.
[113] N. O’Grady, M. Alexander, L. Burns, et al., “Guidelines for the prevention of
intravascular catheter-related infections, 2011,” Center for Disease Control, 2011.
[114] D. M. Lin, K. Weeks, L. Bauer, J. R. Combes, C. T. George, C. A. Goeschel, L. H.
Lubomski, S. C. Mathews, M. D. Sawyer, D. A. Thompson, S. R. Watson, B. D.
Winters, J. A. Marsteller, S. M. Berenholtz, P. J. Pronovost, and J. C. Pham,
“Eradicating central line-associated bloodstream infections statewide: the Hawaii
experience.,” American journal of medical quality : the official journal of the
American College of Medical Quality, vol. 27, no. 2, pp. 124–9, 2011.
[115] L. Lin and B. Liang, “Addressing the nursing work environment to promote patient
safety,” Nursing Forum, vol. 42, no. 1, pp. 20–30, 2007.
[116] A. Beck, D. a Bergman, A. K. Rahm, J. W. Dearing, and R. E. Glasgow, “Using
Implementation and Dissemination Concepts to Spread 21st-century Well-Child
Care at a Health Maintenance Organization.,” The Permanente journal, vol. 13,
no. 3, pp. 10–8, Jan. 2009.
[117] L. McQueen and B. Mittman, “Overview of the veterans health administration
(VHA) quality enhancement research initiative (QUERI),” Journal of the American
Medical Informatics Association, vol. 11, no. 5, pp. 339–343, 2004.
[118] D. Magid, P. Estabrooks, D. Brand, and M. Raebel, “Translating patient safety
research into clinical practice,” Advances in Patient Safety, vol. 2, pp. 163–172,
2005.
[119] R. E. Glasgow, L. M. Klesges, D. A. Dzewaltowski, S. S. Bull, and P. Estabrooks, “The
future of health behavior change research: what is needed to improve translation
of research into health promotion practice?,” Ann Behav Med, vol. 27, no. 1, pp.
3–12, Apr. 2004.
[120] “Nuclear Regulatory Commission. Final Safety Culture Policy Statement,” Federal
Register, vol. 76, no. 144, pp. 34773–34778, 2011.
[121] G. Curran, M. Bauer, B. Mittman, and J. Pyne, “Effectiveness-implementation
Hybrid Designs: Combining Elements of Clinical Effectiveness and Implementation
Research to Enhance Public Health Impact,” Medical Care, vol. 50, no. 3, pp. 217–
226, 2012.
[122] A. E. Bonomi, E. H. Wagner, R. E. Glasgow, and M. VonKorff, “Assessment of
chronic illness care (ACIC): a practical tool to measure quality improvement.,”
Health services research, vol. 37, no. 3, pp. 791–820, Jun. 2002.
[123] “Improving Chronic Illness Care (ICIC),” “The Assessment of Chronic Illness Care
(ACIC)”, 2013.
[124] R. Mangione-Smith, M. Schonlau, and K. Chan, “Measuring the effectiveness of a
collaborative for quality improvement in pediatric asthma care: Does
implementing the Chronic Care Model improve processes and outcomes of
care?,” Ambulatory Pediatrics, vol. 5, no. 2, pp. 75–82, 2005.
[125] M. Schonlau and R. Mangione-Smith, “Evaluation of a quality improvement
collaborative in asthma care: does it improve processes and outcomes of care?,”
Annals of Family Medicine, vol. 3, pp. 200–208, 2005.
[126] E. Taylor-Powell and M. Renner, “Analyzing qualitative data,” Program
Development and Evaluation, 2003.
[127] C. Neustaedter, “Qualitative data analysis [PowerPoint Slides].”
[128] M. Sandelowski, “Whatever Happened to Qualitative Description?,” Research in
Nursing & Health, vol. 23, pp. 334–340, 2000.
[129] M. L. Pearson, S. Wu, J. Schaefer, A. E. Bonomi, S. M. Shortell, P. J. Mendel, J. A.
Marsteller, T. A. Louis, M. Rosen, and E. B. Keeler, “Assessing the implementation
of the chronic care model in quality improvement collaboratives.,” Health services
research, vol. 40, no. 4, pp. 978–96, Aug. 2005.
[130] C. Ragin, “Using Fuzzy Sets to Constitute Cases and Populations,” in Fuzzy-Set
Social Science, The University of Chicago Press, 2000, pp. 181–202.
[131] C. Ragin, “Fuzzy Sets and the Study of Diversity,” in Fuzzy-Set Social Science, The
University of Chicago Press, 2000, pp. 149–180.
[132] C. Ragin, “Calibrating Fuzzy Sets,” in Redesigning Social Inquiry: Fuzzy Sets and
Beyond, The University of Chicago Press, 2008, pp. 85–105.
[133] C. Ragin, “Evaluating Set Relations,” in Redesigning Social Inquiry: Fuzzy Sets and
Beyond, 2008, pp. 44–68.
[134] C. Ragin, “Fuzzy Sets and Necessary Conditions,” in Fuzzy-Set Social Science, The
University of Chicago Press, 2000, pp. 203–229.
[135] C. C. Ragin, “Qualitative Comparative Analysis Using Fuzzy Sets (fsQCA),” in
Configurational Comparative Analysis, B. Rihoux and C. Ragin, Eds. Thousand
Oaks, CA and London: Sage Publications, 2007, pp. 87–121.
[136] C. Ragin, K. Drass, and S. Davey, “fsQCA 2.0.” 2009.
[137] C. Ragin, “Easy versus Difficult Counterfactuals,” in Redesigning Social Inquiry:
Fuzzy Sets and Beyond, 2008, pp. 160–175.
[138] C. Ragin and P. Fiss, “Net Effects versus Configurations: An Empirical
Demonstration,” in Redesigning Social Inquiry: Fuzzy Sets and Beyond, 2008, pp.
190–212.
[139] S. Wu, M. Pearson, J. Schaefer, A. E. Bonomi, S. M. Shortell, P. J. Mendel, J.
Marsteller, T. Louis, and E. B. Keeler, “Assessing the Implementation of the
Chronic Care Model in Quality Improvement Collaboratives: Does Baseline System
Support for Chronic Care Matter?,” in Human Factors in Organizational Design
and Management - VII, 2003, pp. 595–601.
[140] A. Adewale, L. Hayduk, C. Estabrooks, G. Cummings, W. Midodzi, and L. Derksen,
“Understanding hierarchical linear models: applications in nursing research,”
Nursing Research, vol. 56, no. 4, pp. 40–46, 2007.
[141] J. Singer, “Using SAS PROC MIXED to fit multilevel models, hierarchical models,
and individual growth models,” Journal of educational and behavioral statistics,
vol. 23, no. 4, 1998.
[142] S. Wu, M. Pearson, J. Schaefer, A. E. Bonomi, S. M. Shortell, P. Mendel, J. A.
Marsteller, T. A. Louis, and E. B. Keeler, “Assessing the Implementation of the
Chronic Care Model in Quality Improvement Collaboratives: Does Baseline System
Support for Chronic Care Matter?,” in ODAM Conference, 2003.
[143] O. Schabenberger, “Growing up fast: SAS 9.2 enhancements to the GLIMMIX
procedure,” SAS Global Forum, vol. Statistics, pp. 1–20, 2007.
[144] K. Cheung and N. Duan, “Design of Implementation Studies for Quality
Improvement Programs: An Effectiveness/Cost-Effectiveness Framework.” 2013.
[145] J. M. Coffman, M. D. Cabana, H. A. Halpin, and E. H. Yelin, “Effects of asthma
education on children’s use of acute care services: a meta-analysis.,” Pediatrics,
vol. 121, no. 3, pp. 575–86, Mar. 2008.
[146] M. L. Render and L. Hirschhorn, “An irreplaceable safety culture.,” Critical care
clinics, vol. 21, no. 1, pp. 31–41, viii, Jan. 2005.
[147] E. Campbell, D. Sittig, J. Ash, K. Guappone, and R. Dykstra, “Types of unintended
consequences related to computerized provider order entry,” Journal of the
American Medical Informatics Association, vol. 13, no. 5, pp. 547–556, 2006.
[148] J. Rycroft-Malone, B. McCormack, A. M. Hutchinson, K. DeCorby, T. K. Bucknall, B.
Kent, A. Schultz, E. Snelgrove-Clarke, C. B. Stetler, M. Titler, L. Wallin, and V.
Wilson, “Realist synthesis: illustrating the method for implementation research.,”
Implementation science : IS, vol. 7, p. 33, Jan. 2012.
[149] R. Pawson, T. Greenhalgh, G. Harvey, and K. Walshe, “Realist review: a new
method of systematic review designed for complex policy interventions.,” J
Health Serv Res Policy, vol. 10, no. Suppl 1, pp. 21–34, 2005.
[150] J. Rycroft-Malone, M. Fontenla, D. Bick, and K. Seers, “A realistic evaluation: the
case of protocol-based care.,” Implementation science : IS, vol. 5, p. 38, Jan. 2010.
[151] T. McMahon and P. R. Ward, “HIV among immigrants living in high-income
countries: a realist review of evidence to guide targeted approaches to
behavioural HIV prevention.,” Systematic reviews, vol. 1, no. 1, p. 56, Jan. 2012.
[152] B. Hunter, S. MacLean, and L. Berends, “Using realist synthesis to develop an
evidence base from an identified data set on enablers and barriers for alcohol and
drug program implementation,” The Qualitative Report, vol. 17, no. 1, pp. 131–
143, 2012.
[153] J. Leeman, Y. Chang, and E. Lee, “Implementation of antiretroviral therapy
adherence interventions: a realist synthesis of evidence,” Journal of advanced
nursing, vol. 66, no. 9, pp. 1915–1930, 2010.
Appendix
A. Details of Existing Implementation Frameworks
Table 18 Details of Existing Implementation Frameworks
Table 18 Continued
Table 18 Continued
B. ACIC Survey Instrument
Assessment of Chronic Illness Care
Version 3
Please complete the following information about you and your organization. This information
will not be disclosed to anyone besides the ICIC/IHI team. We would like to get your phone
number and e-mail address in the event that we need to contact you/your team in the future.
Please also indicate the names of persons (e.g., team members) who complete the survey with
you. Later on in the survey, you will be asked to describe the process by which you complete
the survey.
Your name:
Date:
________/________/________
Month Day Year
Organization & Address:
Names of other persons completing the
survey with you:
1.
2.
3.
Your phone number: (______) __ __ __ -
__ __ __ __
Your e-mail address:
Directions for Completing the Survey
This survey is designed to help systems and provider practices move toward the “state-of-the-art”
in managing chronic illness. The results can be used to help your team identify areas for
improvement. Instructions are as follows:
1. Answer each question from the perspective of one physical site (e.g., a practice, clinic,
hospital, health plan) that supports care for chronic illness.
Please provide name and type of site (e.g., Group Health Cooperative/Plan)
________________________________
2. Answer each question regarding how your organization is doing with respect to one
disease or condition.
Please specify condition ________________________________
3. For each row, circle the point value that best describes the level of care that currently
exists in the site and condition you chose. The rows in this form present key aspects of
chronic illness care. Each aspect is divided into levels showing various stages in
improving chronic illness care. The stages are represented by points that range from 0 to
11. The higher point values indicate that the actions described in that box are more fully
implemented.
4. Sum the points in each section (e.g., total part 1 score), calculate the average score (e.g.,
total part 1 score / # of questions), and enter these scores in the space provided at the end
of each section. Then sum all of the section scores and complete the average score for the
program as a whole by dividing this by 6.
For more information about how to complete the survey, please contact:
Judith Schaefer, MPH tel. 206.287.2077;
Schaefer.jk@ghc.org
Improving Chronic Illness Care
A National Program of the Robert Wood Johnson Foundation
Group Health Cooperative of Puget Sound
1730 Minor Avenue, Suite 1290
Seattle, WA 98101-1448
Part 1: Organization of the Healthcare Delivery System. Chronic illness management
programs can be more effective if the overall system (organization) in which care is
provided is oriented and led in a manner that allows for a focus on chronic illness care.
Components Level D Level C Level B Level A
Overall
Organizational
Leadership in
Chronic Illness
Care
Score
…does not exist or
there is a little
interest.
0 1 2
…is reflected in vision
statements and
business plans, but
no resources are
specifically
earmarked to execute
the work.
3 4 5
…is reflected by
senior leadership and
specific dedicated
resources (dollars and
personnel).
6 7 8
…is part of the
system’s long term
planning strategy,
receives necessary
resources, and
specific people are
held accountable.
9 10 11
Organizational
Goals for
Chronic Care
Score
…do not exist or are
limited to one
condition.
0 1 2
…exist but are not
actively reviewed.
3 4 5
…are measurable and
reviewed.
6 7 8
…are measurable,
reviewed routinely,
and are incorporated
into plans for
improvement.
9 10 11
Improvement
Strategy for
Chronic Illness
Care
Score
…is ad hoc and not
organized or
supported
consistently.
0 1 2
…utilizes ad hoc
approaches for
targeted problems as
they emerge.
3 4 5
…utilizes a proven
improvement
strategy for targeted
problems.
6 7 8
…includes a proven
improvement
strategy and uses it
proactively in
meeting
organizational goals.
9 10 11
Incentives and
Regulations
for Chronic
Illness Care
Score
…are not used to
influence clinical
performance goals.
0 1 2
…are used to
influence utilization
and costs of chronic
illness care.
3 4 5
…are used to support
patient care goals.
6 7 8
…are used to
motivate and
empower providers
to support patient
care goals.
9 10 11
Senior Leaders
Score
…discourage
enrollment of the
chronically ill.
0 1 2
…do not make
improvements to
chronic illness care a
priority.
3 4 5
…encourage
improvement efforts
in chronic care.
6 7 8
…visibly participate in
improvement efforts
in chronic care.
9 10 11
Benefits
Score
…discourage patient
self-management or
system changes.
0 1 2
…neither encourage
nor discourage
patient self-
management or
system changes.
3 4 5
…encourage patient
self-management or
system changes.
6 7 8
…are specifically
designed to promote
better chronic illness
care.
9 10 11
Total Health Care Organization Score ________
Average Score (Health Care Org. Score / 6) _________
Part 2: Community Linkages. Linkages between the health delivery system (or provider
practice) and community resources play important roles in the management of chronic
illness.
Components Level D Level C Level B Level A
Linking
Patients to
Outside
Resources
Score
…is not done
systematically.
0 1 2
…is limited to a list of
identified community
resources in an
accessible format.
3 4 5
…is accomplished
through a designated
staff person or
resource responsible
for ensuring
providers and
patients make
maximum use of
community
resources.
6 7 8
… is accomplished
through active
coordination
between the health
system, community
service agencies and
patients.
9 10 11
Partnerships
with
Community
Organizations
Score
…do not exist.
0 1 2
…are being
considered but have
not yet been
implemented.
3 4 5
…are formed to
develop supportive
programs and
policies.
6 7 8
…are actively sought
to develop formal
supportive programs
and policies across
the entire system.
9 10 11
Regional
Health Plans
Score
…do not coordinate
chronic illness
guidelines, measures
or care resources at
the practice level.
0 1 2
…would consider
some degree of
coordination of
guidelines, measures
or care resources at
the practice level but
have not yet
implemented
changes.
3 4 5
…currently
coordinate
guidelines, measures
or care resources in
one or two chronic
illness areas.
6 7 8
…currently
coordinate chronic
illness guidelines,
measures and
resources at the
practice level for
most chronic
illnesses.
9 10 11
Total Community Linkages Score ___________
Average Score (Community Linkages Score / 3) _________
Part 3: Practice Level. Several components that manifest themselves at the level of the
individual provider practice (e.g. individual clinic) have been shown to improve chronic
illness care. These characteristics fall into general areas of self-management support,
delivery system design issues that directly affect the practice, decision support, and
clinical information systems.
---------------------------------------------------------------------------------------------------------------------
Part 3a: Self-Management Support. Effective self-management support can help
patients and families cope with the challenges of living with and treating chronic illness
and reduce complications and symptoms.
Components Level D Level C Level B Level A
Assessment and
Documentation
of Self-
Management
Needs and
Activities
Score
…are not done.
0 1 2
…are expected.
3 4 5
…are completed in a
standardized manner.
6 7 8
…are regularly
assessed and
recorded in
standardized form
linked to a treatment
plan available to
practice and patients.
9 10 11
Self-
Management
Support
Score
…is limited to the
distribution of
information
(pamphlets,
booklets).
0 1 2
…is available by
referral to self-
management classes
or educators.
3 4 5
…is provided by
trained clinical
educators who are
designated to do self-
management
support, affiliated
with each practice,
and see patients on
referral.
6 7 8
…is provided by
clinical educators
affiliated with each
practice, trained in
patient
empowerment and
problem-solving
methodologies, and
see most patients
with chronic illness.
9 10 11
Addressing
Concerns of
Patients and
Families
Score
…is not
consistently done.
0 1 2
…is provided for
specific patients and
families through
referral.
3 4 5
…is encouraged, and
peer support, groups,
and mentoring
programs are
available.
6 7 8
…is an integral part of
care and includes
systematic
assessment and
routine involvement
in peer support,
groups or mentoring
programs.
9 10 11
Effective
Behavior Change
Interventions
and Peer Support
Score
…are not available.
0 1 2
…are limited to the
distribution of
pamphlets, booklets
or other written
information.
3 4 5
…are available only
by referral to
specialized centers
staffed by trained
personnel.
6 7 8
…are readily available
and an integral part
of routine care.
9 10 11
Total Self-Management Score_______
Average Score (Self Management Score / 4) _______
Part 3b: Decision Support. Effective chronic illness management programs assure that
providers have access to evidence-based information necessary to care for patients--
decision support. This includes evidence-based practice guidelines or protocols,
specialty consultation, provider education, and activating patients to make provider
teams aware of effective therapies.
Components Level D Level C Level B Level A
Evidence-
Based
Guidelines
Score
…are not available.
0 1 2
…are available but
are not integrated
into care delivery.
3 4 5
…are available and
supported by
provider education.
6 7 8
…are available,
supported by
provider education
and integrated into
care through
reminders and other
proven provider
behavior change
methods.
9 10 11
Involvement
of Specialists
in Improving
Primary Care
Score
…is primarily through
traditional referral.
0 1 2
…is achieved through
specialist leadership
to enhance the
capacity of the
overall system to
routinely implement
guidelines.
3 4 5
…includes specialist
leadership and
designated specialists
who provide primary
care team training.
6 7 8
…includes specialist
leadership and
specialist
involvement in
improving the care of
primary care patients.
9 10 11
Provider
Education for
Chronic Illness
Care
Score
…is provided
sporadically.
0 1 2
…is provided
systematically
through traditional
methods.
3 4 5
…is provided using
optimal methods (e.g.
academic detailing).
6 7 8
…includes training all
practice teams in
chronic illness care
methods such as
population-based
management, and
self-management
support.
9 10 11
Informing
Patients about
Guidelines
Score
…is not done.
0 1 2
…happens on request
or through system
publications.
3 4 5
…is done through
specific patient
education materials
for each guideline.
6 7 8
…includes specific
materials developed
for patients which
describe their role in
achieving guideline
adherence.
9 10 11
Total Decision Support Score_______
Average Score (Decision Support Score / 4) _______
Part 3c: Delivery System Design. Evidence suggests that effective chronic illness
management involves more than simply adding additional interventions to a current
system focused on acute care. It may necessitate changes to the organization of practice
that impact provision of care.
Components Level D Level C Level B Level A
Practice Team Functioning (Score: ____)
  Level D (0 1 2): …is not addressed.
  Level C (3 4 5): …is addressed by assuring the availability of individuals with appropriate training in key elements of chronic illness care.
  Level B (6 7 8): …is assured by regular team meetings to address guidelines, roles and accountability, and problems in chronic illness care.
  Level A (9 10 11): …is assured by teams who meet regularly and have clearly defined roles including patient self-management education, proactive follow-up, and resource coordination and other skills in chronic illness care.
Practice Team Leadership (Score: ____)
  Level D (0 1 2): …is not recognized locally or by the system.
  Level C (3 4 5): …is assumed by the organization to reside in specific organizational roles.
  Level B (6 7 8): …is assured by the appointment of a team leader but the role in chronic illness is not defined.
  Level A (9 10 11): …is guaranteed by the appointment of a team leader who assures that roles and responsibilities for chronic illness care are clearly defined.
Appointment System (Score: ____)
  Level D (0 1 2): …can be used to schedule acute care visits, follow-up and preventive visits.
  Level C (3 4 5): …assures scheduled follow-up with chronically ill patients.
  Level B (6 7 8): …are flexible and can accommodate innovations such as customized visit length or group visits.
  Level A (9 10 11): …includes organization of care that facilitates the patient seeing multiple providers in a single visit.
Follow-up (Score: ____)
  Level D (0 1 2): …is scheduled by patients or providers in an ad hoc fashion.
  Level C (3 4 5): …is scheduled by the practice in accordance with guidelines.
  Level B (6 7 8): …is assured by the practice team by monitoring patient utilization.
  Level A (9 10 11): …is customized to patient needs, varies in intensity and methodology (phone, in person, email) and assures guideline follow-up.
Planned Visits for Chronic Illness Care (Score: ____)
  Level D (0 1 2): …are not used.
  Level C (3 4 5): …are occasionally used for complicated patients.
  Level B (6 7 8): …are an option for interested patients.
  Level A (9 10 11): …are used for all patients and include regular assessment, preventive interventions and attention to self-management support.
Continuity of Care (Score: ____)
  Level D (0 1 2): …is not a priority.
  Level C (3 4 5): …depends on written communication between primary care providers and specialists, case managers or disease management companies.
  Level B (6 7 8): …between primary care providers and specialists and other relevant providers is a priority but not implemented systematically.
  Level A (9 10 11): …is a high priority and all chronic disease interventions include active coordination between primary care, specialists and other relevant groups.
Total Delivery System Design Score_______
Average Score (Delivery System Design Score / 6) _______
Part 3d: Clinical Information Systems. Timely, useful information about individual
patients and populations of patients with chronic conditions is a critical feature of
effective programs, especially those that employ population-based approaches.
Components Level D Level C Level B Level A
Registry (list of patients with specific conditions) (Score: ____)
  Level D (0 1 2): …is not available.
  Level C (3 4 5): …includes name, diagnosis, contact information and date of last contact, either on paper or in a computer database.
  Level B (6 7 8): …allows queries to sort sub-populations by clinical priorities.
  Level A (9 10 11): …is tied to guidelines which provide prompts and reminders about needed services.
Reminders to Providers (Score: ____)
  Level D (0 1 2): …are not available.
  Level C (3 4 5): …include general notification of the existence of a chronic illness, but do not describe needed services at time of encounter.
  Level B (6 7 8): …include indications of needed services for populations of patients through periodic reporting.
  Level A (9 10 11): …include specific information for the team about guideline adherence at the time of individual patient encounters.
Feedback (Score: ____)
  Level D (0 1 2): …is not available or is non-specific to the team.
  Level C (3 4 5): …is provided at infrequent intervals and is delivered impersonally.
  Level B (6 7 8): …occurs at frequent enough intervals to monitor performance and is specific to the team’s population.
  Level A (9 10 11): …is timely, specific to the team, routine and personally delivered by a respected opinion leader to improve team performance.
Information about Relevant Subgroups of Patients Needing Services (Score: ____)
  Level D (0 1 2): …is not available.
  Level C (3 4 5): …can only be obtained with special efforts or additional programming.
  Level B (6 7 8): …can be obtained upon request but is not routinely available.
  Level A (9 10 11): …is provided routinely to providers to help them deliver planned care.
Patient Treatment Plans (Score: ____)
  Level D (0 1 2): …are not expected.
  Level C (3 4 5): …are achieved through a standardized approach.
  Level B (6 7 8): …are established collaboratively and include self-management as well as clinical goals.
  Level A (9 10 11): …are established collaboratively and include self-management as well as clinical management. Follow-up occurs and guides care at every point of service.
Total Clinical Information System Score_______
Average Score (Clinical Information System Score / 5) ________
Briefly describe the process you used to fill out the form (e.g., reached consensus in a
face-to-face meeting; filled out by the team leader in consultation with other team
members as needed; each team member filled out a separate form and the responses
were averaged).
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Scoring Summary
(bring forward scoring at end of each section to this page)
Total Org. of Health Care System Score _______
Total Community Linkages Score _______
Total Self-Management Score _______
Total Decision Support Score _______
Total Delivery System Design Score _______
Total Clinical Information System Score _______
Overall Total Program Score (Sum of all scores) ______
Average Program Score (Total Program / 6) ______
What does it mean?
The ACIC is organized such that the highest “score” (an “11”) on any individual item,
subscale, or the overall score (an average of the six ACIC subscale scores) indicates
optimal support for chronic illness. The lowest possible score on any given item or
subscale is a “0”, which corresponds to limited support for chronic illness care. The
interpretation guidelines are as follows:
Between “0” and “2” = limited support for chronic illness care
Between “3” and “5” = basic support for chronic illness care
Between “6” and “8” = reasonably good support for chronic illness care
Between “9” and “11” = fully developed chronic illness care
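For teams tallying the form electronically, the scoring arithmetic and the interpretation bands above can be sketched in a few lines. This is an illustrative sketch only: the function names are mine, and fractional averages that fall between the integer bands are mapped to the nearest lower band, a choice the form itself does not specify.

```python
# Sketch of ACIC scoring: average the circled 0-11 item scores within a
# subscale, average the six subscale averages into the overall program
# score, and map any score to the interpretation bands listed above.

def subscale_average(item_scores):
    """Average of the 0-11 scores circled for the items of one subscale."""
    return sum(item_scores) / len(item_scores)

def program_average(subscale_averages):
    """Average Program Score = sum of the six subscale averages / 6."""
    return sum(subscale_averages) / len(subscale_averages)

def interpret(score):
    """Interpretation bands from the guidelines above (0-2, 3-5, 6-8, 9-11)."""
    if score < 3:
        return "limited support for chronic illness care"
    if score < 6:
        return "basic support for chronic illness care"
    if score < 9:
        return "reasonably good support for chronic illness care"
    return "fully developed chronic illness care"
```

For example, a four-item Decision Support subscale scored 6, 7, 5, and 8 averages 6.5, which `interpret` maps to reasonably good support for chronic illness care.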
It is fairly typical for teams to begin a collaborative with average scores below “5” on
some (or all) areas of the ACIC. After all, if everyone were providing optimal care for
chronic illness, there would be no need for a chronic illness collaborative or other
quality improvement programs. It is also common for teams to initially believe they are
providing better care for chronic illness than they actually are. As you progress in the
Collaborative, you will become more familiar with what an effective system of care
involves. You may even notice your ACIC scores “declining” even though you have made
improvements; this is most likely the result of your better understanding of what a good
system of care looks like. Over time, as your understanding of good care increases and
you continue to implement effective practice changes, you should see overall
improvement on your ACIC scores.
Copyright 1996-2013 The MacColl Center. The Improving Chronic Illness Care program is
supported by The Robert Wood Johnson Foundation, with direction and technical
assistance provided by Group Health's MacColl Center for Health Care Innovation
C. Final Coding Tree for Qualitative Analysis

A. Assistive System
1. Supply of Materials for EBP: what, how, and where materials required to perform work related to the EBP are stored in order to increase access and proper use of said materials
2. Process Redesign for EBP: alterations to the delivery of care that increase compliance with the EBP (see Decision Rule #1)
3. EBP Data Collection System: the method by which data elemental to the implementation of the EBP is collected and stored (paper or electronic)

B. Behavior Activation
1. Awareness of EBP: notification of the elements of the EBP and the reason for implementation (see Decision Rule #2)
2. Understanding of EBP: providing training and feedback to ensure the benefits of the EBP are understood and the required changes in care delivery are performed accordingly

C. Culture Building
1. Culture of Coordination: characteristics of communication and coordination between varying member groups of the organization that affect the implementation of the EBP (see Decision Rule #3)
2. Quality Culture: core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize quality over competing goals (see Decision Rule #4)
3. Readiness to Change: the degree to which the organization recognizes the benefits and effectiveness of the change, the discrepancy, and the principal support provided in undertaking the implementation (see Decision Rule #5)
4. Commitment to Change: a dedication by the organization and its employees to the project that connects personal success with project success and a desire to spread the adoption of the EBP (see Decision Rule #6)

D. Data Focus
1. Performance Measurement: identifying and measuring the degree of performance for pre-defined implementation and health outcomes that are affected by the implementation of the EBP
2. Accessibility of Data for EBP: the degree to which data relating to the implementation is available for review and analysis
3. Completeness of Information: the degree to which the data depicts the full picture of factors relating to the EBP that affect outcomes (see Decision Rule #7)

Decision Rules
Decision Rule #1: Any change activities that involve decision tools or aids to remind providers of the new process will also be coded as A2.
Decision Rule #2: Any change activities that indicate an alert or trigger of the new process of care or to the elements of the CCM in practice will also be coded as B1.
Decision Rule #3: Any change activities that address care coordination between internal and external groups will be coded as C1.
Decision Rule #4: Any change activities relating to safety culture, participation in other collaboratives, patient satisfaction, or meeting and understanding patient needs or follow-ups will be coded as C2.
Decision Rule #5: Any change activities relating to hiring (organizational readiness to change) and any activities relating to physician champions will be coded as C3.
Decision Rule #6: Any change activities relating to making an effort to ensure the implementation success or continuing the spread of the CCM throughout the organization will be coded as C4.
Decision Rule #7: Any change activities relating to revised, modified, or edited data will be coded as D3.
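The coding tree and its decision rules amount to a small lookup scheme. The sketch below is purely illustrative: the code labels mirror the tree, but the keyword triggers are loose paraphrases of Decision Rules #1 and #2, not the actual criteria the coders applied.

```python
# Illustrative sketch of the ABCD coding scheme. CODES mirrors the tree
# above; apply_decision_rules adds the secondary codes required by two of
# the decision rules (the keyword triggers are loose paraphrases, not the
# coding criteria actually used in the study).

CODES = {
    "A1": "Supply of Materials for EBP",
    "A2": "Process Redesign for EBP",
    "A3": "EBP Data Collection System",
    "B1": "Awareness of EBP",
    "B2": "Understanding of EBP",
    "C1": "Culture of Coordination",
    "C2": "Quality Culture",
    "C3": "Readiness to Change",
    "C4": "Commitment to Change",
    "D1": "Performance Measurement",
    "D2": "Accessibility of Data for EBP",
    "D3": "Completeness of Information",
}

def apply_decision_rules(activity_text, codes):
    """Return codes plus any secondary codes implied by the decision rules."""
    text = activity_text.lower()
    result = list(codes)
    # Decision Rule #1: decision tools/aids reminding providers -> also A2.
    if "remind" in text and "A2" not in result:
        result.append("A2")
    # Decision Rule #2: an alert or trigger of the new process -> also B1.
    if ("alert" in text or "trigger" in text) and "B1" not in result:
        result.append("B1")
    return result
```

For instance, an activity such as “Flag reminders on charts,” initially coded B2, would pick up A2 as a secondary code under Decision Rule #1.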
D. Examples of ABCD Implementation Framework Elements
Assistive System
o Supply of Materials for EBP
1. “We also stocked our pilot sites with the videos so they can easily distribute
to patients who are interested”
2. “DME closet has been established at 2 sites. All clinics have a demonstration
kit. [DME closet] is a closet that contains peak flow meters, nebulizers, and
other items pertinent to patient education”
3. “Testing self-management visual aid for Low Na+ diet in the home setting.
Status: Well received by patients”
4. “Free scales available for patients who can’t afford them”
5. “All education material that is currently being used by nursing, home care,
pharmacy, and the dietitian has been standardized throughout the St. Louis
SSM hospitals.”
o Process Redesign for EBP
1. “Our asthma team refined the elements of the planned visit procedure
including team roles and responsibilities”
2. “Each pilot clinic developed and implemented tools and a method to
schedule pre-planned diabetes visits”
3. “RCSs set up back-to-back visits with the PCPs”
4. “They instigated multi-disciplinary case conferencing prior to visits. Went
through charts of their population”
5. “Planned visits were established and have been completed on 264 patients.
Visits include standard assessment, self-management support, and planning”
o EBP Data Collection System
1. “The data entry form has been completed. Patient information is being
entered directly into the asthma registry.”
2. “Testing new pulmonary/CHF self-mgt assessment form”
3. “Patient registry system designed and in place”
4. “Asthma flow sheet revised for main site with initial feedback from staff, now
inserted in every chart with asthma sticker or problem note, Xeroxed after
completion for data entry into registry”
5. “Flow sheet set-up to capture co-morbidities and treatment”
Behavior Activation
o Awareness of EBP
1. “Our organization authorized a subset of the NHLBI Asthma Guidelines to be
distributed to the providers in our two pilot clinics”
2. “Flag reminders on charts. The hospital adopted evidence based guidelines
and integrated them into the system through education of physicians, staff
and chart reminders” [also B2]
3. “Uniform guidelines established for pt. education and made available to
continuum”
4. “Geraldine put up little notes and flyers (based on guidelines) around exam
rooms.”
5. “Diabetes guidelines and preventive tools have been developed and are
available via the Computerized Medical Record”
o Understanding of EBP
1. “Pilot Site #2 has held an Asthma educational in-service using the guidelines
as a basis for introducing standardized Asthma care”
2. “Educated staff to further implement self-management goals”
3. “A teaching session on self-management techniques was done by Dr. Dillon
with the pilot team”
4. “All staff trained on using peak flow meters and evaluating results”
5. “Two RNs were trained as “Master Trainers” for the chronic disease self-
management course”
Culture Building
o Culture of Coordination
1. “Our new medical director has joined the team and will serve as a link
between the asthma team and medical management”
2. “Test communication form between outpatient clinic and physician"
3. “Extensive coordination with pharmaceutical companies (Merck, Glaxo
Wellcome, Aventis, and Key) for medication samples, patient education
materials, donations, in-services, and program support”
4. “A pulmonologist from this clinic has agreed to serve as an ad hoc member to
our asthma team”
5. “Team approach to depression care (multiple providers of care) reviewed
and integrated into practice to include MD or NP, RN, care manager, social
worker”
o Quality Culture
1. “Expanded the CCM into preventive health”
2. “Coordinating with regional representatives of HRSA to join various efforts to
improve pediatric asthma outcomes in the State of Connecticut”
3. “Protocols for following up with patients who have dropped out or not
responded have been developed”
4. “Care manager actively reviewed charts with primary nurse to identify those
lost to follow-up or needing more prompt follow-up or specific referrals”
5. “COSSMA entered the Puerto Rico Asthma Coalition to help establish
networks of services for our patients”
o Readiness to Change
1. “The organization was willing to make a big commitment because it was the
right thing to do”
2. “Physician champions one-on-one with other physicians”
3. “The hospital organization has listed CHF as a priority in its business plan”
4. “Approval to hire case manager for collaborative given by administration.
Case manager hired”
5. “Nurse hired in main clinic, as manager of pediatrics department as well as to
provide vital role in flow of asthma visits”
o Commitment to Change
1. “Our organization continues to support our asthma program by authorizing
funding for preventative management”
2. “Personnel and delivery system changes are being made to facilitate spread
of the CCM to pulmonary patients”
3. “Spread use of registry to main site, and ultimately to include all [SITE]
pediatric patients with asthma”
4. “Our administration has authorized the funding for covering peak flow
meters and DME supplies for our asthma members. Ongoing and expanded”
5. “FCN established the pilot site group leaders as the FCN Diabetes Steering
Group for 2000, and is funding monthly meetings to facilitate continued
implementation of changes in clinics”
Data Focus
o Performance Measurement
1. “They received a 5-15% response to a patient letter”
2. “Peak flow rate percentages calculated for every possible pediatric personal
best or expected for height, displayed in chart format, laminated, and posted
at charting station.”
3. “Their glucose levels were running in acceptable ranges and the patients felt
in control and pleased with their progress”
4. “Patient self-management compliance rate increased from 3-10% to around
100%”
5. “Goal: Increase rates of standard diabetes monitoring tests and exams,
increase rates of patient visits with providers”
o Accessibility of Data for EBP
1. “The call center provides monthly reports detailing utilization of this service”
2. “Asthma tracking plan developed. Tracking plan has NIH guidelines
incorporated. Registry produces a report with client’s name and guidelines
for what needs to occur at next visit. Health Assistants use list to call clients
to improve client appointment compliance.”
3. “Registry querying individual clinician compliance with collaborative aims”
4. “Automatically print new flow sheet based on scheduled pts for the day”
5. “Generate report to show patients with > 10 HbA1c and doing an audit of
their meds, then have the MD review to look for opportunities”
o Completeness of Information
1. “We have incorporated an Asthma Severity Scale into our flow sheet to assist
in consistent evaluation of severity at each encounter”
2. “Asthma flow sheet revised for main site with initial feedback from staff”
3. “Database redesigned to accommodate comorbidities and for easier data
entry”
4. “Added documentation of self-management goals to flow sheet”
5. “We reformatted data during the last two months to better reflect our
gradual spread”
Interactions
o Assistive System → Behavior Activation
1. “Peak flow rate percentages calculated for every possible pediatric personal
best or expected for height, displayed in chart format, laminated, and posted
at charting station.” [A1 B1]
2. “Asthma guidelines are posted in the clinician rooms” [A1 B1]
3. “Place to document whether self-management/educational pamphlets have
been given to the patients has been added to the depression flow sheet” [A3
B1]
4. “Implemented the use of a Healthy Changes form to help providers and
patients identify self-management goal” [A3 B1]
5. “Laminated pocket copies of ADA Standards were distributed to all FCN
physicians and clinical staff” [A1 B1]
o Assistive System → Culture Building
1. “They instigated multi-disciplinary case conferencing prior to visits. Went
through charts of their population”
2. “A list of local resources and how to contact them was given to each pilot
clinic, and is being added to over time” [A1 C1]
3. “The team met twice a month during the collaborative to bounce ideas off
each other and plan” [A2 C1]
4. “The development of a peer review system begins for the purpose of insuring
quality and continuity of care” [A2 C2]
5. “Office distributing patient education packet and encouraging its use” [A1
C4]
o Assistive System → Data Focus
1. “All clients being seen with asthma have an asthma action plan placed in
the chart; we ran tests with 10 clients, and all had plans in their chart. The
patient participates in completing the action plan” [A2 D1]
2. “Health Assistants use list to call clients to improve client appointment
compliance.” [A2 D1]
3. “Using the CHF database registry as a management tool for easy tracking of
outcomes” [A3 D1]
4. “Patient registry system designed and in place” [A3 D2]
5. “Patients remove shoes and socks before MD enters exam room. Increases
rates of foot exams, lipid testing, and HbA1c testing” [A1 D1]
6. “We have implemented our automated data collection program for this
collaboration, which will facilitate our ability to quickly compile demographic
and Prime MD Scores” [A3 D2/1]
7. “The self-care plan is an electronic encounter form including three
components: 1) A structure list of community support groups and
framework, financial support program etc. Doctors just need to check the
boxes. 2) A planned therapy by collaborative goal and an emergency action
plan. 3) A goal-specific plan with pleasurable activity etc. It is individualized,
address barriers, and patient readiness to change. About 30% ~ 70% of time
providers will go through the self-care plan.” [A3 D1]
o Behavior Activation → Assistive System
1. “Use of ‘flag’ system for reminders of phone follow-up” [B A2]
2. “Providers engage client in participating in treatment modalities based on
clinical guidelines and patient preference” [B1 A2]
3. “Guidelines for when to refer patients to outside care now included on flow
sheet” [B1 A2]
4. “Depression care protocols are in place to guide staff practice as the CCM
spreads beyond the pilot team” [B1 A2]
5. “Medical assistants instructed in use of all asthma equipment” [B2 A1]
o Behavior Activation → Culture Building
1. “This month there has been a sustained level of performance. We believe
that the individual interaction with each provider has increased their
awareness and commitment to quality care.” [B2 C2]
2. “In-service home health nurses on education guidelines. [..] Now all patients
with CHF have a home scale and nurses incorporating self-mgt education.”
[B2 C3]
3. “The pilot team provided education to FCHC providers on community mental
health services to better utilize specialized services available to our patients”
[B2 C1]
4. “Personnel from record department, health educator, and customer service
were oriented about collaborative and their active role was designated by
team and accepted by senior leader” [B2 C1/3]
5. “Continuous education to health care community through presentations to
medical committees. The education effort went beyond the hospital to the
MD offices where office staff, as well as MDs were educated” [B2 C1/4]
o Behavior Activation → Data Focus
1. “This month there has been a sustained level of performance. We believe
that the individual interaction with each provider has increased their
awareness and commitment to quality care.” [B2 D1]
2. “17 out of 20 physicians attending dinner meeting agreed to refer to clinic”
[B2 D1]
3. “Introduce ADA diabetes guideline to pilot MDs: Increase rates of foot
exams, lipid testing, and HbA1c testing” [B1 D1]
4. “We have noticed an increase of referrals to the collaborative as a result of
our medical providers comfort level with administering the 2-question
screener” [B2 D1]
5. “Guidelines were well received and used by physicians” [B1 D1]
o Culture Building → Assistive System
1. “RCSs set up back-to-back visits with the PCPs” [C1 A2]
2. “Senior aid hired to assist in registry update, patient contact, and scheduling
of preventive asthma visits” [C3 A2/3]
3. “Collaborating with Visiting Nurses Association to provide comprehensive
home visits for pediatric patients with moderate to severe persistent asthma
and problematic environmental issues” [C1 A2]
4. “The asthma team decided to offer support to our pilot site through the
purchase of a combination TV/VCR for patients to use for viewing the asthma
videos” [C3 A1]
5. “Partnered with pharmaceutical company to purchase scales for distribution
to those patients unable to purchase” [C1 A1]
6. “Our senior leader has facilitated the development of a computer field
formatted program to assist this project with better data tracking” [C3 A3]
o Culture Building → Behavior Activation
1. “Physician champions one-on-one with other physicians” [C3 B2]
2. “Pilot team leader educated local psychiatrists regarding collaborative
activities” [C3 B2]
3. “Interest would increase after something appeared” [C3 (discrepancy) B1]
4. “We utilized the services of a respiratory therapist for peak flow meter
instruction for an in-service. His company continues to supply our clinics
with teaching models for our providers.” [C1 B2]
5. “Participating staff are recognized and rewarded for their efforts by senior
leadership” [C3 B1]
6. “Through participation in the county asthma coalition community resource
information to support patient care has been identified and shared with all
members of the coalition.” [C1 B1]
o Culture Building → Data Focus
1. “Continue to work with the [PROVIDER] from CHIPS for further refinement of
the cube data” [C1 D3]
2. “They achieved delegation of care from PCPs to the respiratory RNs. This
reduced the MD time spent” [C1 D1]
3. “Physician champions discuss progress of collaborative at monthly meetings”
[C3 D1]
4. “All [SITE] clients being seen in that ER will have the discharge summary and
action faxed within 48 hours. Liaison assigned from hospital team and
chronic III to maintain communication and monitor progress” [C1 D1]
5. “Early spread data showed positive signs that action thresholds would likely
be met within a quarter” [C4 D1]
6. “Other central California county coalitions have formed and met at a regional
coalition meeting to share data, resources, materials and to establish
common goals” [C1 D2]
o Data Focus → Assistive System
1. “Registry produces a report with client’s name and guidelines for what needs
to occur at next visit.” [D2 A2]
2. “They instigated multi-disciplinary case conferencing prior to visits. Went
through charts of their population” [D2 A2]
3. “They used the registry for scheduling follow-up visits” [D2 A2]
4. “Registry being utilized to contact all patients to update information,
schedule visits for asthma, and conduct outreach” [D2 A2]
5. “Monthly a report is printed from the Asthma registry so that the nurse can
make follow-up calls and identify any patient that may need an office visit or
to be followed more closely than on a monthly basis” [D2 A2]
o Data Focus → Behavior Activation
1. “Peak flow rate percentages calculated for every possible pediatric personal
best or expected for height, displayed in chart format, laminated, and posted
at charting station.” [D1 B1]
2. “Asthma flow sheet revised for main site with initial feedback from staff, now
inserted in every chart with asthma sticker or problem note, Xeroxed after
completion for data entry into registry” [D3 B1]
3. “They did generate a report of screening rate etc. outcomes from patient
registry and provided the feedback report to providers” [D2 B2]
4. “Members of our team met with our Healthy Steps providers to share data
and offer feedback about their performance on the asthma measures and
completion of the flow sheets” [D2 B2]
5. “The case manager met with the providers this month. Measures and goals
were discussed. Ideas were suggested and discuss how to reach the goals.”
[D1 B1]
6. “The RCS’s reviewed patient documentation to present data to the PCPs on
gaps in practices” [D1 B1]
o Data Focus → Culture Building
1. “Measurement regarding outcomes, patient satisfaction and utilization of
service are reviewed and presented to administrative leaders and providers
to demonstrate the value of population-based disease management and to
identify improvement opportunities” [D2 C3]
2. “Establish database for registry system. Make database available for
continuum” [D2 C1]
3. “Routine queries of the population for follow-up needs” [D2 C2]
4. “Spread to 1-2 more departments when clinic-wide data available” [D2 C4]
5. “Regular meetings established with pilot team and senior leader to evaluate
progress, work toward buy-in from providers” [D2 C3]
6. “The RCS’s reviewed patient documentation to present data to the PCPs on
gaps in practices” [D2 C3 (recognizing discrepancy)]
E. Detailed Results of Qualitative Coding By Site
Table 19 Complete Results of Qualitative Coding
Site # Disease Studied A1 A2 A3 B1 B2 C1 C2 C3 C4 D1 D2 D3
1 CHF 9 18 8 9 4 15 5 4 4 14 11 3
2 diabetes 9 18 16 15 6 9 3 7 14 19 14 10
3 CHF 8 24 12 9 11 18 6 14 10 16 12 4
4 diabetes 3 10 11 16 6 9 1 7 15 23 28 15
6 CHF 11 22 8 7 5 5 4 2 6 12 7 6
7 diabetes 2 3 5 7 2 5 1 3 2 8 6 0
8 CHF 9 13 7 13 4 8 2 9 10 13 8 4
10 CHF 4 11 9 5 3 6 3 4 4 13 7 4
12 diabetes 12 5 12 18 6 8 6 5 8 9 9 7
14 CHF 11 21 16 16 3 10 6 5 6 30 15 6
18 CHF 4 16 11 10 1 3 3 1 7 14 7 5
20 CHF 4 17 7 7 2 10 2 4 3 10 7 2
21 CHF 10 12 8 7 8 11 3 6 5 8 7 3
24 CHF 9 12 11 14 3 11 1 6 4 15 8 2
26 diabetes 10 10 10 6 1 7 3 4 5 10 8 5
31 diabetes 8 8 6 11 5 13 0 8 5 10 4 3
40 asthma 13 7 11 9 6 9 3 11 6 11 6 6
46 asthma 15 13 12 12 4 17 6 10 3 6 3 4
49 asthma 6 12 5 4 4 4 3 5 7 5 9 1
50 asthma 3 6 1 2 4 4 1 6 4 5 3 1
55 asthma 3 7 2 4 3 4 2 5 3 1 3 0
57 depression 5 15 7 4 6 14 5 8 3 9 8 1
58 depression 5 13 6 8 7 7 4 4 1 4 5 4
60 asthma 4 11 6 4 3 3 4 4 1 6 3 2
64 asthma 14 13 8 10 4 6 3 8 5 5 6 4
70 depression 4 6 4 1 0 1 0 1 0 3 0 0
71 depression 5 5 2 5 3 4 3 5 2 6 3 0
73 asthma 0 4 0 1 2 1 1 5 2 3 2 0
75 asthma 8 8 5 7 3 6 1 4 5 9 7 2
77 asthma 8 6 4 9 3 3 1 4 3 4 5 0
78 asthma 5 10 2 4 4 7 3 5 7 4 3 1
81 depression 6 11 6 5 5 14 3 11 3 9 5 5
84 depression 2 7 4 3 2 2 1 3 3 4 1 2
90 asthma 7 10 3 8 7 7 4 9 4 9 5 1
Table 19 Continued
Site # A→B A→C A→D B→A B→C B→D C→A C→B C→D D→A D→B D→C
1 4 5 12 4 2 1 6 4 1 5 3 5
2 5 5 13 3 6 4 7 3 3 7 6 3
3 7 14 14 4 2 5 8 8 4 6 4 8
4 2 3 10 3 4 10 9 4 5 5 12 8
6 2 5 5 5 2 4 5 2 1 6 3 4
7 3 1 2 3 1 1 2 3 1 3 3 0
8 8 3 8 4 1 1 7 2 2 4 4 4
10 3 1 4 1 2 0 5 2 1 8 1 1
12 9 2 7 7 1 1 5 7 7 5 6 1
14 9 5 19 5 3 2 7 3 1 12 8 3
18 4 3 13 4 1 0 0 0 2 9 3 3
20 2 4 6 5 1 1 5 0 1 4 4 3
21 3 3 8 4 4 0 7 2 2 5 2 1
24 5 5 10 7 2 4 4 4 0 8 3 4
26 1 5 13 5 0 1 5 0 0 3 3 1
31 3 3 4 5 1 2 4 4 3 4 5 1
40 6 2 12 4 0 1 9 7 2 4 6 3
46 4 5 7 3 1 0 11 6 1 4 3 1
49 2 4 5 3 1 0 5 1 1 9 4 1
50 0 2 2 3 1 0 2 4 4 1 3 1
55 2 1 1 2 0 0 4 6 2 1 2 0
57 4 10 5 4 0 2 9 2 2 6 2 1
58 5 6 4 6 3 1 2 2 1 3 1 3
60 3 2 4 3 1 3 5 1 2 2 2 3
64 5 5 4 4 3 4 6 2 4 9 3 1
70 1 1 1 3 0 0 2 0 0 2 2 0
71 2 2 2 3 3 1 5 3 1 4 3 5
73 1 1 3 0 1 0 2 2 3 0 1 1
75 4 4 4 5 0 3 8 3 2 6 3 1
77 6 1 2 2 1 2 3 3 1 2 3 1
78 2 4 2 3 1 1 6 3 1 3 4 2
81 3 5 5 2 1 4 6 3 3 3 5 3
84 1 1 3 2 0 1 5 1 1 3 2 2
90 4 6 4 6 3 0 5 4 2 4 6 0
Table 19 Continued
Barriers Associated with ABCD Framework Components
Site # A1 A2 A3 B1 B2 C1 C2 C3 C4 D1 D2 D3
1 0 1 0 0 1 0 0 0 1 0 0 0
2 0 0 2 0 1 0 0 4 2 0 2 0
3 0 0 0 1 0 1 0 4 1 0 0 0
4 0 1 1 2 0 0 0 1 0 1 0 2
6 0 0 0 0 0 0 0 2 3 0 0 0
7 0 0 0 0 1 0 0 1 0 0 0 0
8 0 0 0 0 1 0 0 2 1 1 0 0
10 0 0 2 1 2 0 0 5 4 0 0 0
12 0 0 2 0 1 1 0 1 0 0 0 0
14 2 1 1 0 0 1 0 7 2 0 0 3
18 1 0 0 1 0 0 0 3 0 0 0 1
20 0 0 1 0 1 0 0 1 0 0 0 0
21 0 1 1 1 0 0 0 0 1 0 0 0
24 0 2 0 0 0 1 0 0 0 0 0 0
26 0 1 0 0 0 0 0 3 1 0 0 1
31 0 1 1 2 0 2 0 4 1 0 0 0
40 0 0 0 3 1 1 0 2 0 0 0 0
46 0 0 0 0 0 1 0 0 0 0 0 0
49 0 0 0 1 0 0 0 2 0 0 0 0
50 0 0 0 1 0 0 0 2 2 0 1 0
55 0 0 0 0 0 1 0 0 0 0 0 0
57 0 2 0 0 1 0 0 3 1 0 0 0
58 0 1 0 1 0 1 0 0 1 0 0 0
60 0 0 0 1 1 0 0 2 1 0 0 0
64 0 0 0 0 0 0 0 0 0 0 0 0
70 0 0 0 0 1 0 0 3 3 0 0 0
71 0 0 0 0 0 0 0 0 1 0 0 0
73 0 0 0 0 0 0 0 1 0 0 0 0
75 0 0 1 0 0 0 0 0 1 0 0 0
77 0 0 0 0 0 0 0 0 1 0 0 0
78 0 0 1 0 0 0 1 0 1 0 0 0
81 0 0 0 1 1 0 0 0 1 0 0 0
84 0 0 0 0 0 0 0 1 0 1 0 0
90 0 0 0 0 0 0 0 1 0 0 0 0
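Per-site counts in Table 19 roll up naturally to the four framework components. As a sketch, site 1’s counts below are transcribed from the first panel of the table; the helper function is illustrative only, not the dissertation’s actual analysis code.

```python
# Sketch: aggregate one site's item-level activity counts (site 1, first
# panel of Table 19) up to the four ABCD framework components.

COMPONENTS = ["A1", "A2", "A3", "B1", "B2", "C1",
              "C2", "C3", "C4", "D1", "D2", "D3"]
SITE_1 = dict(zip(COMPONENTS, [9, 18, 8, 9, 4, 15, 5, 4, 4, 14, 11, 3]))

def component_totals(site_counts):
    """Sum item-level counts by their leading component letter (A-D)."""
    totals = {}
    for code, n in site_counts.items():
        totals[code[0]] = totals.get(code[0], 0) + n
    return totals
```

For site 1 this yields 35 assistive-system, 13 behavior-activation, 28 culture-building, and 28 data-focus activities, 104 coded activities in all.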
Abstract
Background: The literature has shown mixed results for the ability of implementation efforts to actually improve patient care. Implementations that employ a multicomponent strategy are more effective, an example of employing systems thinking in designing implementations. A systems perspective requires an understanding of how interconnected elements are organized to achieve a specific purpose. Failing to understand the systems view of an organization's implementation context contributes to this lack of success. Therefore, the purpose of this work was to answer the following: How do we characterize the context of a health care organization that affects the likelihood of success in implementing evidence based practice (EBP)?

Research Design: In an effort to answer the above question, the objective of this research was to develop a validated framework that characterizes the internal contextual and behavioral factors of a health care organization that affect implementation. Phase 1 of this work consisted of an in-depth case study of the central line checklist implementation, a successful implementation of an EBP, complemented by a comparative analysis of existing implementation frameworks.

The finalized implementation framework was validated in Phase 2 through a mixed methods analysis of the implementation of another EBP, the Chronic Care Model (CCM), across 34 sites. First, a qualitative evaluation used systematic coding to determine whether the framework categorized the sites' change efforts. A qualitative comparative analysis then allowed causal inferences regarding the necessity and sufficiency of the elements of the framework in causing a successful implementation. Finally, statistical analysis determined whether elements of the framework affect the success of an implementation based on (1) implementation outcomes, using Pearson correlation analysis, and (2) patient health outcomes, using multilevel modeling techniques.
❧ Phase 1 Results: Through review of the central line checklist implementations, I identified four components critical to the implementation success - assistive systems (A), behavior activation (B), culture building (C), and data focus (D). The comparative analysis of existing frameworks uncovered that no existing framework addressed the four components together and often include elements not critical to the organization level. Many of the existing frameworks also failed to identify the interactions between the components included. The finalized ABCD implementation framework includes the four components identified above, but and interactions between the components. ❧ Phase 2 Results: There was evidence that the framework sufficiently described the change activities conducted by the sites. At the 0.75 consistency level, the assistive system, behavior activation, culture, and interactions were found to be necessary for a successful implementation while failing to address the data focus component was necessary for an unsuccessful implementation. At the 0.90 consistency level, cases that addressed all five elements together lead to successful implementation. It was found that an implementation that involves more activities related to the elements of the framework results in better improvement in implementation and patient health outcomes. ❧ Discussion and Conclusion: The ABCD implementation framework characterizes the context of a healthcare organization that affects the likelihood of success in implementing EBP. Each of the elements is critical as none of the validity tests suggested the elimination (or addition) of an element. The framework recognizes the complexity of designing an implementation, but correctly identifies system boundaries that allow focused efforts on factors that an organization can change. The resulting framework provides a guide for researchers as well as leaders in health care organizations to design future implementations. 
By understanding the framework, it is possible to tailor the implementation of an EBP to the internal context found within the specific organization.
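The qualitative comparative analysis described in the abstract judges conditions against consistency thresholds (0.75 for necessity, 0.90 for sufficiency). As a minimal sketch of how crisp-set QCA consistency measures of that kind are computed, the following example uses invented site memberships (the data and variable names are hypothetical, not the dissertation's actual results):

```python
# Crisp-set QCA consistency measures, sketched for illustration only.
# Membership values below are invented; they are not the study's data.

def sufficiency_consistency(condition, outcome):
    """Share of cases meeting the condition that also show the outcome."""
    cases = [(c, o) for c, o in zip(condition, outcome) if c]
    return sum(o for _, o in cases) / len(cases) if cases else 0.0

def necessity_consistency(condition, outcome):
    """Share of cases showing the outcome that also meet the condition."""
    cases = [(c, o) for c, o in zip(condition, outcome) if o]
    return sum(c for c, _ in cases) / len(cases) if cases else 0.0

# Hypothetical sites: 1 in the first list means the site addressed all
# five framework elements; 1 in the second means the implementation
# was judged successful.
addressed_all_five = [1, 1, 1, 0, 1, 0, 1, 0]
successful         = [1, 1, 1, 0, 1, 0, 0, 0]

print(sufficiency_consistency(addressed_all_five, successful))  # 0.8
print(necessity_consistency(addressed_all_five, successful))    # 1.0
```

Under these made-up memberships, the condition would pass a 0.75 necessity threshold (1.0) but fall short of a 0.90 sufficiency threshold (0.8), illustrating how the two thresholds screen conditions differently.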
Asset Metadata
Creator
Hawkins, Caitlin (author)
Core Title
A system framework for evidence based implementations in a health care organization
School
Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Industrial and Systems Engineering
Publication Date
05/07/2013
Defense Date
03/07/2013
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
evidence based practice,implementation,OAI-PMH Harvest,system framework
Format
application/pdf (imt)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Wu, Shinyi (committee chair), Adler, Paul S. (committee member), Settles, F. Stan (committee member)
Creator Email
cait.hawkins@gmail.com,hawkinsc@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c3-253143
Unique identifier
UC11294604
Identifier
etd-HawkinsCai-1665.pdf (filename),usctheses-c3-253143 (legacy record id)
Legacy Identifier
etd-HawkinsCai-1665.pdf
Dmrecord
253143
Document Type
Dissertation
Rights
Hawkins, Caitlin
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA