A FRAMEWORK FOR EXAMINING RELATIONSHIPS AMONG
ELECTRONIC HEALTH RECORD (EHR)
SYSTEM DESIGN, IMPLEMENTATION, PHYSICIANS’ WORK
IMPACT
by
Fei Li
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(INDUSTRIAL AND SYSTEMS ENGINEERING)
May 2014
Copyright 2014 Fei Li
Acknowledgement
I feel extremely lucky to have wonderful family, friends and mentors along my life,
guiding and supporting me to make achievements, which I could not have accomplished
on my own. I owe the most sincere appreciation to many people who are always there to
encourage me.
I want to thank my family, most of whom are in China, for believing in me all my life; especially my grandpa, whom I respect and miss most: thank you for showing me since my childhood how a person should be motivated by his or her interests, and that age does not matter to someone who always wants to learn and grow. My lovely parents have always supported whatever decision I made and always pointed out the bright and positive side when I was frustrated.
Second, I thank my graduate advisor, Dr. Shinyi Wu, for guiding me into the world of research and for her patience over the past five years. I will never forget that in the middle of the program, I was not sure whether I could complete the journey, but Shinyi did not give up on me and instead allowed me time to find a research area that interested me and fit my background. I am honored to have Shinyi as my mentor and advisor; she is also a role model of the perfect balance between family and work, and I am working to become someone like her.
I also deeply appreciate my dissertation committee members, Dr. Stan Settles and Dr. Robert Myrtle, as well as Dr. Najmedin Meshkati and Dr. Joshua Sapkins, for providing valuable input and feedback that helped me develop my research and become a better industrial engineer.
Last but not least, my gratitude goes to all my friends, who have been alongside me through all the ups and downs. To my best friends, Cherry Liu, Yihan Xie, Shawn Gong, Trey Farrell, and Pai Liu: you are the most sincere friends I have ever had; your encouragement and belief in me made graduate school one of the most precious times in my life. To my dear Bo Zhang: although you showed up much later in the timeline, with you by my side I have always been confident in tackling any difficulty in my life. All of my Ph.D. colleagues have been a great support in helping me overcome hurdles and providing precious feedback; thank you, Christine, Catlin, Irene, and all my colleagues, for improving my presentations and writing. I am honored to have had the opportunity to work with such a talented group of colleagues and to share precious moments with you.
Table of Contents
Acknowledgement
List of Tables
List of Figures
Abstract
Chapter 1 Introduction
1.1 Background
1.2 Problem Statement
1.3 Purpose of the Study
1.4 Organization of the Dissertation
Chapter 2 Literature Review and the Development of Research Model
2.1 TAM and UTAUT
2.2 DeLone & McLean IS Success Model
2.3 Technology to Performance Chain Model
2.4 Research Model (EHR Success Model)
Chapter 3 Development of the Measures for EHR Success Model
3.1 Bailey and Pearson’s User Satisfaction Instrument
3.2 User Evaluation of Information Quality
3.2.1 Accuracy
3.2.2 Completeness
3.2.3 Timeliness
3.2.4 Accessibility
3.3 User Evaluation of System Quality
3.3.1 Usability
3.3.2 Flexibility
3.3.3 Integration Capability
3.3.4 Response Time
3.4 User Evaluation of Service Quality
3.4.1 Senior Management
3.4.2 Project Leadership
3.4.3 Stakeholder Involvement
3.4.4 Assessment of the Environment
3.4.5 Training
3.4.6 Support
3.4.7 Workflow Redesign
3.4.8 Feedback
3.5 Social Influence
3.6 User Acceptance
3.7 Physicians’ Work Impact
3.7.1 Autonomy
3.7.2 Referral Capabilities
3.7.3 Patient Care Issues
3.7.4 Productivity
3.7.5 Time (Personal Time)
3.7.6 Compensation
3.7.7 Administrative Responsibility
3.7.8 Work Satisfaction and Practice Satisfaction
Chapter 4 Study Methodology
4.1 Research Questions
4.2 Model Specification
4.3 Research Design
4.3.1 Survey Instrument
4.3.2 Survey Review and Pretest
4.3.3 Sample
4.3.4 Survey Administration
4.4 Data Analysis
4.4.1 Reliability Analysis
4.4.2 Construct Validity Analysis
4.4.3 Multiple Regression Analysis
4.4.4 Structural Equation Model Evaluation
Chapter 5 Results
5.1 Survey Respondents’ Characteristics
5.2 Factor Model of the Three Quality Constructs
5.2.1 Reliability of Measures of Factors within Constructs
5.2.2 Confirmatory Factor Analysis of Three Quality Constructs
5.2.3 Oblique Rotation of Measures within Quality Constructs
5.2.4 Factor Model Exploration and Validation
5.3 Residents’ Usage and Satisfaction with EHR System
5.4 Multiple Regression Analysis
5.4.1 Regression Analysis of EHR Usage
5.4.2 Regression Analysis of EHR Satisfaction
5.4.3 Regression Analysis of Work Impact Measures
5.5 Structural Equation Model Evaluation
5.5.1 Model Structure Comparison
5.5.2 Work Impacts from Using EHR
5.6 Summary of Key Findings
Chapter 6 Discussion and Conclusion
6.1 Discussion of the EHR Success Model
6.1.1 Congruence Model of Organizational Behavior
6.1.2 Fit Between Information Technology and Work
6.2 Discussion of Physicians’ Work Impacts of EHR Use
6.3 Discussion of the Three Quality Constructs
6.4 Contribution of the EHR Success Model and Our Study
6.5 Limitations
6.6 Future Work
References
Appendices
Appendix A
Appendix B
List of Tables
Table 1 Comparison of Previous Frameworks and EHR Success Model
Table 2 List of Attributes and Measures for Each Construct in EHR Success Model
Table 3 Survey Questions for Accuracy Attribute and References
Table 4 Survey Questions for Completeness Attribute and References
Table 5 Survey Questions for Timeliness Attribute and References
Table 6 Survey Questions for Accessibility Attribute and References
Table 7 Survey Questions for Usability
Table 8 Comparisons of EHR Implementation Frameworks
Table 9 Question Statements of Service Quality Construct
Table 10 Physician Work Impact Measures
Table 11 List of Research Hypotheses and Expected Sign of Path Coefficient
Table 12 Attributes with Multiple Measures
Table 13 Survey Respondents' Characteristics
Table 14 Attributes' Reliability
Table 15 Constructs' Reliability
Table 16 Confirmatory Factor Analysis Factor Loadings of All Measured Variables
Table 17 Factor Loadings in System Quality Construct after Promax Rotation
Table 18 Factor Loadings in Service Quality Construct after Promax Rotation
Table 19 Factor Loadings of New Observed Scores
Table 20 Fit Indices of the Three Potential Valid Research Models
Table 21 Usage of EHR
Table 22 User Satisfaction towards EHR
Table 23 Regression Analysis of Usage
Table 24 Regression Analysis of EHR Satisfaction
Table 25 EHR Satisfaction as Predictor of Work Impact Measures
Table 26 Usage as Predictor of Work Impact Measures
Table 27 Structural Equation Model Comparisons
Table 28 Structural Equation Modeling Analysis
Table 29 Structure of Final Survey Instrument
List of Figures
Figure 1 Research Model: EHR Success Model (with Usage)
Figure 2 Research Model: EHR Success Model (with User Satisfaction)
Figure 3 Research Model (with Use) for Structural Equation Model Evaluation
Figure 4 Research Model (with User Satisfaction) for Structural Equation Model Evaluation
Figure 5 Updated Research Model for Structural Equation Modeling
Abstract
In the current healthcare industry, physicians’ resistance to healthcare information technology (HIT) adoption and the mixed results of the impact that electronic health records (EHRs) have on physicians’ work have long been a concern. This study has three primary objectives: (1) build and contextualize a socio-technical evaluation model to assess the interaction between EHR systems and physicians; (2) identify critical factors of EHR system design and organizational support as determinants of physicians’ acceptance of EHR; (3) assess the impacts of EHR on physicians’ work.
Although our objective is to study physicians at large, our model is tested on a survey of residents for the following reasons: (1) residents are the main EHR users and will become even more important in the future; (2) residents function as technical support when senior physicians encounter problems during EHR use; (3) the similar demographic background of residents eliminates one source of variance in the outcome variables; (4) residents are relatively young compared to other physicians and have been familiar with computers since childhood, so lack of experience with general computer use is less likely to be a reason they refuse to use EHR. Therefore, in this study, the terms “physicians” and “residents” are used interchangeably.
Our evaluation framework integrates the three most widely used frameworks in information systems (IS) research and is named the EHR Success Model. Based on these theoretical models and successful case studies, a comprehensive list of factors was developed for each of the following constructs in the EHR Success Model: (1) the quality of information, (2) the quality of the system, (3) the quality of support service provided by organizations, (4) social influence, (5) user acceptance of technology, including usage and user satisfaction, and (6) physicians’ work impact of EHR use.
Survey methodology was employed, and a questionnaire based on the literature review was designed to evaluate the framework. The survey sample comprises 219 residents of all specialties from diverse residency programs located in California. The survey data were analyzed using multiple regression analysis and structural equation modeling to investigate the interdependent relationships among the constructs in the EHR Success Model.
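The regression portion of this analysis can be sketched in a few lines. The following is an illustrative sketch only: the construct names, simulated scores, and coefficients are hypothetical, not the study’s data; it simply shows how a work-impact measure might be regressed on the three quality constructs and how the explained variance (R²) is obtained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 219  # same size as the dissertation's resident sample

# Hypothetical construct scores on a 1-7 scale (illustrative only)
info_quality = rng.uniform(1, 7, n)
system_quality = rng.uniform(1, 7, n)
service_quality = rng.uniform(1, 7, n)

# Simulated outcome: a productivity-impact score driven mostly by system quality
productivity = (0.2 * info_quality + 0.5 * system_quality
                + 0.1 * service_quality + rng.normal(0, 1, n))

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones(n), info_quality, system_quality, service_quality])
beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)

# R^2: share of outcome variance explained by the quality constructs
resid = productivity - X @ beta
r2 = 1 - resid.var() / productivity.var()
print(beta, r2)
```

A dedicated statistics package (e.g., statsmodels or R’s lm) would additionally report standard errors and p-values for each coefficient, which the least-squares sketch above omits.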
The analysis shows that the three quality constructs share one common latent construct, the fit construct, which stands for the degree to which EHR system functions meet physicians’ demands at work. Fit is a significant predictor of all work impact measures except compensation: the better the fit, the more positive the impact physicians experience at work, such as increased productivity and improved quality of care.
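The idea of a single latent construct underlying several observed quality scores can be illustrated with a one-component principal component analysis on simulated data. This is a sketch under stated assumptions, not the dissertation’s method (which uses confirmatory factor analysis and structural equation modeling); all numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 219
# One hypothetical latent "fit" score driving all three quality ratings
fit = rng.normal(0, 1, n)
quality = np.column_stack([fit + rng.normal(0, 0.6, n) for _ in range(3)])

# Top principal component of the correlation matrix: a one-factor summary
corr = np.corrcoef(quality, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # ascending order
share = eigvals[-1] / eigvals.sum()       # variance explained by top component
loadings = eigvecs[:, -1]                 # how each item loads on it
print(round(share, 2), np.round(loadings, 2))
```

When one latent factor truly drives all items, the first component explains most of the shared variance and all items load on it with the same sign, which is the pattern the fit construct exhibits in the study.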
This EHR Success Model conceptualizes and operationalizes DeLone and McLean’s (D&M) IS success measures in the context of HIT for the first time, and it goes beyond previous models and evaluation studies in three important ways. First, the model and its measures are contextualized in physicians’ work patterns; when physicians fill out the survey, they can better understand the questions and connect the scenarios described in them to their real work, so we obtain more accurate measurements and more valid results. Second, the EHR Success Model includes work impact measures as the final dependent variables, rather than intention to use, as is the case in most previous information system studies. It also performs well in predicting most of the work impact measures and explains up to 42% of their variance; it can therefore be used as a diagnostic tool to assess EHR implementation success and to guide system designers and hospital administrators in designing more desirable systems and environments, with the ultimate objective of improving physicians’ work. Third, the process of validating the EHR Success Model examines the relationships between constructs and highlights the importance of the technology-task fit construct, providing deeper insight into what aspects of EHR lead to physicians’ work impact, and how, from a systematic perspective.
Chapter 1 Introduction
1.1 Background
The national expenditure on healthcare in 2009 was $2,593 billion, comprising 17.8% of that year’s GDP [1], and it is projected to keep increasing [2]. However, the outcome of care has not improved at the pace of rising costs: about 98,000 Americans die of preventable medical errors each year, at an approximate cost of $38 billion [3] [4]. Healthcare information technology (HIT) has the potential to reduce costs, medical errors, and adverse events, and hence to improve care quality and the national population’s health [5] [6]. The electronic health record (EHR) is one of the most influential applications of HIT, defined by the International Organization for Standardization as a repository of patient data in digital form, stored and exchanged securely, and accessible by multiple authorized users. An EHR (1) documents patient demographic and clinical health information; (2) supports physician order entry; (3) provides clinical decision support; and (4) authorizes providers to access aggregated, detailed data across organizations, time, and geographic regions [7]. Therefore, EHR can improve both the efficiency and effectiveness of care by providing more complete data, facilitating communication among care providers, freeing physicians from administrative work, and providing decision support at the point of care [8] [9] [10].
Given the remarkable benefits EHR could bring to healthcare, the federal government advocates the use of EHR and has committed substantial resources to incentive legislation such as the Health Information Technology for Economic and Clinical Health (HITECH) Act, which includes an estimated $44.7 billion for Medicare and Medicaid to pay providers who are meaningful users of certified electronic health record technology [11]. Clinicians and hospitals can receive incentive payments of up to $44,000 through Medicare and $63,750 through Medicaid if they meet a core set of objectives plus some items from an additional menu set, developed by the Centers for Medicare and Medicaid Services (CMS) to promote the meaningful use of EHR. The core objectives serve as essential starting points for meaningful use, such as recording patient demographics, using computerized provider order entry, and implementing at least one clinical decision support rule.
However, even with these incentives and the recognized benefits of EHR, low adoption rates remain prevalent, especially for meaningful use of EHR. In an international survey conducted in 2012, among ten developed countries the United States had the lowest rate of EHR adoption among primary care physicians: 69% of physicians reported using either a minimally functional or a complete EHR, but the rate dropped to 27% when counting physicians using more than one function in each of the following four domains: generating patient information, generating panel information, computerized order entry, and decision support [12]. The Healthcare Information and Management Systems Society (HIMSS) rates hospital EHR capabilities from stage 0 (no electronic system installed) to stage 7 (no paper documentation), and 80% of hospitals are identified as stage 3 or below, indicating that most hospitals use only fundamental documentation functions in EHR, with little deployment of physician order entry and decision support tools.
Understanding the reasons for the low physician EHR adoption rate could help decision makers design better regulations to spur and motivate providers and hospitals to accept EHR. The authors of a RAND report concluded that the reasons for this slow adoption on the physicians’ side include the fragmented structure of physician practice, financial constraints, systems that are complicated to master, and immature HIT strategies [13]. With limited capability to change the underlying healthcare structure or resolve budget issues, our interest in this study is how to address poorly designed EHR systems and currently immature organizational HIT strategies. If a system is too complicated to master and users cannot receive sufficient and desired support from their organizations, adopting EHR comes at the expense of extensive, unexpected time and effort. Even when the technical design of an EHR is sound, the failure rate can still reach as high as 50% because of socio-technical issues [14]. So far, systematic implementation strategy is an under-described topic in the literature [15]; most studies offer piecemeal implementation suggestions based on specific case studies, which may not be applicable to other organizations.
Moreover, the lack of evidence proving the relationship between EHR adoption and improved outcomes is also a major barrier keeping physicians and hospitals from investing time and effort in accepting EHR [16]. There are numerous possible unintended consequences triggered by the implementation of EHR [4], as illustrated in ample EHR studies. For instance, EHR increases cognitive overload, and physicians have to change old workflows significantly to accommodate it. Unpreparedness and unfamiliarity with EHR slow down their diagnosis process and hence decrease their productivity; their relationships with patients and with other providers change because EHR changes their original communication behavior [17] [18] [19] [20] [21]. All of these unintended consequences impair the enjoyment and sense of accomplishment physicians experience at work, which in turn leads to emotional burnout and decreased job satisfaction. Even worse, burnout leads to poor quality of care and low career satisfaction, which results in resignation [22] [23]; in some cases, physicians have even chosen to retire or resign in protest of EHR [24]. It is urgent and significant to understand how EHR could lessen physicians’ existing burnout-related issues.
Therefore, merely identifying the determinants of physicians’ EHR adoption is inadequate. It is important to investigate which factors in EHR design and implementation strategy could lead to positive individual impact. In this study, we focus on physicians’ work impact and job satisfaction. Investigating the relationship between system design, implementation, and job satisfaction enables system designers, IT departments in healthcare organizations, and administrators to understand physicians better and, as a result, to develop both a more desirable EHR system and a facilitating support system.
1.2 Problem Statement
Many EHR evaluation studies have been performed to assess EHR systems and to explore solutions to the issues of low EHR adoption rates and negative impacts on physicians’ work. However, it has been a challenge for researchers and practitioners to apply the evaluation results in their own work due to (1) the specific context of the evaluated object: evaluation is often conducted for a specific system in one or a few particular organizations based on various research questions; (2) the lack of comprehensive measures to assess EHR functionalities and organizational support; (3) the lack of a systematic framework that links underlying system characteristics, acceptance of EHR, and user satisfaction; and (4) specifically for practitioners, the current mixed results and the lack of quantitative evidence to convince them that EHR adoption will improve physicians’ work.
Technology adoption and user satisfaction are the two most extensively studied directions in the management of information systems (MIS), by both researchers and practitioners. The literature suggests that usage behavior and the impact of technology adoption on individual users can be explained, empirically supported, accurately predicted, and hence efficiently managed by a collection of factors [25]. Those factors comprise individual users’ characteristics, characteristics of the technology, and the organizational context. Multiple frameworks have been designed to include specific combinations of relevant factors to study the determinants of user technology acceptance.
Moreover, based on the findings of technology adoption and user satisfaction research in the MIS field, people adopt systems that are designed to improve their work experience, and as a result their job satisfaction increases. This also applies to EHR, a specific information system designed for healthcare: physicians will employ EHR systems that are designed to accommodate their preferences and improve their work experience.
There has been a call for knowledge innovation to transfer existing industrial and systems engineering (ISyE) tools, techniques, and methods to meet needs in the healthcare field and, furthermore, to develop new ISyE knowledge. With widely used and validated technology adoption models from the information systems field as its theoretical foundation and, even more importantly, a systematic view of human-technology interaction, our model provides a comprehensive framework to evaluate the integrated system of people (physicians), technology (EHR), and process (operations in hospitals).
1.3 Purpose of the Study
Based on the previous discussion, we find it necessary to build a socio-technical evaluation model to assess the interaction between user and technology, specifically in a healthcare setting: a technology adoption model that explains physicians’ acceptance of the electronic health record (EHR) and also assesses changes in physicians’ work impact after EHR adoption.
The purposes of this study are as follows:
(1) To develop a comprehensive EHR evaluation model that provides measures for evaluating EHR system functionalities and organizational support and for assessing physicians’ actual use of EHR and its work impact.
(2) To identify which factors in the design and support service of EHR are determinants of actual utilization of EHR, of individual impact on physicians’ work performance, and of job satisfaction.
(3) To gain knowledge of the interaction between user and technology from a systematic perspective; more specifically, to provide insights for human factors engineers on which factors contribute to good EHR usability from physicians’ perspective, and on how to choose an implementation strategy that adapts to change from a systematic view, in order to generate positive work impact as the outcome.
As a result, this study could also be used:
(1) To provide a tool for finding the causes of problems such as low user satisfaction, low adoption rates, and unintended consequences.
(2) To help system designers, support service providers, hospital administrators, and other stakeholders develop better-customized technology and provide more satisfactory support services.
(3) To provide an instrument that technology evaluators or survey designers could refer
to when evaluating HIT products, especially EHR.
Survey methodology is employed, and questions are developed for each construct in our model to capture the desired characteristics of the EHR, the implementation process, user experience, and outcomes. Multiple multivariate analysis techniques will be performed to examine quantitatively the interrelationships among the independent factors, usage, and outcomes; this quantitative evidence is more convincing than that of qualitative EHR evaluation studies.
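To make the planned quantitative analysis concrete, the sketch below illustrates one of the simplest techniques in this family: multi-item Likert responses are averaged into construct scores, and a work-impact score is regressed on the three quality constructs by ordinary least squares. The data are simulated and all variable names (info_q, sys_q, svc_q, impact) are hypothetical; the actual survey items and analyses are described in later chapters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of physician respondents

# Simulated 5-point Likert item responses, three items per quality construct.
info_items = rng.integers(1, 6, size=(n, 3))
sys_items = rng.integers(1, 6, size=(n, 3))
svc_items = rng.integers(1, 6, size=(n, 3))

# Construct scores: the mean of each construct's items.
info_q = info_items.mean(axis=1)
sys_q = sys_items.mean(axis=1)
svc_q = svc_items.mean(axis=1)

# Hypothetical work-impact outcome driven by the three constructs plus noise.
impact = 0.5 * info_q + 0.3 * sys_q + 0.2 * svc_q + rng.normal(0, 0.5, n)

# Ordinary least squares: impact ~ intercept + info_q + sys_q + svc_q.
X = np.column_stack([np.ones(n), info_q, sys_q, svc_q])
beta, *_ = np.linalg.lstsq(X, impact, rcond=None)

# R^2: the share of variance in the outcome explained by the constructs.
resid = impact - X @ beta
r2 = 1 - resid.var() / impact.var()
print(beta.round(2), round(r2, 3))
```

The estimated coefficients indicate how strongly each quality construct is associated with work impact, which is exactly the kind of diagnostic guidance a purely qualitative evaluation cannot provide.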
1.4 Organization of the Dissertation
Chapter 2 reviews the literature on technology adoption theories in the information system management field and presents the development of our research model, the EHR Success Model; Chapter 3 describes the development of the measures for each component of the model, which are used in the survey; Chapter 4 explains the research methodology, including survey refinement and distribution and data analysis techniques; Chapter 5 presents the results of the data analysis and validation; and Chapter 6 discusses the results and summarizes the findings of this research, the contributions of the model, the limitations of the study, and future work.
Chapter 2 Literature Review and the Development of
Research Model
There has been extensive research in the Information Systems (IS) field, which has developed many valuable theories and frameworks that explain technology adoption and information system success [26]. Studying theories from the IS field provides a beneficial research resource for understanding EHR, since EHR is an application of information systems in the healthcare industry. The potential benefits of EHR can only be realized when physicians adopt the system and use it regularly.
One of the most influential models is Davis' technology acceptance model (TAM) [27], which was developed to explain why a user accepts or rejects an information system. TAM has a sound theoretical grounding and substantial empirical support [25]. As a result, TAM is highly significant in the IS literature; about 10% of IS publications use TAM as a theoretical foundation [28]. With over 20 studies testing it and even more research referencing it, the IS literature shows that TAM is a fitting theory for understanding HIT adoption [29].
However, little TAM-related research has investigated system features or the implementation environment as antecedents to the independent constructs of TAM, such as perceived usefulness and perceived ease of use. Therefore, analysis or evaluation based on TAM can only give holistic suggestions and limited guidance to system designers, policy makers, and other stakeholders on how to influence actual usage through changes in system design and implementation operations [30]. For example, it is not meaningful enough for designers intending to improve a system to be informed only that the reason an EHR is not used sufficiently is that users cannot see its usefulness for their future work. In contrast, abundant studies focus on IS user satisfaction, another of the most commonly accepted measures of IT acceptance [31] [32] [33], and enumerate system design characteristics and support service features, which provides a more useful diagnostic tool and more guidance for designing the system and the implementation process successfully.
Moreover, as Rogers suggests in his work on diffusion of innovations theory [34], later research should concentrate more on answering the question "what are the impacts after accepting a technology?" instead of "what are the determinants of technology acceptance?" The original objective of TAM, however, was to trace the impact of external independent variables on internal beliefs, attitudes, and intentions; TAM lacks an examination of the consequences after accepting information technology, which is exactly the purpose of this study.
In recognition of the importance of integrating the individual impact EHR brings to physicians' work, the DeLone & McLean IS Success Model is also reviewed in this study. Designed to synthesize previous studies of information system effectiveness and thus provide a more integrated framework to measure success, the DeLone & McLean IS Success Model, originally published in 1992 and later updated in 2002, describes the effect of IT on users with a focus on the interaction among dependent variables. The IS Success Model provides a comprehensive taxonomy, presenting six different aspects of information system success: system quality, information quality, use, user satisfaction, individual impact, and organizational impact [35]. The IS Success Model structured the extensive literature on information system use, user satisfaction, and impact; it has been widely used and referred to in various studies, but does not seem to have been sufficiently validated [36].
Consistent with the DeLone & McLean IS Success Model's belief that utilization of and user satisfaction with the system contribute to individual impacts, Goodhue proposed the Technology-to-Performance Chain model in 1995 [37]. It highlights that task-technology fit (TTF) also determines the impact of IT on an individual's work performance; the author asserted that both utilization and task-technology fit should be included when explaining individual performance impact. Eight final TTF factors were measured in the study: data quality, location of data, authorization to access data, data compatibility (between systems), ease of use/training, production timeliness (IS meeting required operations), systems reliability, and the relationship between IS support and users, with the characteristics of the individual, task, and technology as antecedents. In the TTF model, it was shown that the characteristics of the individual, task, and technology not only influence TTF but also have a direct effect on individual impact. Because the dimensions of the TTF constructs are similar to the dimensions of the information quality, system quality, and service quality constructs in the IS Success Model, which will be explained in detail in the IS Success Model section, it is reasonable to consider the TTF model as one theoretical foundation in our model development.
2.1 TAM and UTAUT
Research on the Technology Acceptance Model (TAM) originated to address the issue of employees' reluctance to use IT even though it was available to them at work [38, 39]. TAM was developed to predict people's behavioral intention and is capable of explaining users' behavior when they decide whether to accept a specific new information technology. TAM is one of the most significant frameworks used in later research [39].
To arrive at this model, the author adopted the Theory of Reasoned Action (TRA), drawn from social psychology, as the theoretical foundation; TRA has proved useful in explaining diverse behaviors [40]. Davis customized this theory to the context of understanding IT use. Since actual IT use is difficult to measure, behavioral intention to use is the most accepted and reliable predictor of actual use; therefore, intention to use is sometimes the only outcome measured [41] [42]. Behavioral intention is generated by a user's attitude towards the information technology, which is influenced by two determinants: perceived usefulness and perceived ease of use. Moreover, perceived usefulness (PU), which is affected by perceived ease of use (PEOU), has a direct effect on behavioral intention to use IT.
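Collapsing the attitude construct for brevity, the TAM path structure just described can be written as two linear structural equations: PU depends on PEOU, and behavioral intention depends on both PU and PEOU. The sketch below uses simulated data (not findings from the TAM literature) and recovers the path coefficients by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Exogenous perceived ease of use (PEOU), standardized.
peou = rng.normal(0, 1, n)

# TAM structure: PEOU has a direct effect on PU ...
pu = 0.6 * peou + rng.normal(0, 1, n)

# ... and both PU and PEOU drive behavioral intention (BI).
bi = 0.5 * pu + 0.2 * peou + rng.normal(0, 1, n)

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

peou_to_pu = ols(pu, peou)[0]
pu_to_bi, peou_to_bi = ols(bi, pu, peou)
print(round(peou_to_pu, 2), round(pu_to_bi, 2), round(peou_to_bi, 2))
```

With enough respondents, the recovered slopes approximate the path coefficients used to generate the data, which is the basic logic behind path-analytic tests of TAM-style models.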
Later research shows that the two independent variables of TAM, PU and PEOU, are consistently used to explain actual technology usage [43]. A Social Norms (SN) construct was added in the extended version of TAM, TAM2, to capture the influence of the social environment on an individual user's decision on whether to adopt an information technology [44].
More recently, a unified model integrating eight structural models of user acceptance of information technology, closely resembling TAM, was proposed and named the Unified Theory of Acceptance and Use of Technology (UTAUT) [45]. In the UTAUT model, the perceived usefulness, perceived ease of use, and social norm (from TAM2) constructs were incorporated into the performance expectancy, effort expectancy, and social influence constructs, respectively. A new construct called facilitating conditions was added in UTAUT as another important determinant of behavioral intention and actual use. Facilitating conditions are the beliefs users hold about how much support they will receive from organizational and technical infrastructures.
UTAUT explained 70% of the variance in behavioral intention and 50% of the variance in actual use, while the eight individual models explained about 17% to 53% of the variance in behavioral intention [45].
2.2 DeLone & McLean IS Success Model
As the previous explanation shows, the original objective of TAM was to trace the impact of external independent variables on internal beliefs, attitudes, and intentions [39], but TAM lacks an examination of the consequences after accepting information technology. Only after users realize and experience the positive impacts that information technology brings to their work and life will they continue to use the system, which can create more value for individuals and organizations. Therefore, IT acceptance, the ultimate dependent variable studied in TAM, should be considered a predictor of a new dependent variable, impact, since the ultimate goal is for information technology to improve users' work experience and hence increase performance.
The D&M IS Success Model presents the relationships among complex dependent variables, starting from information quality and system quality, which influence use and user satisfaction. Use and user satisfaction, the two most accepted surrogates of information system success and effectiveness, are in turn determinants of individual impact and its successor, organizational impact.
The IS Success Model suggests that an information system is developed first and can be characterized by different degrees of system quality and information quality, which can be evaluated against diverse success criteria. The user then experiences these characteristics by using the system, and satisfaction is derived from the user's feelings. Use of the system is the primary variable through which IT can affect individuals in the conduct of their work [30].
Each major construct can be measured by diverse criteria. System quality measures technical success: it assesses the information processing system itself, with criteria including response time, flexibility, integration of systems, and usability. Information quality measures semantic success: it assesses the quality of the input and output of the information system, with criteria including accuracy, timeliness, completeness, and accessibility. Use, user satisfaction, individual impact, and organizational impact measure effectiveness success. Use measures users' actual consumption of the system, with criteria including frequency of use, amount of use, and number of queries. User satisfaction measures users' attitude towards the system and the information product it generates, with overall user satisfaction as the criterion. Individual impact measures the influence the information system has on an individual's behavior, such as improved productivity. Organizational impact measures the effect the information system has on an organization, such as cost reduction and increased return on investment [35].
In the updated D&M model, a new construct, service quality, was added as another independent component of IS success; it measures the support provided to users, such as training and the response time to users' requests. In the updated version, the individual impact and organizational impact constructs were also combined into net benefits [46]. In this study, we concentrate on the individual impact EHR brings to physicians, which includes productivity, relationships with patients, and time consumption at work.
It is believed that when healthcare IT, including EHR, is properly deployed and used, it can bring significant improvements to physicians' work and to the quality of care they provide [47]. Our interest in this study is to identify the determinants of physicians' actual usage of EHR and also to evaluate the impacts EHR brings to physicians' work. Therefore, we choose the D&M IS Success Model instead of TAM as the backbone of our research model.
2.3 Technology to Performance Chain Model
Both research streams, utilization-focused and task-technology-fit-focused, have limitations in investigating the linkage between technology and individual impact, so Goodhue proposed and tested an integration of these two complementary streams [37]. This theory holds that the match between the capabilities of an information system and the requirements of a user completing his or her tasks determines the user's utilization of the information system and also influences work performance at the individual level.
The characteristics of the task, technology, and individual, and the interactions among them, are the antecedents of TTF. Different users possess different technological skills; the fit of task and technology increases with a user's familiarity with a particular technology. Completing specific tasks can be assisted by certain technologies; as the gap between the requirements of task completion and what the technology can provide decreases, TTF increases. Individual characteristics include a user's work title and previous experience with the information system or related technology; task characteristics can be described by difficulty level, variety level, and the level of interdependency with other groups in the organization; and an example of a technology characteristic is the level of system integration: whether it is used by the whole organization or by a particular department [48].
In our study, the only users we study are physicians in organizations such as hospitals or clinics, and it is not meaningful to separate physicians' tasks into categories by complexity or interdependency level, because physicians in a given specialty have very similar routines at work. We are interested in a complete, functional EHR system, which is an integrated system employed by the whole organization. Therefore, we do not include technology characteristics and task characteristics in our research model; however, individual characteristics are included as factors influencing user evaluation of the information system, as suggested in the TTF model.
2.4 Research Model (EHR Success Model)
The objective of our research is to identify significant factors of EHR system design and implementation that impact physicians' work experience. Factors of system design and implementation are, in effect, success criteria of the information quality, system quality, and service quality constructs in the IS Success Model, and physicians' work experience comprises concrete measures of the model's individual impact construct as well. Therefore, the IS Success Model is adopted as the backbone of this research, and we named our model the EHR Success Model, after the IS Success Model, to highlight our study object: EHR. "Success" here shares the same meaning as in the IS Success Model: the different dimensions of the effectiveness of an information system.
However, comparing the IS Success Model with other models (Table 1) reveals some important constructs that are not covered by the IS Success Model but may also contribute to our research objectives. Therefore, we include the following specific constructs from other frameworks.
(1) We include individual characteristics in our model; many previous studies show that age, previous computer experience, and other individual characteristics have a non-negligible impact on individuals' work performance [37]. This is shown in blue in the EHR Success Model.
(2) How much freedom physicians have, that is, the perceived level of voluntariness during the EHR adoption process, varies across healthcare organizations, and this could also be a factor determining actual use, as recommended by Venkatesh [45]. One of the key findings from the comparison of eight models of information technology acceptance is that in a voluntary environment, the Social Influence construct never has a significant influence on behavioral intention to use a new information system, but it always has a statistically significant effect when the perceived level of voluntariness is low. This is shown in purple in the EHR Success Model.
(3) We also slightly changed the names of the information quality, system quality, and service quality constructs by adding "user evaluation of" in front of each name. Goodhue asserted that user evaluations are commonly used as surrogates for the objective characteristics of the underlying system [48]. This change is shown in blue in the EHR Success Model.
(4) Based on our research objectives and the measures suggested by the IS Success Model, we have several measures for the work impact construct; these are discussed in multiple EHR evaluation studies, and some are also factors affecting physician job satisfaction according to a validated study supported by a national survey [49].
Table 1 Comparison of Previous Frameworks and EHR Success Model
(each row lists the construct, then its treatment in TAM / UTAUT / IS Success / TTF / EHR Success)

Individual characteristics: N / Y / N / Y / Y
Technology characteristics: N / N / N / Y / N
Task characteristics: N / N / N / Y / N
Evaluation of information and system: Perceived usefulness, perceived ease of use / Performance expectancy, effort expectancy / Information quality, system quality / — / Information quality, system quality
Evaluation of support service: N / Facilitating conditions / Service quality / Task-technology fit / Service quality
Social Influence: N / Y / N / N / Y
Intention to use: Y / Y / Y / N / Y
Utilization: Y / Y / Y / Y / Y
Overall Satisfaction: Attitude / N / Y / N / Y
Individual Impact: N / N / Y / Y / Y
Organizational Impact: N / N / Y / N / N
Figure 1 Research Model: EHR Success Model (with usage)
[Figure 1: Individual Characteristics (from the Task Technology Fit Model) influence the User Evaluation of Information Quality, System Quality, and Service Quality constructs (from the DeLone & McLean IS Success Model); these, together with Social Influence (from the Unified Theory of Acceptance and Use of Technology), determine Intention to Use, which leads to Use and then to Physicians' Work Impact.]

Figure 2 Research Model: EHR Success Model (with user satisfaction)
[Figure 2: Individual Characteristics influence the three User Evaluation quality constructs, which determine User Satisfaction, which leads to Physicians' Work Impact; Social Influence is not included.]

In Figure 1, intention to use and usage are mediating variables connecting the three quality constructs with the work impact measures, which is the case in most IS theories; however, if technology use is not volitional, usage is not an appropriate measure of
information system success; instead, user satisfaction is utilized in the model, as shown in Figure 2. Existing theoretical evidence supports the impact of social influence on intention to use, but not on user satisfaction; therefore, social influence is not included in the research model in Figure 2.
In our study, we will examine the EHR Success Model using survey methodology to evaluate each construct quantitatively and will perform multivariate analysis to test the relationships among the constructs. No previous study has linked user evaluations of the system to physicians' work impact and job satisfaction with EHR use, and no EHR research model has been designed based on an integration of more than two technology acceptance models from the information system management field. Moreover, the existing EHR implementation frameworks are based only on qualitative analysis; this study can provide more insight into EHR implementation with quantitative evidence.
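As one illustration of such quantitative evidence, the mediating role of intention to use (or use) between a quality construct and work impact can be tested by estimating the indirect effect, the product of the two path coefficients, and bootstrapping its confidence interval. The sketch below uses simulated data and simple (non-partialed) slopes purely for illustration; it is not an analysis from this study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Simulated chain: system quality -> intention to use -> work impact.
quality = rng.normal(0, 1, n)
intention = 0.6 * quality + rng.normal(0, 1, n)
impact = 0.5 * intention + rng.normal(0, 1, n)

def slope(y, x):
    """Simple least-squares slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Point estimate of the indirect (mediated) effect: a * b.
indirect = slope(intention, quality) * slope(impact, intention)

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(slope(intention[idx], quality[idx]) *
                slope(impact[idx], intention[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(indirect, 2), round(lo, 2), round(hi, 2))
```

A confidence interval that excludes zero supports the claim that the quality construct affects work impact through the mediating variable rather than only directly.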
Chapter 3 Development of the Measures for EHR Success
Model
In this chapter, we design the measures for each construct in the EHR Success Model: we identify a comprehensive but parsimonious set of attributes for the three quality constructs, and then develop corresponding measures for each attribute and for the remaining constructs.
For the user evaluation of information quality, system quality, and service quality constructs, we chose attributes for each construct after reviewing the extensive literature on the IS Success Model and developed a comprehensive list of measures for each attribute in the context of EHR [35]. We used these measures directly as questions in the survey. For the social influence construct, we refer to the UTAUT model, which integrated and outperformed eight other models in the information technology acceptance discipline [45]. For the individual impact construct, we obtained measures from a previously validated national survey on physician work performance and job satisfaction [50]. The following table summarizes the attributes designed for each quality construct and the measures for the remaining constructs; the complete list of measures is presented in Appendix A. Since a single item cannot capture all important characteristics of most constructs, we include more than one attribute for each quality construct. Attributes that are understood differently in different scenarios have multiple measures each, such as the accuracy attribute in the information quality construct, while for other attributes we think one measure is enough, such as the response time attribute in the system quality construct.
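Because each attribute with multiple measures will be summarized into a single score, the internal consistency of its items is typically checked first. A common statistic for this is Cronbach's alpha; the sketch below gives a minimal implementation with hypothetical 5-point responses, not data from this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

# Three hypothetical 5-point items for one attribute (e.g., ACC1-ACC3).
responses = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 5],
    [2, 2, 3],
    [4, 5, 4],
    [1, 2, 2],
])
print(round(cronbach_alpha(responses), 2))
```

Values close to 1 indicate that the items measure the attribute consistently and can reasonably be combined; a conventional rule of thumb treats alpha above roughly 0.7 as acceptable.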
Table 2 List of Attributes and Measures for Each Construct in EHR Success Model

User Evaluation of Information Quality (attributes): 1. Accuracy; 2. Completeness; 3. Timeliness; 4. Accessibility.
User Evaluation of System Quality (attributes): 1. Usability; 2. Flexibility; 3. Integration capability; 4. Response time.
User Evaluation of Service Quality (attributes): 1. Senior management; 2. Stakeholder involvement; 3. Project leadership; 4. Assessment of environment/culture; 5. Workflow redesign; 6. Training; 7. Support; 8. Feedback.
Social Influence (measures): 1. Incentives; 2. Freedom.
User Acceptance (measures): 1. Intention to use EHR in the future; 2. Percentage of work time using EHR; 3. User satisfaction towards EHR.
Work Impact (measures): 1. Autonomy; 2. Referral; 3. Patient care quality; 4. Error rate; 5. Patient relationship; 6. Productivity; 7. Personal time; 8. Compensation; 9. Administrative responsibilities; 10. Overall satisfaction towards work; 11. Overall satisfaction towards practice.
Bailey and Pearson's (B&P's) user satisfaction instrument is the only resource referred to by both the information quality and system quality constructs in the original IS Success Model. The service quality construct was not added until the 2002 update of the IS Success Model; however, B&P covers the dimension of service quality as well. Considering the consistency of questions in our survey and the popularity of B&P's instrument, we decided to employ it as our theoretical foundation when developing the attributes and measures of the following three constructs: user evaluation of information quality, system quality, and service quality.
User evaluation is recognized as the most efficient surrogate of information system effectiveness and success, so there have been many attempts to generate a validated and comprehensive measurement mechanism [51]. It is more valuable to evaluate users' perceptions of system functionalities than to measure the technical functionalities themselves [52], because a "good" system perceived as a "poor" system by its users is still a poor system [51]. Therefore, in this study, we choose physicians' perceptions or evaluations of the system functions as measures of the quality of EHR technical functionalities.
Among all research studying user satisfaction with information systems, Bailey and Pearson's instrument has been a standard due to its empirical derivation, adequate empirical support, and level of coverage (of both the system and the support service). As a result, B&P's instrument is one of the most widely used and validated mechanisms for evaluating user satisfaction with IS. It examines an inclusive list of system characteristics, information design attributes, and organizational factors [53], shown in Appendix B. It provides important attributes for capturing the characteristics of an IS and has been used in the healthcare information system setting, but only partially [54]. Evidence that the factors from B&P's instrument are also important in physicians' adoption of EHR is found in an abundant literature; this evidence is referenced in the later sections where the measures for each factor are developed.
3.2 User Evaluation of Information Quality
Based on the advice of a panel of experts who defined the functionalities of EHR in the Institute of Medicine's framework [55], there are four key functionalities that an EHR should provide: (1) documenting patients' demographic and clinical information and the information on care provided by different healthcare professionals; (2) viewing and managing test results; (3) computerized order entry; and (4) supporting the decision-making process in patient care [56].
An EHR should possess the following components: electronic documentation of physicians' notes, electronic viewing of laboratory and radiology results, electronic prescribing and ordering [57], and a clinical decision support system. Longitudinal and integrated patient information enables physicians to understand a patient's history more completely; electronic laboratory and radiology results eliminate the delays caused by hand-offs in the paper system and guarantee faster availability to providers; and, together with the clinical decision support system, they help prevent medical errors and adverse events [10] [58]. The implementation of EHR can also help providers adhere to clinical guidelines.
With integrated patient information and other clinical sources more accessible, better and safer diagnoses and treatments can be achieved with the assistance of available clinical decision support, and physicians' productivity should increase as time is saved on paperwork. Therefore, the quality of the information output of EHR is crucial to improving physicians' work performance. Attributes of information quality based on Bailey and Pearson's instrument include convenience of access, accuracy, timeliness, precision, reliability, currency, completeness, and relevancy; completeness and accuracy are the most frequently used [59]. Considering the definitions of these attributes, some can be combined. Precision means the variability of the output, and information that is not precise is not accurate; therefore, we fold precision into accuracy. The accuracy of information generates users' dependency on it, which Bailey and Pearson define as reliability, so we combine reliability into accuracy as well. We combine currency into timeliness, defined as the availability of new information in an up-to-date manner, covering both updates of existing information and new data from other sources. Relevancy is the degree of congruence between what users need and what the information system provides. A lack of relevancy occurs in two scenarios: a lack of desired data, which is reflected in the completeness attribute, or too much unnecessary data, which is reflected in the volume attribute of the system quality construct. Comprehensiveness, the percentage of essential information desired by users that is available through the system, can be included in the completeness attribute. As a result, we choose accuracy, completeness, timeliness, and accessibility as the final attributes of the information quality construct. Most of these attributes were used in another study examining EHR impact on physicians' work by Joos [60]; some of the survey questions in our research, mainly in sections 3.2.3 and 3.2.4, come from that study.
3.2.1 Accuracy
Generally, in the IS field, accuracy stands for the correctness of the output. In the context of EHR, it means the extent to which the output data conform to the true state of the patient [61]. In his study of the variability and accuracy of medical data, Komaroff provided a comprehensive list of sources leading to inaccurate medical histories, physician examinations, and laboratory tests; he also concluded that a computer-based medical information system can reduce variability and inaccuracy by (1) storing information in the system, which is more reliable than depending on the memories of patients and providers and more legible than providers' handwriting; (2) providing a standard questionnaire or checklist, which clarifies and defines specific data and guides providers in eliciting the desired information; and (3), more subtly, defining some medical data quantitatively, which eliminates the variability and inaccuracy caused by the commonly imprecise data in medicine [62].

Based on this explanation of how EHR reduces inaccuracy, we designed three question statements to reflect the accuracy attribute, with supporting references, as shown in Table 3. The first question is based on the definition of the accuracy of data in EHR; however, without a legible and understandable representation, even accurate information will not aid physicians in their decision-making processes.
Table 3 Survey Questions for Accuracy Attribute and References

ACC1: Information stored and displayed by EHR reflects the true state of the patient; the information provided by the medical history, physician examination, lab test, symptom, diagnosis, treatment, and referral sections of EHR is correct. (References: [63] [62])
ACC2: The representation of information displayed by EHR is legible. (References: [62] [64])
ACC3: Reminders and online alerts provide correct information at the right time, for example about drug interactions. (References: [65] [66])
3.2.2 Completeness
In our study, completeness is defined as the extent to which all data necessary to users are recorded in EHR [61]; it measures the prevalence of missing essential data [59]. A comparative study of documentation completeness before and after EHR implementation showed that missing data, such as chief complaints and laboratory results, occur significantly more often in paper patient records [67]. By providing complete data, more specifically, longitudinal and integrated patient information, EHR can help physicians evaluate the patient thoroughly without overlooking important details.

Several studies have found evidence that EHR contributes to improved completeness of documentation by providers [68]. Structured entry can prompt and encourage complete data documentation with more detail [69] [68] [70] [66]. The controlled terminologies in structured data entry can capture and represent clinical information, which facilitates physicians' documentation; physicians should always be able to select matching terms after they type in a phrase when completing tasks such as entering a patient's past medical history and medications [71]. However, physicians are sometimes dissatisfied with the obligatory responses and inflexibility that structured-entry EHR creates; as a result, the capability to input free-text comments in EHR is also commonly appreciated by physicians [69]. We designed the following survey questions to indicate the completeness attribute, as shown in Table 4.
Table 4 Survey Questions for Completeness Attribute and References
COM1: EHR can always provide me with the right term(s) to match the concept(s) I am looking for when I complete tasks such as updating the problem list, adverse reactions and medications. [71]
COM2: Free-text notes (narrative text) are decipherable by EHR: information from the text can be captured and stored in the system. [69] [79] [72]
COM3: I can find specific patient information whenever I need it, such as information on the patient's medical history, symptoms, diagnosis, treatment or referrals. (By definition of completeness)
3.2.3 Timeliness
In our study, timeliness refers to how timely the information flow is, or how soon the data is updated after new results come out. Electronic laboratory and radiology results eliminate the delays caused by hand-offs in the paper system, which physicians perceive as one of the most helpful benefits of using EHR [73]. With faster access to lab results, physicians can respond to abnormal lab values more quickly [74]. We designed the following survey questions to indicate the timeliness attribute, as shown in table 5.
Table 5 Survey Questions for Timeliness Attribute and References
TIM1: New results for patients are available to me sooner than with the old paper system. [60]
TIM2: The messaging feature in EHR allows me to communicate more quickly with my staff concerning patients. [60]
TIM3: The messaging feature in EHR allows me to communicate more quickly with providers outside my clinic concerning patients. [60]
3.2.4 Accessibility
Accessibility to the data across a network of computers within the clinic enables physicians to view patients' records whenever they need to. Accessibility from outside the clinic aids physicians working offsite. Accessibility has been argued to be sufficiently good to support physicians' decision-making processes [75]. In a study assessing the impact of EHR use, patient visits decreased because telephone contact between patients and physicians increased, given the enhanced accessibility of the medical record through EHR [84].
We designed the following survey questions to indicate the accessibility attribute, as it is
shown in table 6.
Table 6 Survey Questions for Accessibility Attribute and References
ACS1: I like that I have access to my messages while I am away from the clinic. [60] [67]
ACS2: When a patient calls on the telephone, I can answer his or her questions faster. [60] [76]
3.3 User Evaluation of System Quality
As another important construct of the DeLone & McLean IS Success model, system quality has also been analyzed in many studies, among which usability is the main concern [59]. The National Institute of Standards and Technology (NIST) defines usability as the "… effectiveness, efficiency and satisfaction with which the intended users can achieve their tasks in the intended context of product use." [77]. A physician will refuse a system that is too complicated to learn, which would pose a barrier to care delivery. Therefore, for an EHR system to be easy to use, even for people with minimal computer experience, it should include the following features: (1) ease of navigation, (2) minimal typing and (3) clear representation [67]. According to the B&P instrument, besides usability, response time, flexibility and the ability to integrate with other existing systems are also important attributes of system quality, which are explained in the following sections.
3.3.1 Usability
As mentioned earlier, usability is one of the major factors preventing physicians from adopting and using EHR. Usability is directly related to losses in physicians' productivity, alert fatigue, error rates and overall user satisfaction; according to the National Research Council, most current EHR systems provide poor cognitive support and interrupt physicians' workflow [78]. Even worse, poor usability leads to patient safety issues. The Joint Commission pointed out that about 25% of the medication errors in the 2006 Pharmacopeia MEDMARX database were caused by computer technology [79]. About 82% of those errors stem from EMR, EHR, computerized physician order entry (CPOE) and clinical decision support systems (CDSS) [77], with usability as one of the major causes.

According to the Healthcare Information and Management Systems Society (HIMSS) EHR task force [77], a system with good usability is intuitive and easy to use, requires minimal mental effort and forgives mistakes. The following characteristics from the HIMSS report are developed as measures of usability; they are shown in table 7 and used in our survey.
(1) Simplicity
Simplicity refers to displaying only the necessary information on the screen, so that the information display is concise and clear, without visual clutter, and physicians do not waste time trying to find information. This is even more important in complex systems with complicated information to display. For example, when there is too much information to present, some processing should be performed at the backend so that only the most essential raw or processed data is presented.
(2) Effectiveness of information presentation
To achieve conciseness of the displayed information, the effectiveness of information presentation should be guaranteed, for example appropriate density through suitable font sizes, meaningful use of colors and understandable terminologies.
(3) Consistency
Consistency includes external consistency and internal consistency. External consistency
is the similarity between users’ experience with EHR and other clinical computer systems
they used before, while internal consistency refers to terminologies, appearance, layout
and system operation consistency.
(4) Efficient interaction
To complete a task, physicians need to interact with the system frequently, and one measure of efficient interaction is minimizing the steps needed to complete tasks. System functionalities such as auto-tabbing (the cursor automatically moves to the next data entry field to be filled out), good default values and list boxes large enough to limit excessive scrolling can also improve physicians' efficiency in using EHR. In addition, frequent switching between mouse and keyboard can cause frustration.
(5) Forgiveness and feedback
Physicians should feel safe to explore the newly implemented EHR system without fearing that they may cause irreversible unintended consequences. In other words, the system should possess a forgiveness feature. This will help shorten the learning curve, especially when physicians do not have enough time for training. Effective feedback informs users about the effect of the actions they have taken; combining forgiveness and feedback will decrease user errors.
(6) Minimizing cognitive load
Already dealing with tremendous time pressure and frequent unexpected demands and interruptions, physicians should not have to endure additional cognitive load created by EHR. When all the needed information is displayed, preferably within the same visual field, physicians do not need to change screens frequently, which would interrupt their thinking while completing clinical tasks. Moreover, the extent to which screen flows match physicians' workflow is also crucial to making the system feel natural to physicians.

For some of the characteristics, such as efficient interaction, forgiveness and feedback, and effective information presentation, there is more than one measure. For most of the survey questions, there are matching factors from Bailey and Pearson's instrument, which are listed in the last column of table 7.
Table 7 Survey Questions for Usability
Simplicity
• USE1: The screen design is clean; the displayed information only includes functionality that is needed to effectively accomplish tasks. (B&P factor: Volume of output)
Effective information presentation
• USE2: Character count, resolution, font and font size are well designed to help display information.
• USE3: Colors are used consistently and convey meaning.
• The terminology, abbreviations and acronyms used are commonly understood and unambiguous. (B&P factor: Documentation)
Consistency
• USE4: Concepts (terminologies), behavior (how to operate the system), appearance and layout are used consistently throughout the system. (B&P factor: Format)
Efficient interaction
• USE5: The number of steps it takes to complete tasks is acceptable.
• USE6: The EHR provides me the following functionalities: auto-tabbing (cursor moves automatically from one data entry field to the next), good default values, and list and text boxes large enough to limit scrolling.
• USE7: There is no need to switch between keyboard and mouse frequently.
Forgiveness and feedback
• USE8: I am able to explore the system without fear of disastrous results.
• USE9: Error messages are clear. They explain the reason why the error occurred and suggest what the users should do next. (B&P factor: Error recovery)
Minimizing cognitive load
• USE10: The screen flows map to the users' task and workflow. (B&P factor: Adaptability to workflow)
3.3.2 Flexibility
Flexibility is defined as the capability of the information system to change or adjust to different circumstances. More specifically, in the context of EHR, flexibility can be interpreted as the flexibility of the computer interface [80], end users' preferences in creating forms or screens to document care [81], user-centered system design [82] and customized prefilled templates created through simple typed entry [36].

In interviews with physicians, their biggest concern regarding the flexibility of EHR was whether they could customize order sets, which facilitates their delivery of care the most. A previous study from Kaiser Permanente showed that physicians were inclined to create customized interfaces according to their preferences [80]. Therefore, our questionnaire includes the following survey question: I am able to create customized interfaces (such as forms or screens) to document care, for example, order sets.
3.3.3 Integration Capability
Integration stands for the ability of the system to communicate and transmit data with other systems serving different functional areas, for example legacy clinical systems, electronic image systems such as PACS, and other organizational systems. When the new EHR system cannot communicate with existing systems, it causes considerable inconvenience and discourages physicians from adopting EHR. For example, if physicians cannot import necessary historical data from other systems, they always need to switch from the EHR to the other clinical system where the data is stored, which makes them reluctant to use the EHR. Therefore, we include the following question in our survey: The EHR can integrate with other clinical systems in the facility; for example, I can import data from an existing system.
3.3.4 Response Time
Response time refers to the time elapsed from a user's request to the moment the system returns a result. A fast and efficient system ensures that using the EHR does not take an excessive amount of physicians' time. Response time is of vital importance to physicians' experience and hence to the successful adoption of EHR [60] [15] [80] [83]. Therefore, in our study we include the following question: The EHR responds to my request in a speedy manner.
3.4 User Evaluation of Service Quality
Besides information quality and system quality, service quality is another important component in facilitating users' adoption and use of information systems. It measures the quality of the support users receive from the existing organizational and technical infrastructure [45]. Most existing EHR implementation frameworks, or "EHR implementation best practices", provide recommendations from many different angles, but those frameworks are unorganized pieces of suggestive information rather than a systematic approach. As a result, previous research shows a lack of EHR implementation theory in hospitals; no existing EHR implementation framework explains the features of the implementation process well enough to help hospital administrators achieve more successful implementations. From the abundant studies of HIT implementation strategy, the following research was chosen as the theoretical foundation: (1) the over-arching best practice framework proposed by Keshavjee et al., which, after a systematic review of the records of over 50 EMR/EHR implementation projects involving physicians, provides a comprehensive integrated framework explaining and predicting the success of EHR implementation [84]; and (2) an empirical IT implementation theory drawn from interviews with thirty clinicians and program managers following a successful EHR implementation in a Swedish teaching hospital [85]. Attributes of effective implementation from the two studies are listed in the following table, together with the corresponding attributes from Bailey and Pearson's instrument.
Table 8 Comparisons of EHR Implementation Frameworks
(Each row: Best practice framework [84] | Empirical implementation theory [85] | Corresponding attributes from B&P; rows grouped by timeline)

Along the whole project:
• Governance (senior management) | Prioritization and driving by management team | Top management involvement
• Project leadership | Competent IT project leader and team; physician champion
• Involve stakeholders | NA

Pre-implementation:
• Choose software | Selection of the system was made locally with extensive consultation with clinical personnel | Feeling of participation & control
• Sell benefits | Consensus about need for the system
• Assessment of readiness | NA | Understanding the system
• Workflow redesign | NA | Job effects
• Implementation assistance from vendors | NA | Vendor support
• Training | Education provided at the right times, amount and quality | Degree of training

Implementation & post-implementation:
• Support | Ordinary workload readjustment due to time spent on implementation | NA
• Feedback | Physician acceptance and implementers' responsiveness to concerns | User support from HIS department
• Job effects | NA | NA

Post-implementation:
• Incentives | NA | NA
3.4.1 Senior Management
Governance refers to senior management's mission and behavior throughout the whole EHR implementation course. Only with support from senior management is the organizational vision clear enough to guide the whole team. Moreover, commitment from senior management can guarantee substantial resources, aid workflow redesign and help tackle unexpected change and stress. A substantial body of research shows that commitment from institutional leadership is crucial to the success of EHR implementation [86] [87] [88].
3.4.2 Project Leadership
Project leadership includes two important roles: a project manager who keeps implementation on schedule, and physician champions who bridge the gap between the implementation project and other physicians. Physician champions are selected from, and respected within, the physician population. They know the system, understand the requirements and hence can advocate adoption [89] [90]; they should be committed to the success of the implementation project and function as EHR early adopters who facilitate communication between physician groups and the hospital's information technology department. The importance of a physician champion to successful EMR adoption is repeatedly reported in many studies [91] [92] [93]. As representatives of physicians, physician champions can also increase physicians' involvement and participation throughout the implementation process by understanding the required skills and time commitment from the physicians' own perspective and by soliciting physician support for implementation, which is crucial in EHR diffusion [94].
3.4.3 Stakeholder Involvement
Gaining physicians' participation and endorsement directly improves their acceptance of and satisfaction with EHR [84]. Many studies show that the main reason for EHR implementation failure is implementers' inaccurate assumptions about user requirements. Communicating with end users directly and receiving their input can effectively correct this. Moreover, input from physicians and other care providers can also point the implementation towards a realistic goal [95].
3.4.4 Assessment of the Environment
Studying the organizational and cultural context is necessary and important for designing a realistic implementation plan; understanding the practice environment is crucial to be able to modify potentially problematic operations before and during implementation [95], for example through workflow redesign. Moreover, the ordinary workload should be reduced to compensate for the extra time physicians need to become familiar with the EHR, especially at the beginning of implementation [85].
3.4.5 Training
Both technological and psychological preparation should be in place before physicians start to use the EMR. Therefore, as Studer proposed, adequate and timely training to prepare users for the EHR should be available and tailored to specific needs and experience [96]. Wager concluded that initial and ongoing training gives physicians time to get comfortable with the EHR, and that intensive training before going live is also necessary [92] [97]. Demonstrating the realistic benefits EHR can bring and the potential challenges users may encounter is beneficial; unrealistic expectations lead to disappointment and negative physician attitudes towards adoption [73].
3.4.6 Support
Once physicians start to use the EHR, ongoing on-site support from the hospital IT department guarantees a venue where physicians can go for help whenever they encounter problems in system use. Support from the early implementation phase onward is crucial [98] [99] [100] [86] [101] [87] [92] [97]. It is also essential to provide a venue for physicians to discuss and negotiate their concerns with the IT department [84].
3.4.7 Workflow Redesign
One of the most frequently mentioned concerns of physicians before using EHR is the compatibility of the system with the current workflow and the negative effects EHR might have on their work. Workflow redesign should not be considered an appendage to EHR implementation; instead, as Brokel reported, it should be treated as a means to remove waste from the current outdated workflow and turn it into a more efficient care process [102]. Five principles were drawn from the experience of implementing EHR at Trinity Health: (1) identify and address safety problems; (2) promote evidence-based practices; (3) reduce practice variations and standardize terminologies and care processes; (4) improve communication and relationships among clinicians; and (5) augment multiple uses of data in HIT-supported care processes.
3.4.8 Feedback
Feedback is the most direct and efficient way to improve the current system according to users' experiences and suggestions. If physicians feel that some functionalities of the system are actually hindering their work, the HIS department or vendor should react in a timely manner to prevent physicians from discontinuing EHR use.

In conclusion, all of the above aspects can be grouped into three main aspects: senior management strategy, workflow reengineering and IT support service, corresponding to strategic, tactical and operational decisions.
Table 9 Question Statements of Service Quality Construct
Senior management strategy (STR)
• STR1: Commitment of top management helped to ensure adequate resource allocation, supported the redesign effort and facilitated implementation steps. (Factor: Senior management)
• STR2: During the decision-making process, physician representatives' input was considered. (Factor: Stakeholder involvement)
• STR3: A standardized communication process was in place, with a single person (implementation coordinator or project manager) acting as a liaison between the implementation team and the clinic undergoing implementation. (Factor: Project leadership)
• STR4: There was consensus among physicians about the need for EHR before implementation; I could see the benefit of using EHR before implementation.
• STR5: After assessing the state of readiness and change-capability of the organization, a realistic timetable for EHR implementation was designed and followed.
• STR6: Original workload was reduced to compensate for the extra time spent on EHR. (Factor: Assessment of environment/culture)
Workflow reengineering (WOR)
• WOR1: New workflow steps required by EHR were discussed and reviewed; the final workflows incorporated in the EHR system were approved by all physicians in my clinic.
• WOR2: Moving to the new workflows required by EHR helped eliminate potential errors (for example, inappropriate variations) and risks in the old care process.
• WOR3: Evidence-based guidelines and standards for every workflow decision were incorporated into the EHR. (Factor: Workflow redesign)
IT support service (ITS)
• ITS1: Training on how to use EHR was provided at the right times and with the right amount and quality. (Factor: Training)
• ITS2: A venue is provided for me to ask for help after implementation. (Factor: Support)
• ITS3: The EHR implementation team responds to my feedback in a timely manner. (Factor: Feedback)
3.5 Social Influence
Venkatesh defines social influence in the UTAUT model as the extent to which users think their peers, family and organization believe they should use the system [45]. Venkatesh also concluded that in a voluntary environment social influence is not significant; it is significant, however, in mandatory settings, especially in the early stages. On one hand, users react to the potential gains from system use; on the other hand, a compliance mechanism affects users' intention to adopt the system through social pressure.

In the context of EHR adoption, social influence on physicians mainly comes from peer physicians and hospitals. In most cases, physicians use EHR for the sake of the financial incentives they receive when they reach the Meaningful Use goals, and to avoid the penalties for failing to do so.

Therefore, we include the following question statements in our survey:
• SN1: Personal financial rewards are the main reason that I am using EHR.
• SN2: The main reason I am using EHR is to gain financial rewards for our organization.
• SN3: I have complete freedom to choose whether or not to use EHR in our organization.
3.6 User Acceptance
In the information systems (IS) discipline, the most generally accepted measures of IS acceptance are system usage and user satisfaction [103] [104]. In our model, we also use these two measures as possible intermediate variables connecting IS features and individuals' work impacts.

As the TAM and UTAUT models propose, intention to use is the most powerful variable in explaining and predicting actual technology usage. Davis showed that students' intention to use an electronic processing application in the future and contemporaneously measured current usage correlated significantly, with a value of 0.63 [39]. Therefore, we use these two factors and corresponding survey questions to reflect physicians' acceptance of EHR. For intention to use, we have the following question: whenever possible, I intend to use EHR in my patient care and management in the future. For current actual use, we have the question: how often do you use EHR at work? It asks physicians to estimate their utilization of EHR by choosing a percentage range from the following options: less than 10%, 10% to 30%, 30% to 50%, 50% to 70%, 70% to 90%, and more than 90%.
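For analysis, a response to the actual-use question has to be coded into one of the six ordinal categories above. A minimal sketch of such a coding helper follows; the function name is our own, and the handling of exact boundary values (e.g. whether precisely 10% falls in the first or second bucket) is an assumption, since the survey text does not specify it.

```python
# Hypothetical helper (not part of the survey instrument) that codes a
# self-reported EHR usage percentage into the six ordinal categories
# of the actual-use question. Boundary handling is our assumption.
def usage_category(pct):
    bounds = [10, 30, 50, 70, 90]
    labels = ["<10%", "10-30%", "30-50%", "50-70%", "70-90%", ">90%"]
    for bound, label in zip(bounds, labels):
        if pct < bound:
            return label
    return labels[-1]  # 90% or above
```

For example, a physician reporting roughly 45% usage would be coded into the "30-50%" category.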
We still include user satisfaction because, if system use is not voluntary, there is not enough variance in usage for it to serve as a predictor, and how much users use the EHR is no longer determined by the quality of the system. In that case, system use is not an appropriate measure of technology acceptance, and user satisfaction is used in the model instead [30]. As a result, we also include the following user satisfaction question in the survey: Overall, I am satisfied with the EHR I am using.
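Correlations such as Davis's 0.63 between intention to use and current usage are plain Pearson coefficients. The sketch below is our own illustration of the computation, not the dissertation's analysis code; it assumes intention ratings and usage codes are collected as two parallel lists with one entry per respondent.

```python
# Pearson correlation coefficient computed from two parallel lists,
# e.g. Likert intention-to-use ratings and ordinal usage codes (1-6).
def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)
```

Perfectly aligned responses give r = 1, perfectly opposed responses give r = -1; survey data typically falls in between, as in Davis's 0.63.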
3.7 Physicians' Work Impact
Many different impacts of EHR adoption on physicians' work have been investigated in previous studies, such as changed clinical work patterns, efficiency and effectiveness of work, changed documentation habits, more administrative work and job satisfaction [105].
A previously validated national survey provides a comprehensive list of questions measuring physician work performance, which is exactly our research interest. A 10-facet, 36-item list of work impact measures was developed in that survey; the reliabilities of the 10 facets range from 0.65 to 0.77 [49]. Because this survey has been validated and tested by numerous researchers, we chose questions from it to assess physicians' work impact. According to our research interest, the following measures were chosen, with new items added (referral capability, error rate and productivity), and are listed in table 10 [50]:
Table 10 Physician Work Impact Measures
Autonomy: The independence of physicians' actions
Referral capability: How easily physicians can refer patients to other healthcare organizations or receive referrals from other facilities
Quality of care: Quality of the care delivered
Relationship with patients: Relationship with patients
Error rate: Mistakes physicians do not intend to make
Productivity: Efficiency of physicians in dealing with patient encounters, such as the number of patients a physician can see per day
Time: Personal time for self and family interrupted by work
Compensation: The extent to which physicians feel their total compensation package is fair compared to their colleagues'
Administrative responsibilities: Time and effort needed to deal with administrative work
Work satisfaction: Overall satisfaction with work
Practice satisfaction: Overall satisfaction with their practice
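The facet reliabilities cited above (0.65 to 0.77) are internal-consistency coefficients of the Cronbach's alpha kind. As a minimal sketch of how such a coefficient is computed, assuming the item responses for one facet are arranged as a list of respondents, each a list of item ratings:

```python
# Cronbach's alpha for one facet: k/(k-1) * (1 - sum of item variances /
# variance of the facet total). Our illustration, not the original
# survey's analysis code; uses population variances throughout.
def cronbach_alpha(facet):
    k = len(facet[0])                        # number of items in the facet
    def variance(xs):                        # population variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    item_vars = [variance([resp[i] for resp in facet]) for i in range(k)]
    total_var = variance([sum(resp) for resp in facet])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Items that move together across respondents push alpha toward 1; uncorrelated items push it toward 0, which is why values in the 0.65 to 0.77 range are read as acceptable internal consistency.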
3.7.1 Autonomy
Physicians have traditionally been the ultimate decision makers in the care process and have enjoyed a high degree of autonomy; the EHR functionality of providing computer-generated instructions is therefore a clear threat to physicians' autonomy [106]. Moreover, EHR forces physicians to follow predetermined clinical patterns, and physicians' behaviors are monitored and recorded in the EHR, which also influences their autonomy [107].

Many previous studies show that using EHR can significantly increase compliance with guidelines through structured data entry and by keeping track of the compliance rate [66] [67]. It can also reduce the variability of physicians' diagnoses [62]. Studies show that applying clinical guidelines in the management of breast cancer and venous thromboembolism significantly improved treatment effectiveness [108]. However, the improved adherence to guidelines produced by EHR decision support interventions may make physicians feel more restricted when making treatment and referral decisions [6]. Moreover, e-prescription systems may provide physicians with detailed formulary and benefit information; as a result, physicians become more aware of the formularies and are pushed to follow the prescription instructions in the EHR [29], although most studies show a positive outcome from the increased concordance with prescription recommendations, such as a decreased medical error rate [108].

Therefore, we have the following question statement to evaluate the change in physicians' autonomy after using EHR.
• Following clinical guidelines required by EHR restricts my freedom to practice, such as restrictions on treatments or medications.
3.7.2 Referral Capabilities
One benefit of using EHR is that it connects physicians with their colleagues in other healthcare organizations, which makes their communication more convenient and hence facilitates the referral processes. Therefore, the following statement is included to evaluate whether EHR assists physicians with referral procedures.
• This EHR enables me to refer patients or receive referrals more easily.
3.7.3 Patient Care Issues
Patient care issues include the following three measures: quality of care, relationship with patients and error rate. The reasons these three measures might be affected by the use of EHR are explained as follows.

Whether EHR improves patient care quality is always a focus of attention; although the purpose of using EHR is to improve the quality of care, the evidence remains inconclusive. EHR changes the way physicians interact with patients, so the physician-patient relationship is likely to change as a result. For example, a physician staring at the monitor while using EHR makes less eye contact with the patient. Furthermore, once patients experience the benefits EHR brings them, they may demand more from their physicians, which can also affect their relationship with physicians [109].
• This EHR enables me to provide higher quality medicine than the paper system.
• My relationship with patients is more adversarial than it used to be.
• Treatment and prescription recommendations by this EHR help decrease my error rate.
3.7.4 Productivity
Straightforward entry and personal order sets have been shown to save time, but order entry seems to require more physician time per patient per day [110]. Moreover, physicians also need to record the whole consultation process in the EHR system, which they are not used to and which requires extra time. The primary form of patient record in the paper system, the narrative record, is not well translated into most EHR systems. Due to the mixed effects of these multiple factors, better efficiency and higher productivity for physicians are not guaranteed [111]. An increase in consultation length of an additional 2.2 to 9.3 minutes per patient has been observed in multiple studies [112], while a decrease in length was found in another study [113]. Because productivity is one of the most important determinants of physicians' acceptance of and satisfaction with HIT [112], concerns about the increased time required of physicians always lead to high resistance among them. It is therefore worthwhile to investigate the impact of EHR on overall productivity.
• This EHR helps me see more patients per day (or go home earlier) than I could with paper charts.
3.7.5 Time (Personal Time)
Physicians' clinical time has been measured and evaluated in substantial research, as shown in the discussion of the productivity factor. However, physicians' personal time has not been discussed enough in previous studies [114]. In fact, no literature was found reporting whether the potential extra workload caused by EHR poses a problem in physicians' personal lives. Therefore, the following question needs to be explored and is also included in our survey.
• The interruption of my personal life by work is a problem after using this EHR.
3.7.6 Compensation
Physicians need to sacrifice more time and effort to become familiar with the EHR and, as a result, their expectations of compensation adjust. Moreover, the benefits EHR brings to different stakeholders are not proportionally distributed: although physicians are the direct end users, payers and insurance companies benefit more than physicians do [115]. Physicians' perceptions of their compensation may therefore change as well, so the following question is also included in our survey.
• I am not well compensated compared to physicians in other specialties.
3.7.7 Administrative Responsibility
Administrative and duplicative tasks may be reduced after EHR implementation, which could offset the increased workload an EHR imposes on physicians [116]. However, opinions differ on this outcome; existing studies show that the time providers spend on administrative functions increases [117].
• I have more administrative work to do after using EHR.
3.7.8 Work Satisfaction and Practice Satisfaction
One important goal of introducing EHR into physicians' work is to make their work easier and hence improve their job satisfaction. Previous research also shows that physicians' satisfaction with their work is highly correlated with their satisfaction with the practice they are affiliated with. Therefore, we include the following questions in our survey:
• Overall, I am pleased with my work.
• Overall, I am satisfied with my current practice.
Chapter 4 Study Methodology
4.1 Research Questions
Based on the literature review, we argued that it is necessary to build on the existing research and develop an organizationally relevant, EHR-specific evaluation instrument. Such an instrument could provide insights for, and meet the needs of, EHR system designers, HIT consultants, hospital administrators, and researchers. To validate the EHR Success Model, we address the following questions:
Primary research questions:
(1) To what extent is physicians' acceptance of EHR associated with their evaluations of information quality, system quality, and support service quality, and with their freedom in choosing whether to use the EHR?
Ample empirical testing and validation of the IS Success Model provide strong evidence that an information system with better user evaluations of information quality, system quality, and support service quality generates a higher intention to use the system, which subsequently leads to more actual system usage [46]. Since the IS Success Model is the main theoretical foundation for our EHR Success Model, we have reason to believe these relationships still hold in our model. Therefore, we propose the following hypotheses. Hypotheses H1, H2, and H3 are described in both a and b versions, since the intermediate variable, either usage or user satisfaction, varies depending on whether usage is volitional. Since there is no social influence construct connecting to user satisfaction, H4 applies only to the research model with the use measure, so there is no H4b.
Hypothesis 1a: The higher a physician evaluates the information quality of the EHR, the more likely he or she is to use the system.
Hypothesis 2a: The higher a physician evaluates the system quality of the EHR, the more likely he or she is to use the system.
Hypothesis 3a: The higher a physician evaluates the support service quality of the EHR, the more likely he or she is to use the system.
Hypothesis 4a: The less freedom a physician has when deciding whether to use the EHR, the more likely he or she is to use the system.
Hypothesis 1b: The higher a physician evaluates the information quality of the EHR, the more satisfied he or she is with the EHR.
Hypothesis 2b: The higher a physician evaluates the system quality of the EHR, the more satisfied he or she is with the EHR.
Hypothesis 3b: The higher a physician evaluates the support service quality of the EHR, the more satisfied he or she is with the EHR.
(2) To what extent is a physician's actual usage of EHR associated with his or her intention to use EHR?
Both the TAM and UTAUT models have shown that intention to use is the most powerful predictor of actual usage [39] [45]. Therefore, we propose the following hypothesis. This hypothesis does not apply to the research model with the user satisfaction measure, so there is no Hypothesis 5b.
Hypothesis 5a: The more a physician intends to use the EHR in the future, the more frequently he or she uses it now.
(3) To what extent are the work impacts physicians experience from EHR adoption associated with their acceptance of EHR?
There is no clear conclusion about whether EHR brings positive work impacts. As explained in section 3.7, if the system and its support are desirable to physicians, the EHR will bring positive impacts to their work. Therefore, for the research model with the use measure, the following hypotheses are exploratory and conditional; for the research model with the user satisfaction measure, they are confirmatory.
Hypothesis 6a: If a physician is satisfied with the EHR, the more he or she uses it, the more liberty he or she has in decision making when providing care.
Hypothesis 7a: If a physician is satisfied with the EHR, the more he or she uses it, the more easily he or she refers patients or receives referrals.
Hypothesis 8a: If a physician is satisfied with the EHR, the more he or she uses it, the better the quality of care he or she delivers.
Hypothesis 9a: If a physician is satisfied with the EHR, the more he or she uses it, the better relationships he or she has with patients.
Hypothesis 10a: If a physician is satisfied with the EHR, the more he or she uses it, the smaller the error rate he or she achieves.
Hypothesis 11a: If a physician is satisfied with the EHR, the more he or she uses it, the higher the productivity he or she achieves.
Hypothesis 12a: If a physician is satisfied with the EHR, the more he or she uses it, the less frequently his or her personal time is interrupted by work.
Hypothesis 13a: If a physician is satisfied with the EHR, the more he or she uses it, the more fairly he or she perceives the compensation package.
Hypothesis 14a: If a physician is satisfied with the EHR, the more he or she uses it, the less administrative work he or she has to perform.
Hypothesis 15a: If a physician is satisfied with the EHR, the more he or she uses it, the more satisfied he or she is with work.
Hypothesis 16a: If a physician is satisfied with the EHR, the more he or she uses it, the more satisfied he or she is with the practice.
Hypothesis 6b: The more satisfied a physician is with the EHR, the more liberty he or she has in decision making when providing care.
Hypothesis 7b: The more satisfied a physician is with the EHR, the more easily he or she refers patients or receives referrals.
Hypothesis 8b: The more satisfied a physician is with the EHR, the better the quality of care he or she delivers.
Hypothesis 9b: The more satisfied a physician is with the EHR, the better relationships he or she has with patients.
Hypothesis 10b: The more satisfied a physician is with the EHR, the smaller the error rate he or she achieves.
Hypothesis 11b: The more satisfied a physician is with the EHR, the higher the productivity he or she achieves.
Hypothesis 12b: The more satisfied a physician is with the EHR, the less his or her personal time is interrupted by work.
Hypothesis 13b: The more satisfied a physician is with the EHR, the more fairly he or she perceives the compensation package.
Hypothesis 14b: The more satisfied a physician is with the EHR, the less administrative work he or she has to perform.
Hypothesis 15b: The more satisfied a physician is with the EHR, the more satisfied he or she is with work.
Hypothesis 16b: The more satisfied a physician is with the EHR, the more satisfied he or she is with the practice.
Secondary research questions:
(4) How are physicians' work impact measures affected by gender, years in profession, general computer experience, previous EHR experience, and other covariates?
4.2 Model Specification
Figure 3 Research Model (with Use) for Structural Equation Model Evaluation

[Figure: path diagram. The user evaluations of information quality (accuracy, completeness, timeliness, accessibility), system quality (usability, flexibility, integration, response time), and service quality (senior management efficacy, workflow reengineering, IT support service), together with social influence, connect to intention to use and use (paths H1a-H5a); use connects to the physician work impact measures: autonomy, referral, quality of care, patient relationship, error rate, productivity, time, compensation, administrative work, satisfaction towards work, and satisfaction towards practice (paths H6a-H16a). Legend: shaded ovals are main constructs (latent variables), plain ovals are constructs, and squares are measurement items (question statements), e.g., ACC1-3, COM1-3, TIM1-3, ACS1-3, USE1-10, STR1-6, WOR1-3, ITS1-3, SN1-3.]

Figure 4 Research Model (with User Satisfaction) for Structural Equation Model Evaluation

[Figure: the same diagram with user satisfaction in place of use and intention to use: the three quality evaluations connect to user satisfaction (paths H1b-H3b), and user satisfaction connects to the same work impact measures (paths H6b-H16b).]
Similar to figures 1 and 2, depending on which intermediate measure is chosen, usage or user satisfaction, either figure 3 or figure 4 will be accepted as the final research model. Following our EHR Success Model, both figures exhibit the constructs, the measures, and the hypotheses, which are labeled on the corresponding paths.
As labeled, all oval shapes are constructs, or latent variables, in our research model, and the shaded ovals are the main constructs explained earlier in our EHR Success Model. Each numbered label in a square corresponds to a question in our survey; for example, ACC1 is the first question designed for the accuracy attribute of the user evaluation of information quality construct.
The paths with hypothesis indices above the arrows represent the relationships of interest and will be evaluated for significance. Table 11 lists the research hypotheses and the expected signs of the corresponding path coefficients. The expected sign is also affected by the question wording; for example, H9b is tested by the physician-patient relationship question in the survey, which is phrased as "my relationship with patients is more adversarial than it used to be", so its expected sign is negative.
Table 11 List of Research Hypotheses and Expected Signs of Path Coefficients

Hypothesis (use model)   Expected sign   Hypothesis (satisfaction model)   Expected sign
H1a                      Positive        H1b                               Positive
H2a                      Positive        H2b                               Positive
H3a                      Positive        H3b                               Positive
H4a                      Negative        -                                 -
H5a                      Positive        -                                 -
H6a                      *               H6b                               Negative
H7a                      *               H7b                               Positive
H8a                      *               H8b                               Positive
H9a                      *               H9b                               Negative
H10a                     *               H10b                              Positive
H11a                     *               H11b                              Positive
H12a                     *               H12b                              Negative
H13a                     *               H13b                              Negative
H14a                     *               H14b                              Negative
H15a                     *               H15b                              Positive
H16a                     *               H16b                              Positive

* The expected sign depends on the specific work impact measure and on whether physicians are satisfied with the EHR.
4.3 Research Design
In this research, a survey methodology is used to collect data. Questionnaires allow us to design specific questions according to the research objectives and the research model. Surveys also overcome geographical limitations and give participants enough time to consider the questions; because of their cost-effectiveness, surveys are generally considered more effective and efficient than field observations and interviews [118].
4.3.1 Survey Instrument
The survey consists of four parts. The first part covers respondents' demographic information, their organizations, their experience using EHR, and their general computer experience. In the remaining parts, respondents are asked to rate the EHR system they are currently using, the implementation support, and the work impact measures on a five-point Likert scale.
The Likert scale, the most commonly used measure in scale design, is used in this study as well. The five-point scale is the most reliable and the most widely recommended [119]; a three-point scale does not capture participants' attitudes as accurately, while a seven-point scale makes it hard for respondents to distinguish between adjacent levels [120]. Therefore, this study uses a five-point Likert scale.
For all survey questions, physicians choose the option that best describes their level of agreement: disagree, somewhat disagree, neither, somewhat agree, and agree. We also include don't know as an option, since physicians may not have encountered every scenario described in the questions.
In the first part of our questionnaire, we include general demographic questions covering factors the literature review identified as potentially affecting EHR adoption: gender, specialty, years in profession, practice size, brand of EHR in use, experience and skill in using EHR, and the extent to which the EHR helps respondents achieve meaningful use incentives.
In the second part, since the EHR Success Model was built on the IS Success Model, we inherited the same constructs to build our questionnaire: information quality, system quality, and service quality. For each construct, the attributes were developed using Bailey & Pearson's instrument as a theoretical reference; for each attribute, the measures (questions) were designed based on an intensive literature review of healthcare information system design, implementation, and physicians' adoption and satisfaction, most of which is cited in systematic reviews of the IS Success Model in the healthcare field [59] [121] [122] [5].
The third part solicits information about how frequently physicians use the EHR, how much they plan to continue using it, their satisfaction with the EHR, and their perceived freedom in deciding whether to use it. We used scales and measures from UTAUT, with minor adaptations to fit our study, to assess respondents' intention to use EHR in the near future [45]. The question measuring current EHR usage uses a scale of 10% or less, 10-30%, 30-50%, 50-70%, 70-90%, and 90% or more of total work time. To assess perceived voluntariness, we designed a question asking how much freedom physicians feel they have when considering whether to use the EHR.
For the last part, which assesses the impacts EHR brings to physicians' work, most questions are adopted from a previously validated survey instrument [50]. The questions were selected based on (1) factor-analysis item loadings greater than 0.68, and (2) discussion of candidate questions with one resident and two other healthcare researchers.
We also include two open-ended questions that allow physicians to leave comments without specific guidance: (1) What are the most important factors facilitating EHR adoption? (2) What are the barriers in the process of EHR adoption? The purpose of the open-ended questions is twofold: (1) to verify that the important factors regarding EHR design and organizational support are included in the questionnaire, and (2) to learn which facilitators of and barriers to EHR adoption physicians consider most important. The entire survey is shown in Appendix A.
4.3.2 Survey Review and Pretest
First, three research faculty members, three physicians, and three healthcare researchers separately reviewed the complete version of the survey, checking the following components: wording of questions, structure of questions, response scale, order of questions, instructions for administering the questionnaire, and the navigational rules of the questionnaire.
Based on the feedback, we made the following changes: (1) a slightly different layout of the questions, following the order system quality, information quality, service quality, since it is more natural to guide respondents from usability questions to questions about the information output and then to questions about organizational support; (2) some wording adaptations, for example, in the work satisfaction questions drawn from Konrad's survey [50], we substituted "physician peers" for "physician colleagues", because most physicians now work with medical assistants as a team and interpret "colleagues" as the medical assistants they work with; and (3) improved question structure so that each question represents only one item of interest.
Before administering the survey, we conducted a pretest with five subjects: two residents (one in family medicine and one in internal medicine), a master's student in public policy, a physician in family medicine, and a physician in obstetrics and gynecology, each of whom completed the first version of the survey. We checked for items with high rates of missing data, inconsistencies with other questions, or little variance, and found none.
4.3.3 Sample
In our study, eligible respondents were required to (1) have experience using an EHR (or EMR)¹ and (2) work in a mid-sized or larger organization, clinic, or hospital with a dedicated department in charge of EHR implementation for the whole organization.
We decided to target residents as respondents for the following reasons: (1) residents are the main EHR users in hospitals, since many teaching faculty assign EHR-related work to residents, and current residency programs focus on preparing residents to be savvy EHR users [123]; (2) residents will be an important part of the EHR user group in the coming years, after they become certified physicians; (3) when physicians encounter difficulties using an EHR, they tend to seek help from residents, even more frequently than from support personnel [124]; (4) the similarity of respondents' backgrounds helps eliminate background as a source of variability in the dependent constructs, which the diversity of respondents' backgrounds could otherwise generate, allowing more meaningful results about the relationships among system design, organizational support, utilization, and the impact on physicians' work; and (5) as the youngest population among professional healthcare providers, residents have been exposed to the internet since childhood and are more comfortable with computers, which rules out lack of prior computer experience, a common barrier among senior physicians, as a reason to reject the EHR.
¹ HIMSS defines an Electronic Medical Record (EMR) as a component of an EHR that is owned by the healthcare provider, whereas an EHR can be accessed by both providers and patients. Since this study focuses on physicians rather than patients, physicians with EMR experience are also eligible.
Therefore, we searched graduate medical education programs on the American Medical Association (AMA) website, setting the search criteria to all specialties, including dermatology, emergency medicine, family medicine, internal medicine, neurology, obstetrics and gynecology, ophthalmology, otolaryngology, pediatrics, physical medicine and rehabilitation, preventive medicine, psychiatry, and surgery. The search returned 211 residency programs in California.
4.3.4 Survey Administration
The survey was administered online; survey data were collected and managed using REDCap electronic data capture tools hosted at USC [125]. We called or emailed program directors or coordinators to ask whether their residents were using an EHR system. If the answer was positive, we explained the purpose of our study and asked for their assistance in distributing the survey to the residents in their programs. An email with an introduction to the study and the online survey link was sent to the directors or coordinators who agreed to distribute the survey to all of their residents. Residents completed the survey online anonymously, and the survey results were available through REDCap.
If the response rate was not satisfactory one week after the first email, another email was sent to the program director or coordinator, asking him or her to send a follow-up email encouraging the residents to fill out the online survey. Since all respondents completed the survey anonymously, we could not track the response rate of any specific program; therefore, the follow-up email was sent to all of the program directors or coordinators.
4.4 Data Analysis
All analyses were conducted in Stata v.12 [126] and R [127].
4.4.1 Reliability Analysis
To test the consistency of the multiple-item instruments designed for each attribute in the three quality constructs, the reliability of all multi-item attributes is tested using Cronbach's alpha. The attributes and their measures are listed in the following table:
Table 12 Attributes with Multiple Measures

Construct                                  Attribute                          Measures
User evaluation of information quality     ACC (Accuracy)                     ACC1, ACC2, ACC3
                                           COM (Completeness)                 COM1, COM2, COM3
                                           TIM (Timeliness)                   TIM1, TIM2, TIM3
                                           ACS (Accessibility)                ACS1, ACS2, ACS3
User evaluation of system quality          USE (Usability)                    USE1-USE10
User evaluation of service support quality STR (Senior management strategy)   STR1-STR6
                                           WOR (Workflow reengineering)       WOR1, WOR2, WOR3
                                           ITS (IT support service)           ITS1, ITS2, ITS3
Moreover, the reliability of all items designed for each quality construct will also be tested; for example, the Cronbach's alpha of all items designated for the information quality construct (ACC1-ACC3, COM1-COM3, TIM1-TIM3, and ACS1-ACS3) will be calculated. Similarly, reliability will be tested for the system quality and service support quality constructs.
Criterion: most alphas should meet Nunnally's minimal requirement (>0.7) [128].
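The alpha computation behind this criterion can be sketched in a few lines. The following is a minimal numpy implementation of the standard Cronbach's alpha formula, shown with fabricated illustrative responses (not study data); the actual analysis was run in Stata and R as noted in section 4.4.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return k / (k - 1) * (1 - item_variances / total_variance)

# Fabricated illustrative responses (5 respondents x 3 items), not study data.
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 2, 3],
                   [4, 4, 4]], dtype=float)
print(round(cronbach_alpha(scores), 2))  # 0.94
```

An attribute whose alpha falls below the 0.7 threshold, as accessibility does later in chapter 5, signals weak internal consistency among its items.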
4.4.2 Construct Validity Analysis
Construct validity will be tested by performing factor analysis. Factor analysis is one of the most widely used multivariate analysis techniques; it aims to interpret patterns in the data, or to reduce a large number of items to a manageable number of underlying factors, according to the correlations among the items and factors.
First, confirmatory factor analysis will be conducted on all 37 measures (questions) and the three latent quality constructs to calculate the factor loading of each measure on its designated construct. Every loading should have an acceptable magnitude on its hypothesized construct without meaningful cross-loadings. For example, the factor loading of ACC1 on the information quality construct should exceed a cutoff value, 0.7 in our study; otherwise, ACC1 will be deleted from further analysis.
Second, after deleting poorly loaded measures, if the number of retained measures in any quality construct is too large (more than five), we need to reduce the number of items so that the structural equation modeling in the subsequent analysis converges. To reduce the number of measures while keeping most of the information they contain, we will perform a rotation to find and group the most similar measures within each quality construct; the new assignment of measures to groups may also imply a change in the meanings of the attributes. The average score of the measures in each newly generated group will be used in further analysis.
Criteria: in the confirmatory factor analysis, the loading of each measure should be at least 0.55, the acceptable level suggested by Falk and Miller [129]; since all items except one had factor loadings above 0.55 in our study, we chose 0.7 as the cutoff value. In the rotation, a measure with no factor loading above 0.4 on the rotated factors will be deleted from further analysis.
4.4.3 Multiple Regression Analysis
Multiple linear regression is a widely used approach for assessing the relationship between a dependent outcome and a group of independent variables; it indicates whether any particular independent variable contributes to explaining the variance in the dependent variable.
In our study, we plan to use multiple regression analysis to (1) test whether any of the paths of interest in the EHR Success Model is significant; for example, for the path from system use to work impact, we use the frequency of EHR usage as the independent variable and one work impact measure, such as productivity, as the dependent variable, to test whether EHR usage affects that specific work impact criterion; and (2) test whether the frequency of EHR use differs significantly among physicians from diverse specialties and with different general computer and EHR experience. Only the paths shown to be significant will be included in the subsequent structural equation model evaluation.
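As a sketch of the path test just described, an ordinary least squares fit can be computed with a plain least-squares solve. The variable names and data are illustrative only; the actual analysis uses Stata and R as stated in section 4.4.

```python
import numpy as np

def ols(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Fit y = b0 + b1*x1 + ... by least squares; returns [b0, b1, ...]."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Illustrative data only: a hypothetical work-impact score constructed to be
# exactly 2 + 3 * usage_frequency, so the fit should recover those values.
usage = np.array([[0.0], [1.0], [2.0], [3.0]])
impact = np.array([2.0, 5.0, 8.0, 11.0])
b0, b1 = ols(impact, usage)
print(round(b0, 2), round(b1, 2))  # 2.0 3.0
```

In the real analysis the slope's p-value, not just its magnitude, decides whether the path survives into the structural equation model.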
4.4.4 Structural Equation Model Evaluation
Structural equation modeling (SEM) will be used to investigate the relationships among the constructs and to find the significant paths in our model. SEM has been widely used to assess complicated relationships among multiple exogenous (independent) and endogenous (dependent) variables; it can measure the reliability and validity of constructs while simultaneously evaluating the relationships among them [130].
For each path with a hypothesis index in figures 3 and 4, three analytical results will be reported: the path coefficient (β), the significance level (p-value), and the variance explained (1 - residual). The path coefficient and p-value show the magnitude and significance of the effect of the independent variable on the dependent variable; 1 - residual is a strong indicator of explanatory power, showing how much variance in the predicted construct is explained by the construct at the start of the path.
Regarding the test of fit, we will first check whether the signs of the path coefficients accord with the proposed signs. Then, to validate and assess the overall fit of the structural model, we choose indicators that compare the covariance matrix estimated by the model to the covariance matrix of the observed data: the more accurately the model explains the relationships among the variables, the smaller the divergence between the estimated and observed covariance matrices. The chi-square goodness of fit, RMSEA (Root Mean Square Error of Approximation), and CFI (Comparative Fit Index) are chosen for our study. A smaller chi-square, a smaller RMSEA, and a larger CFI indicate that the model fits the data better; an RMSEA of at most 0.1 and a CFI of at least 0.9 are acceptable for a well-fitting model.
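The RMSEA can be computed directly from the model chi-square, its degrees of freedom, and the sample size. The following is a minimal sketch of the standard point-estimate formula, with hypothetical input values; it is not tied to any particular SEM package.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of RMSEA from the model chi-square, degrees of
    freedom, and sample size; clamped at zero for well-fitting models."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical values for illustration (not our study's results):
# chi-square = 100 with df = 50 and n = 201 respondents.
print(round(rmsea(100.0, 50, 201), 3))  # 0.071
print(rmsea(100.0, 50, 201) <= 0.1)     # True: within the 0.1 threshold
```

Note that the chi-square's excess over its degrees of freedom drives the index, which is why a model can have a significant chi-square yet still an acceptable RMSEA in larger samples.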
Moreover, the residual of the outcome variable can serve as another indicator of overall model fit: a smaller residual indicates that more variance is explained and hence the model has more explanatory power.
Chapter 5 Results
This chapter presents the process of validating the EHR Success Model. First, a summary of the survey responses is given. Second, the model validation starts by testing the consistency and structure of all independent variables in the three quality constructs; the intermediate variables, usage and user satisfaction with the EHR, are then included and analyzed. Finally, the outcome variables, the work impact measures, are included to test the statistical explanatory power of the complete model, for which structural equation modeling and multiple regression analysis are used and compared.
5.1 Survey Respondents’ Characteristics
In total, we sent an invitation email containing the online survey link to the coordinators of 197 residency programs in California; participants came from 42 programs. Of the 229 residents who opened the survey link, 219 completed the survey, giving us 219 valid and complete responses for further analysis.
Of the respondents, 49.3% were male. A majority came from primary care specialties such as family medicine, pediatrics, and internal medicine (44.1%), but other specialties were also well represented. First-, second-, and third-year residents each accounted for roughly 28-29% of the sample. Almost half of the respondents worked in large practices with more than 50 staff.
Approximately 40% of the respondents had used their current EHR system for over two years, while only 12.5% had used it for less than six months. More than 80% rated themselves as average or competent users on a scale from novice to expert. Roughly equal numbers of respondents had used one, two, three, or four or more different EHR systems. Regarding the currently used EHR system, approximately 60% and 20% of respondents were using Epic and Cerner, respectively, and the rest were using other EHR systems.
Table 13 Survey Respondents' Characteristics

Characteristic                   N      %
Sex
  Male                           108    49.3%
  Female                         111    50.7%
Specialty
  Family Medicine                44     20.4%
  Anesthesiology                 36     16.7%
  Emergency Medicine             34     15.7%
  Pediatrics                     28     13.0%
  Internal Medicine              23     10.7%
  Obstetrics and Gynecology      14     6.5%
  Orthopedic Surgery             12     5.6%
  Other*                         25     11.4%
Years in residency program
  1st year                       62     28.3%
  2nd year                       61     27.9%
  3rd year                       63     28.8%
  4th year                       23     10.5%
  5th year                       8      3.6%
  Longer than 5 years            2      0.9%
Size of practice
  1-5                            4      1.8%
  6-10                           9      4.2%
  11-30                          62     28.7%
  31-50                          42     19.4%
  >50                            99     45.8%
Length of using current EHR
  0-3 months                     16     7.4%
  3-6 months                     11     5.1%
  6 months - 1 year              63     29.0%
  1-2 years                      42     19.4%
  >2 years                       85     39.1%
Skill in EHR use
  Novice                         10     4.6%
  Advanced beginner              13     5.9%
  Average user                   81     37.0%
  Competent                      99     45.2%
  Expert                         16     7.3%
Number of EHRs used so far
  1                              39     17.8%
  2                              82     37.5%
  3                              52     23.7%
  4 or more                      46     21.0%
Currently used EHR system
  Epic                           128    58.4%
  Cerner                         36     16.4%
  Other**                        55     25.1%

*Including Dermatology, Neurology, Ophthalmology, Psychiatry, Public Health, Radiation Oncology, Radiology-Diagnostic, Surgery, Urology.
**Including Wellsoft, Docusys, Centricity, Nextgen, ECW, Affinity, AHLTA, Quest, Sorian, Essentris, Allscripts, KIDS, Meditech, Innovation and Nextel.
5.2 Factor Model of the Three Quality Constructs
5.2.1 Reliability of Measures of Factors within Constructs
Since reliability concerns the extent to which measurements are repeatable, that is, whether the measurements reflect a high true score and a relatively small random component, it is commonly used to evaluate the internal consistency of a multiple-item measure [131]. The reliability of a multiple-item measure is typically estimated with Cronbach's alpha, which assumes parallel measures: the score of the ith measure in a set of parallel measures consists of the common true score shared by all measures plus an error term specific to the ith measure. Novick and Lewis [132] showed that reliability is at least as high as Cronbach's alpha suggests; it is therefore reasonable to use alpha as an indicator of the instrument's reliability in our research.
In our research model, we used multiple-item measures both for the dimensions (attributes) within constructs and for the three quality constructs themselves. The information quality construct has four attributes: accuracy (ACC), completeness (COM), timeliness (TIM), and accessibility (ACS). The system quality construct has one such attribute, usability (USE). The service quality construct has three attributes: senior management strategy (STR), workflow reengineering (WOR), and IT support service (ITS). The results for these attributes are presented in the following table, along with the number of measures, Cronbach's alpha, and the mean and standard deviation of each attribute's average score. Since a reliability coefficient of 0.7 is considered acceptable in most social science research, all attributes achieved high reliability except accessibility, which does not deviate far from the acceptable threshold and is therefore retained in the model for now.
Table 14 Attributes' Reliability
Attribute                    Number of Items   Cronbach's alpha   Mean   Standard Deviation
Usability                    8                 0.92               3.76   1.03
Accuracy                     3                 0.72               4.29   0.83
Completeness                 3                 0.72               3.87   0.97
Timeliness                   3                 0.72               4.06   1.07
Accessibility                2                 0.59               3.95   1.27
Senior management strategy   5                 0.94               4.03   1.09
Workflow reengineering       3                 0.87               3.62   1.20
IT support service           3                 0.86               3.86   1.15
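For reference, Cronbach's alpha can be computed directly from item-level scores. A minimal numpy sketch follows; the Likert responses below are hypothetical illustrative data, not survey data from this study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical 1-5 Likert responses of five physicians to a 3-item scale
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 3]]
print(round(cronbach_alpha(scores), 2))  # 0.91
```

Alpha rises when the items covary strongly relative to their individual variances, which is why it serves as an internal-consistency index.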
For the three constructs, the reliability indicators are listed in the following table; all three quality constructs have satisfactory values of Cronbach's alpha. Therefore, we keep all of the measured independent variables for further analysis.
Table 15 Constructs' Reliability
Constructs Number of Items Cronbach’s alpha
System quality 11 0.93
Information quality 11 0.88
Service quality 12 0.95
5.2.2 Confirmatory Factor Analysis of Three Quality Constructs
Confirmatory factor analysis is a special form of factor analysis used to test whether the measures of constructs are consistent with the researchers' hypotheses and how well the data fit the hypothesized factor model, which in our study is the structure relating the measures to the three quality constructs.
In our research model for structural equation model evaluation (figures 2.a and 2.b in chapter 4.2), each quality construct is a latent factor measured by multiple variables: the information quality construct has 11 measured variables in total, and the system quality and service quality constructs have 11 and 12 items, respectively.
The loading of each measured variable on its designated construct is recorded in the following table. Except for one item in the service quality construct, culture3, which has a low factor loading of 0.37, all items load acceptably on the constructs they were designed for.
Table 16 Confirmatory Factor Analysis Factor Loadings of All Measured Variables
Quality of System          Quality of Information     Quality of Service
Usability 1       0.82     Accuracy 1        0.62     Senior management  0.86
Usability 2       0.81     Accuracy 2        0.64     Involvement        0.86
Usability 3       0.78     Accuracy 3        0.75     Communication      0.91
Usability 4       0.66     Completeness 1    0.70     Culture 1          0.64
Usability 5       0.76     Completeness 2    0.60     Culture 2          0.71
Usability 6       0.74     Completeness 3    0.65     Culture 3          0.37
Usability 7       0.67     Timeliness 1      0.59     Workflow 1         0.70
Usability 8       0.81     Timeliness 2      0.63     Workflow 2         0.79
Interoperability  0.60     Timeliness 3      0.62     Workflow 3         0.76
Customization     0.59     Accessibility 1   0.58     Training           0.74
Respond time      0.73     Accessibility 2   0.52     Support            0.71
                                                      Feedback           0.78
Correlations among the three quality constructs:
System and Information: 0.88
System and Service: 0.85
Service and Information: 0.85
Maintained measured independent variables in each quality construct:
Quality of System: Usability 1, Usability 2, Usability 3, Usability 5, Usability 6, Usability 8, Respond time
Quality of Information: Accuracy 3, Completeness 1
Quality of Service: Senior management, Involvement, Communication, Culture 2, Workflow 2, Workflow 3, Training, Support, Feedback
To simplify the model while maintaining its explanatory power, some of the measured variables were eliminated using an acceptable factor-loading threshold of 0.7 [133]; the items retained for further analysis are listed at the bottom of table 16.
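The elimination rule amounts to a simple filter on the loadings. As a sketch using the information-quality loadings from table 16:

```python
# factor loadings for the information quality construct (Table 16)
loadings = {
    "accuracy_1": 0.62, "accuracy_2": 0.64, "accuracy_3": 0.75,
    "completeness_1": 0.70, "completeness_2": 0.60, "completeness_3": 0.65,
    "timeliness_1": 0.59, "timeliness_2": 0.63, "timeliness_3": 0.62,
    "accessibility_1": 0.58, "accessibility_2": 0.52,
}

# keep only items whose loading meets the 0.7 threshold
kept = {item: load for item, load in loadings.items() if load >= 0.70}
print(sorted(kept))  # ['accuracy_3', 'completeness_1']
```

The two surviving items match the variables retained for the information quality construct in the text.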
5.2.3 Oblique Rotation of Measures within Quality Constructs
According to table 16, 7, 2 and 9 measured variables are maintained in the system, information and service quality constructs, respectively. The system and service quality constructs contain too many variables to obtain valid results from structural equation modeling evaluation; therefore, to reduce the number of measured variables in these two constructs while maintaining most of the variance, similar variables are aggregated, and a principal factor analysis with promax rotation is used.
Rotation is used to obtain a factor-loading solution with simple structure, grouping variables by their similarities in the most easily interpretable way [134]. Varimax (orthogonal) and promax (oblique) rotations are the most frequently used types: varimax assumes the latent factors are independent of each other, while promax allows correlations among them. Because the underlying factors, the attributes within each quality construct in our model, are correlated with each other, promax rotation is the more reasonable choice. Tables 17 and 18 report the factor loadings of all maintained measured variables in the system and service quality constructs after promax rotation; only loadings larger than 0.4 are displayed. Since only two items are kept in the information quality construct, no further item reduction is necessary for it.
We named the attributes after rotation according to the measured variables loading on each attribute. For example, usability 1, usability 2 and usability 5 load on the first attribute in the system construct; based on what the three items have in common, this attribute is named cognitive load, as mental effort is an important measurement in usability evaluation metrics [77]. Similarly, usability 6 and usability 8 both test how well the EHR system fits into physicians' workflow, so the second attribute is named efficiency, standing for the efficiency with which physicians perform tasks. The third attribute is named consistency, since its two items, usability 3 and respond time, measure the consistency of color use in the system design and the response time of the EHR system.
Table 17 Factor Loadings in System Quality Construct after Promax Rotation
Variable Cognitive load Efficiency Consistency
Usability 1 0.55
Usability 2 0.53
Usability 5 0.60
Usability 6 0.56
Usability 8 0.44
Usability 3 0.45
Respond time 0.45
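The grouping step described above assigns each variable to the attribute on which it loads (above the 0.4 display threshold). With the Table 17 loadings:

```python
# promax-rotated loadings from Table 17; only loadings above 0.4 are reported
loadings = [
    ("usability_1", "cognitive_load", 0.55),
    ("usability_2", "cognitive_load", 0.53),
    ("usability_5", "cognitive_load", 0.60),
    ("usability_6", "efficiency", 0.56),
    ("usability_8", "efficiency", 0.44),
    ("usability_3", "consistency", 0.45),
    ("respond_time", "consistency", 0.45),
]

# group variables by the attribute they load on
groups = {}
for variable, attribute, loading in loadings:
    groups.setdefault(attribute, []).append(variable)

print(groups["cognitive_load"])  # ['usability_1', 'usability_2', 'usability_5']
```

The resulting groups are exactly the item sets that are averaged into attribute scores in the next step.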
The same naming strategy is used for the new attributes in the service quality construct. The first attribute is named governance, evaluating the extent of top management involvement and the prioritization of the EHR implementation in the organization. The second attribute is named preparation for the implementation, evaluating how much workflow-redesign effort was made and how much training was provided in the organization. The third and fourth attributes are each loaded on by a single item and are named after those items as proper timeline and support, respectively.
Table 18 Factor Loadings in Service Quality Construct after Promax Rotation
Variable Governance Preparation Timeline Support
Senior mgmt 0.43
Involvement 0.76
Communication 0.67
Workflow_3 0.63
Training 0.65
Culture_2 0.41
Support 0.65
Workflow_2*
Feedback*
*None of the factor loadings of workflow_2 and feedback are larger than 0.4; therefore no loadings are displayed for these two items, and both are eliminated from further analysis.
Therefore, in the system quality construct, the average score of usability_1, usability_2 and usability_5 is used as the observed score of the cognitive load attribute; similarly, the average of usability_6 and usability_8 and the average of usability_3 and respond time are used as the observed scores of the efficiency and consistency attributes. In the service quality construct, the average score of senior management, involvement and communication and the average score of workflow_3 and training are used as the observed scores of the governance and preparation attributes. Although culture_2 and support are not composite scores, they are renamed timeline and support for interpretability. In the information quality construct, the two maintained variables, accuracy_3 and completeness_1, are named accuracy and completeness.
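The averaging step for the system quality attributes can be sketched with pandas; the item responses below are hypothetical, for illustration only:

```python
import pandas as pd

# hypothetical 1-5 Likert item responses for three physicians (illustrative only)
df = pd.DataFrame({
    "usability_1": [4, 3, 5], "usability_2": [4, 4, 5], "usability_5": [3, 3, 4],
    "usability_6": [4, 2, 5], "usability_8": [5, 3, 4],
    "usability_3": [4, 4, 4], "respond_time": [3, 2, 5],
})

# observed attribute scores = mean of the items retained for each attribute
attributes = pd.DataFrame({
    "cognitive_load": df[["usability_1", "usability_2", "usability_5"]].mean(axis=1),
    "efficiency":     df[["usability_6", "usability_8"]].mean(axis=1),
    "consistency":    df[["usability_3", "respond_time"]].mean(axis=1),
})
print(attributes.round(2))
```

Each column of `attributes` is then treated as a single observed variable in the subsequent confirmatory factor analysis.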
By aggregating (averaging, in this study) the most similar measured variables, we reduced the number of items in each quality construct; with the new observed scores, confirmatory factor analysis is performed again, and table 19 shows the result.
Table 19 Factor Loadings of New Observed Scores
Quality of System         Quality of Information     Quality of Service
Cognitive load   0.86     Accuracy       0.83        Governance    0.92
Efficiency       0.85     Completeness   0.73        Preparation   0.82
Consistency      0.86                                Timeline      0.69
                                                     Support       0.76
Correlations among constructs:
System and Information: 0.80
System and Service: 0.82
Service and Information: 0.77
Maintained measured variables for observed score calculation:
Quality of System: Usability 1, Usability 2, Usability 3, Usability 5, Usability 6, Usability 8, Respond time
Quality of Information: Accuracy 3, Completeness 1
Quality of Service: Senior management, Involvement, Communication, Culture 2, Workflow 3, Training, Support
According to the factor loadings, all of the items load satisfactorily on the constructs they were designed for.
The correlation coefficients among the three quality constructs remain high (0.80, 0.82 and 0.77), which suggests a possible common underlying factor affecting all three quality constructs; therefore, other potentially valid models should be considered, as discussed in section 5.2.4.
5.2.4 Factor Model Exploration and Validation
The confirmatory factor analysis in section 5.2.3 reveals that the three quality constructs are highly correlated; we therefore suspect that a single common construct may underlie all 9 observed scores across the three quality constructs, so more than one factor model could validly describe the relationships between the measures and their underlying constructs. We compare the fit indices of the model with a single latent construct (named Model 1, M1 for short), the model with three constructs and no common underlying factor (Model 2, M2), and the model with three constructs plus a common underlying factor for the three constructs (Model 3, M3); the fit indices of M1, M2 and M3 are shown in table 20.
We chose the chi-square goodness of fit, RMSEA (Root Mean Square Error of Approximation) and CFI (Comparative Fit Index) as our model fit indices, which are the most commonly used fit indices in structural equation modeling [135]. All of these indices measure how close the covariance matrix estimated by the model is to the covariance matrix generated by the real data; a smaller chi-square, a smaller RMSEA and a bigger CFI indicate that the model fits the data better, and an RMSEA of 0.1 or below and a CFI of 0.9 or above are acceptable for models with good fit.
Comparing the three models, the values of the fit indices are listed in table 20. As it shows, without any work impact measurements, M2 and M3 have the same degrees of freedom and hence the same fit indices, but M1 has poorer fit indices. When comparing M1 and M2, the difference in chi-square is 83.14 with three degrees of freedom, which is statistically significant; that is, the improvement in fit from M1 to M2 (or M3) is significant at the 0.000 level.
Table 20 Fit Indices of the Three Potential Valid Research Models
Tested Model   Chi-square   Degrees of freedom   RMSEA   CFI
M1             128.06       27                   0.134   0.907
M2             44.92        24                   0.065   0.981
M3             44.92        24                   0.065   0.981
Comparison between M1 and M2: chi-square difference 83.14, with 3 degrees of freedom
Moreover, when the average scores of the three quality constructs were used to predict overall satisfaction with the EHR in a regression analysis (table 22), the impacts of the three quality constructs differ: the system quality has a stronger impact, while the service quality has an impact similar to that of the information quality.
Thus, we still keep the three constructs in the model and compare the explanatory power of M2 and M3 when including the work impact measures as the dependent variable in the model.
power of M2 and M3 when including the work impact measures as th
variable in the model.
between M1 and M2
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
variable in the model.
between M1 and M2
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
between M1 and M2
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
between M1 and M2
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
between M1 and M2
Table 20
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
between M1 and M2
Table 20
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Table 20
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Degree of Freedom
3
Table 20
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Degree of Freedom
Table 20 (Continued
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Degree of Freedom
Continued
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Degree of Freedom
Continued
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
44.92
Degree of Freedom
Continued
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
44.92
Degree of Freedom
Continued)
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
44.92
Degree of Freedom
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
24
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
24
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
P
0.000
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as th
P-value
0.000
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
power of M2 and M3 when including the work impact measures as the final outcome
0.065
value
0.000
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
0.065
value
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
0.065
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
shown in table 22, the impacts of the three quality constructs are different; the system
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
0.9
81
77
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
system
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
0.9
81
77
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
system
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
0.9
Moreover, when the average scores of the three quality constructs were used to predict
the overall satisfaction towards EHR in regression analysis, as the analysis result is
system
quality has stronger impact while the service quality has similar impact as information
Thus, we will still keep the three constructs in the model and compare the explanatory
e final outcome
78
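The chi-square difference (likelihood-ratio) test behind the M1-versus-M2 comparison can be checked with a few lines; as a minimal sketch, the values are taken from the comparison reported above, and 7.815 is the standard tabulated 0.05 critical value for a chi-square distribution with 3 degrees of freedom:

```python
# Likelihood-ratio (chi-square difference) test for nested models.
delta_chi2 = 83.14   # chi-square difference between M1 and M2 (from the text)
delta_df = 3         # difference in degrees of freedom (from the text)
CHI2_CRIT_3DF_05 = 7.815  # tabulated 0.05 critical value for chi-square, 3 df

improvement_significant = delta_chi2 > CHI2_CRIT_3DF_05
print(improvement_significant)
# -> True
```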
5.3 Residents’ Usage and Satisfaction with EHR System
Approximately 75% of respondents reported that they were using EHR for 90% or more
of their time at work, which could be explained by the fact that 96.2% of the
respondents claimed they did not have complete freedom to choose whether or not to
use EHR in their organizations. The summary of residents’ EHR usage is listed in the
following table:
Table 21 Usage of EHR
Usage of EHR at work N (%)
10% or less of entire work time 5 (2.3%)
10% - 30% 9 (4.2%)
30% - 50% 7 (3.2%)
50% - 70% 9 (4.2%)
70% - 90% 27 (12.5%)
90% or more 159 (73.6%)
The residents’ current usage corresponds to their answers regarding how much they
plan to use EHR in the future: more than 84% of the residents “agree” that, whenever
possible, they intend to use EHR in patient care and management in the future.
However, residents’ satisfaction levels towards EHR vary, although almost half of
them are satisfied with the system. The following table summarizes residents’ EHR
satisfaction levels:
Table 22 User Satisfactions towards EHR
Satisfaction level N (%)
Dissatisfied 16 (7.3%)
Somewhat dissatisfied 30 (13.8%)
Neither 15 (6.9%)
Somewhat satisfied 54 (24.8%)
Satisfied 103 (47.2%)
Therefore, since most of the respondents faced constraints in their organizations in
terms of utilizing EHR systems and there is no considerable variation in the usage,
EHR usage will not be employed as a measurement of system success in our model, nor
used as an intermediate variable explaining the linkage between the independent
measured items and the work impact measures. Instead, user satisfaction towards EHR
is considered the intermediate variable between the work impact measures, as the
outcome variables, and the independent variables, which are the evaluations of
system functions and organizational support for implementation.
5.4 Multiple Regression Analysis
Linear regression analysis is used to identify which variables significantly
contribute to the dependent variables, including both the intermediate and outcome
variables, which in our model are EHR usage, EHR satisfaction and all the work
impact measures. In order to use variables describing respondents’ background, such
as specialty, practice and the type of EHR system in use, which were collected as
categorical variables with more than 20 categories, we redefined the residents’
specialty into two categories (primary care and special care), the residents’
practice into two categories (academic facility and non-academic facility), and the
residents’ currently used EHR system into three categories (Epic, Cerner and other
EHR systems). Since all three new variables are categorical, in order to use them in
the linear regression analysis, we created binary variables: specID, pracID, ehrID2
and ehrID3. When the resident is in primary care, specID = 0, otherwise specID = 1;
when the resident is in an academic practice, pracID = 0, otherwise pracID = 1. Epic
serves as the reference category for the EHR system: when the resident uses Epic,
ehrID2 = 0 and ehrID3 = 0; if the resident uses Cerner, ehrID2 = 1 and ehrID3 = 0;
and if the resident uses another system, ehrID2 = 0 and ehrID3 = 1.
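As a minimal illustration of this coding scheme (the records and field values below are hypothetical examples, not survey data), the dummy variables can be derived as follows:

```python
# Sketch of the dummy coding described above, using plain Python.
# The example records are hypothetical, not actual survey responses.
def encode(resident):
    spec_id = 0 if resident["specialty"] == "primary care" else 1
    prac_id = 0 if resident["practice"] == "academic" else 1
    # Epic is the reference category: ehrID2 and ehrID3 are both 0 for Epic.
    ehr_id2 = 1 if resident["ehr"] == "Cerner" else 0
    ehr_id3 = 1 if resident["ehr"] not in ("Epic", "Cerner") else 0
    return (spec_id, prac_id, ehr_id2, ehr_id3)

residents = [
    {"specialty": "primary care", "practice": "academic", "ehr": "Epic"},
    {"specialty": "cardiology", "practice": "community", "ehr": "Cerner"},
    {"specialty": "primary care", "practice": "academic", "ehr": "Allscripts"},
]
print([encode(r) for r in residents])
# -> [(0, 0, 0, 0), (1, 1, 1, 0), (0, 0, 0, 1)]
```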
5.4.1 Regression Analysis of EHR Usage
We employed backward regression analysis, including all the potential predictors at
the beginning and iteratively removing non-significant predictors until all the
remaining predictors were significant predictors of EHR usage. The results of the
analysis, including beta coefficients, t-statistics and significance levels for each
independent variable, are recorded in the following table.
Table 23 Regression Analysis of Usage
                    Beta    T      P-value (Sig.)
Specialty           -0.60   -3.94  0.000
Skill in EHR        0.31    3.64   0.000
Freedom             -0.68   -2.66  0.008
ehrID2 (Cerner)     -0.54   -2.25  0.012
ehrID3 (Other EHR)  -0.73   -4.07  0.000
Adjusted R-square = 0.22
The result implies that residents in specialty care use EHR less frequently than
residents in primary care; the more skilled residents are in using EHR, the more
likely they are to use the system; the less freedom residents have in choosing
whether to use EHR, the more frequently EHR is used; and Cerner users use the system
less frequently than Epic users, while users of other EHR systems use their systems
the least compared to both Epic users and Cerner users.
5.4.2 Regression analysis on EHR satisfaction
Similarly to EHR usage, we employed backward multiple regression analysis to
identify significant variables predicting EHR satisfaction; we included the average
scores of the
maintained measured variables in the three quality constructs as predictors. This
analysis yielded a regression function with an adjusted R-square of 0.71 based on
five significant predictors: sex, skill in using EHR, and the average scores of the
three quality constructs. The analysis results are reported in the following table.
The results show that females are more satisfied with EHR than their male
counterparts in general; the more skilled residents are in using the system, the
more satisfied they feel towards it; and higher ratings on the independent measured
variables imply higher satisfaction towards the EHR system. The beta coefficients
show that the relative explanatory powers of the predictors differ: the average
score of the maintained measures in the system quality construct is the best
predictor of EHR satisfaction. All three constructs contribute significantly to
residents’ EHR satisfaction; however, system quality has relatively stronger power
in explaining the overall EHR satisfaction level, which reinforces that we should
keep all three quality constructs in the research model instead of using a single
common factor.
Table 24 Regression Analysis of EHR Satisfaction
                                      Beta    T      P-value (Sig.)
Sex                                   -0.23   -2.13  0.034
Skill in EHR                          0.23    3.83   0.000
Average score of system quality       0.57    7.51   0.000
Average score of information quality  0.23    3.62   0.000
Average score of service quality      0.23    3.01   0.003
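The backward-elimination procedure used in 5.4.1 and 5.4.2 can be sketched as follows. This is a simplified illustration on synthetic data, not the survey analysis itself: it fits ordinary least squares with numpy and uses |t| >= 2 as an approximate significance cutoff rather than exact p-values.

```python
import numpy as np

def ols_t(X, y):
    """Least-squares fit; returns coefficients and their t-statistics."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)        # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)   # covariance matrix of beta
    return beta, beta / np.sqrt(np.diag(cov))

def backward_eliminate(X, y, names, t_cut=2.0):
    """Drop the least significant predictor until all pass the cutoff."""
    names = list(names)
    while names:
        beta, t = ols_t(X, y)
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_cut:          # every remaining predictor significant
            return dict(zip(names, beta))
        X = np.delete(X, worst, axis=1)     # eliminate and refit
        names.pop(worst)
    return {}

# Synthetic demo: y depends on x1 only, so x2 should be eliminated.
n = 200
x1 = np.arange(1.0, n + 1)
x2 = (-1.0) ** np.arange(n)
y = 2.0 * x1 + np.sin(np.arange(n))         # deterministic "noise" term
kept = backward_eliminate(np.column_stack([x1, x2]), y, ["x1", "x2"])
print(sorted(kept))
# -> ['x1']
```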
5.4.3 Regression Analysis on Work Impact Measures
Both usage and EHR satisfaction are proposed as predictors of the work impact
measures in the IS Success Model by Delone and Mclean [46]; we use usage and
satisfaction separately to predict each work impact measure, and the results,
including beta coefficients, whether the result is in accord with the hypothesis,
significance level and R-square, are reported in the following two tables.
From the results, it is obvious that usage performs poorly in predicting all the
work impact measures; because of the limited freedom, use is mandatory for most of
the users. Therefore, we will use only EHR satisfaction, instead of usage, as an
intermediate predictor of work impact in further analysis. Moreover, EHR
satisfaction is a good predictor for most of the work impact measures except for the
compensation and administrative measures.
Table 25 EHR Satisfaction as Predictor of Work Impact Measures
Work impact measure    Beta coefficient  Hypothesis  Significance level  R-square
Autonomy               -0.30             Y           0.000***            10.7%
Referral               0.58              Y           0.000***            29.1%
Quality of care        0.42              Y           0.000***            24.1%
Error_rate             0.44              Y           0.000***            28.4%
Pt relationship        -0.19             Y           0.003***            4.6%
Productivity           0.42              Y           0.000***            17.7%
Time                   -0.18             Y           0.010***            3.5%
Compensation           0.007             N           0.933               0.0%
Administrative         -0.10             Y           0.169               1.0%
Work satisfaction      0.34              Y           0.000***            25.8%
Practice satisfaction  0.33              Y           0.000***            25.6%
**: Significant at 0.05 level
***: Significant at 0.01 level
Table 26 Usage as Predictor of Work Impact Measures
Work impact            Beta   Hypothesis  P-value  R-square
Autonomy               -0.15  Y           0.041**  2.1%
Referral               0.22   Y           0.015**  3.3%
Quality of care        0.08   Y           0.236    0.7%
Error_rate             0.15   Y           0.022**  2.6%
Pt relationship        -0.08  Y           0.267    0.7%
Productivity           0.02   Y           0.837    0.03%
Time                   0.08   Y           0.383    0.4%
Compensation           0.20   N           0.035**  2.6%
Administrative         0.07   N           0.358    0.46%
Work satisfaction      0.11   Y           0.026**  2.3%
Practice satisfaction  0.08   Y           0.116    0.1%
**: Significant at 0.05 level
5.5 Structural Equation Model Evaluation
Structural equation modeling is a two-step model-building technique, consisting of a
confirmatory measurement model, which specifies the relations of the measured
variables to their posited latent constructs, and a confirmatory structural model,
which specifies the causal relations of the constructs to one another and to the
outcome dependent variable [136]. Since we have already tested the confirmatory
measurement model in 5.2.1 and 5.2.2, in this section we will test the whole model,
including the independent measured variables in the three quality constructs, the
intermediate variable, which is EHR satisfaction, and the final outcome variables,
which are the work impact measures.
5.5.1 Model Structure Comparison
From 5.2.4, when testing the structure of the three quality constructs without
outcome variables included, the model with three correlated quality constructs (M2)
and the model with an underlying factor of the three quality constructs (M3) have
the same fit indices.
In M2, because of the high correlations among the three latent quality constructs,
whichever work impact measure is chosen as the final dependent variable, at least
one of the path loadings from the three quality constructs to either the
intermediate variable or the final outcome variable is not significant, which is a
common issue when collinearity exists.
Therefore, we will compare the explanatory power of M3 with and without EHR
satisfaction as the intermediate variable explaining the work impact measures, named
M3a and M3b respectively. With each work impact measure as the final dependent
variable in the model, the results, including the residues of the work impact
measure and the RMSEA and CFI indices, the same indices as in section 5.2.4, are
reported in the following table 27.
From the results, we can see that the model (M3b) with two layers of latent factors
and without EHR satisfaction as an intermediate variable to predict the work impact
measures has smaller residues, and hence better explanatory power, for most of the
work impact measures, while maintaining acceptable fit indices relative to M3a.
Therefore, M3b is used for further analysis and discussion. The common underlying
latent construct of the three quality constructs is named the fit construct, because
all of the survey questions regarding the independent measured variables were
phrased to evaluate whether and to what extent the current EHR system fits
residents’ work, which will be further discussed in the discussion chapter. The
task-technology fit model highlights the importance of the fit construct and also
suggests that both utilization and fit have direct impacts on work performance.
Therefore, the direct linkage between the fit construct and the work impact measures
in our research model has a strong theoretical foundation.
Table 27 Structural Equation Model Comparisons
                       Residue        RMSEA           CFI
Work impact            M3a    M3b*    M3a     M3b     M3a     M3b
Autonomy               0.89   0.87    0.041   0.048   0.989   0.985
Referral               0.68   0.58    0.062   0.057   0.975   0.981
Quality of care        0.75   0.68    0.058   0.056   0.979   0.981
Error rate             0.65   0.59    0.064   0.060   0.974   0.978
Pt care                0.96   0.97    0.044   0.058   0.987   0.979
Productivity           0.84   0.81    0.059   0.070   0.977   0.970
Time                   0.85   0.86    0.047   0.058   0.985   0.979
Compensation           0.81   0.76    0.050   0.057   0.983   0.979
Admin                  0.99   0.91    0.057   0.061   0.978   0.976
Work Satisfaction      0.74   0.69    0.054   0.059   0.981   0.979
Practice Satisfaction  0.76   0.75    0.039   0.047   0.990   0.986
*: M3b is chosen as the final model.
Therefore, the final research model for structural equation model evaluation is
presented in figure 5; most of it is the exact model being tested, except for the
latent construct of physician work impact. The physician work impact construct in
figure 5 is replaced by one specific work impact measure when evaluating the model,
because every work impact measure functions as an independent outcome variable
rather than as one of the multiple measured items of a latent construct, as in the
three quality constructs. The work impact measures, which are the autonomy measure
and the other ten measures, are used as outcome variables in the model separately:
when performing the analysis, the fit construct connects with one outcome variable
at a time to predict that particular outcome variable. Besides the fit construct,
all of the other possible significant predictors of the work impact measures are
shown in figure 5; the predictors include residents’ specialty, the type of EHR
system they are using, their experience in the program, their skill in using EHR,
their experience in using EHR, the size of their practice, and how beneficial they
think EHR is in helping achieve the meaningful use objectives. Which predictors are
significant depends on the specific work impact measure that is utilized as the
final outcome variable in the model. For example, there is no significant predictor
other than the fit construct when predicting the measure of autonomy, while the
residents’ skill in using EHR is significant when predicting their satisfaction
towards the practice.
Figure 5 Updated Research Model for Structural Equation Modeling
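The two-layer structure in figure 5 could be expressed, for example, in lavaan-style SEM syntax (supported by SEM packages such as the R package lavaan or the Python package semopy); the indicator names below are hypothetical placeholders for the measured survey items, not the actual questionnaire labels:

```
# Measurement model: three first-order quality constructs
system_quality  =~ sq1 + sq2 + sq3
info_quality    =~ iq1 + iq2 + iq3
service_quality =~ vq1 + vq2 + vq3

# Second-order fit construct underlying the three quality constructs
fit =~ system_quality + info_quality + service_quality

# Structural part: fit predicts one work impact measure at a time
autonomy ~ fit
```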
5.5.2 Work Impacts from Using EHR
As described in 5.5.1, model M3b is selected as our final research model; we
performed structural equation modeling to predict each work impact measure. The
results, including the path loading from the fit construct to the work impact
variable, the factor loadings of the three quality constructs on the fit construct,
and the beta coefficient and significance level of each significant determinant of
the work impact measure other than the three quality constructs, are presented in
table 28.
Table 28 Structural Equation Modeling Analysis
Each row lists: work impact | beta from fit (1) | loadings of the three quality
constructs on fit (2) | residue (3) | other significant predictors and beta
coefficients (4) | implication between fit and work impact.
Autonomy | -0.36*** | 0.93***, 0.86***, 0.89*** | 0.87 | (none) | Better fit
increases residents’ freedom to practice.
Referral | 0.65*** | 0.89***, 0.87***, 0.92*** | 0.58 | specID_bool: -0.23**;
ehrID2: -0.3**; ehrID3: -0.24** | Better fit helps residents refer patients or
receive referrals.
Quality of care | 0.56*** | 0.91***, 0.88***, 0.90*** | 0.68 | Exp_ehr: 0.2**;
Mu_use: 0.11* | Better fit enables residents to provide higher quality medicine.
Error rate | 0.61*** | 0.91***, 0.88***, 0.90*** | 0.63 | Exp_prog: -0.14**;
Skill_ehr: 0.18***; ehrID2: -0.12**; mu_use: 0.29*** | Better fit helps decrease the
error rate.
Pt care | -0.17** | 0.93***, 0.86***, 0.89*** | 0.97 | (none) | Better fit improves
residents’ relationship with patients.
Productivity | 0.43*** | 0.95***, 0.86***, 0.87*** | 0.81 | (none) | Better fit
helps residents see more patients per day.
Time | -0.14* | 0.93***, 0.86***, 0.89*** | 0.98 | specID_bool: -0.28***;
size_prac: 0.14*; mu_use: -0.23** | Better fit decreases interruption of residents’
personal lives.
Compensation | 0.08 (n.s.) (5) | 0.93***, 0.86***, 0.89*** | 0.99 | SpecID_bool:
-0.43*** | Fit does not have a significant impact on whether and to what extent
residents consider they are fairly paid.
Admin | -0.2*** | 0.93***, 0.86***, 0.89*** | 0.96 | ehrID2: -0.21*** | Better fit
decreases residents’ administrative work.
Work Satisfaction | 0.55*** | 0.9***, 0.86***, 0.92*** | 0.69 | (none) | Better fit
increases residents’ satisfaction with work.
Practice Satisfaction | 0.5*** | 0.92***, 0.87***, 0.90*** | 0.75 | Skill_ehr:
0.17*** | Better fit increases residents’ satisfaction with their current practice.
1: The beta coefficient from the fit construct to the work impact measure, without
any other significant predictors of the work impact measure included.
2: The factor loadings of the system quality, information quality and service
quality constructs on the fit construct, presented in that order.
3: The standardized residue of the work impact measure with the fit construct as the
only predictor, which is the proportion of unexplained variance in the work impact
measure.
4: All other significant predictors of each work impact measure, with their beta
coefficients.
5: n.s. stands for not significant at the 0.1 level.
*: significant at 0.1 level.
**: significant at 0.05 level.
***: significant at 0.01 level.
In order to compare model efficiency across the different work impact measures, all
of the results are standardized. Since we are most interested in the effect of the
fit construct on each work impact measure, note that the beta coefficients in the
first column of table 28 indicate that the fit construct is a significant predictor
of all of the work impact measures except for the compensation measure; some of the
work impact measures were phrased in a negative way, and therefore the signs of the
beta coefficients of the fit construct on those measures are negative.
A better fit of the EHR system to residents’ work brings more freedom and other
positive impacts to residents’ work, as shown in the implication column of table 28.
Except for the compensation measure, the fit construct has different impacts on the
outcome variables: comparing the absolute values of the beta coefficients, the fit
construct has the largest impact on referral capability, with a beta coefficient of
0.65, and the smallest impact on personal time, with a beta coefficient of -0.14.
All three quality constructs have significant factor loadings on the fit construct;
for most of the work impacts, the system quality has the largest loading while the
information quality has the lowest loading, indicating that the system quality
construct influences the fit construct more than the other two quality constructs
do.
For different work impact measures, other predictors also contribute significantly to the outcome variables; these are further illustrated in the discussion chapter.
Although the average score of each of the three quality constructs is highly correlated with each work impact measure, and hence each is a powerful predictor on its own, the three average scores are also highly correlated with one another (all being related to the fit construct), so not all of them remain significant when entered as predictors simultaneously. For example, when predicting autonomy, only the average score of system quality is a significant predictor; information quality and service quality are not.
We also used multiple regression to predict the work impact measures, with the average scores of the three quality constructs as predictors. Because of the collinearity among these predictors, the coefficient estimates from multiple regression are unreliable, even though, judging by the residuals of the outcome variables, multiple regression explained some outcome variables better than structural equation modeling (SEM). Moreover, one of the most valuable strengths of SEM is that it addresses collinearity by introducing a latent construct to capture the common variance of highly correlated variables. Therefore, we recommend using SEM to interpret the outcome variables.
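The collinearity problem described above can be illustrated with a small sketch (the data and variable names are hypothetical, constructed only to mimic three quality scores that share a common fit factor): when predictors are highly correlated, their variance inflation factors (VIFs) are large, which is exactly when individual regression coefficients become unstable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: three quality scores driven by one shared "fit" factor,
# mimicking the high correlations observed among the survey's quality constructs.
shared_fit = rng.normal(size=n)
system_q = shared_fit + 0.3 * rng.normal(size=n)
info_q = shared_fit + 0.3 * rng.normal(size=n)
service_q = shared_fit + 0.3 * rng.normal(size=n)
X = np.column_stack([system_q, info_q, service_q])

def vif(X):
    """Variance inflation factor for each column: 1 / (1 - R^2) when the
    column is regressed on the remaining columns (plus an intercept)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# VIFs well above the conventional warning threshold (~5) signal that the
# three predictors carry largely redundant information.
print([round(v, 1) for v in vif(X)])
```

A latent-variable model sidesteps this by attributing the shared variance to one construct (here, fit) rather than splitting it arbitrarily among three correlated predictors.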
5.6 Summary of Key Findings
In summary, in the final accepted research model, the three quality constructs share one common latent construct, which we call the fit construct and define as the extent to which the EHR system satisfies physicians' needs and demands at work. The fit construct has a direct relationship with the work impact measures, and its level of impact differs across measures.
Chapter 6 Discussion and Conclusion
6.1 Discussion of the EHR Success Model
From the analysis results in Section 5.5.1, the model of three quality constructs with an underlying latent factor proved to be the best model in terms of both explanatory power and fit indices. In the final model, the three quality constructs are measured by a set of 16 measures evaluating EHR system characteristics and organizational support; the three quality constructs have one common underlying construct, and we named this latent factor fit. The fit construct is defined as the extent to which the features and services provided by a technology meet users' demands and work requirements. More specifically, in our EHR Success Model, the fit construct evaluates the degree to which the EHR system, and the organizational services supporting EHR use, meet the demands of physicians' work in patient care delivery and care management. The fit construct has a direct impact on physicians' work performance.
Compared with the initial research model, the fit construct functions as the intermediate variable between the success factors and physicians' work performance, in place of user satisfaction, which indicates that the application of the task-technology fit model outperforms the application of the IS success model in our study.
6.1.1 Congruence Model of Organizational Behavior
The concept of fit in an organizational setting has been introduced and explained in one of the most successful and widely used frameworks in organizational behavior studies, the congruence model [137]. The congruence model has been developed and refined over nearly four decades of academic research and practical application in major companies across diverse industries. David Nadler and Michael Tushman at Columbia University were among the first to apply systems theory to the study of organizational behavior, upon which they built the congruence model as a simple and practical approach to interpreting organizational dynamics [138]. The model was designed to help organizational leaders fully understand the interplay of technical and social forces, which determines the performance of each organization.
The congruence model suggests that in order to understand an organization's performance completely, the organization should be considered and studied as a system composed of the following fundamental elements:
(1) The input, which includes the elements an organization is given to work with. There are three main types of input: the environment, resources, and history.
(2) The crucial transformation process, through which individuals in the organization transform input into output. This is the heart of the model and has four key components: the work, the people, and the formal and informal organization (structures, systems, and processes) through which the organization groups individuals and coordinates their work in ways that allow it to compete and succeed in its industry.
(3) The output, which is the products and services an organization produces to achieve its goals and objectives in order to survive and grow. The ultimate purpose of an organization is to produce high-quality output, that is, to improve the performance of the system. Performance can be defined at different levels: the total system, units within the system, and individuals.
(4) The strategy, through which an organization translates its vision into concrete decisions about where and how to grow the organization and compete in the market.
The final but most important component of the congruence model is the concept of fit. An organization's performance relies on the alignment of each component (the input, transformation process, output, and strategy) with all the others. The tighter the fit, that is, the higher the congruence, the better the organization performs. Therefore, the fit between each pair of components is more critical in determining organizational performance than the components themselves [137].
Fit exists between any pair of components: for example, the fit between the resources provided by the organization and individuals' demands at work, or the fit between the skills the work demands and the skills individuals actually possess. Information technology is one type of resource, which is the second source of input. In our EHR Success Model, our study focuses on understanding the fit between information technology and individuals' work, and on how the degree of fit influences individual performance.
6.1.2 Fit Between Information Technology and Work
The concept of fit between information technology and users' work was highlighted by Goodhue [37], who asserted in the technology-to-performance chain model that for an information technology to have a positive impact on individual performance, the technology (1) must be utilized and (2) must be a good fit with the tasks it supports.
In our EHR Success Model, utilization of the EHR is mandatory for most of the residents who completed the survey; therefore, the fit between technology and work, more specifically, the extent to which the EHR system and the supporting services in the healthcare organization meet physicians' needs, was more dominant in determining the impact of EHR use on their work, as the analysis confirms. Although there was no explicit fit construct in our original research model, the EHR Success Model in Chapters 2 and 4, we did integrate the concept of fit when we designed the research model. Accordingly, all of the survey questions in the system and information quality constructs were designed to evaluate EHR features in the context of completing tasks at work, and the survey questions in the service quality construct were intended to evaluate the effectiveness of organizational support during and after EHR implementation in helping physicians adapt to the EHR. Therefore, the survey questions in the three quality constructs, which also serve as independent variables in the data analysis in Chapter 5, can be considered surrogate measures of the fit between the EHR system and physicians' work. This also explains why all of the independent variables are highly correlated with each other and share a large proportion of common variance.
The EHR Success Model is consistent with its backbone, DeLone and McLean's IS Success Model, in that technology characteristics lead to individual performance. However, it is more advantageous in the following ways: (1) it emphasizes the significance of the fit between technology and task when explaining how information technology leads to users' work performance; (2) with clearer and more straightforward links between the constructs, it provides a tested and stronger theoretical basis and guides researchers toward issues more directly related to the impact of technology on work performance, such as designing surrogate measures of the fit between technology and task, understanding users' involvement in their work performance, and deciding on the measures of work performance. This makes it a better evaluative and diagnostic tool for the issues physicians encounter when using EHRs.
6.2 Discussion of Physicians’ Work Impacts of EHR Use
In Section 5.5.2, Table 28 summarizes the impact of EHR use on each work impact measure, which is discussed further in this section.
Fit is a significant predictor of each work impact measure, except the compensation measure, with varying effect sizes, and other predictors are significant for some of the work impact measures. A better fit between the EHR and work has a more positive impact on each work impact measure for residents. More specifically, better technology-task fit makes residents feel that their freedom to deliver care is increased, referring patients or receiving referrals is easier, they provide higher quality care, their error rate in practice is decreased, their relationships with patients are improved, they can see more patients per day, their personal lives are interrupted less by work, their administrative work is decreased, and their satisfaction with work and practice is increased.
Other factors influence the impact of the fit construct on the work impact measures. For example, we can divide the survey respondents into two groups depending on whether the respondent is from a Kaiser-affiliated residency program. Comparing the beta coefficients from the fit construct to the work impact measures, the model has different explanatory power across the two groups: for the referral capabilities measure, the model explains 79% of the variance in the outcome variable for the Kaiser group versus 27% for the non-Kaiser group; for the quality of care measure, 52% for the Kaiser group versus 24% for the non-Kaiser group; and for the error rate measure, 56% for the Kaiser group versus 30% for the non-Kaiser group. However, for the productivity, time, and administrative work measures, the model predicts the outcome variable better for the non-Kaiser group than for the Kaiser group. There might be other practice-related variables mediating the effect of the fit construct on the final outcome variables; however, since our sample is not a stratified sample, program effects could not be evaluated, which is a limitation of this study.
As for other significant predictors of the work impact measures, the list varies depending on the specific work impact measure used as the outcome variable:
(1) For residents' evaluations of the EHR facilitating the referral process, residents' specialty and the type of EHR system in use are also significant predictors: residents in primary care consider it easier to refer patients or receive referrals than residents in other specialties; Epic was rated as performing best, followed by other EHR systems and then Cerner, in terms of residents' perceptions of the EHR assisting them with the referral process.
(2) For residents' evaluations of the EHR improving quality of care, residents with longer experience using the EHR, and those who believe the EHR helps them achieve meaningful use objectives, are more likely to perceive that the EHR improves the quality of the care they deliver.
(3) For residents' evaluations of the EHR decreasing the error rate in their work, residents who have practiced for a shorter time, who are more skillful in using the EHR, who are not Cerner users, and who believe the EHR helps achieve meaningful use objectives perceive more benefit of EHR use in reducing medical errors.
(4) For residents' evaluations of the EHR reducing work's interruption of their personal lives, residents who are in primary care and who practice in larger organizations agree more that the EHR helps decrease the interruption of their personal lives by work.
(5) Cerner is considered not to perform as well as other EHR systems in reducing residents' administrative work.
(6) The more skillful residents are in EHR use, the more satisfied they are with their practice.
6.3 Discussion of the Three Quality Constructs
All three quality constructs load well on the fit construct when predicting the work impact measures, which indicates that all three influence the fit construct significantly. The implication is that to improve the degree of fit, one should work on the quality of the information provided by the EHR system, the quality of the system itself, and the quality of the organizational support helping physicians adapt to the EHR.
Based on the survey sample in our study, applying a cutoff value of 0.7 to the factor loadings of the items on their designated quality construct retains 7, 2, and 7 items in the system, information, and service quality constructs, respectively. The sixteen retained measures in the three quality constructs are surrogate measures of the fit construct and can be used as a diagnostic tool to evaluate EHR systems. The retained measures, organized by construct and attribute, are listed in the following table.
Table 29 Structure of Final Survey Instrument
Construct: System quality
  Accuracy
    1. Reminders and online alerts provide the right information at the right time, for example, drug interaction.
  Completeness
    2. EHR can always provide me with the right term(s) matching the concept(s) I am looking for, when I complete tasks such as updating problem list, adverse reactions and medications.
  Cognitive load
    3. The screen design is clean; the displayed information only includes functionalities that are needed to effectively accomplish tasks.
    4. Character count, resolution, font and font size are well designed to help display information.
    5. Concepts (terminologies), behavior (how to operate the system), appearance and layout are used consistently throughout the system.
  Efficiency
    6. The number of steps it takes to complete tasks is acceptable.
    7. There is no need to switch between keyboard and mouse frequently.

Construct: Information quality
  Consistency
    8. Colors are used consistently and could convey meaning.
    9. The EHR responds to my request in a speedy manner.

Construct: Service support quality
  Governance
    10. Commitment of top management helped to ensure adequate resource allocation, supported redesign efforts and facilitated implementation steps.
    11. During the decision-making process, physician representatives' inputs were considered.
    12. A standardized communication process was in place, with a single person (implementation coordinator or project manager) acting effectively as a liaison between the implementation team and the clinic.
  Preparation
    13. Evidence-based guidelines and standards for every workflow decision were incorporated into the EHR.
    14. Training on how to use the EHR was provided at the right times, in the right amount, and of the right quality.
  Timeline
    15. After assessing the state of readiness and change-capability of the organization, a realistic timetable for EHR implementation was designed and followed.
  Support
    16. There is provision of a venue for me to ask for help after implementation.
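The item-retention rule described above (keep an item only if its loading on its designated construct reaches 0.7) can be sketched as a small filter. The loading values below are hypothetical placeholders for illustration, not the study's estimates:

```python
# Hypothetical standardized factor loadings for illustration only;
# the study's actual estimates are reported in Chapter 5.
loadings = {
    "system_q1": 0.82, "system_q2": 0.64, "system_q3": 0.75,
    "info_q1": 0.71, "info_q2": 0.55,
    "service_q1": 0.90, "service_q2": 0.68,
}

CUTOFF = 0.7  # retain items whose loading on their designated construct is >= 0.7

retained = {item: l for item, l in loadings.items() if l >= CUTOFF}
print(sorted(retained))  # → ['info_q1', 'service_q1', 'system_q1', 'system_q3']
```

Applied to the actual survey, this rule yields the 7, 2, and 7 retained items per construct shown in Table 29.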
6.4 Contribution of the EHR Success Model and Our Study
The EHR Success Model is the first comprehensive theoretical model integrating the Technology Acceptance Model (TAM) and its extensions, DeLone and McLean's IS Success Model, and Goodhue's TTF model in the specific context of evaluating healthcare information technology; it is also empirically tested and supported by an analysis of data from 219 residents. We operationalized each key IS success construct by designing concrete measures and tested the relationships between these constructs. The study shows the importance of using a multi-construct dependent measure to evaluate IS success in the discipline of healthcare information technology. In the context of this study, where the EHR interface is what physicians directly interact with and extract information from, system quality is more critical in shaping EHR success than information quality or service support quality. Moreover, the evidence suggests that for an EHR to have a positive impact on physicians' work performance, the functions provided by the EHR must be a good fit with the work physicians perform, and the healthcare organization should provide support during EHR implementation to help physicians adapt to the system as quickly as possible.
The EHR Success Model also highlights the importance of the technology-task fit construct, and the evidence suggests that once the fit construct is in the model, user satisfaction need not be included; this is supported by the analysis result that model 3b has stronger explanatory power than model 3a. In the context of the present study, technology-task fit stands for the degree to which the EHR assists physicians in performing their tasks at work, connecting EHR system characteristics and organizational support to the final work impact measures.
Moreover, the EHR Success Model provides a set of surrogate measures for the fit construct: the sixteen retained measures in the three quality constructs. Therefore, the model can be used as a diagnostic tool to evaluate the success of an EHR implementation, especially when negative impacts on physicians' work are reported after EHR adoption. Organizational administrators could ask physicians to complete a short survey of only sixteen questions to obtain an effective assessment of the system. The survey results would provide actionable guidance to healthcare administrators and IT staff on how to improve the current EHR system.
In sum, this socio-technical evaluation framework examines the interactions among people (physicians), the technical system (the EHR and organizational support), and the impact on work performance and the quality of physicians' work life. This study contributes knowledge to several groups on how to optimize EHR system functions and implementation: (1) it provides insights for system designers and human factors engineers on the factors that contribute to good technical capability and usability of an EHR, namely improving the fit surrogate measures retained within the three quality constructs; (2) it provides insights for healthcare organizations on delivering support services that make the EHR fit physicians' work; and (3) it provides a survey instrument, including a set of surrogate measures of the fit construct, for researchers to evaluate HIT products and implementations.
6.5 Limitations
Although the findings of this study contribute to a better understanding of the technology-task fit construct in the context of EHR use, and of the linkage between EHR system functions and physicians' work impact, there are several limitations to this work.
First, there are several concerns regarding the survey sample; there may be bias in which residents and programs chose to respond to the survey. For example, it is possible that mainly respondents who were most or least satisfied with the EHR in use completed the survey, so our results may not represent the complete group who received it. We do not have data from non-respondents to evaluate whether such bias exists, or its direction and magnitude if it does. Moreover, there may also be bias in the programs whose coordinators helped forward the survey to their residents, compared with the other residency programs in California. Since our sample is not a stratified sample, no hierarchical regression could be performed; as a result, program effects may not be accounted for in explaining the work impact measures, which is also a limitation of this study.
Second, the model is designed for the whole population of physicians in the United States who have experience using EHRs; however, due to time constraints and physicians' work burden, the survey was distributed only to residents in California. The model is therefore tested on data collected from the residents' group, and these relationships may apply differently to the national physician population, which is another limitation of this study.
Third, the distributed survey was developed based on the literature review of healthcare information technology and Bailey and Pearson's information system evaluation instrument, because DeLone and McLean's IS Success Model was accepted as the backbone of our research model. Although the concept of fit was integrated into the survey question design, a survey with more questions relating to the fit construct from Goodhue's technology-to-performance chain model would also be useful and might have better explanatory power in predicting the work impact measures.
Fourth, it should be noted that the model explained from 3% to 42% of the variance in the work impact measures. That a large percentage of the variance remains unexplained suggests the need for additional research on significant variables not measured in the present study.
6.6 Future Work
Future work should seek to overcome the limitations of this study.
First, regarding the sample and potential bias, a similar study examining this model on a broader sample of physicians in a variety of states would extend the findings further. Second, we could further verify that the latent construct is fit; a similar survey with more questions testing the fit construct might then be more powerful in explaining the work impact measures. Third, consulting people who work in the healthcare industry, not only physicians but also organizational administrators and IT support staff, would help identify potentially significant unmeasured variables in this study; including them could explain more of the variance in the work impact measures and make the model function even better. Fourth, further studies could include objective work impact measures as the final outcome variables, such as how physicians' work patterns change, rather than the current measures of physicians' perceptions of those outcomes. Future studies could also incorporate organizational impacts in the model as the final outcome variable, since individual impacts collectively result in organizational impacts. Fifth, future work should test how the EHR Success Model performs in other contexts, such as voluntary versus compulsory settings, which could provide recommendations to improve the current model.
References
1. CMS, National Health Care Expenditures Data. 2011.
2. Sisko, A., et al., Health spending projections through 2018: recession effects add
uncertainty to the outlook. Health Affairs, 2009. 28(2): p. w346-w357.
3. Kohn, L.T., J. Corrigan, and M.S. Donaldson, To err is human: building a safer
health system. Vol. 6. 2000: Natl Academy Pr.
4. Ash, J.S., M. Berg, and E. Coiera, Some unintended consequences of information
technology in health care: the nature of patient care information system-related
errors. Journal of the American Medical Informatics Association, 2004. 11(2): p.
104-112.
5. Wu, S., et al., Systematic review: impact of health information technology on
quality, efficiency, and costs of medical care. Annals of internal medicine, 2006.
144(10): p. 742-752.
6. Devaraj, S. and R. Kohli, Information technology payoff in the health-care
industry: a longitudinal study. Journal of Management Information Systems,
2000: p. 41-67.
7. Carrie, M.G., et al., Comparison of user groups' perspectives of barriers and
facilitators to implementing electronic health records: a systematic review. BMC
Medicine. 9.
8. Bates, D., et al., Reducing the frequency of errors in medicine using information
technology. Journal of the American Medical Informatics Association, 2001. 8(4):
p. 299-308.
9. Gewald, H. and H.T. Wagner, A Research Model for Measuring IT Efficiency in
German Hospitals. Theory-Guided Modeling and Empiricism in Information
Systems Research, 2011: p. 175-185.
10. Institute of Medicine Committee on Quality of Health Care in America, Crossing the quality chasm: A new health system for the 21st century. 2001: National Academies Press.
11. Blumenthal, D., Statement on Health IT Adoption and the New Challenges Faced by Solo and Small Group Healthcare Practices. 2009.
12. Schoen, C., et al., A Survey Of Primary Care Doctors In Ten Countries Shows
Progress In Use Of Health Information Technology, Less In Other Areas. Health
Affairs, 2012. 31(12): p. 2805-2816.
13. Bower, A.G., The diffusion and value of healthcare information technology. 2005: Rand Corporation.
14. Pratt, W., et al., Incorporating ideas from computer-supported cooperative work.
Journal of biomedical informatics, 2004. 37(2): p. 128-137.
15. Payne, T.H., et al. The transition to electronic documentation on a teaching
hospital medical service. in AMIA Annual Symposium Proceedings. 2006.
American Medical Informatics Association.
16. Scott, J.T., et al., Kaiser Permanente's experience of implementing an electronic
medical record: a qualitative study. Bmj, 2005. 331(7528): p. 1313-1316.
17. Valdes, I., et al., Barriers to proliferation of electronic medical records.
Informatics in Primary Care, 2004. 12(1): p. 3-9.
18. Johnson, K.B., Barriers that impede the adoption of pediatric information
technology. Archives of pediatrics and adolescent medicine, 2001. 155(12): p.
1374.
19. Eger, M.S., R.L.G. PhD, and S.R.V. DBA, Physicians' Adoption of Information
Technology. Health Marketing Quarterly, 2001. 19(2): p. 3-21.
20. Baron, R.J., et al., Electronic health records: just around the corner? Or over the
cliff? Annals of internal medicine, 2005. 143(3): p. 222-226.
21. Crosson, J.C., et al., Implementing an electronic medical record in a family
medicine practice: communication, decision making, and conflict. The Annals of
Family Medicine, 2005. 3(4): p. 307-311.
22. Freeborn, D.K., Satisfaction, commitment, and psychological well-being among
HMO physicians. Western Journal of Medicine, 2001. 174(1): p. 13.
23. Street, D. and J. Cossman, Autonomy, Satisfaction and Physician Burnout.
24. Berlin, A., The EHR- Exercises in Human Resistance. 2011.
25. Hu, P.J., et al., Examining the technology acceptance model using physician
acceptance of telemedicine technology. Journal of Management Information
Systems, 1999: p. 91-112.
26. Chiasson, M., et al., Expanding multi-disciplinary approaches to healthcare
information technologies: What does information systems offer medical
informatics? International journal of medical informatics, 2007. 76: p. S89.
27. Davis, F.D., A technology acceptance model for empirically testing new end-user
information systems: Theory and results, 1985, Massachusetts Institute of
Technology, Sloan School of Management.
28. Lee, Y., K.A. Kozar, and K.R.T. Larsen, The technology acceptance model: Past,
present, and future. The Communications of the Association for Information
Systems, 2003. 12(1): p. 53.
29. Holden, R.J. and B.T. Karsh, The technology acceptance model: its past and its
future in health care. Journal of biomedical informatics, 2010. 43(1): p. 159-172.
30. Wixom, B.H. and P.A. Todd, A theoretical integration of user satisfaction and
technology acceptance. Information systems research, 2005. 16(1): p. 85-102.
31. Doll, W.J. and G. Torkzadeh, The measurement of end-user computing
satisfaction. MIS quarterly, 1988: p. 259-274.
32. Etezadi-Amoli, J. and A.F. Farhoomand, A structural model of end user
computing satisfaction and user performance. Information & Management, 1996.
30(2): p. 65-73.
33. Igbaria, M., T. Guimaraes, and G.B. Davis, Testing the determinants of
microcomputer usage via a structural equation model. Journal of Management
Information Systems, 1995: p. 87-114.
34. Rogers, E.M., Diffusion of innovations. 1995: Free Press.
35. DeLone, W.H. and E.R. McLean, Information systems success: The quest for the
dependent variable. Information systems research, 1992. 3(1): p. 60-95.
36. Ammenwerth, E., C. Iller, and C. Mahler, IT-adoption and the interaction of task,
technology and individuals: a fit framework and a case study. BMC medical
informatics and decision making, 2006. 6(1): p. 3.
37. Goodhue, D.L. and R.L. Thompson, Task-technology fit and individual
performance. MIS quarterly, 1995: p. 213-236.
38. Davis, F.D., Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS quarterly, 1989: p. 319-340.
39. Davis, F.D., R.P. Bagozzi, and P.R. Warshaw, User acceptance of computer
technology: a comparison of two theoretical models. Management science, 1989:
p. 982-1003.
40. Fishbein, M. and I. Ajzen, Belief, attitude, intention and behavior: An introduction to theory and research. 1975.
41. Chau, P.Y.K. and P.J.H. Hu, Information Technology Acceptance by Individual
Professionals: A Model Comparison Approach*. Decision Sciences, 2001. 32(4):
p. 699-719.
42. Chau, P.Y.K. and P.J. Hu, Examining a model of information technology
acceptance by individual professionals: An exploratory study. Journal of
Management Information Systems, 2002. 18(4): p. 191-230.
43. Ketikidis, P., et al., Acceptance of Health Information Technology in Health
Professionals: An Application of the Revised Technology Acceptance Model.
44. Venkatesh, V. and F.D. Davis, A theoretical extension of the technology
acceptance model: Four longitudinal field studies. Management science, 2000.
46(2): p. 186-204.
45. Venkatesh, V., et al., User acceptance of information technology: Toward a
unified view. MIS quarterly, 2003: p. 425-478.
46. Delone, W.H. and E.R. McLean, The DeLone and McLean model of information
systems success: A ten-year update. Journal of Management Information Systems,
2003. 19(4): p. 9-30.
47. Lau, F., M. Price, and K. Keshavjee, From Benefits Evaluation to Clinical
Adoption: Making Sense of Health Information System Success in Canada.
Healthcare Quarterly, 2011. 14(1): p. 39-45.
48. Goodhue, D.L., Understanding user evaluations of information systems.
Management science, 1995. 41(12): p. 1827-1844.
49. Williams, E.S., et al., Refining the measurement of physician job satisfaction:
results from the Physician Worklife Survey. Medical Care, 1999: p. 1140-1154.
50. Konrad, T.R., et al., Measuring physician job satisfaction in a changing
workplace and a challenging environment. Medical Care, 1999: p. 1174-1182.
51. Ives, B., M.H. Olson, and J.J. Baroudi, The measurement of user information
satisfaction. Communications of the ACM, 1983. 26(10): p. 785-793.
52. Ribiere, V., et al. Hospital Information Systems Quality: A customer satisfaction
assessment tool. 1999. IEEE.
53. Bailey, J.E. and S.W. Pearson, Development of a tool for measuring and
analyzing computer user satisfaction. Management science, 1983: p. 530-545.
54. Zviran, M., Evaluating user satisfaction in a hospital environment: an
exploratory study. Health Care Management Review, 1992. 17(3): p. 51.
55. ISO/TC, Electronic health record definition, scope, and context (2nd draft).
Geneva: International Organization for Standardization, 2003.
56. DesRoches, C.M., et al., Electronic health records in ambulatory care - a
national survey of physicians. New England Journal of Medicine, 2008. 359(1): p.
50-60.
57. Jha, A.K., et al., How common are electronic health records in the United States?
A summary of the evidence. Health Affairs, 2006. 25(6): p. w496-w507.
58. Aspden, P., et al., Committee on identifying and preventing medication errors.
Preventing medication errors: quality chasm series, 2007: p. 17-31.
59. Hayrinen, K., K. Saranto, and P. Nykanen, Definition, structure, content, use and
impacts of electronic health records: a review of the research literature.
International journal of medical informatics, 2008. 77(5): p. 291.
60. Joos, D., et al. An electronic medical record in primary care: impact on
satisfaction, work efficiency and clinic processes. 2006. American Medical
Informatics Association.
61. Arts, D.G.T., N.F. De Keizer, and G.J. Scheffer, Defining and improving data
quality in medical registries: a literature review, case study, and generic
framework. Journal of the American Medical Informatics Association, 2002. 9(6):
p. 600-611.
62. Komaroff, A.L., The variability and inaccuracy of medical data. Proceedings of
the IEEE, 1979. 67(9): p. 1196-1207.
63. Hogan, W.R. and M.M. Wagner, Accuracy of data in computer-based patient
records. Journal of the American Medical Informatics Association, 1997. 4(5): p.
342-355.
64. Makoul, G., R.H. Curry, and P.C. Tang, The use of electronic medical records
communication patterns in outpatient encounters. Journal of the American
Medical Informatics Association, 2001. 8(6): p. 610-615.
65. Kinn, J.W., et al., Effectiveness of the electronic medical record in cholesterol
management in patients with coronary artery disease (Virtual Lipid Clinic).
American Journal of Cardiology, 2001. 88(2): p. 163-164.
66. Smith, S.A., et al., Impact of a diabetes electronic management system on the
care of patients seen in a subspecialty diabetes clinic. Diabetes Care, 1998. 21(6):
p. 972-976.
67. George, J. and P.S. Bernstein, Using electronic medical records to reduce errors
and risks in a prenatal network. Current Opinion in Obstetrics and Gynecology,
2009. 21(6): p. 527.
68. Essin, D.J., et al., Development and assessment of a computer-based
preanesthetic patient evaluation system for obstetrical anesthesia. Journal of
Clinical Monitoring and Computing, 1998. 14(2): p. 95-100.
69. Apkon, M. and P. Singhaviranon, Impact of an electronic information system on
physician workflow and data collection in the intensive care unit. Intensive care
medicine, 2001. 27(1): p. 122-130.
70. Rosenbloom, S.T., et al., Data from clinical notes: a perspective on the tension
between structure and flexible documentation. Journal of the American Medical
Informatics Association, 2011. 18(2): p. 181-186.
71. Cimino, J.J., V.L. Patel, and A.W. Kushniruk, Studying the human-computer-terminology interface. Journal of the American Medical Informatics Association,
2001. 8(2): p. 163-173.
72. Wagner, M.M. and W.R. Hogan, The accuracy of medication data in an
outpatient electronic medical record. Journal of the American Medical
Informatics Association, 1996. 3(3): p. 234-244.
73. Gamm, L.D., et al. Pre-and post-control model research on end-users'
satisfaction with an electronic medical record: preliminary results. 1998.
American Medical Informatics Association.
74. Menke, J.A., et al., Computerized clinical documentation system in the pediatric
intensive care unit. BMC medical informatics and decision making, 2001. 1(1): p.
3.
75. Aronsky, D. and P.J. Haug, Assessing the quality of clinical data in a computer-
based record for calculating the pneumonia severity index. Journal of the
American Medical Informatics Association, 2000. 7(1): p. 55-65.
76. Garrido, T., et al., Effect of electronic health records in ambulatory care:
retrospective, serial, cross sectional study. BMJ, 2005. 330(7491): p. 581.
77. HIMSS EHR Usability Task Force, Defining and Testing EMR Usability: Principles and Proposed Methods of EMR Usability Evaluation and Rating. 2009.
78. Stead, W.W. and H. Lin, Computational technology for effective health care: immediate steps and strategic directions. 2009: National Academies Press.
79. Mitka, M., Joint commission offers warnings, advice on adopting new health care
IT systems. JAMA: the journal of the American Medical Association, 2009.
301(6): p. 587-589.
80. Melles, R.B. and T. Cooper. User interface preferences in a point-of-care data
system. in Proceedings of the AMIA Symposium. 1998. American Medical
Informatics Association.
81. Nilasena, D.S. and M.J. Lincoln. A computer-generated reminder system
improves physician compliance with diabetes preventive care guidelines. in
Proceedings of the Annual Symposium on Computer Application in Medical Care.
1995. American Medical Informatics Association.
82. Krall, M.A. and D.F. Sittig. Clinician's assessments of outpatient electronic
medical record alert and reminder usability and usefulness requirements. in
Proceedings of the AMIA Symposium. 2002. American Medical Informatics
Association.
83. Clayton, P.D., et al. Physician use of electronic medical records: issues and
successes with direct data entry and physician productivity. in AMIA Annual
Symposium Proceedings. 2005. American Medical Informatics Association.
84. Keshavjee, K., et al. Best practices in EMR implementation: a systematic review.
in AMIA Annual Symposium.
85. Ovretveit, J., et al., Improving quality through effective implementation of
information technology in healthcare. International Journal for Quality in Health
Care, 2007. 19(5): p. 259-266.
86. Swanson, T., et al., Recent implementations of electronic medical records in four
family practice residency programs. Academic Medicine, 1997. 72(7): p. 607.
87. Townes Jr, P.G., et al., Making EMRs really work: the southeast health center
experience. The Journal of Ambulatory Care Management, 2000. 23(2): p. 43.
88. Chiang, M.F. and J.B. Starren. Software engineering risk factors in the
implementation of a small electronic medical record system: the problem of
scalability. 2002. American Medical Informatics Association.
89. Massaro, T.A., Introducing physician order entry at a major academic medical
center: I. Impact on organizational culture and behavior. Academic Medicine;
Academic Medicine, 1993.
90. Sittig, D.F. and W.W. Stead, Computer-based physician order entry: the state of
the art. Journal of the American Medical Informatics Association, 1994. 1(2): p.
108-123.
91. Smith, P.D., Implementing an EMR system: one clinic's experience. Family
Practice Management, 2003. 10(5): p. 37-52.
92. Wager, K.A., et al., Impact of an electronic medical record system on community-based primary care practices. Journal of the American Board of Family Practice, 2000. 13(5): p. 338-348.
93. Miller, D., Prenatal care: a strategic first step toward EMR acceptance. Journal
of Healthcare Information Management, 2003. 17(2): p. 47-50.
94. Ash, J.S. Factors affecting the diffusion of the Computer-Based Patient Record.
1997. American Medical Informatics Association.
95. Chan, W., Increasing the success of physician order entry through human factors
engineering. Journal of Healthcare Information Management, 2002. 16(1): p. 71-
79.
96. Studer, M., The Effect of Organizational Factors on the Effectiveness of EMR
System Implementation-What Have We Learned? Electronic Healthcare, 2005.
4(2): p. 92-98.
97. Wager, K.A., F.W. Lee, and A.W. White, Life after a disastrous electronic
medical record implementation: One clinic's experience. Annals of cases on
information technology applications and management in organizations, 2001. 3: p.
153-168.
98. Aydin, C. and D. Forsythe, Implementing computers in ambulatory care:
implications of physician practice patterns for system design. Evaluating the
Organizational Impact of Healthcare Information Systems, 2005: p. 295-303.
99. Miller, H., et al., Electronic medical records: lessons from small physician practices. 2003: California HealthCare Foundation.
100. Miller, R.H., J.M. Hillman, and R.S. Given, Physician use of IT: results from the
Deloitte Research Survey. Journal of healthcare information management: JHIM,
2004. 18(1): p. 72.
101. Tonnesen, A., A. LeMaistre, and D. Tucker. Electronic medical record
implementation barriers encountered during implementation. 1999. American
Medical Informatics Association.
102. Brokel, J.M. and M.I. Harrison, Redesigning care processes using an electronic
health record: a system's experience. Joint Commission Journal on Quality and
Patient Safety, 2009. 35(2): p. 82-92.
103. Au, N., W. Ngai, and T. Cheng, Extending the understanding of end user
information systems satisfaction formation: An equitable needs fulfillment model
approach. MIS quarterly, 2008. 32(1): p. 43-66.
104. Gelderman, M., The relation between user satisfaction, usage of information
systems and performance. Information & Management, 1998. 34(1): p. 11-18.
105. Van Der Meijden, M., et al., Determinants of success of inpatient clinical
information systems: a literature review. Journal of the American Medical
Informatics Association, 2003. 10(3): p. 235-243.
106. Ash, J.S., et al. An unintended consequence of CPOE implementation: shifts in
power, control, and autonomy. 2006. American Medical Informatics Association.
107. Campbell, E.M., et al., Types of unintended consequences related to computerized
provider order entry. Journal of the American Medical Informatics Association,
2006. 13(5): p. 547.
108. Delpierre, C., et al., A systematic review of computer-based patient record
systems and quality of care: more randomized clinical trials or a broader
approach? International Journal for Quality in Health Care, 2004. 16(5): p. 407-
416.
109. Blumenthal, D. and M. Tavenner, The Meaningful use regulation for electronic
health records. New England Journal of Medicine, 2010. 363(6): p. 501-504.
110. Harrison, G., The Winchester experience with the TDS hospital information
system. British journal of urology, 1991. 67(5): p. 532-535.
111. Handel, D.A. and J.L. Hackman, Implementing electronic health records in the
Emergency Department. The Journal of emergency medicine, 2010. 38(2): p. 257-
263.
112. Overhage, J.M., et al., Controlled Trial of Direct Physician Order Entry Effects
on Physicians' Time Utilization in Ambulatory Primary Care Internal Medicine
Practices. Journal of the American Medical Informatics Association, 2001. 8(4):
p. 361-371.
113. Adams, W.G., A.M. Mann, and H. Bauchner, Use of an electronic medical record
improves the quality of urban pediatric primary care. Pediatrics, 2003. 111(3): p.
626-632.
114. Harrison, M.I., R. Koppel, and S. Bar-Lev, Unintended consequences of
information technologies in health care--an interactive sociotechnical analysis.
Journal of the American Medical Informatics Association, 2007. 14(5): p. 542-
549.
115. Poissant, L., et al., The impact of electronic health records on time efficiency of
physicians and nurses: a systematic review. Journal of the American Medical
Informatics Association, 2005. 12(5): p. 505-516.
116. Hennington, A.H. and B.D. Janz, Information Systems and healthcare XVI:
physician adoption of electronic medical records: applying the UTAUT model in
a healthcare context. Communications of the Association for Information
Systems, 2007. 19(5): p. 60-80.
117. Buntin, M.B., et al., The benefits of health information technology: a review of the
recent literature shows predominantly positive results. Health Affairs, 2011.
30(3): p. 464-471.
118. Lee, T.T., et al., Two-stage evaluation of the impact of a nursing information
system in Taiwan. International journal of medical informatics, 2008. 77(10): p.
698-707.
119. Pai, F.Y. and K.I. Huang, Applying the Technology Acceptance Model to the
introduction of healthcare information systems. Technological Forecasting and
Social Change, 2011. 78(4): p. 650-660.
120. Berdie, D.R., Reassessing the value of high response rates to mail surveys.
Marketing Research, 1989. 1(3): p. 52-64.
121. Castillo, V., A. Martinez-Garcia, and J. Pulido, A knowledge-based taxonomy of
critical factors for adopting electronic health record systems by physicians: a
systematic literature review. BMC medical informatics and decision making,
2010. 10(1): p. 60.
122. Victor, C. and M.G. Ana, A knowledge-based taxonomy of critical factors for
adopting electronic health record systems by physicians: a systematic literature
review. 2010.
123. Bein, B., Some Family Medicine Residencies Truly Shine in Using Health IT,
2012.
124. Lee, F., et al., Implementation of physician order entry: user satisfaction and self-
reported usage patterns. Journal of the American Medical Informatics
Association, 1996. 3(1): p. 42-55.
125. Harris, P.A., et al., Research electronic data capture (REDCap)-A metadata-
driven methodology and workflow process for providing translational research
informatics support. Journal of biomedical informatics, 2009. 42(2): p. 377.
126. StataCorp, Stata Statistical Software: Release 12., 2011, StataCorp LP.: College
Station, TX.
127. R Development Core Team, R: A language and environment for statistical
computing. , 2010, R Foundation for Statistical Computing: Vienna, Austria.
128. Nunnally, J.C., I.H. Bernstein, and J.M.F. Berge, Psychometric theory. Vol. 2.
1967: McGraw-Hill, New York.
129. Falk, R.F. and N.B. Miller, A primer for soft modeling. 1992: University of Akron Press.
130. Barclay, D., C. Higgins, and R. Thompson, The partial least squares (PLS)
approach to causal modeling: personal computer adoption and use as an
illustration. Technology studies, 1995. 2(2): p. 285-309.
131. Nunnally, J.C., et al., Introduction to statistics for psychology and education. 1975: McGraw-Hill, New York.
132. Novick, M.R. and C. Lewis, Coefficient alpha and the reliability of composite
measurements. Psychometrika, 1967. 32(1): p. 1-13.
133. Stevens, J., Applied multivariate statistics for the social sciences. 2002: Lawrence Erlbaum, Mahwah, NJ.
134. Thurstone, L.L., Multiple factor analysis. 1947.
135. Hooper, D., J. Coughlan, and M. Mullen, Structural equation modelling: guidelines for determining model fit. Electronic Journal of Business Research Methods, 2008. 6(1): p. 53-60.
136. Anderson, J.C. and D.W. Gerbing, Structural equation modeling in practice: A
review and recommended two-step approach. Psychological bulletin, 1988.
103(3): p. 411.
137. Wyman, O., The Congruence Model: A Roadmap for Understanding
Organizational Performance. Delta Organization & Leadership, 2003: p. 1-15.
138. Nadler, D.A. and M.L. Tushman, A model for diagnosing organizational
behavior. Organizational Dynamics, 1980. 9(2): p. 35-51.
Appendices
Appendix A
The following is the complete survey, administered online, used to assess the EHR Success Model through EHR satisfaction and physicians' job satisfaction.
It includes (1) demographic questions and (2) statements about the EHR system and work-impact measures. Physicians answer each statement on a scale of disagree, somewhat disagree, neither, somewhat agree, or agree, with a don't-know option; the one exception is the question about how often they use the EHR at work, which has its own scale, listed below.
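For analysis, agreement responses like these are typically recoded as numbers. The sketch below is only an illustration of how such coding could work; the 1-5 mapping and the treatment of "don't know" as missing are assumptions, not rules documented in the survey itself:

```python
# Map the five-point agreement scale to numbers; treat "don't know" as missing.
# This coding scheme is an illustrative assumption, not the study's documented one.
LIKERT = {
    "disagree": 1,
    "somewhat disagree": 2,
    "neither": 3,
    "somewhat agree": 4,
    "agree": 5,
    "don't know": None,  # excluded from scale scores
}

def subscale_mean(responses):
    """Average the coded answers for one construct, ignoring missing values."""
    coded = [LIKERT[r.lower()] for r in responses]
    valid = [v for v in coded if v is not None]
    return sum(valid) / len(valid) if valid else None

# Example: three hypothetical "Quality of System" answers from one respondent.
print(subscale_mean(["agree", "somewhat agree", "don't know"]))  # 4.5
```

Averaging over only the valid answers keeps a single "don't know" from dragging a respondent's construct score toward an arbitrary midpoint.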
Demographics
1. Specialty: pull down list including all specialties
2. Gender:
M F
3. Which year are you in the residency program?
1st 2nd 3rd 4th 5th other
4. Number of full time physicians in your current practice:
0-5 5-10 10-50 >50
5. What EHR system are you currently using?
6. How long have you been using the current EHR?
0-3 months 3-6 months 6 months-1 year 1-2 years >2 years
7. How skilled are you in the use of this EHR?
Novice Advanced beginner Average user Competent Expert
8. How many ambulatory EHRs (including this one) have you used so far?
1 2 3 4 or more
9. This EHR makes it easy to qualify for meaningful use incentives from Medicare
or Medicaid.
Disagree Somewhat disagree Neither Somewhat agree
Agree Don’t know
Quality of System
1. The screen design is clean; the displayed information only includes functionalities
that are needed to effectively accomplish tasks.
2. Character count, resolution, font and font size are well designed to help display
information.
3. Colors are used consistently and convey meaning.
4. The terminology, abbreviations and acronyms used are commonly understood and
unambiguous.
5. Concepts (terminologies), behavior (how to operate the system), appearance and
layout are used consistently throughout the system.
6. The number of steps it takes to complete tasks is acceptable.
7. The EHR provides me the following functionalities: auto-tabbing (cursor moves
automatically from one data entry field to the next), good default values, large enough
list and text boxes to limit scrolling.
8. There is no need to switch between keyboard and mouse frequently.
9. Error messages are clear. They explain the reason why the error occurred and suggest
what I should do next.
10. The screen flows map to my task and workflow.
11. The EHR can integrate with other clinical systems in the facility; for example, I can
import data from other existing systems.
12. I like that I can create customized forms or screens to document care, for example,
order sets.
13. The EHR responds to my request in a speedy manner.
Quality of Information
14. The EHR displays accurate information, which reflects the true state of the patient;
the information provided in the medical history, physical examination, lab test,
symptom, diagnosis, treatment, and referral sections of the EHR is correct.
15. The representation of displayed information by EHR is legible.
16. Reminders and online alerts provide the right information at the right time, for
example drug interactions.
17. The EHR can always provide me with the right term(s) matching the concept(s) I am
looking for when I complete tasks such as updating the problem list, adverse
reactions, and medications.
18. I am able to type free-text notes (narrative text) into the EHR, and the information
from the text can be captured and stored in the system for later use.
19. I can find specific patient information whenever I need it, such as the patient's
medical history, symptoms, diagnoses, treatments, and referrals.
20. New results for patients are available to me sooner in the EHR than in the old paper system.
21. The messaging feature in EHR allows me to communicate more quickly with my staff
concerning patients.
22. The messaging feature in EHR allows me to communicate more quickly with
providers outside my clinic concerning patients.
23. I like that I have access to my messages through EHR while I am away from clinic.
24. When a patient calls on the telephone, I can answer his or her questions faster.
Quality of Support Service
25. Commitment of top management helped to ensure adequate resource allocation,
supported the redesign effort, and facilitated implementation steps.
26. During decision-making process, physician representatives’ inputs were considered.
27. A standardized communication process was in place, with a single person
(implementation coordinator or project manager) acting effectively as a liaison
between the implementation team and the clinic.
28. There was consensus among physicians about the need for an EHR before
implementation; I could see the benefit of using the EHR before implementation.
29. After assessing the organization's state of readiness and capability for change, a
realistic timetable for EHR implementation was designed and followed.
30. The original workload was reduced to compensate for the extra time required by the EHR.
31. New workflow steps required by EHR were discussed and reviewed; all physicians in
my clinic approved final workflows incorporated in the EHR system.
32. Moving to new workflows required by EHR helped eliminate potential errors (for
example, inappropriate variations) and risks in the old care steps.
33. Evidence-based guidelines and standards for every workflow decision were
incorporated into the EHR.
34. Training on how to use the EHR was provided at the right time, in the right amount,
and at the right level of quality.
35. A venue is provided for me to ask for help after implementation.
36. The EHR implementation team responds to my feedback in a timely manner.
Freedom of using EHR
37. Personal financial rewards are the main reason I am using EHR.
38. The main reason I am using EHR is to gain financial rewards for our organization.
39. I have complete freedom to choose whether or not to use the EHR in our organization.
40. How often do you use EHR at work?
10% or less of entire work time
10% - 30%
30% - 50%
50% - 70%
70% - 90%
90% or more
41. Overall I am satisfied with the EHR I am using.
42. Whenever possible, I intend to use EHR in my patient care and management in the
future.
Work Impact
1. Following clinical guidelines required by EHR restricts my freedom to practice.
2. This EHR enables me to refer patients or receive referrals more easily.
3. This EHR enables me to provide higher quality medicine than paper system.
4. My relationship with patients is more adversarial than it used to be.
5. Treatment and prescription recommendations by this EHR help decrease error rate.
6. This EHR helps me see more patients per day (or go home earlier) than I could with
paper charts.
7. The interruption of my personal life by work is a problem after using this EHR.
8. I am not well compensated compared to physicians in other specialties.
9. I have more administrative work to do after using EHR.
10. Overall, I am pleased with my work.
11. Overall, I am satisfied with my current practice.
Open-ended questions
What are the most important factors facilitating EHR adoption in your opinion?
What are the barriers in EHR adoption in your opinion?
Appendix B
Bailey & Pearson user satisfaction measurement instrument
The instrument characteristics below (numbered as in the original instrument) are grouped by the external variable they measure, with each characteristic's definition.

Usability (System quality)
Convenience of access (15): The ease or difficulty with which the user may act to utilize the capability of the computer system.
Format (22): The material design of the layout and display of the output contents.
Language (23): The set of vocabulary, syntax, and grammatical rules used to interact with the computer systems.
Volume of output (24): The amount of information conveyed to a user from computer-based systems. This is expressed not only by the number of reports or outputs but also by the voluminousness of the output contents.
Error recovery (26): The methods and policies governing correction and rerun of system outputs that are incorrect.
Documentation (28): The recorded description of an information system. This includes formal instructions for the utilization of the system.
Flexibility (38): The capacity of the information system to change or to adjust in response to new conditions, demands, or circumstances.
Integration (39): The ability of systems to communicate/transmit data between systems servicing different functional areas.
Response time (13): The elapsed time between a user-initiated request for service or action and a reply to that request. Response time generally refers to the elapsed time for a terminal-type request or entry. Turnaround time generally refers to the elapsed time for execution of a program submitted or requested by a user and the return of the output to that user.

Usefulness (Information quality)
Accuracy (16): The correctness of the output information.
Timeliness (17): The availability of the output information at a time suitable for its use.
Precision (18): The variability of the output information from that which it purports to measure.
Reliability (19): The consistency and dependability of the output information.
Currency (20): The age of the output information.
Completeness (21): The comprehensiveness of the output information content.
Relevancy (25): The degree of congruence between what the user wants or requires and what is provided by the information products and services.
Security of data (27): The safeguarding of data from misappropriation or unauthorized alteration or loss.
Expectation (29): The set of attributes or features of the computer-based information products or services that a user considers reasonable and due from the computer-based information support rendered within his organization.
Confidence in the system (32): The user's feelings of assurance or certainty about the systems provided.
Perceived utility (31): The user's judgment about the relative balance between the cost and the considered usefulness of the computer-based information products or services that are provided. The costs include any costs related to providing the resource, including money, time, manpower, and opportunity. The usefulness includes any benefits that the user believes to be derived from the support.

Service quality
Relationship with EDP staff (5): The manner and methods of interaction, conduct, and association between the user and the EDP staff.
Communication with EDP staff (6): The manner and methods of information exchange between the user and the EDP staff.
Technical competence of EDP staff (7): The computer technology skills and expertise exhibited by the EDP staff.
Attitude of EDP staff (8): The willingness and commitment of the EDP staff to subjugate external, professional goals in favor of organizationally directed goals and tasks.
Schedule of products or services (9): The EDP center timetable for production of information system outputs and for provision of computer-based services.
Time required for new development (10): The elapsed time between the user's request for new applications and the design, development, and/or implementation of the application systems by the EDP staff.
Processing of change requests (11): The manner, method, and required time with which the EDP staff responds to user requests for changes in existing computer-based information systems or services.
Response time (13): The elapsed time between a user-initiated request for service or action and a reply to that request. Response time generally refers to the elapsed time for a terminal-type request or entry. Turnaround time generally refers to the elapsed time for execution of a program submitted or requested by a user and the return of the output to that user.
Means of input with EDP center (14): The method and medium by which a user inputs data to and receives output from the EDP center.
Top management involvement (1): The positive or negative degree of interest, enthusiasm, support, or participation of any management level above the user's own level toward computer-based information systems or services or toward the computer staff which supports them.
Organizational competition with EDP (2): The contention between the respondent's organizational unit and the EDP unit when vying for organizational resources or for responsibility for the success or failure of computer-based information systems or services of interest to both parties.
Priorities determination (3): Policies and procedures which establish precedence for the allocation of EDP resources and services between different organizational units and their requests.
Charge-back method (4): The schedule of charges and the procedures for assessing users on a pro rata basis for the EDP resources and services that they utilize.
Vendor support (12): The type and quality of the service rendered by a vendor, either directly or indirectly, to the user to maintain the hardware or software required by that organization.
Understanding of systems (30): The degree of comprehension that a user possesses about the computer-based information systems or services that are provided.
Feeling of participation (33): The degree of involvement and commitment which the user shares with the EDP staff and others toward the functioning of the computer-based information systems and services.
Feeling of control (34): The user's awareness of the personal power or lack of power to regulate, direct, or dominate the development, alteration, and/or execution of the computer-based information systems or services which serve the user's perceived function.
Degree of training (35): The amount of specialized instruction and practice that is afforded to the user to increase the user's proficiency in utilizing the computer capability that is available.

Organizational factors
Job effects (36): The changes in job freedom and job performance that are ascertained by the user as resulting from modifications induced by the computer-based information systems and services.

Note. EDP = electronic data processing.
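Scores built from groups of items like these are commonly checked for internal consistency with Cronbach's alpha (see Novick & Lewis in the references). A minimal pure-Python computation might look like the following; the input layout (one list per item, one entry per respondent) and the example data are illustrative assumptions:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list per questionnaire item, each holding the numeric
    responses of the same respondents in the same order.
    """
    k = len(items)      # number of items in the scale
    n = len(items[0])   # number of respondents

    def var(xs):        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent, summed across items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three perfectly consistent items (made-up data) give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

Values near 1 indicate that the grouped items move together across respondents, which is the usual justification for averaging them into one construct score.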
Abstract
In the current healthcare industry, physicians' resistance to healthcare information technology (HIT) adoption and the mixed results of the impact that electronic health records (EHRs) bring to physicians' work have long been concerns. This study has three primary objectives: (1) build and contextualize a socio-technical evaluation model to assess the interaction between EHR and physician