THE BENEFITS AND COSTS OF ACCREDITATION OF UNDERGRADUATE MEDICAL
EDUCATION PROGRAMS LEADING TO THE MD DEGREE IN THE UNITED STATES
AND ITS TERRITORIES
by
Dalal J. Muhtadi
______________________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August, 2013
Copyright 2013 Dalal J. Muhtadi
Acknowledgments
My most sincere gratitude and appreciation for the monumental support and
assistance in this study are extended to the incredible family, friends, and colleagues who
so generously offered their valuable advice and endless affection, drawn from many long
years of expertise not only in the fields of health and accreditation but also in life. Indeed,
this journey of knowledge would never have been possible without my incredible daughter,
Dr. Nora Muhtadi. In addition, this year-long research work would not have seen the light
without the unwavering support and assistance extended by my remarkable advisor, mentor,
friend, soul-mate, and committee member, Dr. Christine Neish. Furthermore, only through
the remarkable wisdom, supervision, and contributions of Dr. Robert Keim, Chair of the
dissertation committee, was this study given life and the opportunity to contribute to our
knowledge about specialized accreditation. To Dr. Kristan Venegas, who encouraged me to
pursue this doctoral degree from day one, I express my heartfelt appreciation for a strong
bond of respect and friendship that will last forever. Last but not least, I am deeply grateful
to Dr. Khaled Bahjri for his incredible advice and expertise, and to all the participants who
took the time to share their thoughts and experiences regarding the topic of this research
and entrusted me with the opportunity to hear their voices in my quest to advance relevant
knowledge and the quality of medical education and patient care nationally and globally.
Table of Contents
Acknowledgments ..............................................................................................................................2
List of Tables ...................................................................................................................................7
List of Figures ..................................................................................................................................9
Abstract ..........................................................................................................................................10
Chapter 1: Overview of the Study .................................................................................................12
Introduction ...............................................................................................................................12
Statement of the Problem ..........................................................................................................16
Purpose of the Study .................................................................................................................18
Importance of the Study ............................................................................................................18
Definitions.................................................................................................................................20
Chapter 2: Literature Review .........................................................................................................22
The Evolution of Accreditation in Higher Education ...............................................................22
Overview of Specialized Accreditation ....................................................................................32
Specialized Accreditation by the LCME ..................................................................................34
The Process of Accreditation ...............................................................................................36
The Accreditation Process Timeline ....................................................................................41
Costs of Accreditation..........................................................................................................43
Benefits of Accreditation .....................................................................................................49
In Conclusion .......................................................................................................................54
Chapter 3: Methodology ................................................................................................................55
Introduction ..............................................................................................................................55
Participants ...............................................................................................................................55
Target Population .................................................................................................................55
Study Sample .......................................................................................................................56
Selection Criteria .................................................................................................................56
Demographics ......................................................................................................................60
The Measurement Tool .............................................................................................................67
Content ................................................................................................................................67
The Cover Letter .................................................................................................................69
Selection Criteria .................................................................................................................69
Reliability and Validity .......................................................................................................69
Feasibility ............................................................................................................................70
Data Collection ..........................................................................................................................71
Data Analysis ............................................................................................................................73
Limitations, Delimitations, and Assumptions ...........................................................................74
Chapter 4: Findings ........................................................................................................................77
Introduction ...............................................................................................................................77
Demographics ............................................................................................................................77
Participants ................................................................................................................................77
Undergraduate Medical Education Programs ............................................................................81
Findings and Discussions by Research Questions ....................................................................82
Research Question #1: The Benefits of Accreditation ........................................................83
Ranked Benefits .............................................................................................................90
Total Benefits .................................................................................................................91
Other Benefits .................................................................................................................97
Research Question #2: The Costs of Accreditation .............................................................99
Ranked Costs ...............................................................................................................103
Total Costs ...................................................................................................................104
Other Costs ..................................................................................................................110
Research Question #3: Benefits vs. Costs of Accreditation .................................................111
Recommended Changes ..............................................................................................115
Chapter 5: Summary and Recommendations ...............................................................................123
Summary of Study .............................................................................................................123
Research Question #1: Perceived Benefits of Accreditation .............................................125
Research Question #2: Perceived Cost of Accreditation ...................................................127
Research Question #3: Perceived Benefits vs. Cost of Accreditation ...............................130
Recommendations for Improving the Value of Accreditation ..........................................131
Continuity of Quality Control/Improvement ...............................................................132
Fostering Innovation/Use of Technology ....................................................................133
Collaboration ...............................................................................................................133
Clarity of Information/Provision of Proper Guidance .................................................134
Flexibility ....................................................................................................................134
Focus on Outcomes vs. Standards ...............................................................................135
Leadership ...................................................................................................................136
Future Research ...........................................................................................................137
Conclusion .........................................................................................................................139
References ....................................................................................................................................140
Appendices ...................................................................................................................................148
Appendix A: Medical Education Programs Leading to the M.D. Degree
Accredited by the LCME in the United States and its Territories .....................................148
Appendix B: New and Developing Medical Education Programs in the
United States and Its Territories .........................................................................................159
Appendix C: The Cover Letter/Follow-up Letter of the Survey ........................................163
Appendix D: The Survey Protocol ....................................................................................166
List of Tables
Table 1: Chronology of Higher Education and Accreditation in the United States.......................23
Table 2: Higher Education and Population Growth in the United States ......................................28
Table 3: Accreditation Process of LCME ......................................................................................36
Table 4: Timeline for Accreditation by LCME ..............................................................................42
Table 5: Number of US Undergraduate Medical Education Programs .........................................57
Table 6: Departments of Medical Education Programs .................................................................59
Table 7: Type of Fully-accredited Four-year MD Programs .........................................................60
Table 8: Number of Permanent Department Chairs by Cluster and Gender .................................62
Table 9: Number of Permanent Department Chairs by State ........................................................62
Table 10: Distribution of Department Chairs by Race ..................................................................65
Table 11: Salaries of Department Chairs and Faculty Members ...................................................67
Table 12: Current Position of Participants .....................................................................................78
Table 13: The Study Sample and Current Positions ......................................................................78
Table 14: Number of Positions of Participants ..............................................................................79
Table 15: Participants by Type of Medical Education Programs ..................................................79
Table 16: Participants by Region of Programs ..............................................................................80
Table 17: Participation in the Accreditation Process .....................................................................81
Table 18: Number of Graduates .....................................................................................................82
Table 19: Medical Education Programs by Number of Graduates ................................................82
Table 20: Benefits of Accreditation ...............................................................................................83
Table 21: Total Overall Benefits ....................................................................................................92
Table 22: Overall Level of Benefits...............................................................................................93
Table 23: Rotated Component Matrix for 11 Benefits ..................................................................94
Table 24: Accreditation and Overall Level of Benefits (Factors 1 and 2 Combined) ....................94
Table 25: Impact on School Accountability and Credibility (Factor 1) ........................................95
Table 26: Impact on Outcomes (Factor 2) .....................................................................................95
Table 27: Mean Value Score for Benefits......................................................................................96
Table 28: Mean Value Score (Overall Benefit Score) and Independent Variables .......................97
Table 29: Costs of Accreditation .................................................................................................100
Table 30: Total Overall Cost ........................................................................................................105
Table 31: Rotated Component Matrix for Five Costs ..................................................................106
Table 32: Accreditation and Total Overall Level of Costs ..........................................................106
Table 33: Impact on Personnel Cost Score (Factor 1) .................................................................107
Table 34: Impact on Program-related Cost Score (Factor 2) .......................................................107
Table 35: Mean Value Score for Costs ........................................................................................108
Table 36: Mean Value Score (Overall Cost Score) .....................................................................108
Table 37: Mean Value Score (Overall Cost Score) and Independent Variables ..........................109
Table 38: Total Benefits vs. Total Costs ......................................................................................112
Table 39: Overall Benefit Score and Cost-Benefit ......................................................................113
Table 40: Cost-benefit and Independent Variables: Univariate Logistic Regression ...................114
Table 41: Cost-benefits and Independent Variables: Multivariate Logistic Regression ..............115
Table 42: Recommended Changes for Accreditation ..........................................................116, 132
List of Figures
Figure 1: Accreditation and Program Quality ................................................................................85
Figure 2: Accreditation and Program Assessment .........................................................................85
Figure 3: Accreditation as a Stimulus for Improvement ................................................................86
Figure 4: Accreditation and Benchmarking ...................................................................................86
Figure 5: Accreditation and Access to Funds ................................................................................87
Figure 6: Accreditation and Ranking .............................................................................................87
Figure 7: Accreditation and Quality Faculty Members .................................................................88
Figure 8: Accreditation and Faculty Members Experience ...........................................................89
Figure 9: Accreditation and Quality Students ................................................................................89
Figure 10: Accreditation and Student Experience .........................................................................90
Figure 11: Accreditation and SLOs ...............................................................................................91
Figure 12: Overall Total Benefit ....................................................................................................92
Figure 13: Accreditation and Direct Financial Monetary Expenses ............................................101
Figure 14: Accreditation and Total Amount of Time ..................................................................102
Figure 15: Accreditation and Total Amount of Effort .................................................................102
Figure 16: Accreditation and Impact on Morale ..........................................................................103
Figure 17: Accreditation and Impact on Academic Freedom ......................................................104
Figure 18: Overall Total Cost ......................................................................................................105
Abstract
This study assessed the value of accreditation of all 126 fully accredited four-year
undergraduate medical education programs leading to the MD degree in the US through two
lenses, ‘perceived benefits and costs’ from the perspective of the leadership of internal
stakeholders of the aforementioned programs. The online survey was sent to a random cluster
sample of 1,096 department chairs, assistant/associate chairs, faculty members, and other lead
administrators in the programs. With a response rate of 8%, a total of 87 usable responses were
received and analyzed. The descriptive statistical results of the survey indicated that approximately
74% of participants were ‘department chairs, assistants or associates’, 77% held only ‘one
position’, 51% worked in ‘public’ programs and 78% worked in programs that graduated ‘more
than 100 students’ in the last academic year. Participants worked in programs that were located
in ‘all five regions’ of the continental US and approximately 72% of them ‘participated’ in the
accreditation process of their programs. Inferential statistical analyses including univariate and
multivariate logistic regression were performed for the dependent variables of benefits, costs,
costs vs. benefits (cost-benefits) and five independent variables (participation in the accreditation
process, number of positions per participant, program type, region and number of graduates in
last academic year). Analyses used a 95% confidence level (Type I error of 0.05). The reliability
coefficient (Cronbach’s alpha) was high: 0.913 for benefits and 0.758 for costs. Effect sizes were
reported as odds ratios. For benefits, results indicated that respondents who ‘did
not participate’ in the accreditation process were approximately 2.4 times as likely (141% more likely) to
report ‘high benefit’ than people who ‘participated’ in the process after adjusting for all variables
in the model. By ‘program type’, participants who worked in ‘private’ programs were
approximately 2.3 times as likely (130% more likely) to report ‘high benefit’ than those who worked in
‘public’ programs. For costs, participants who worked in ‘private’ programs were approximately
2.2 times as likely (120% more likely) to report a ‘low’ cost of accreditation after adjusting for all
variables in the model. Upon being asked to rate ‘overall level’ of costs vs. benefits (cost-
benefits), results indicated that participants who ‘did not participate’ in the accreditation process
were approximately 50% less likely to report ‘benefits exceed or equal costs’ than participants
who ‘participated’ in the process. Furthermore, a statistically significant correlation (p < 0.001)
was found between the ‘overall benefits score’ and ‘cost-benefit’. Approximately 78% of
participants who reported ‘costs exceeded benefits’ also reported the ‘overall benefit score’ to be
‘low.’ Finally, in ranking 11 benefits of specialized accreditation, participants rated the
‘provision of a structured mechanism to assess the medical education programs’ as the ‘highest’
benefit of accreditation followed by the role of accreditation as ‘a stimulus for program
improvement.’ ‘Improved overall quality of program’ and ‘benchmarking’ were both in third
place. For five costs of specialized accreditation, participants ranked the ‘total amount of time’
spent by internal stakeholders on accreditation as the ‘highest’ cost followed by ‘total amount of
effort’ in second and third places. Based on participants’ opinions/perceptions (actual needs), this
study offered significant recommendations to improve the approach, process and outcome of
accreditation. They included: continuity of quality control and improvement; fostering
innovation and the use of technology; enhancing leadership; improving horizontal and vertical
collaboration; and provision of clear information, proper guidance, a focus on outcomes vs.
standards, and flexibility by the accrediting agency. In a culture of trust, mutual respect,
collaboration and open communication, these recommendations can enhance the value of
accreditation by the LCME, promote excellence in the quality of medical education/programs
and their ultimate mission of extending high standards of patient care nationally and globally.
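The statistics reported in the abstract can be made concrete with a short sketch. The following Python snippet is purely illustrative and uses hypothetical numbers, not the study's data; it shows how an odds ratio translates into the "percent more likely" phrasing used above, and how a reliability coefficient (Cronbach's alpha) is computed from survey item scores.

```python
# Illustrative sketch only -- hypothetical values, not the study's data.

def odds_ratio_to_percent_change(odds_ratio):
    """An odds ratio of 2.4 corresponds to roughly 140% higher odds."""
    return (odds_ratio - 1.0) * 100.0

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    `items` is a list of columns; each column holds one survey item's
    scores, with rows aligned by respondent.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def variance(xs):       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

print(round(odds_ratio_to_percent_change(2.4), 1))             # -> 140.0
print(round(cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5]]), 3))  # -> 1.0
```

Note that an odds ratio of 2.4 means 2.4 times the odds, i.e., 140% *higher* odds; reporting it as "2.4 times (141%) more likely" mixes the two conventions, which is why the abstract above phrases it as "2.4 times as likely (141% more likely)."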
Chapter 1
Overview of the Study
Introduction
Accreditation of higher education in the United States (US) is a process of external
quality control and continuous improvement that scrutinizes post-secondary academic
institutions and programs (The Council for Higher Education Accreditation [CHEA], 2012a;
U.S. Department of Education [USDE], 2012a). It originally emerged in the US over a hundred
years ago as a tool for quality assurance due to concerns about protecting public safety and
health, and serving public interests (Eaton, 2009). Over time, accreditation of higher education
programs gradually evolved, and became an integral component of the academic landscape.
From the early days, accreditation had a rich history that both influenced post-secondary
education and the government, and was shaped by them (Brittingham, 2009; Ewell, 2008). The
journey of accreditation started in the 19th century when higher education witnessed a rapid
expansion and proliferation of its academic institutions (Lucas, 2006). In the absence of any
governmental entity at that time to oversee the evolving tasks and needs of the academe, higher
education suffered from confusion, lack of clear identity, as well as minimum standards of
quality (Brittingham, 2009; Ewell, 2008; Lucas, 2006). As a result, accreditation gradually
emerged to fill that gap, and establish clarity and quality in the academe (Brittingham, 2009).
This private, self-regulatory tool validated, ensured, and improved the quality of education in
American colleges and universities (Brittingham, 2009; Eaton, 2009, 2011; Ewell, 2008, 2011;
CHEA, 2012a; USDE, 2012a).
Accreditation gained great momentum and endorsement of the legitimacy of its role in
higher education in the 20th century. The GI Bill, the Veterans Readjustment Act, and the Higher
Education Act (HEA), all aided in directing millions of dollars in federal aid to students,
including returning veterans (Lucas, 2006; Martin, 1994). As a result, academia enjoyed
expanded access, enhanced affordability, and heightened mission differentiation (Lucas, 2006).
These factors created waves of massification (the phenomenon of tremendous expansion of
enrollment in the last 30 years), rapid proliferation, and expansion that resulted, once more, in
confusion about the identity, mission, and quality of higher education (Brittingham, 2009). The
government turned to accreditation for quality assurance, hence affirming its mission in the
academe, bestowing the duties of a gatekeeper for financial aid upon it, and solidifying its
credibility when it came to quality control and assurance (Martin, 1994; Brittingham, 2009;
Eaton, 2009).
The evolution of accreditation resulted in a unique system in the US. Its uniqueness stems
from the fact that it is private, external, voluntary, cost-effective, and mission-based (Eaton,
2011; USDE, 2012a). Vigorous self-assessments conducted by programs/schools seeking or
renewing accreditation are evaluated by expert teams of peers (Eaton, 2009, 2011; CHEA,
2012d). The USDE, and CHEA promote high standards in an atmosphere of transparency by
providing recognition to accreditation agencies which are accountable to their unique systems
and imperative missions (USDE, 2012a; CHEA, 2012e, 2012f).
Accrediting agencies are divided into two major categories: institutional and
specialized/programmatic (CHEA, 2012f, 2012g; Eaton, 2009). Institutional accreditation, in
which the entire academic institution is recognized, may be awarded by regional or national
accreditation organizations. Specialized accreditation, in fields such as law and the health
professions, including medicine, is awarded by discipline-specific accreditation organizations (Brittingham,
2009; Eaton, 2009; Martin, 1993; CHEA, 2012f, 2012g).
Specialized or programmatic accreditation in the US has a history that is intertwined with
medicine and public health (Brittingham, 2009; Chapman, 1974). This history dates back to the
19th century when the inferior quality of medical schools, and hence of health services, raised
concerns among leaders and the public alike. The call to better prepare medical students for the
professional tasks ahead of them demanded adequate standards and better formal curricula in
medical schools (Chapman, 1974). Such need culminated in the formation of the American
Medical Association (AMA) (AMA, 2012a). The AMA filled the gap as it played a major role in
the quality assurance of not only medicine but also the health of the public (AMA, 2012b).
Specialized accreditation of schools of medicine witnessed another milestone with quality
control and assurance. In the 20th century, the Carnegie Foundation for the Advancement of
Teaching and its Flexner Report raised the bar of quality control by shedding light on a
standard model for medical education that was pioneered by the Johns Hopkins Medical School
(Brittingham, 2009; Cooke, Irby, Sullivan & Ludmerer, 2006; JAMA, 1989; The Liaison
Committee on Medical Education [LCME], 2012a). As a result, nearly half of the medical
colleges at the time closed due to their inferior quality (Brittingham, 2009; Chapman, 1974).
Soon enough, other professions followed suit as quality assurance emerged and evolved as a
means of improving and maintaining high standards of the academe and its institutions (Folden,
1980). Such leadership and striving for excellence in the medical arena continue unabated.
Currently, with the aid of cutting-edge technology, American medicine is at the forefront in
terms of the quality of its schools and graduates, who are unparalleled worldwide.
Specialized accreditation of medical education programs leading to the MD degree in the
US and its territories is conducted by a single recognized accreditor: The LCME (2012b, 2012c).
Its mission of quality assurance and program improvement is achieved through accreditation of
137 “complete and independent medical education programs where students are geographically
located in the United States” (LCME, 2012b, 2012c, 2012d). These programs must be operated
by medical schools/universities that are chartered and institutionally-accredited in the US. The
LCME is sponsored equally by the AMA and the Association of American Medical Colleges
(AAMC) (LCME, 2012a, 2012b, 2012e). This agency is recognized by the USDE, medical
licensing boards, the US Congress, and international bodies as a reliable and trusted authority on
medical education programs (LCME, 2012b, 2012c).
To be accredited, the medical education program must meet numerous guidelines and
standards that are specified in the Functions and Structure of a Medical School guide (LCME,
2012a, 2012b, 2012c). The accreditation process includes: Self-study through the preparation of
an educational database according to guidelines and standards of the LCME, compilation of a
self-study report, site/survey visit by the LCME team, a survey report, and a determination of
accreditation status by the accrediting agency (LCME, 2012b, 2012c, 2012f). Despite its
tremendous roles and benefits, accreditation standards and processes demand significant
institutional commitments of time, effort, and funds. These place a heavy burden on medical
education programs/schools that seek specialized accreditation in the US.
During times of dwindling financial resources, increasing competition for finite
resources, decreasing access and affordability, and rising default rates on student loans, the high
costs of accreditation create a major burden on the academe and its internal and external
stakeholders (Project on Student Debt, 2012a, 2012b, 2012c). Increased public scrutiny calls for
improved accountability, transparency, and better return on investment (ROI) (Brittingham,
2009; Ewell, 2008; Martin, 1994). Such demands warrant a comprehensive analysis of
accreditation and its costs and benefits. The need for accreditation to rise to another level in the
face of old and modern-day challenges is further exacerbated by the scarcity of research on the subject
(Shibley & Volkwein, 2002). After all, “knowledge of and support for accreditation remains a
mile wide but an inch deep” (Ikenberry, 2009, p. 4).
Statement of the Problem
Throughout the last 70 years, the LCME has embarked on a continuous journey of quality
assurance, extending diligent effort to raise the bar for the standards, functions,
and performance of medical education in this country (LCME, 2012b, 2012c). This did not come
without waves of tension and divergence. The economic downturn of the 1990s gave
momentum to tension within “the triad” of government, higher education, and accreditors
(Brittingham, 2009; Ewell, 2008; Martin, 1994). It renewed the social and governmental calls for
accountability in accreditation of higher education programs. Similar circumstances and
challenges that exist today are creating more turbulent currents and divergences in the path of
accreditation. Increasing financial and time constraints, rising default rates on student loans, and
heightened demands for better ROI in terms of student learning outcomes are only a few of these
challenges. Growth in knowledge through research and information technology, and the global
academic revolution which includes massification, changing demographics and mobility patterns
present national and international challenges (Altbach & Knight, 2007; Altbach, Reisberg, &
Rubley, 2009, 2010). The wave of globalization that is internationalizing American academe,
privatizing academic institutions, and intensifying competitiveness in the global market, is
creating high demands for greater accountability, transparency, and better ROI. Indeed, such
intense demands greatly impinge on, and re-direct, the progress of quality assurance in the
academe (Armstrong, 2007; Altbach & Knight, 2007; Altbach, Reisberg, & Rubley, 2009, 2010;
Qiang, 2003).
Accreditation in general, and specialized accreditation in particular, is constantly met
with resistance from within academia (Lederman & Redden, 2007; Malandra, 2008; Rourke &
Brooks, 1996; CHEA, 2007). The tremendous monetary and non-monetary costs, in terms of
funds, time, and effort spent in preparing for and going through the process of seeking, renewing,
and maintaining accreditation, take their toll on programs, schools, universities, and their
stakeholders (Greenlaw, 2009; Malandra, 2008), including those of medical education programs.
The process starts 18 months before the site visit, and continues as long as the program/school
desires to maintain such accreditation (LCME, 2012e). During these processes, the formation of
steering committees and subcommittees composed of faculty members, administrators, and
students to oversee the process from beginning to end greatly impinges on the time and work
schedules of faculty members and staff who are already working beyond full capacity due to
existing budget constraints (Greenlaw, 2009; LCME, 2012e).
Additional requirements placed upon new medical graduates by the licensing boards
further cloud the horizon for this journey. A new graduate with an MD degree cannot
practice medicine in the US without passing the state board licensure exam. Such exams are
considered more efficient in assessing the competency of these new graduates than specialized
accreditation. Hence, the existence of such a dual system for ensuring and overseeing the quality
of graduates adds to the confusion and hesitation among stakeholders regarding the value of
specialized accreditation.
Although there is “no magic bullet” or a “one-size-fits-all” accreditation
system with standards that can cater to all academic programs, schools, or universities, it is
imperative to address the aforementioned challenges (Malandra, 2009). They amplify the
pressing need for accreditation to “focus, clarify, align, and make more consistent” (Malandra,
2009, p. 62) and to address the modification of standards in a clear and forthright manner. Such
changes must be deeply rooted in solid evidence based on up-to-date research.
Purpose of the Study
This study assessed the value of accreditation of undergraduate medical education
programs through two lenses, perceived benefits and costs, from the perspective of internal
stakeholders of the programs. To this end, the research questions were:
1. What are the perceived benefits of accreditation of undergraduate medical education
programs in the US?
2. What are the perceived costs of accreditation of undergraduate medical education
programs in the US?
3. What are the perceived benefits vs. perceived costs of accreditation of undergraduate
medical education programs in the US?
Importance of the Study
Health profession programs led the way in initiating and maintaining quality control
through specialized accreditation (Brittingham, 2009; Folden, 1980). Such leadership and
striving for high quality and standards can continue unabated because accreditation is a concept
that is deeply embedded in the values and culture of this nation (Brittingham, 2009). This study
allowed for the continued assessment, re-evaluation, and improvement of the value of such
accreditation that characterized the evolution of our quality control system from its inception. In
brief, it provided a pause along the way for educators, professionals and the public to reflect, re-
align, and continue on the course of quality control while seeking excellence in the functions,
standards, and performance of American medical education.
The costs and benefits of specialized accreditation of medical education programs have
numerous consequences for academic institutions (administrators, faculty members, and
students), as well as external stakeholders (accreditors, government, and the public). The self-
study afforded internal stakeholders, faculty members, students, and administrators opportunities
to examine their perspectives regarding present and future program directions. Since faculty
members bear the brunt of accreditation efforts and often receive the fewest benefits, it was
important to take a fresh look at the costs and benefits of these activities (Malandra, 2009).
Intangible benefits such as experimentation and professional development could lead to better
teaching and learning practices among faculty members. For students, national and international,
the findings of the study could aid in enhancing the quality, experience, and outcomes of learning,
not only in terms of internship, mobility, and employment opportunities, but also in terms of the
quality of medical care extended to the public here and abroad (Schirlo & Heusser, 2010). Finally, for
administrators, this study would afford them a chance to re-examine and revamp school
visibility, competitiveness, recognition, ranking and prestige, student applications and
enrollment, and public and private funding opportunities (Schirlo & Heusser, 2010).
In addition, this study provided a solid basis for evaluating accreditation by its recipients and
its provider, the LCME and its sponsors. In the US, accrediting agencies are funded by: 1. annual
dues paid by accredited programs/schools, 2. fees of accreditation reviews paid by the
programs/schools being accredited, and 3. sponsorship funds from government, sponsoring
organizations, or private foundations (Eaton, 2009, 2011). These accrediting agencies reported
an expenditure of $92 million in 2006-2007 (Eaton, 2009). The LCME is sponsored by both the
AMA and AAMC through their annual membership dues receipts of $38 million and $12
million, respectively, in 2010/2011 (AMA, 2012c; LCME, 2012b, 2012c). Such a high financial
investment warranted a thorough analysis of the costs and benefits of specialized accreditation
by the LCME and the recipient programs in order to achieve a higher level of accountability,
transparency, and a better ROI during times of limited financial resources, heightened national
competition for available funds, and increased globalization and market forces abroad.
While there is an abundance of qualitative literature about accreditation, by sharp
contrast, there is an alarming scarcity of empirical research on the subject (El-Khawas, 1993).
Public scrutiny and intense debates about accreditation over the years have failed to produce
a solid base of published qualitative and quantitative evidence. This study
addressed this gap by assessing accreditation, evaluating its impact on the academe, and helping
stakeholders make more educated judgments and decisions that will optimize our current system.
Finally, this study served as a solid source of up-to-date information when it came to
comparing and contrasting other accreditation systems and related issues in other countries. With
ever-changing educational, political, and social dynamics in Europe (Schirlo & Heusser, 2010)
and elsewhere, the study should guide future research globally. With pertinent country-specific
modifications, it could serve as a stepping stone for much-needed improvements, not only in
medical education, but also in the health and quality of life of care recipients internationally.
Definitions of Terms
Specialized (Programmatic) Accreditation: Accreditation of a specific program,
department, or school within an institution (Eaton, 2009). The status can be fully
accredited, partially accredited, or not accredited by the LCME.
The LCME: The Liaison Committee on Medical Education (LCME, 2012b, 2012c).
Medical Education Program: An independent and complete medical education program
that is operated by a medical school or university that is chartered in the US with the medical
students geographically located in the US (LCME, 2012b, 2012c, 2012d).
Department Chair of the Medical Education Program: The lead administrator
in charge of a specific academic (basic sciences, clinical, or surgical) department within the
undergraduate medical education program.
Assistant/Associate Department Chair: The second-in-command lead administrator in
a specific academic department. He/she fulfills the duties of the department chair in his/her
absence.
Others: Faculty members and lead administrators of administrative (non-academic)
departments of the undergraduate medical education program. Lead administrators encompassed
associate/assistant deans, directors, accreditation liaison officers (ALOs), board members,
research provosts, and curriculum committee chairmen.
Benefits of Accreditation: The perceived (tangible and intangible) gains
realized by the medical education program due to specialized accreditation by the LCME.
Costs of Accreditation: The perceived (monetary and non-monetary)
investment of resources by the medical education program to achieve and monitor specialized
accreditation by the LCME.
Value of Accreditation: Estimated worth of accreditation by the LCME based on the
costs and benefits.
Institutional Accreditation: Accreditation of the entire academic institution (Eaton,
2009).
Chapter 2
Literature Review
Specialized accreditation of medical education programs leading to the MD degree by
LCME began over 70 years ago with the mission of ensuring self-evaluation and improvement of
the quality of medical education (LCME, 2012b, 2012c). Through a non-governmental peer-
review system, it provides internal and external stakeholders with a seal of approval that the
programs meet and exceed nationally accepted standards of functions and performance relative
to quality control and assurance (LCME, 2012b, 2012c).
Due to the limited understanding of accreditation among stakeholders, and to the scarcity
of quantitative research, a review of the body of relevant literature will be offered in this chapter.
This review will include: 1. the evolution of US accreditation in higher education, 2. overview of
specialized accreditation in the US, 3. specialized accreditation of medical education programs
leading to the MD degree by the LCME, 4. costs of accreditation along with its process and
timeline, and 5. benefits of accreditation. The review will provide the context in which a cost-
benefit analysis of accreditation of medical education programs by the LCME can be investigated and
analyzed. Furthermore, it will address the existing gap in knowledge and empirical research
about the topic, and advance the wheel of continuous improvement and striving for excellence in
medical education.
The Evolution of Accreditation in Higher Education
Accreditation of higher education in the US is over 100 years old (Eaton, 2009). It is a
process of external quality control and continuous improvement that scrutinizes post-secondary
academic institutions and programs (CHEA, 2012a; USDE, 2012a). It originally emerged as a
tool for quality assurance due to concerns about protecting public safety and health, and serving
public interests (Eaton, 2009). Over time, accreditation of higher education gradually evolved,
and became an integral component of the academic landscape.
In the academe, accreditation has a rich history that both influenced post-secondary
education and was shaped by it, Table 1 (Brittingham, 2009; Ewell, 2008). In the 1800s, the
separation of church and state, the endorsement of autonomy and privatization of the American
higher education (Dartmouth College case, 1819), and the lack of governmental control allowed
for the rapid expansion and proliferation of post-secondary colleges and universities, Table 2
(Brittingham, 2009; Ewell, 2008; Lucas, 2006). Such diversity and fast expansion created great
confusion, a lack of clear identity, and an absence of minimum standards of quality in higher
education. Coupled with the vacuum created by the absence of a governmental entity to oversee
such tasks and needs, accreditation gradually emerged to fill that gap (Brittingham, 2009).
Through accrediting associations that were privately established between 1885 and 1924,
accreditation started establishing clarity and quality in higher education (Brittingham, 2009). In
1929, the first institutional accreditation of post-secondary institutions, public and private, was
granted in New England, marking the undisputed and distinct emergence of accreditation as a
private, self-regulatory tool that validates, ensures, and improves the quality of education in
American colleges and universities (Brittingham, 2009; Eaton, 2009, 2011; Ewell, 2008; CHEA,
2012a; USDE, 2012a).
Table 1
Chronology of Higher Education and Accreditation in the United States
Date Higher Education History Accreditation
1636 Harvard College established
1791 US Bill of Rights
(Powers reserved to the states or people if not
mentioned in the Constitution)
1819 Dartmouth College case decided by the US Supreme
Court
(Preservation of rights to operate chartered private
colleges without state takeover)
1839 First state normal school started (Lexington,
Massachusetts)
1847 American Medical Association (AMA) established
1862 Morrill Act (Creation of land grant colleges)
1870 US Bureau of Education official list of colleges
published
1876 Johns Hopkins University founded
1885 New England Association of
Schools and Colleges
(NEASC) founded
1887 Middle States Association of
Colleges and Schools
(MSACS) founded
1895 North Central Association of
Schools and Colleges
(NCASC) founded
Southern Association of
Colleges and Schools (SACS)
founded
1900 College Entrance Exam Board founded
1901 First 2-yr. institution founded by University of
Chicago (Joliet Junior College)
1905 Carnegie Foundation for Higher Education list of
recognized colleges published
1906 Carnegie Unit developed
1910 Flexner Report on medical education compiled
(Nearly half of medical colleges closed down)
1913 North Central Association
(NCA) specify criteria for
collegiate eligibility
1917 Northwest Association of
Colleges and Universities
(NACU) established
1922 American Council on Education holds
standardizing conference
1924 Western Association of
Schools and Colleges
(WASC) established
1925 American Library Association list of accredited
schools published
1926 National Home Study Council
established
(Predecessor to Distance
Education and Training
Council)
1934 North Central Association
(NCA) adopts mission-
oriented approach to
accreditation
1940 American Association of University Professors
statement on academic freedom is formulated
1944 GI Bill (Direct funding to college students is
provided)
1947 Network of community colleges is promoted by
Truman Commission (mainly to serve returning
GIs)
1949 National Commission on
Accrediting (NCA)
established by higher
education associations
(for reduction of duplication
and burden in accreditation)
1950s Mission-centered standards,
self-study, team visit,
commission decision, and
periodic review of
accreditation developed
1951 NEASC establishes permanent
office and staff
1952 Veterans Readjustment Act (VRA) (Financial aid is
tied to institutional accreditation)
1953 Black colleges accepted as full
members of SACS
1964 Federation of Regional
Accrediting Commissions of
Higher Education (FRACHE)
founded
1965 Higher Education Act (HEA) passed (Expanding
financial aid to students)
1968 Federal recognition of accreditors formal process is
established
1972 HEA reauthorization (for-profit schools can
participate in financial aid)
1975 NCA and FRACHE merge to
form Council on
Postsecondary Accreditation
(COPA)
1980 US Department of Education initiated
1984 National Institute of Education, Involvement in
Learning report published (calling for
accountability, judging institutions by effectiveness
of educating students)
SACS adopts Institutional
Effectiveness standard
1992 HEA reauthorization (almost breaks ties between
financial aid and accreditation)
National Advisory Committee for Institutional
Quality and Integrity established.
State Postsecondary Review entities (SPREs)
authorized to review institutions with high default
rates on student loans
1993 COPA dissolved. Council on
Recognition of Higher
Education formed
(to take over recognition
function)
1994 SPREs defunded by Republican Congress Council on Higher Education
Accreditation (CHEA)
established
1998 HEA reauthorization (SPREs not mentioned)
2006 Commission on the Future of Higher Education
publishes Spellings report
2008 HEA reauthorization
(accountability for accreditation retained, secretary
of education cannot regulate how accreditors judge
student learning, and Secretary, House, and Senate
appoint advisory committee)
Source: Brittingham (2009)
With the passing of the GI Bill in 1944, the Veterans Readjustment Act of 1952, and the
HEA in 1965, millions of dollars in financial aid were made available directly to students,
including veterans (Lucas, 2006; Martin, 1994). Hence, access to higher education was greatly
expanded, and mission differentiation in academe was in full swing (Lucas, 2006). With the
resultant rapid rate of proliferation and expansion of academe, and confusion about the identity,
mission, and quality of its institutions, the government turned to accreditation for quality
assurance, thereby granting accreditation and its private agencies great momentum and
credibility in the academic arena that continue to this day (Brittingham, 2009; Eaton,
2009; Martin, 1994).
Table 2
Higher Education and Population Growth in the United States
Dimension/Year               1790   1870   1890    1930    1945    1975    1995    2005
US population (millions)      3.9   29.8   62.6   123.1   139.9   215.4   262.8   295.5
Students enrolled (millions) 0.001  0.06   0.16     1.1     1.7    11.2    14.3    17.5
Number of institutions         11    563    998   1,409   1,768   2,747   3,706   4,216
Source: Brittingham (2009)
The evolution of accreditation over time, and the interwoven relation between
accreditation, higher education, and government in the triad, did not come without waves of
concerns and tensions (Brittingham, 2009; Martin, 1994). Social interest in higher education as a
tool to produce a highly skilled labor force rose to new heights
(Brittingham, 2009; Lucas, 2006). Academe, in the eyes of the public, was seen as the way to
advance the economic growth of the country and hence, in turn, academe needed to fulfill that
role and be accountable to the public. Such augmented interest and expectations from the public,
coupled with the increased complexity, differentiation of academe, confusion, and lower-than-
expected outcomes were among the key factors that created waves of tension in the triad (Astin,
Keup, & Lindholm, 2002; Brittingham, 2009; Burd, 2003; Martin, 1994; Ratcliff, 1996).
Furthermore, the unquestionable authority of private accrediting agencies as gatekeepers
of institutional access to federal funds, and the fraud and abuse of financial aid allotted for higher
education fueled the public demand for more transparency and accountability, and continued to
amplify the episodes of divergence within the triad (Brittingham, 2009; Martin, 1994). As a
result, a federal panel demanded more accountability from academe in terms of access, degree
completion, and student learning outcomes in 1984, and Congress threatened to sever the link
between financial aid and accreditation/accreditors during the reauthorization of HEA in 1992
(Brittingham, 2009). More recently, in 2004, accountability and what by now became known as a
movement, was revived and granted momentum by the Measuring Up 2004 report of the
National Center for Public Policy and Higher Education (NCPPHE). The report indicated that
academic institutions are “under performing” (NCPPHE, 2004, p. 6). In 2006, the Commission
on the Future of Higher Education, also referred to as the Spellings Commission, demanded solid
evidence of a better return on investment (ROI) in academe in terms of student learning
outcomes (USDE, 2012c).
Finally, in 2008, the high rate of default on student loans renewed past tensions
between federal regulators and accreditors over issues of recognition and accountability (Ewell,
2008; Martin, 1994). Such currents of divergence and tension, which have drawn public scrutiny
over the years, are expected to persist as higher education continues to struggle with issues
of access, rising costs, declining affordability (Project on Student Debt, 2012a, 2012b, 2012c),
and lower rates of ROI, at the very time that the academe is more vital in the face of not only an
economic downturn, but also fierce competition for a share of the market, nationally and
globally.
The evolution of accreditation over time resulted in a unique system that plays numerous
roles in higher education in the US. Its uniqueness stems from the fact that it is private,
external, voluntary, cost-effective, and mission-based (Eaton, 2011; USDE, 2012a). This
vigorous self-assessment, conducted by the program/school seeking or renewing
accreditation, is evaluated by an expert team of peers in academe (Eaton, 2009, 2011; CHEA,
2012d). Its roles include, among many others: ensuring quality control and continuous
improvement in academe, promoting access to federal and state funds, as well as donations by
the private business sector, and facilitating student mobility nationally and internationally
(Brittingham, 2009; Eaton, 2009, 2011). Holding accreditation and its agencies accountable to
their unique mission and imperative roles is provided through “recognition” by USDE, and
CHEA (USDE, 2012a; CHEA, 2012e, 2012f).
Eighty recognized accrediting agencies in the US compete in the provision of
accreditation of higher education (Eaton, 2012). Ironically, the historically-long competition
among accreditors, coupled with the tension factors discussed in detail earlier, led to adverse
results such as lowered standards and simplified requirements for quality assurance by
some accrediting agencies, and degree mills (USDE, 2012a). Consequently, academic
programs/schools were afforded the opportunity to shop around for accrediting agencies with the
least resistance (degree mills) in order to get access to governmental aid (Brittingham, 2009).
addition, while competition is a key pillar of the free market, the emergence of these degree mills
over the years created a sustained challenge to the accreditation arena (CHEA, 2012b, 2012c;
USDE, 2012b). As federal recognition evolved from the provision of public information to
monitoring and regulation (Martin, 1994), the USDE and CHEA continued to ensure the
adherence of accreditors to high standards of quality assurance, and to enforce accountability and
transparency.
According to CHEA (2012f, 2012g), accreditors are divided into two major categories: 1.
institutional, and 2. specialized/programmatic. Institutional accreditation targets the entire
academic institution, and specialized accreditation focuses on a specific program, department, or
school within the institution (Eaton, 2009; Martin, 1993). Accrediting organizations are also
divided into four types: 1. regional, 2. national faith-related, 3. national career-related, and 4.
specialized/programmatic (Brittingham, 2009; Eaton, 2009, 2011; CHEA, 2012f, 2012g).
Regional accreditors serve region-specific, public and private two- and four- year institutions
that are mainly degree-granting and non-profits. National faith-related accreditors tend to be
religiously-affiliated, mainly degree-granting and non-profit institutions. National career-related
cater to degree and non-degree granting institutions that are mainly for profit, career-based,
single-purpose institutions. Lastly, specialized/programmatic accreditors target specific programs
such as law, health professions, and medicine (Brittingham, 2009; Eaton, 2009; CHEA, 2012f,
2012g).
Overview of Specialized Accreditation
Specialized (programmatic) accreditation in the US targets free-standing schools, and
examines specific professions and programs in academe (Eaton, 2009). These programs include
medicine, the health professions, law, and engineering, hosted by institutionally-accredited
academic institutions (Brittingham, 2009; Eaton, 2009; CHEA, 2012f, 2012g).
Specialized and institutional accreditation in higher education overlap in many ways.
While specialized accreditation focuses on a specific program, department, or school within the
academic institution, institutional accreditation targets the entire institution (Eaton, 2009). In
addition, specialized accreditation revolves around reviews by expert peers, who provide a
higher level of focus than institutional reviewers when it comes to site-visit reports.
Furthermore, variations among programs tend to be greater than those between different
academic institutions. Finally, specialized accreditation is distinct in that it is provided by
national accreditors rather than geographically limited regional ones.
Regrettably, such overlap has at many junctures resulted in waves of tension that diminished
opportunities for cooperation and coordination between the two types of accreditation for
the purpose of cost-effectiveness. Regardless, in the end both specialized and institutional
accreditation complement each other as they continue to advance quality control in higher
education in many areas/programs, including medicine.
Interestingly, specialized/programmatic accreditation in the US has a history that is
intertwined with medical education and public health (Brittingham, 2009; Chapman, 1974). This
history dates back to the mid-1800s when the inferior quality of medical schools, and hence
health services offered to the general public in this country raised many concerns among
leaders and the public alike. The call to better prepare medical students for the professional tasks
ahead of them demanded adequate standards and a better formal curriculum in medical schools
(Eaton, 2009).
In May of 1847, such pressing need culminated in the formation of the American Medical
Association (AMA) (AMA, 2012a). The AMA played a major role in quality assurance through
its mission of promoting the art and science of medicine, being an essential part of the
professional life of every physician, and improving public health (AMA, 2012b). This mission
was espoused by its core values of excellence, leadership, integrity, and ethical behavior (AMA,
2012b). In June of 1910, Bulletin Number Four, also known as the Flexner Report, of the
Carnegie Foundation for the Advancement of Teaching raised the bar of quality control by
introducing a standard model for medical education (Brittingham, 2009; LCME, 2012a, 2012b).
The report, one of the harshest and most contentious yet most influential documents compiled in higher
education, was catalytic rather than innovative in accelerating the reform of medical education
and its schools (Chapman, 1974). As a result, nearly half of the medical colleges at the time
closed down due to their inferior quality (Brittingham, 2009). Soon enough, other professions
followed suit as quality assurance emerged, and evolved through time as a means of improving
and maintaining high standards of academe and its institutions (Folden, 1980). Indeed, the
medical profession in this country led the way and “maintained a status level that all occupations
covet” (Folden, 1980, p. 37) when it came to quality control in education. Such leadership and
drive for excellence in the medical arena continues unabated to this very day. Currently, with
the aid of cutting-edge technology, the American medical field is at the forefront among its
peers in terms of the quality of its programs, schools, and graduates. Such quality is perhaps
unparalleled around the globe.
Specialized Accreditation of Medical Education Programs by the LCME
The AMA and AAMC had evaluated American medical schools independently since their
inception, Table 1 (Brittingham, 2009). Over 70 years ago, the two organizations joined forces to
form a single entity, the LCME, which unified and continued their mission of specialized
accreditation of medical programs/schools in the US and its territories (LCME, 2012b, 2012c).
Since its inception in 1942, the LCME established its roles of quality control and
assurance of medical education, and quickly claimed its position as the nationally-recognized
accreditor of medical education programs leading to the MD degree in the country (LCME,
2012b, 2012c). To this day, the LCME continues to provide a seal of approval and assurance to
internal and external stakeholders that the program meets and exceeds national standards and
expectations when it comes to structure, function, and performance. Through its accreditation
process, it grants the program and its host institution an opportunity to engage in a rigorous self-
assessment, self-study, reflection, and continuous improvement of the quality of medical
education (LCME, 2012b, 2012c, 2012f).
The LCME is sponsored by both the AAMC, and the Council on Medical Education of
the AMA. Its administration consists of two co-secretariats hosted in two offices that are based in
Chicago, Illinois, and Washington D.C., the headquarters of AAMC and AMA respectively
(LCME, 2012b, 2012c). The LCME employs a small number of full- and part-time employees at
the offices of the two secretariats that are compensated. All other members and affiliates of
LCME are volunteers that receive no compensation (LCME, 2012b, 2012c).
35
Currently, the LCME has a steering team of 15 practicing physicians, medical educators,
and administrators (LCME, 2012b, 2012f), in addition to two public members and two student
members. The AAMC and AMA each appoint six professional members and one student member,
while the LCME appoints two public members. The LCME is also represented by a
pool of 200 peer volunteers who conduct site visits and field evaluations of medical programs.
These surveyors are a mix of physicians, basic science and clinical educators, researchers, and
administrators (LCME, 2012b, 2012c, 2012e).
The LCME is recognized by Congress, the USDE, medical licensing boards, and national
medical schools and their host institutions as the accrediting agency for medical education
programs leading to the MD degree in the US and its territories (LCME, 2012b, 2012c). Such
recognition makes medical students enrolled in accredited programs eligible for
federal financial aid (Title VII/Public Health Services), the US Medical Licensing Examination
(USMLE), and entry into graduate medical education programs accredited by the
Accreditation Council for Graduate Medical Education (ACGME). Beyond the extensive
process of accreditation that medical education programs seeking or renewing accreditation by
the LCME must undergo, the USDE further requires institutional accreditation of the schools/universities
hosting LCME-accredited medical education programs to ensure a rich environment and
sufficient support for academic rigor and excellence.
The LCME accredits complete, independent medical education programs leading to the MD
degree in the US. The medical students enrolled in these programs are geographically located in
the US (LCME, 2012b, 2012c), and the programs must be hosted and operated by institutions
chartered in the US. Currently, there are 137 medical education programs in the US
accredited by the LCME (Appendix A) (LCME, 2012c). In seeking or renewing their
accreditation by the LCME, medical education programs undertake a long, intensive journey
that demands sustained effort over an extended period of time.
The process of accreditation. The process of specialized accreditation by the LCME
begins when a medical education program leading to the MD degree initiates contact to obtain
or renew accreditation (LCME, 2012c, 2012e). This long journey of self-study, reflection, and
continuous improvement can be divided into several stages, Table 3.
Table 3
Accreditation Process of LCME
Prior to Site-visit
1. LCME Secretary sets survey visit dates with medical school dean.
2. LCME Secretary mails medical school dean instruction letter with institutional self-study
and medical education database forms. Dean informs student body of pending survey.
Interested students meet with dean to discuss student role.
3. Medical school dean distributes database forms to department heads, section heads,
students, etc.
4. Medical school dean appoints members of institutional self-study task force and
committees, including student representatives.
5. Self-study task force establishes its objectives and scope of study and sets committees.
Students participate in appropriate committees and conduct independent student analysis.
6. Medical school dean collects completed database forms and distributes copies to self-
study task force and committees.
7. Committees review data and write critique of assignment; report is forwarded to task
force.
8. Task force reviews reports of committees and prepares detailed lists of strengths, areas of
non-compliance, and recommendations for improvement.
9. Medical school dean sends copy of institutional self-study summary and medical
education database to each survey team member and to LCME Secretariats at AMA and
AAMC. The independent student analysis is included in this mailing.
Site-Visit
10. Survey team visits campus, conducts interviews and inspections, and writes report for
LCME. Team meets with administrators, faculty, and student groups. Student
representatives are expected to be well informed about major issues and concerns of the
student body.
Post Site-Visit
11. Draft survey team report is circulated for review and correction to survey team members,
LCME secretaries, and medical school dean.
12. Final survey team report is circulated for review by LCME membership.
13. LCME considers the survey team report and makes accreditation decision.
14. Medical school dean and university president are sent report and notified of the LCME's
decision about accreditation status. Schedule of any follow-up reporting or return visits
is established.
The first stage is preparation prior to the self-study, Table 3 (LCME, 2012c, 2012e). The
LCME secretary contacts the program and sets the date for the survey team to conduct the site
visit. These visits are usually scheduled between late September and mid-May. The LCME
secretary provides the dean of the medical program with an instruction letter and materials that
offer detailed information, instructions, and forms for the second stage of accreditation.
The second stage of the accreditation process is the self-study, Table 3 (LCME, 2012c,
2012e). The institutional self-study “is an exhaustive review of the functioning and structure of
the medical school related to the educational program leading to the MD degree” (LCME,
2012h, p. 3). This stage takes over a year to complete. It revolves around compiling a comprehensive
report, or catalog, about the organization and processes of the program. This intensive report is
called “the medical education database” (LCME, 2012c, 2012e).
Upon receipt of the instruction letter and the forms for the institutional self-study and medical
education database from the LCME, the dean distributes the database forms to heads of departments
and sections. The dean meets with representatives of the medical students to discuss their role in
the self-study process (LCME, 2012c, 2012h). Furthermore, the dean appoints: (a) a self-study
coordinator who will oversee the entire process, (b) a self-study task force, (c) committees
corresponding to each category specified by the LCME, and (d) a student group that will lead and
direct the student roles in the process. The task force and committees work diligently to establish
the objectives and scope of the study and to compile the different component parts of the
comprehensive database. The student leadership group participates in different committees as
relevant to its role, yet develops and conducts a “student analysis survey” and report
independently (LCME, 2012c, 2012h).
The “medical education database” has five major categories that correspond to the
standards specified in the LCME’s “Functions and Structure of a Medical School”
(F&S) (LCME, 2012a, 2012c, 2012i). These categories are: institutional setting (IS) of the
medical school, educational program for the MD degree (ED), medical students (MS), faculty
members (FA), and educational resources (ER) (LCME, 2012a, 2012c, 2012i). Detailed
information about the standards in each category is provided in the F&S document
(LCME, 2012a, 2012c, 2012i). The LCME provides ample assistance and detailed instructions,
not only on compiling the database but throughout the self-study stage (LCME, 2012c, 2012i).
Upon completion of the database forms, the dean receives them and distributes copies to the
self-study task force and the different committees, Table 3 (LCME, 2012c, 2012f). The
committees review the data and write reports on their findings, which are forwarded to the task
force for review. The task force reflects on the data and findings, identifies the strengths and
weaknesses of the program/institution, recognizes areas of noncompliance, specifies
recommendations for improvement, and prepares the institutional self-study summary report
(LCME, 2012c, 2012e). The dean reviews and forwards a copy of the summary report prepared
by the self-study task force, along with the medical education database and the independent
student analysis, to the secretariats of the LCME as well as to each member of the survey visit team
(LCME, 2012b, 2012c, 2012e, 2012f). It is noteworthy that the medical education database
includes a section on all required courses and clerkships, as well as a copy of the latest AAMC
Medical School Graduation Questionnaire (GQ) (LCME, 2012c, 2012f).
The third stage of accreditation is the site visit, Table 3 (LCME, 2012b, 2012c, 2012e).
As alluded to earlier, the dean is informed of the date of the visit by the secretary of the LCME. The
LCME supplies the program with precise, clear written instructions on developing the
schedule and on the conduct of the visit (LCME, 2012c, 2012f). The LCME then provides the dean of
the medical education program with information about the members of the site visit team. The
team consists of a chair, a secretary, two additional members, and an LCME faculty fellow. The
accrediting agency makes a serious effort to select a balanced team from its pool of 200
physicians, basic science and clinical educators, researchers, and administrators. Upon receipt of
the information about the team members, the dean of the medical program seeking accreditation
has the opportunity to challenge any team member for whom a conflict of interest is evident (LCME,
2012c, 2012f).
The LCME’s teams of evaluators conduct 20-30 visits to medical education programs per
year. During the site visit, the team verifies, updates, and clarifies the information in the medical
education database collected by the different committees, and presented in the summary report
compiled by the self-study task force (LCME, 2012b, 2012c, 2012f).
A full visit starts with a conference with the dean in which the team explains the purpose of the
visit. The team then tours the facilities, gets a sense of the learning environment, and holds
meetings with faculty members, administrators, and students (LCME, 2012c, 2012f). The team
members compile a site-visit report that details compliance (or lack thereof) with the standards in
each of the LCME's five categories. The site visit ends with a meeting with the dean
and the campus chief executive officer to discuss findings and areas of strength and weakness.
The surveyors provide the dean of the program and the secretaries of the LCME with copies of the
site-visit report to allow for review and corrections prior to finalizing the report. The final report is
submitted to all members of the LCME for review and determination of the accreditation status
of the medical education program seeking or renewing accreditation (LCME, 2012b, 2012c, 2012e,
2012j).
The fourth stage of accreditation is determination of accreditation status, Table 3 (LCME,
2012b, 2012c, 2012f). The LCME issues its decision regarding accreditation of the program
based on the survey (site visit) report submitted by the visiting team, as well as the medical
education database and self-study report prepared by the program/institution. The “Letter of
Accreditation” and a copy of the final survey report are sent by the LCME to the president of the
university and the dean of the medical education program. The LCME discloses the accreditation
status of the program to the public while keeping all other information confidential (LCME,
2012b, 2012c).
The LCME's accreditation decision can range from full accreditation to a shorter
accreditation period, probation, or denial of accreditation (LCME, 2012c). Full accreditation is
usually granted for a period of eight years. Partial accreditation, or accreditation for a shorter
duration, allows the LCME to request one or more status reports from the program, or to schedule
limited follow-up visits to investigate the progress and correction of the noncompliance areas
identified in the site-visit report (LCME, 2012c). Status reports are smaller, more focused
databases that must be submitted to the LCME and its team of surveyors. Limited follow-up visits
usually have a team of only a chair, a secretary, and one additional member. Based on the final
accreditation decision, follow-up reports and/or visits are scheduled by the LCME with the dean
of the medical education program (LCME, 2012c).
Decisions of shorter accreditation periods, probation, and denial of accreditation can be
appealed by the medical education program (LCME, 2012c). Outcomes of the appeal can vary
from reversal of the accreditation decision to refusal to consider accreditation of the program
(LCME, 2012c). In addition, medical education programs/institutions planning significant
changes in ownership, governance, class size (an increase of more than 10% or 15 students in
one year), or the initiation or expansion of branch campuses must inform the LCME by
submitting the necessary notification forms (LCME, 2012c, 2012j).
The accreditation process timeline. A typical timeline for the accreditation process of
medical education programs extends over a long period of time, Table 4 (LCME, 2012c, 2012f).
The survey visit is scheduled by the secretary of the LCME 18 months in advance. Instructions and
forms for the self-study database and reports are mailed by the LCME to the dean of the program
approximately 15 months ahead of the visit. With the aid of these forms, the medical education
database is compiled, and the self-study summary report is prepared by the various committees
and the task force. Completion of these documents is expected to take place three to six months
ahead of the scheduled site visit. Copies of the self-study report and database are mailed to the
LCME, as well as to each site-visit team member, at least three months before the date of the visit.
At that time, the program receives instructions specific to the site visit, detailing the schedule
and conduct expected during the visit (LCME, 2012c, 2012f).
Table 4
Timeline for Accreditation by LCME

Process of Accreditation by LCME (months before/after site visit)
Site visit date scheduled: 18
Self-study and medical education database forms mailed to dean of medical education program: 15
Database forms distributed to department heads and sections: 15
Self-study task force and committees formed; self-study begins: 15-10
Medical education database completed: 10-6
Medical education database report completed: 10-6
Summary report prepared by self-study task force completed: 6-3
Copies of summary report, database, and student analysis survey mailed to LCME and site visit team members: 3
Site visit survey
Site visit draft report compiled by surveyors: Immediate
Draft report of site visit distributed for review: 1-2
Final site visit report completed and submitted to LCME for accreditation decision: 2-4
Accreditation decision by LCME: 2-4
Notification of accreditation status mailed to dean of medical education program and university president; follow-up reports/visits (if any) scheduled: 2-4

Source: LCME, 2012.
The site visit is usually scheduled between late September and mid-May (LCME, 2012b,
2012c, 2012f). Full visits extend from Sunday afternoon to Wednesday afternoon, while limited
visits run from Sunday afternoon to Tuesday afternoon, a day shorter than full visits. The dean of
the medical education program being accredited, and the dean's entire team, are expected to adhere
closely to the plan and timeline of the site visit recommended by the LCME, in order to make
efficient use of the surveyors' time and facilitate the preparation of their site-visit report (LCME,
2012b, 2012c, 2012f).
After the visit, the team members prepare a draft survey report and provide copies to the
LCME, the team members, and the dean of the program within one to two months, Table 3 (LCME,
2012c, 2012f). Copies of the final report are distributed for final review by the LCME
membership two to four months after the visit. The LCME renders its decision regarding the
accreditation of the medical education program and mails the letter of accreditation within two to
four months after the site visit. As alluded to earlier, full accreditation is usually granted for eight
years (LCME, 2012c, 2012f). If accreditation is partial, then limited follow-up reports
and/or limited visits are scheduled at that time. These limited reports, if requested by the
LCME, must be submitted to the LCME and its surveyors at least one month ahead of the limited
site visit (LCME, 2012c, 2012f).
The LCME holds three meetings annually (February, June, and October) in either
Chicago, IL, or Washington, D.C., the headquarters of the AMA and AAMC, respectively (LCME,
2012b, 2012c). Additional meetings can take place as needed.
Costs of Accreditation
Accreditation has long been the center of scrutiny by both external and internal
stakeholders of the academe. Its high consumption of finite resources, in terms of funds,
time, and effort, is at the very core of that scrutiny, especially at a time of serious financial
downturns and escalating demands for accountability, transparency, and a better return on
investment in academe. Such criticism and public scrutiny continue unabated to this day,
despite the fact that there is very little empirical research about accreditation, let alone specialized
accreditation.
Such absence of “rigorous cost-based analyses of accreditation” (Reidlinger & Prager,
1993, p. 39) can perhaps be traced back to traditional core values and beliefs held in this
country: private, voluntary accreditation is favored over the governmental, involuntary
accreditation prevalent in other countries (Brittingham, 2009; Eaton, 2009; Schirlo & Heusser,
2010). Furthermore, it is quite difficult, if not almost impossible, to equate the intangible and
indirect costs and benefits of accreditation to their real and tangible dollar worth (Reidlinger &
Prager, 1993). Regardless of such obstacles, it is important to explore, analyze, and reflect on the
literature that is presently available, in order to attempt to bridge the gap that exists when it
comes to the costs and benefits of accreditation in academe in this country.
The cost of accreditation in higher education at all levels, institutional or specialized, can
be assessed through different lenses: direct (financial) cost in terms of dollars
spent versus indirect (opportunity) cost in terms of time, effort, and professional autonomy;
tangible versus intangible costs; and actual versus perceived costs. According to Willis (1994), the direct
costs of accreditation in higher education include accreditor fees, self-study costs, site-visit costs, travel
costs, direct payments to individuals involved in the accreditation process, and accreditation-
related operating expenses. The researcher estimated that indirect costs, which
include personnel time and diversion from performing other tasks, are much greater than the direct
costs of accreditation (Willis, 1994). More recently, Woolston (2012) affirmed that the indirect costs
of accreditation in academe are much higher than the direct costs. Further attempts at assessing
the cost of accreditation encompassed the costs of training, materials, and staff support, as well as
release time for the coordinators of accreditation (Wood, 2006).
In examining the direct cost of the self-study stage of the accreditation process, Kells and
Kirkwood (1979) concluded that this stage did not entail a significant monetary cost. The authors
disclosed that almost half of the participants spent less than $5,000 on the self-study.
Nevertheless, they observed that the cost of the self-study is positively correlated with institutional
size (Kells & Kirkwood, 1979). By contrast, Schermerhorn, Reisch, and Griffith (1980)
concluded that the non-monetary cost of the time committed by personnel in preparing for
accreditation is one of the most problematic issues of the process. Similar findings were
confirmed by other researchers as well (Kennedy, Moore, & Thibadoux, 1985; Woolston, 2012).
Woolston (2012) concluded that these costs were high largely due to the cumulative number of
people involved in the process and the high salaries of institutional representatives.
The cost of accreditation plays a significant role in academic institutions' decisions about
whether to seek accreditation. Researchers indicated that while cost is defined differently among
programs and institutions, it still constitutes a major concern in terms of time, energy, and
resources (The Florida State Postsecondary Education Planning Commission [FSPEPC], 1995).
According to the FSPEPC (1995), cost was the primary reason cited for the lack of interest among
academic institutions in pursuing accreditation. Perceptions about the cost of accreditation were
further explored by other researchers. In studies conducted by Warner (1977), Pigge
(1979), and Andersen (1987), participants indicated that cost was indeed a significant burden
when it came to accreditation. Warner (1977) further noted that accreditation results impacted
budget allocations among up to one third of the participants.
As for specialized accreditation by the LCME, “there are no fees…for already
accredited” (LCME, 2012c, p. 8) medical education programs granting MD degrees in this
country. All accreditation-related costs, operational costs (recruitment, employee salaries,
insurance, legal expenses, meetings, data collection), and all other overhead expenses are
sponsored equally by the AAMC and AMA, the sponsors of the LCME (LCME, 2012c). Nevertheless, the
financial and non-financial costs of accreditation can still be taxing for medical education
programs in many ways.
In sustaining or renewing accreditation every eight years, the medical
education program must adhere to 131 standards in the five categories (IS, ED, MS, FA, and
ER) specified by the LCME (LCME, 2012a). When areas of noncompliance are identified during
either the self-study or the survey visit, the program needs to correct them and bring them to par
with the specified standards. Areas of weakness can range from minor adjustments with
small costs to high-price items, such as building facilities and renewing laboratory equipment, that
might entail spending millions of dollars. Moreover, programs might elect to
seek the services of consultants to guide them in bringing such areas into compliance and to
improve their chances of success (Greenlaw, 2008). Consultants charge high fees for their
services, and hence inflate the overall monetary resources that the programs must expend
for the purpose of accreditation by the LCME.
In addition, the medical programs bear part of the costs related to the survey team
visits during the accreditation process. Although the team members are peer volunteers who
receive no monetary compensation, hosting the team during the survey visit requires providing
daily transportation between the school and hotels, meals (breakfast and lunch at the school), and a
room for the team while on campus, along with some office supplies and equipment. As
discussed earlier in this research, for a full visit, the program needs to accommodate five survey
team members for four days; for a limited visit, the team consists of three members who visit for
three days. It is noteworthy that all other expenses related to the site visit are paid by the team
members themselves and reimbursed by the LCME at a later time according to very specific
guidelines (LCME, 2012c).
The absence of accreditation fees from the LCME does not apply to all medical education
programs in the US. Any new, developing, or unaccredited medical education program
contemplating accreditation is responsible for an application fee of $25,000 (Appendix B)
(LCME, 2012c, 2012m). Upon approval as a candidate for accreditation, the program can
proceed to prepare for, and request, preliminary accreditation by the LCME. If the accreditation
process progresses successfully, the program is granted provisional accreditation, and
eventually full accreditation for eight years (Appendix B). On the other hand, if the program
lacks full compliance with any of the standards specified by the LCME,
accreditation can be delayed or denied. If accreditation is delayed, the program faces
additional costs due to additional status reports, site visits, and other demands placed on it by the
LCME in order to bring the program to par with the requirements within a maximum of two years
(Appendix B) (LCME, 2012c). If accreditation is denied, and the new or unaccredited medical
education program elects to continue pursuing accreditation, a reapplication fee of $10,000
is charged by the LCME (Appendix B) (LCME, 2012c, 2012m).
Finally, in spite of the fact that the LCME (a) does not directly charge medical programs
for accreditation services, (b) does not collect any dues from the medical programs, and (c) is
sponsored equally by the AMA and the AAMC for accreditation-related costs, the two
sponsoring organizations generate millions of dollars from the annual membership dues and other
fees paid by their members, among whom are the accredited medical education
programs/schools and their internal stakeholders (AAMC, 2012; AMA, 2012). In 2011, the
AAMC collected approximately $12.8 million in membership dues from the 137 accredited
medical education programs targeted by this study, among other members such as
hospitals and Canadian medical programs (AAMC, 2011). Upon seeking to verify the annual
membership dues paid by the Keck School of Medicine of USC, the researcher was informed by
AAMC staff via phone that the dues were set at $57,490 for 2012-2013. Similarly, in 2011, the
AMA generated $37.5 million in membership dues paid by individual physicians
($210-$420 each), medical residents ($45 each), and students ($20 each), some of whom are
internal stakeholders of the medical education programs included in this study (AMA, 2011).
Membership dues charged by the AAMC and AMA constitute a major segment of the funds
utilized for the services offered to their members. Therefore, the costs of the LCME and its
accreditation services are paid partially and indirectly by the medical education programs and
their internal stakeholders.
Regardless of which lens is used to analyze the costs of accreditation and its
impact on academe and medical programs, as the economic situation continues to deteriorate,
academic institutions and programs continue to face the inevitable question of whether the taxing
cost of accreditation outweighs its benefits. Such a debate warrants an investigation not
only of the costs of accreditation but also of its benefits to higher education in this country.
Benefits of Accreditation
The benefits of accreditation have experienced their own share of debate and scrutiny by external
and internal stakeholders of higher education in the U.S. Nevertheless, the benefits have perhaps
enjoyed more support and relief along the way than the costs of accreditation. According to
Warner (1977), participants indicated that the benefits of accreditation exceeded the cost.
Twenty-one years later, and again more recently, this notion was confirmed by other
researchers (Lee & Crow, 1998; Yuen, 2012), whose participants likewise believed, by a
majority, that the benefits of accreditation exceeded its cost.
Benefits of accreditation in academe can be divided into different categories. According
to the FSPEPC (1995), these categories are: benefits to the students, benefits to the department,
and benefits to the institution. Through that lens, for students, national and international, accreditation
can aid in enhancing the quality, experience, and outcomes of student learning, not only in terms
of internship, mobility, and employment opportunities, but also in the quality of medical care
extended to the public here and abroad. For administrators, accreditation affords a chance
to re-examine and revamp school competitiveness, recognition, visibility, ranking and prestige,
student applications and enrollment, and funding opportunities, both public and private. And last,
but most certainly not least, for faculty members, accreditation grants opportunities for
experimentation and professional development that can lead to better teaching and learning
practices.
More recently, the fundamental benefits of accreditation were summarized by other
researchers through perhaps a broader lens. Ewell (2008), Brittingham (2009), Eaton (2009,
2011), and Schirlo and Heusser (2010) credit accreditation with ensuring quality control
and continuous improvement in academe; promoting access to federal and state funds, as well as
donations from the private business sector; facilitating student mobility nationally and
internationally; and promoting optimal health care. Woolston (2012) classified the benefits of
accreditation as primary, secondary, and other. According to the researcher, the primary benefits
included the opportunity for institutional self-reflection and subsequent improvements (Woolston, 2012). The
secondary benefits included “the possibility of using the accreditation process as a vehicle for
institutional improvement, improved campus unity, a review provided by an entity from outside
the institution, the ability to offer financial aid, the mere fact of having accreditation, and the
reputation provided by having accreditation” (Woolston, 2012, p. 206). Other benefits included
the opportunity to share best practices, celebrate institutional accomplishments, and conquer the fear
of not having accreditation (Woolston, 2012).
Quality control and assurance in higher education has been the fundamental role of accreditation
since its very inception. In the 1800s, numerous factors (lack of governmental control,
endorsement of autonomy, the privatization of American higher education, etc.) led to the fast
expansion and diversification of academic institutions (Brittingham, 2009; Ewell, 2008; Lucas, 2006).
As a result, great confusion, lack of clear identity, and lowered standards of quality plagued
higher education. Accreditation gradually emerged to establish clarity and control quality
(Brittingham, 2009). In 1929, the first institutional accreditation of post-secondary institutions,
public and private, was granted in New England, marking the undisputed and distinct role of
accreditation as a private, self-regulatory tool that validates, ensures, and improves the quality of
education in American colleges and universities (Brittingham, 2009; Eaton, 2009, 2011; Ewell,
2008; CHEA, 2012a; USDE, 2012a).
Access to federal funds is a key role that accreditation has played, and continues to play,
over a very long time. As alluded to earlier, with the passing of the GI Bill, the Veterans Readjustment
Act, and the HEA, millions of dollars in financial aid were made available directly to students,
including veterans. These funds indirectly improved the access, affordability, mission
differentiation, and proliferation of academe (Lucas, 2006; Martin, 1994). Accreditation
agencies were appointed the gatekeepers to these federal funds, an authority that has gone largely
unquestioned. With over $86 billion in financial aid (Chronicle of Higher Education, 2008), at a time
when higher education continues to become more expensive, less affordable (Project on Student Debt,
2012a, 2012b, 2012c), and more vital in the face of an exhausting economic downturn, accreditation
continues to be of great value to academic programs and institutions.
The LCME accreditation of medical education programs leading to the MD degree has
numerous benefits (LCME, 2012b, 2012c). It assesses and certifies the quality of these
programs. Accreditation provides a seal of approval and assurance to internal
and external stakeholders that the program meets and exceeds national standards and
expectations for structure, function, and performance. It grants the program and its
host institution an opportunity to engage in collecting a comprehensive medical education
database, conducting a rigorous self-study, reflecting on strengths and weaknesses, and
continuing to improve the quality of medical education (LCME, 2012b, 2012c, 2012f). The
site-visit component of specialized accreditation of medical education programs is highlighted
for its contribution to quality improvement: through the profound exchange that takes place
during the visit between the competent, independent team of peer experts and the internal
stakeholders, opportunities for “strategic developments and sustainable quality enhancement”
(Schirlo & Heusser, 2010, p. 3) are realized.
Recognition of the LCME by Congress, the USDOE, state licensing boards, and medical programs/schools (LCME, 2012b, 2012c) grants the medical students enrolled in accredited
programs eligibility for federal financial aid, state medical licensure examinations, and entry into
graduate medical education programs that are accredited by the Accreditation Council for
Graduate Medical Education (ACGME). At a time of limited financial resources and increased
competition among national sectors over these resources, eligibility for federal aid is a crucial
component when it comes to the benefits of accreditation. This aid includes Title VII of the
Public Health Service (LCME, 2012b, 2012c).
In addition, the availability of the LCME staff to extend support and consultations to
medical education programs is yet another benefit of accreditation (LCME, 2012c). Such
requests are customarily emailed to the LCME by the dean, or the leadership of the program. The
LCME follows up with a phone interview and a face-to-face consultation. Indeed, the LCME
does a commendable job of offering numerous guidelines and easy-to-read, up-to-date
comprehensive documents through its website (LCME, 2012c, 2012l).
Furthermore, most state licensure boards require accreditation of medical education
programs by the LCME as a condition for graduates to take the US Medical Licensing
Examination (USMLE) and receive their license to enter their medical field as professionals. For
those who elect to pursue additional residency, the LCME accreditation of their schools grants
them eligibility for ACGME-accredited programs.
Advocacy and leadership are distinctive benefits of accreditation by the LCME (LCME,
2012b, 2012c). The accreditor plays a major role in health-related laws in Congress as well as
licensing boards around the nation. Furthermore, through its stringent standards of student participation, it advocates for students' rights to voice their opinions about their program
(LCME, 2012c, 2012h, 2012i). The student opinion survey, two lunch meetings with medical
students from all years, and the opportunity to chat informally with student guides during the
tour of facilities provide an ample opportunity for the LCME team to promote participation and
leadership among medical students in an atmosphere of transparency, accountability, and
confidentiality. In addition, the LCME has medical students among its membership to ensure the
inclusion of their perspective in accreditation in general (LCME, 2012c; 2012h, 2012j). These
members participate fully in meetings, discussions, and site visits of the LCME and enjoy full
voting and financial support privileges.
The requirement of the LCME for parent institutions to undergo the self-study and
accreditation by regional accrediting agencies ensures a broader and richer academic
environment for the medical education program and its students (LCME, 2012c). Furthermore,
medical education programs/institutions planning to undergo significant changes in terms of
ownership, governance, class size, and branch campuses must inform the LCME (LCME, 2012c,
2012j). Hence, the LCME keeps a close watch on medical education programs and monitors significant changes on a continual basis (LCME, 2012c, 2012j).
Furthermore, the LCME conducts periodic reviews of its existing standards of
accreditation and considers recommendations for new ones (LCME, 2012c). Planned reviews
take place over a five-year cycle where each year a questionnaire targeting one of the five main
standards is analyzed and modified (LCME, 2012c). Customarily, the measurement tool is sent
out to relevant groups of stakeholders to solicit their opinion regarding the clarity and mission of
the standard in improving the quality of medical education. Unplanned reviews take place in
response to problems with standards, such as confusion, that surface at any point (LCME,
2012c).
In Conclusion
Specialized accreditation in medicine led the way and “maintained a status level that all occupations covet” (Folden, 1980, p. 37) when it came to quality control in education. For this journey of leadership and striving for excellence to continue, accreditation needs to keep evolving in light of challenges, some of which are old and familiar, and others that are new and foreign. Persistent older factors, such as dwindling financial resources, increasing default rates on student loans, and declining student learning outcomes, are exacerbated by newer ones that
include globalization, massification, and changing patterns of student mobility. Regardless of
their place in time, these factors will continue to renew the call by external as well as internal
stakeholders for heightened accountability, improved transparency, and better ROI when it
comes to academe. The need for accreditation to rise to another level in the face of old and modern-day challenges is further complicated by the scarcity of well-published, peer-reviewed empirical research on accreditation (Shibley & Volkwein, 2002), of “rigorous cost-based analyses of accreditation” (Reidlinger & Prager, 1993, p. 39), and of sufficient knowledge about accreditation among various stakeholders.
Through discussing the evolution of accreditation in this country, specialized
accreditation in medicine, and costs as well as benefits of such accreditation, this chapter offered
a review of the relevant literature that will provide the context in which a fresh look and analysis
of the costs and benefits of specialized accreditation by the LCME will take place. This analysis
will aid in filling the existing gap in our “an inch deep” (Ikenberry, 2009, p. 4) knowledge and support of accreditation, and in advancing the wheel of quality control, sustained improvement, and the striving for excellence, not only in the medical education arena but also in academe in general.
Chapter 3
Methodology
Introduction
The purpose of this research was to assess the value of specialized accreditation of four-year medical education programs leading to the MD degree in the US that are fully accredited by the LCME. The perceived value of accreditation was examined through two lenses: benefits and costs. Hence, this study addressed the following research questions:
• What are the perceived benefits of specialized accreditation of fully-accredited
four-year medical education programs in the US by the LCME?
• What are the perceived costs of specialized accreditation of fully-accredited four-
year medical education programs in the US by the LCME?
• Do department chairs and others believe that perceived benefits justify the
perceived costs of specialized accreditation of fully-accredited four-year medical
education programs in the US by the LCME?
This third chapter of the study discusses two major components of methodology: participants and instrumentation (Orcher, 2007). It offers an overview of the target population,
study sample, selection criteria and demographics, as well as the measurement tool, content,
cover letter, selection criteria of tool, reliability/validity, feasibility, data collection procedures
and data analyses that were utilized in this research. This chapter ends with an overview of the
limitations of this study.
Participants
Target Population. The target population for this study consisted of internal
stakeholders in 126 fully-accredited four-year medical education programs leading to the MD
degree in the US (Appendix A) (AAMC, 2012; LCME, 2012a, 2012b, 2012c). The primary
target population was the department chairs and assistant/associate department chairs in all 126
fully-accredited programs. The secondary target population was the faculty members and other
lead administrators in a random sample of the aforementioned programs. Other lead
administrators included assistant/associate deans, directors, ALOs, board members, research provosts, and curriculum committee chairs.
Study Sample. The sample size plays a major role in the results of any study. According to Orcher (2007), larger sample sizes “yield more precise results” (p. 46). Based on the AAMC’s website (2013), the primary target population (N) was 2,764. Accordingly, the study sample (n) should range between 335 and 338 (Orcher, 2007, Box 6A, p. 47; Krejcie & Morgan, 1970).
For this study, a primary random cluster (pre-existing groups) sample (Orcher, 2007) of 769 department chairs and assistant/associate chairs was selected. No precise figures were available for each category of the secondary target population.
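The sample-size range cited above follows from the Krejcie and Morgan (1970) formula. The sketch below (in Python, using the standard χ² = 3.841 for one degree of freedom at the .05 confidence level and a .05 margin of error) is illustrative only and was not part of the study's instrumentation:

```python
import math

def krejcie_morgan_sample_size(population, chi_sq=3.841, p=0.5, margin=0.05):
    """Required sample size n for a finite population N (Krejcie & Morgan, 1970):
    n = X^2 * N * p * (1 - p) / (d^2 * (N - 1) + X^2 * p * (1 - p))."""
    numerator = chi_sq * population * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

# For the primary target population of N = 2,764 department chairs:
print(krejcie_morgan_sample_size(2764))  # → 338, within the 335-338 range cited
```

For N = 2,764 the formula yields 338, consistent with the 335-338 range cited from Orcher (2007).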
A total of 1,096 department chairs, assistant/associate department chairs and others (lead
administrators and faculty members) in fully-accredited undergraduate medical education
programs in the US were targeted by this study. In addition to the 769 department chairs and
assistant/associate department chairs in all fully-accredited programs, this sample included 327
other lead administrators/faculty members in a random sample of these programs. Although 116 participants responded to the survey, the number of usable responses was 87, for a usable response rate of 8% (87/1,096).
Selection criteria. An up-to-date list of all four-year medical education programs leading
to the MD degree in the US was retrieved by the researcher from the LCME’s website (2013)
(Appendix A). Because the LCME is the sole recognized specialized accreditor of the aforementioned
programs, its list included the maximum number of programs in the country.
For further information, affirmation and triangulation of data, a second list of the medical
education programs was retrieved from the AAMC’s Directory of Medical Education Programs
in the US (2011) and website (2013).
According to the LCME (2013) and AAMC (2012), there were 137 four-year medical
education programs leading to the MD degree in the US. Out of the total number of programs,
126 received full accreditation from the LCME while the remaining programs were either in the
preliminary or provisional stages of accreditation (LCME, 2013) (Table 5).
Table 5
Number of US Undergraduate Medical Education Programs

Academic year    MD programs    Fully-accredited four-year MD programs
2009-10          133            126
2010-11          135            126
2011-12          137            126

Source: AAMC Data Book 2012; LCME, 2013
Additional information about each fully-accredited program was accessed directly from
the websites of the programs, the hosting academic institutions, and the AAMC (2012). To increase inclusivity and minimize selection bias, a primary random cluster of 769 contacts for department chairs and assistant/associate department chairs was selected by the researcher from all 126 fully-accredited programs identified earlier. A second random cluster sample of 327 other lead administrators and faculty members was selected by the researcher from some of the aforementioned programs. The total random sample was 1,096.
The target population was purposefully selected from the leadership of undergraduate medical education programs to shed light on the perceived value of accreditation through the eyes of internal stakeholders, the hard-working recipients of such accreditation. For the primary target population, department chairs, this choice was guided by several factors: their leadership positions in the program; their knowledge of, and involvement in, the accreditation process; and their intermediary role between the deans, faculty members, students, and other administrators of the medical programs (Occupational Outlook Handbook, Bureau of Labor Statistics [BLS], U.S. Department of Labor [USDL], 2012-13). As heads of departments, they serve as both faculty members and administrators and enjoy the unique privilege of balanced awareness of, and receptiveness to, both points of view when it comes to perceived direct and indirect benefits and costs of specialized accreditation of their programs. In addition, this target population enjoyed ample access to financial, logistical, and other information relevant to this study. Other internal stakeholders were selected as the secondary target population because of their more intense exposure to, and experience with, the accrediting agency, which allowed for broader dimensions and added depth to the insight into accreditation and its value.
According to the websites of the programs and AAMC (2013), the fully-accredited four-
year medical education programs leading to the MD degree consisted of departments divided into three main categories/clusters: basic sciences, clinical sciences, and surgery. The basic sciences departments included anatomy, biochemistry, and others, while the clinical sciences departments included community health, emergency medicine, and others (Table 6). The third cluster, surgery, included general surgery, neurosurgery, and others (Table 6). While clear
distinctions among the three clusters existed, some variations within each cluster were found
among programs based on several factors such as the size and budget of the programs.
Table 6
Departments of Medical Education Programs

Department category       Departments
Total Basic Sciences      Anatomy; Biochemistry; Bioethics/Medical Human.; Biomedical Informatics; Biostatistics; Genetics; Microbiology; Molecular and Cellular Biology; Neurosciences; Pharmacology; Physiology; Other Basic Sciences
Other Clinical Sciences   Community Health; Emergency Medicine; Family Medicine; Neurology; Ophthalmology; Otolaryngology; Physical Medicine and Rehabilitation; Preventive Medicine; Psychiatry; Other
Total Surgery             General Surgery; Neurosurgery; Orthopedic Surgery; Pediatric Surgery; Plastic Surgery; Surgical Oncology; Thoracic/Cardiology Surgery; Transplant Surgery; Trauma/Critical Care Surgery; Urology; Vascular Surgery; Other Surgery

Source: AAMC Data Book (2012).
To ensure the random selection of study sample and minimize selection bias, the
researcher randomly selected different departments in all three clusters and conducted a thorough
search for the contact information of their chairs, as well as interim/co-chairs, in the absence of
chairs of some departments. The information collected for this sample was organized in Excel
sheets by names and addresses of medical education programs, designated departments and
contact information/addresses of the department chairs. The researcher assigned and closely
maintained a contact log and notes section for each department chair. The same procedure was followed for the second study sample of lead administrators and faculty members.
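The random cluster selection described above (drawing departments at random within each of the three clusters, then recording chair contacts) could be sketched as follows. The department lists and draw sizes here are illustrative placeholders, not the study's actual sampling frame, and the function name is invented for illustration:

```python
import random

# Illustrative sampling frame: departments grouped by cluster (placeholder subsets,
# not the full department lists from Table 6).
clusters = {
    "basic_sciences": ["Anatomy", "Biochemistry", "Genetics", "Microbiology", "Physiology"],
    "clinical_sciences": ["Community Health", "Emergency Medicine", "Neurology", "Psychiatry"],
    "surgery": ["General Surgery", "Neurosurgery", "Urology", "Vascular Surgery"],
}

def draw_departments(frame, per_cluster, seed=None):
    """Randomly select `per_cluster` departments from every cluster,
    mirroring a random cluster sampling procedure."""
    rng = random.Random(seed)
    return {name: rng.sample(depts, min(per_cluster, len(depts)))
            for name, depts in frame.items()}

selection = draw_departments(clusters, per_cluster=2, seed=42)
for cluster, depts in selection.items():
    print(cluster, depts)
```

A fixed seed makes the draw reproducible for audit purposes; omitting it gives a fresh random draw each run.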
Demographics. According to the survey results, 64 (74%) of participants were department chairs (62) or assistant/associate department chairs (2), as compared to 50 (58%) others; the percentages overlap because some participants held more than one position. Others included associate/assistant deans (12), directors (14), ALOs (3), faculty members (16), and other positions (5). Deans were not included in this study. Approximately 63 (77%) held only one position, while 19 (23%) held more than one. Hence, the study sample was inclusive of lead administrators in various capacities and levels in the undergraduate medical education programs, capturing diverse opinions of internal stakeholders about accreditation.
Because all fully-accredited programs were included in this study, as previously discussed in detail, published data about these programs could be considered baseline information and a point of reference for this study. By type/sector, of the 126 fully-accredited programs, 75 were public and 51 were private (AAMC Data Book, 2012) (Table 7). Survey results confirmed that participants were employed in both types of undergraduate medical education programs.
Table 7
Type of Fully-accredited Four-year MD Programs

Academic year    Fully-accredited MD programs    Public    Private
2009-10          126                             75        51
2010-12          126                             75        51

Source: AAMC Data Book 2012; LCME, 2013
Similarly, by region, responses of the study sample indicated that all five regions of fully-accredited undergraduate medical education programs (Northeastern, Southeastern, Central, Southern, and Western) were represented. As for the number of MD
graduates in the last academic year, customarily used as a measure of program size, it varied among programs (AAMC Data Book, 2012). Such variation was evident among participants. The study results indicated that the number of MD graduates per program ranged from 48 to 640, with an average of 164 graduates per program.
This evaluation of specialized accreditation by the LCME was examined through the lens of internal stakeholders: department chairs and others. According to the Occupational Outlook Handbook (BLS/USDL, 2012-13), postsecondary academic administrators, including college or university department heads or chairpersons, are administrators who, among other duties:
• Oversee departments that specialize in particular fields of study, such as biology
• Propose budgets
• Serve on committees
• Coordinate schedules of classes and teaching assignments
• Recruit, interview, and hire applicants for teaching jobs
• Teach as faculty members
• Evaluate faculty members
• Encourage faculty members’ development
• Perform other administrative duties
• Consider and balance the concerns of faculty members, administrators, and students
According to AAMC (2012), the total number of permanent department chairs was 2,764
in 2011 (Table 8). The number of department chairs of basic sciences was 717 as compared to
1,972 for the clinical sciences/surgery (AAMC, 2012). By gender, there were a total of 2,384 male and 380 female (a ratio of roughly 6:1) permanent department chairs in 2011 (Table 8). In the basic sciences departments, 580 males and 137 females (roughly 4:1) were permanent chairs, as compared to 1,750 males and 222 females (roughly 8:1) in the clinical sciences/surgery departments for the same
year (Table 8). Additional information on the number of permanent department chairs for each
medical education program was listed by state for benchmarking purposes (AAMC, 2012),
(Table 9).
Table 8
Number of Permanent Department Chairs by Cluster and Gender

Department cluster           Total    Men      Women
Basic Sciences               717      580      137
Clinical Sciences/Surgery    1,972    1,750    222
Other                        75       54       21
Total                        2,764    2,384    380

Source: AAMC (2012) Annual Women in Academic Medicine and Science Statistics and Benchmarking Survey
Table 9
Number of Permanent Department Chairs by State

State                   Total    Basic Sciences    Clinical Sciences/Surgery
Alabama                 39       14                25
Arizona                 21       5                 16
Arkansas                26       5                 21
California              177      41                136
Colorado                24       7                 17
Connecticut             42       17                25
District of Columbia    57       18                39
Florida                 85       17                68
Georgia                 70       17                53
Hawaii                  12       3                 9
Illinois                162      50                112
Indiana                 24       7                 17
Iowa                    24       5                 19
Kansas                  35       8                 27
Kentucky                41       12                29
Louisiana               64       17                47
Maryland                65       24                41
Massachusetts           117      25                92
Michigan                56       17                39
Minnesota               108      21                87
Mississippi             15       5                 10
Missouri                68       14                54
Nebraska                32       9                 23
Nevada                  13       3                 10
New Hampshire           16       6                 10
New Jersey              22       5                 17
New Mexico              18       4                 14
New York                267      80                187
North Carolina          78       25                53
North Dakota            11       2                 9
Ohio                    136      32                104
Oklahoma                27       4                 23
Oregon                  26       7                 19
Pennsylvania            147      43                104
Puerto Rico             58       19                39
Rhode Island            21       5                 16
South Carolina          23       6                 17
South Dakota            6        0                 6
Tennessee               81       15                66
Texas                   152      39                113
Utah                    24       9                 15
Vermont                 14       4                 10
Virginia                76       21                55
Washington              27       10                17
West Virginia           38       7                 31
Wisconsin               45       13                32

Source: 2011-2012 Directory of American Medical Education, AAMC.
The AAMC (2012) data on the race/ethnicity of department chairs indicated that the majority (2,067) were White (Table 10). Among all other groups, Asians (127) and Hispanics (100) were at the forefront, as compared to Unknown (80), Black or African Americans (69), and multiple races (35) (Table 10). Both the American Indian or Alaska Native and the Native Hawaiian and Other Pacific Islander groups had no representation among department chairs of either cluster (Table 10) (AAMC, 2012).
Table 10
Distribution of Department Chairs by Race

Department cluster          Asian   Black(1)   Hispanic   Native American(2)   Pacific Islander(3)   White   Other
Basic Sciences              31      12         28         0                    0                     550     1
Clinical Sciences/Surgery   96      57         82         0                    0                     1,517   2
Subtotal                    127     69         100        0                    0                     2,067   3

(1) Black or African American
(2) American Indian or Alaska Native
(3) Native Hawaiian and Other Pacific Islanders

Source: AAMC Faculty Roster, May 2012
According to BLS (2012-2013), the median annual pay for postsecondary administrators was $83,710 in May 2010 and $86,400 (O*Net Info) in 2013, as compared to $91,440 for management administrators and $33,840 for all other administrators in 2010. Administrators generally work full time, year-round, although some schools allow reduced hours during the summer. In addition, BLS (2012-2013) reported that heads of clinical sciences/surgery departments earned a median wage of $166,400 in 2010 and cautioned that they are physicians who might have their own practices apart from the medical education programs. The AAMC’s Faculty Roster (May 2012) was accessed for more accurate information on the salaries of the target population.
The AAMC (2012) estimated the salaries of chairs of the basic sciences departments at $247,000 for 2010 and $258,000 for 2011 (Table 11). In the clinical sciences departments, the salary of department chairs was $284,000 in 2010 and $295,000 in 2011, as compared to $625,000 in 2010 and $651,000 in 2011 in the total surgery departments (Table 11) (AAMC, 2012). The salaries of faculty members (assistant professors, associate professors, and professors) in 2011 ranged from $86,000 to $159,000 in the basic sciences departments, $144,000 to $191,000 in the clinical sciences cluster, and $308,000 to $419,000 in the surgery departments (Table 11) (AAMC, 2012).
Table 11
Salaries of Department Chairs and Faculty Members (in thousands)

                          Basic Sciences      Other Clinical Sciences    Total Surgery
                          2010      2011      2010      2011             2010      2011
Department Chairs         $247      $258      $284      $295             $625      $651
Faculty (2011 range)      $86-$159            $144-$191                  $308-$419

Source: AAMC Faculty Roster, May 2012
The Measurement Tool
The measurement tool was an online survey that addressed the benefits and costs of accreditation by the LCME of four-year undergraduate medical education programs leading to the MD degree in the US (Appendix D). The survey was adapted from Freitas (2007) and Yuen (2012), whose instruments assessed the accreditation of nursing and engineering programs, respectively, with modifications unique to medical education programs. To make these modifications, the researcher relied on a background in medical studies, public health, and the sciences; knowledge of the gravity of accreditation in the medical arena; and professional and academic expertise in higher education. In addition, the researcher conducted an intensive review of the literature on medical education programs in the US, accreditation in general, and specialized accreditation by the LCME specifically.
Content. The survey had a title that reflected the purpose of this study precisely and
appropriately (Appendix D). It started with an expression of gratitude to participants for
accepting the invitation to participate in this study and offered the researcher’s contact
information for any questions they might have. In addition, the researcher requested that
participants not forward the survey, since it was intended for a specific target population
(Appendix D).
The survey included 27 questions and 19 optional ones, grouped into five sections that
aligned with the purpose and research questions of this study as well as the guidelines specified
by Orcher (2007). The first question inquired about participation in accreditation. Section A
contained 13 questions and two optional questions that addressed the perceived benefits of
accreditation. Section B adhered to the same system and encompassed seven questions and two
optional ones about the perceived costs of accreditation while Section C included two questions
about perceived benefits versus costs. According to Orcher (2007), “All demographic questions
should be grouped together and placed at the end of the questionnaire” (p. 81), hence four
questions about demographics followed Section C.
The last section of the survey was optional. It attempted to assess the actual benefits and costs of accreditation via 15 questions. Almost all participants opted out of this last section; hence, there were insufficient data on actual benefits/costs to analyze in this study. The survey ended with a note of gratitude and best wishes (Appendix D).
The questions were simple, clear, and concise for efficiency and effectiveness (Orcher, 2007). They were a combination of Likert-scale items, multiple-choice, and closed-ended questions to generate quantitative data, and open-ended questions to garner qualitative data and feedback. Likert scales allowed for easier quantification (Parnaby, 2006), while open-ended questions afforded participants the opportunity to write freely about their perceptions of accreditation. To that end, the mixed-method approach provided a more holistic treatment of the topic in terms of breadth, depth, meaning, richness, new insights, and credibility.
Cover Letter. The cover letter started with an invitation to participate in the survey and provided recipients with the purpose of the survey, the significance of participation, and an online link to the survey. It also provided the time needed for completion, assurances of the confidentiality of responses, the IRB approval/number, and a brief introduction to the researcher, and ended with an expression of gratitude (Appendix C). In doing so, the cover letter addressed the “cost” (completion of the survey), “rewards” (participants were the experts; gratitude), and “trust” (IRB approval, assurances of confidentiality, position/credentials/contact information of the researcher and dissertation committee chair) for participants (Dillman, 2000, pp. 15, 21). A follow-up letter adapted from Woolston (2013) was emailed to participants, thanking those who had participated in the study, reminding non-participants of deadlines, and encouraging them to complete the survey online (Appendix C).
Selection Criteria of the Measurement Tool. This tool was selected based on its prior successful utilization in assessing the value of accreditation in the fields of nursing (Freitas, 2007) and engineering (Yuen, 2012), with adaptations to the medical education arena. It served the purpose of measuring the dependent variables, perceptions of the costs and benefits of accreditation, in relation to independent variables such as participation in the accreditation of programs by the LCME, the position of participants, program types and regions, and the number of MD graduates in the last academic year. In addition, this tool offered a great opportunity for triangulating data, pilot testing, and adaptability. Surveys of this kind have been the preferred tool of choice for numerous research studies across academic disciplines, nationally and internationally.
Reliability and Validity. Reliability (replication) refers to stability and consistency, or “whether an instrument can be interpreted consistently across different situations” (Field, 2009, p. 11). Validity asks “whether an instrument actually measures what it sets out to measure”
(Field, 2009, p. 11). In general, statistical significance (p < 0.05) and magnitude of effect (e.g., the odds ratio) are two factors to examine when assessing the reliability and validity of any study. In this study, while reliability was high and the magnitude of effect was considerable, statistical significance was limited. Further discussion of the findings is offered in the following chapter.
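The chapter reports that reliability was high without naming the statistic used; for multi-item Likert instruments, a common internal-consistency index is Cronbach's alpha. The sketch below uses made-up response data purely for illustration and does not reproduce the study's analysis:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(scores[0])                     # number of items
    items = list(zip(*scores))             # transpose to per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Four hypothetical respondents answering three 5-point Likert items
likert = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(likert), 2))  # → 0.98 for this toy data
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a scale.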
This measurement tool allowed for collecting both quantitative and qualitative data on the perceived costs and benefits of accreditation by the LCME. Analyzing both types of data, along with using clear operational definitions of terms and a rigorous design, added great support to construct validity and internal validity. As for external validity (can the results of this study be generalized?), it depends on the confounding variables that always exist. These variables include characteristics of the study sample, such as the age and gender of participants, and characteristics of the program, such as type, region, and resources, to name a few. By including all fully-accredited undergraduate medical education programs in this study, selection bias was greatly reduced. Furthermore, environmental influences were minimal because this research did not impose any interventions on, or alter the course of, the accreditation process in the specified programs.
Feasibility. In terms of financial cost, online surveys are very desirable because their costs (printing, postage, etc.) are low. The survey for this study was administered electronically, at a reasonable cost for the online survey portal, SurveyMonkey, as is increasingly common. Nevertheless, the cost in terms of time was much higher. The researcher estimated the time needed to complete the survey at 15-20 minutes. Given participants’ busy schedules and very limited availability, the length of time needed to answer the questions could be a hindrance that affected the feasibility of this tool. In addition, the qualitative analysis was more
time-consuming and difficult to process. Moreover, the inherent bias of a researcher’s lens in analyzing qualitative data will always confound the findings of a study.
Data Collection
The measurement tool (Appendix D) was entered and formatted in SurveyMonkey, and a link was established for collecting the responses. The cover letter/invitation to participate in the study (Appendix C) was sent via email to the 1,096 targeted participants: department chairs in all fully-accredited undergraduate medical education programs leading to the MD degree in the US, and others in a random sample of the same programs (LCME, 2012a). The letter contained an active link to the survey in SurveyMonkey that allowed participants direct and interactive access to the survey. Upon accepting the invitation to participate, respondents clicked on the link in the letter, then completed and submitted the survey online.
By accessing SurveyMonkey directly, the researcher was able to monitor the number of surveys completed/submitted and access each response and its data. She was also able to utilize some of the built-in tools of the online survey collector to examine responses collectively by question or by section of the survey. A total of 116 responses were collected in SurveyMonkey, of which 87 were usable. These data were then downloaded to Excel sheets, for backup copies, as well as to SPSS for data analysis.
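The screening of 116 collected responses down to 87 usable ones amounts to filtering out incomplete records before analysis. A minimal sketch of such a completeness check, using only the standard library and assuming a hypothetical CSV export with invented column names:

```python
import csv
import io

# Hypothetical SurveyMonkey-style export; column names are illustrative only.
raw = io.StringIO(
    "respondent_id,participated,benefits_q1,costs_q1\n"
    "1,Yes,4,3\n"
    "2,Yes,,2\n"          # missing a required answer -> unusable
    "3,No,5,5\n"
)

REQUIRED = ["participated", "benefits_q1", "costs_q1"]

def usable_rows(handle, required=REQUIRED):
    """Keep only responses with every required field filled in."""
    return [row for row in csv.DictReader(handle)
            if all(row[field].strip() for field in required)]

rows = usable_rows(raw)
print(len(rows))  # 2 of the 3 sample rows pass the completeness check
```

In practice the usability criteria would match the survey's required sections (the first question plus Sections A-C), which this toy check only approximates.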
The online delivery of the survey instrument and collection of data were selected by the researcher because they cost less and offered more benefits than other delivery modes, such as phone or mail. These benefits included: 1. an enhanced ability to reach a large number of the target population/participants across the US, 2. more convenience for participants in terms of time, place, and mode of completion, 3. less time required for completion, 4. less delay between survey delivery and retrieval, 5. unlimited access to geographical locations, 6. an enhanced ability to manipulate data, and 7. more clarity as compared to handwritten responses (Evans & Mathur, 2005; Fleming & Bowden, 2009; Lefever, Dal & Matthiasdottir, 2007; Parnaby, 2006; Singh, Taneja & Mangaialaj, 2009). Nevertheless, online data collection has some disadvantages. These include: 1. loss of delivery due to electronic filtering systems, 2. varying levels of computer attitude and literacy among participants due to factors like age, gender, level of educational attainment, and income, and 3. differing levels of internet accessibility within the study sample. Such obstacles compounded the non-response bias (Field, 2009; Fleming & Bowden, 2009; Shih & Fan, 2008).
Nevertheless, the target population/sample of this study was unique in many ways. The
department chairs and others had a high level of academic attainment and sufficient resources
to facilitate access to electronic media. The high demands of their leadership positions, licensure
requirements for continued research and professional development, and wide geographical
dispersion across the country were additional factors that pressed participants to maintain a high
level of internet access.
Furthermore, according to Orcher (2007), a researcher needs to “Pay special attention to
subject line” (p. 88) since a “more specific subject line is more likely to be opened” (p. 88).
Hence, the cover letter/link to the survey, the follow-up letter, and all correspondence with
participants were sent from the researcher’s usc.edu email with the subject line “Study at USC The
benefits and costs of accreditation”. The academic email address reinforced the credibility of the
researcher and the legitimacy of the study, and the subject line summarized the purpose and
topic of interest. This helped avoid loss to junk mail and minimized, though did not entirely
eliminate, non-response bias (Field, 2009).
Data Analysis
The data generated by the 87 responses to the online survey, downloaded to Excel
spreadsheets and SPSS and organized, included both quantitative and qualitative types. The
quantitative data were generated by the closed-ended, Likert-scale, and multiple-choice questions
of the survey instrument. This type of data was analyzed by the researcher using SPSS for
descriptive statistics (frequencies, percentages, and measures of central tendency: mean, mode,
and median). Outputs for the first participation question and for each section of the survey were
generated by the researcher, along with bar graphs that visually display the results. To ensure a
high level of accuracy and expertise, further analysis of the data for inferential statistics
(correlations, statistical significance, and magnitude of effect/odds ratios through logistic
regression analyses) was achieved with the assistance of Dr. Behjri, Director of the Research
Consulting Group, School of Public Health, Loma Linda University, Loma Linda, California.
Outputs and tables for benefits, costs, benefits vs. costs, and univariate and multiple logistic
regression analyses were generated, discussed at length, and reported in detail in the next chapter
of this study.
The qualitative data were generated by the open-ended questions of the survey and
analyzed using content/text analysis. For content analysis, the frequency of keywords and
concepts was determined and used to classify data into categories and to identify themes
and sub-themes (Patton, 2002). The categories, themes, and sub-themes will be discussed in
detail in the next chapter of this study.
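The keyword-frequency step of the content analysis described above can be illustrated with a short Python sketch. This is a hypothetical illustration only: the study performed this coding manually, and the responses, keyword lists, and category names below are assumptions, not the study's data.

```python
from collections import Counter
import re

# Hypothetical open-ended responses (not actual study data)
responses = [
    "Accreditation pulls faculty together toward a shared goal.",
    "It was an impetus to look at curriculum change.",
    "Allowed self-identification of issues requiring attention.",
]

# Tokenize, keep words longer than three characters, and count frequencies
tokens = []
for text in responses:
    tokens += [w for w in re.findall(r"[a-z-]+", text.lower()) if len(w) > 3]
counts = Counter(tokens)

# Map analyst-defined keywords to categories (labels are illustrative)
categories = {
    "cultural change": {"together", "shared", "collaboration"},
    "medical education": {"curriculum", "change", "education"},
    "self-analysis": {"self-identification", "issues", "gaps"},
}
theme_counts = {
    name: sum(counts[w] for w in words) for name, words in categories.items()
}
print(theme_counts)
```

In practice the coding is iterative: keyword lists are refined as themes and sub-themes emerge from repeated readings, rather than fixed in advance as in this sketch.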
The mixed-method approach of the survey, which generated both quantitative and
qualitative data, along with expert assistance in analyzing the quantitative data, provided
an opportunity for triangulation and added breadth as well as depth to the findings of this study.
With the intensive review of the literature presented earlier and reflection on the findings, the
researcher was able to voice the opinions and deliver the message and recommendations of lead
internal stakeholders of the undergraduate medical education programs in the US. These
recommendations contribute to the advancement of our limited knowledge about accreditation
and help fill the gap in empirical research on the topic. With dedication, commitment, and
strong support on both sides, the accrediting agency and the medical education
programs/academic institutions, such recommendations can improve and advance our system of
quality control and assurance in academic medicine and higher education.
Limitations, Delimitations, and Assumptions
This study explored the perceived benefits and costs of specialized accreditation by the
LCME of medical education programs leading to the MD degree in the US and its territories.
This focused, program- and accreditor-specific topic (LCME, 2012b, 2012c) did not allow for
investigating either the specialized accreditation of other fields or other types of accreditation,
such as national or institutional accreditation, in this country. In addition, the medical education
programs in this research were country-specific.
The target population consisted of the department chairs, assistant/associate department
chairs, and others (lead administrators and faculty members). This population is not inclusive of
all internal and external stakeholders of the undergraduate medical education program. The
degree of honesty and accuracy of participants’ responses, and their writing abilities, can
certainly create bias. Furthermore, the number of usable responses was limited to 87. Such a
small sample size limited the power to detect statistically significant correlations among
variables. Nevertheless, the data generated were sufficient to yield a high magnitude of effect
(odds ratio) for some of the variables. Therefore, caution should be exercised in generalizing the
findings either to the entire target population or to all medical education programs in the US.
The measurement tool for this study was a survey that utilized frameworks of past
surveys applied in the field of accreditation of higher education (Freitas, 2007; Yuen, 2012). This
survey was redesigned by the researcher for relevance and applicability to the medical education
field, based on: 1. the writer’s background in medical/public health studies, 2. management of
county-wide health programs, 3. teaching and academic management in academia, and 4. an
intensive review of the relevant literature on the accreditation of medical education programs by
the LCME in the US and its territories. Furthermore, the researcher combined Likert-scale,
multiple-choice, and closed-ended questions to generate quantitative data with open-ended
questions to garner feedback and obtain qualitative data. Such an approach provided a more
holistic review of the topic in terms of breadth, depth, meaning, richness, new insights, and
credibility. It allowed for triangulation of data and enhanced the quality, reliability, and validity
of this study.
This survey was conducted online. Hence, participants’ verbal and non-verbal cues and
expressions were not captured or documented by the researcher. In addition, technical logistics,
such as loss to outdated web addresses, security filters, and out-of-office notices, resulted in lack
of access to numerous recipients. Additional logistical issues were encountered due to lapses in
certain programmed mechanisms of SurveyMonkey. Although these lapses were observed,
immediately reported to the online survey portal, and corrected by technical support, some
recipients were still unable to open the link to the survey that was included in the cover letter. A
few of these recipients were kind enough to contact the researcher directly and email a
confirmation of their completion of the survey after they were done. The researcher removed all
information related to the identity of these participants to ensure confidentiality, foster
transparency, and improve the validity and reliability of this study and its findings.
Chapter 4
Findings
Introduction
This chapter will present and analyze the results of the data collected by the measurement
tool, the survey (Appendix D), distributed to department chairs and others (lead administrators
and faculty members) in all fully-accredited undergraduate medical education programs in the
US. The survey assessed the value of accreditation of these programs by the LCME through two
lenses: benefits and costs. The presentation of findings will start with the demographics of the
participants and their undergraduate medical education programs, continue with findings and
discussion for each research question, and conclude with recommendations for change expressed
directly by the participants of this study.
Demographics
A. Participants
A total of 1,096 department chairs, assistant/associate department chairs, and others (lead
administrators and faculty members) in fully-accredited undergraduate medical education
programs in the US were contacted for this study. The study sample included 769 department
chairs and assistant/associate department chairs in all fully-accredited programs as well as 327
other lead administrators/faculty members in a random sample of these programs (Table 12).
Lead administrators included assistant/associate deans, directors, ALOs, board members, a
research provost, and curriculum committee chairs. Although there were 116 participants, the
number of usable responses was 87. Hence, the usable response rate was 8% (87/1,096).
By current position, the participants held several titles in the fully-accredited
undergraduate medical education programs in the US. The survey results indicated that 64 (74%)
of participants were department chairs (62) or assistant/associate department chairs (2), as
compared to 50 (58%) in other positions (Table 12); percentages exceed 100% because some
participants held more than one position. Other positions included associate/assistant deans (12),
directors (14), ALOs (3), faculty members (16), and other roles (5) (Table 13). Deans did not
participate in this study.
Table 12
Current Position of Participants
Position | Frequency | Percent*
Department chairs/assistant/associate chairs | 64 | 73.6
Other positions | 50 | 57.5
*Some participants held more than one position.
Table 13
The Study Sample and Current Positions
Position Frequency Percent
Department Chair 62 71.3
Faculty 16 18.4
Director 14 16.1
Assistant/Associate Dean 12 13.8
ALO 3 3.4
Assistant/Associate Department Chair 2 2.3
Other 5 5.7
Dean 0 0
By number of positions, survey participants occupied one or more positions in their
respective undergraduate medical education programs in the US. According to the survey
results, 63 (77%) participants occupied one position while 19 (23%) occupied more than one
position in their programs (Table 14). Of the latter, 10 (12%) participants occupied two
positions, seven (9%) occupied three positions, and two (2%) occupied more than three
positions.
Table 14
Number of Positions of Participants
Frequency Percent
Valid One position 63 76.8
More than one position 19 23.2
Total 82 100.0
Missing 5
Total 87
The participants worked in both types of undergraduate medical education programs in
the US: public and private. The survey results indicated that 42 (51%) held positions in public
programs, compared to 40 (49%) who worked in private programs (Table 15). Five participants
did not specify the type of program at which they worked (Table 15).
Table 15
Participants by Type of Medical Education Programs
Frequency Percent
Valid Public 42 51.2
Private 40 48.8
Total 82 100.0
Missing 5
Total 87
By region, the participants worked in undergraduate medical education programs in all
five regions across the US. Out of 80 participants, 25 (31%) worked in programs located in the
Northeastern region, 18 (22%) in the Central region, 24 (30%) in the Southern area (Southeastern
and Southern regions), and 13 (16%) in the Western region (Table 16). Seven participants elected
not to specify the regions of their programs (Table 16).
Table 16
Participants by Region of Programs
Frequency Percent
Valid Northeastern 25 31.3
Southern 24 30.0
Central 18 22.5
Western 13 16.3
Total 80 100.0
Missing 7
Total 87
Experience with accreditation of undergraduate medical education programs by the LCME
varied among participants. The number of participants who had participated in the accreditation
process of their programs was 63 (72%), compared to 24 (28%) who had not (Table 17).
Table 17
Participation in the Accreditation Process
Participation in accreditation Frequency Percent
Yes 63 72.4
No 24 27.6
Total 87 100
B. Undergraduate Medical Education Programs
As specified earlier, the survey results indicated that the undergraduate medical education
programs of participants varied by type: 42 (51%) participants worked in public programs and
40 (49%) in private ones, while five participants elected not to specify the type of their programs
(Table 15). Hence, both types of programs were represented in this study.
In addition, the undergraduate medical education programs of participants were classified
by region. The results showed that 25 (31%) participants worked in programs located in the
Northeastern region, compared to 24 (30%) in the Southern area (Southeastern and Southern
regions), 18 (22%) in the Central region, and 13 (16%) in the Western region (Table 16). Seven
participants did not specify the location of their programs. This study encompassed all five
regions of medical education programs.
The size of the undergraduate medical education programs varied among participants.
The study results indicated that the total number of graduates (MDs) for the last academic year
was 12,756. The number of graduates (MDs) per program ranged from 48 to 640, with an
average of 164 graduates per program (Table 18). Furthermore, the distribution of program size
was positively skewed: 17 (22%) of programs had 100 graduates or fewer and 60 (78%) had
more than 100 graduates (Table 19).
Table 18
Number of Graduates
| Total | Minimum | Maximum | Mean
Number of graduates (MDs) in last academic year | 12,756 | 48 | 640 | 163.54
Table 19
Medical Education Programs by Number of Graduates
Frequency Percent
Total number of programs 87
Missing 10
100 students or less 17 22.1
More than 100 students 60 77.9
Total responding 77 100.0
Findings and Discussions by Research Questions
As mentioned in the preceding chapter, the survey questions were grouped into sections
that corresponded with the research questions of this study. In general, statistical analyses at a
95% confidence level and a Type I error of 0.05 indicated high reliability (Cronbach’s alpha) of
0.913 for benefits and 0.758 for costs, and a high magnitude of effect (odds ratio).
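The internal-consistency figure cited above (Cronbach’s alpha) can be reproduced from item-level data. The following is a minimal sketch, assuming the benefit items are coded 1–4 in a respondents-by-items array; the example scores are illustrative and are not the study’s data, which were analyzed in SPSS:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative Likert responses (1 = no benefit ... 4 = high benefit)
scores = np.array([
    [3, 4, 3, 2],
    [2, 3, 2, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 1],
])
print(round(cronbach_alpha(scores), 3))
```

Alpha rises when items vary together, which is why the benefits scale (alpha = 0.913) is described as highly reliable.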
Research Question #1: The Benefits of Accreditation
This research question identified and assessed the value of accreditation of all fully-
accredited undergraduate medical education programs in the US and its territories in terms of
benefits. Such assessment was presented through the lenses of department chairs,
assistant/associate department chairs, and others (assistant/associate deans, directors, ALOs,
faculty members, etc.) in the undergraduate medical education programs. Quantitative data,
analyzed with the Statistical Package for the Social Sciences (SPSS) for descriptive and
inferential statistics, will be presented first, followed by the qualitative data analyzed through
content analysis.
The benefits of accreditation were identified by the researcher through an intensive
review of the literature and listed in the first question of this section (Table 20). The participants
evaluated each benefit using a Likert scale that ranged from ‘no’ and ‘low’ benefit to ‘moderate’
and ‘high’ benefit (Appendix D). The results of the study are presented in Table 20 and will be
discussed benefit by benefit.
Table 20
Benefits of Accreditation
Benefit | No benefit N (%) | Low N (%) | Moderate N (%) | High N (%)
Improved overall quality of your medical education program | 8 (9.2) | 18 (20.7) | 42 (48.3) | 19 (21.8)
Provision of a structured mechanism to assess your program | 1 (1.1) | 11 (12.6) | 47 (54.0) | 28 (32.2)
A stimulus for program improvement | 2 (2.3) | 16 (18.4) | 33 (37.9) | 36 (41.4)
Provision of a way to benchmark your program with other programs | 8 (9.2) | 19 (21.8) | 45 (51.7) | 15 (17.2)
Improved access to monetary resources | 35 (40.2) | 34 (39.1) | 13 (14.9) | 5 (5.7)
Improved recognition, ranking, etc. | 26 (29.9) | 37 (42.5) | 15 (17.2) | 9 (10.3)
The ability to recruit/retain quality faculty | 37 (42.5) | 28 (32.2) | 13 (14.9) | 9 (10.3)
Improved faculty experience (teaching practices, curriculum development, etc.) | 20 (23.0) | 32 (36.8) | 26 (29.9) | 9 (10.3)
The ability to recruit/retain quality students | 19 (21.8) | 28 (32.2) | 25 (28.7) | 15 (17.2)
Enhanced student experience (quality of education, leadership, etc.) | 11 (12.6) | 27 (31.0) | 37 (42.5) | 12 (13.8)
Improved student learning outcomes/SLOs (graduation rates, licensure, internship & employment opportunities, etc.) | 20 (23.3) | 40 (46.5) | 18 (20.9) | 8 (9.3)
The benefit of ‘improving the quality of the undergraduate medical education program’
was rated by the participants. The results showed the following ratings: Approximately 42 (48%)
moderate benefit, 19 (22%) high benefit, 18 (21%) low benefit and eight (9%) no benefit
(Figure 1).
The provision of a structured mechanism to assess the undergraduate medical education
program was another benefit identified by the researcher through literature review of
accreditation. This benefit was valued by participants as follows: Approximately 47 (54%)
moderate benefit, 28 (32%) high benefit, 11 (13%) low benefit and one (1%) no benefit
(Figure 2).
Figure 1
Accreditation and Program Quality
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Figure 2
Accreditation and Program Assessment
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Accreditation as a stimulus for improvement of undergraduate medical education
program was another benefit identified. According to the participants, this benefit was rated as
follows: Approximately 36 (41%) as a high benefit, 33 (38%) as moderate benefit, 16 (18%) as
low and two (2%) as no benefit (Figure 3).
Provision of a way to benchmark an undergraduate medical education program with other
programs was yet another benefit of accreditation. From the participants’ point of view, this
benefit was evaluated as follows: 45 (52%) moderate benefit, 19 (22%) low benefit, 15 (17%)
high benefit, and eight (9%) no benefit (Figure 4).
Figure 3
Accreditation as Stimulus for Improvement
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Figure 4
Accreditation and Benchmarking
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
An additional benefit of accreditation was improved access to funds. The participants
rated this benefit as follows: 35 (40%) no benefit, 34 (39%) low benefit, 13 (15%) moderate
benefit, and five (6%) high benefit (Figure 5).
Figure 5
Accreditation and Access to Funds
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Improved recognition and ranking was another benefit of accreditation. The participants
rated this benefit as follows: 37 (43%) low benefit, 26 (30%) no benefit, 15 (17%) moderate and
nine (10%) high benefit (Figure 6).
Figure 6
Accreditation and Ranking
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Regarding faculty, the ability to recruit/retain quality faculty members was specified
as a benefit of accreditation of undergraduate medical education programs in the US.
Approximately 37 (43%) of participants perceived this contribution as not beneficial, while 28
(32%) rated it as a low benefit, 13 (15%) as moderate, and nine (10%) as highly beneficial (Figure 7).
Figure 7
Accreditation and Quality Faculty Members
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Improved faculty experience (teaching practices, curriculum development, etc.) was yet
another benefit of accreditation. From the participants’ point of view, this benefit was evaluated
as follows: 32 (37%) low benefit, 26 (30%) moderate benefit, 20 (23%) no benefit, and nine
(10%) high benefit (Figure 8).
As for students, the ability to recruit/retain quality students was assessed as a benefit of
accreditation of undergraduate medical education programs in the US. The participants rated this
benefit as follows: 28 (32%) low benefit, 25 (29%) moderate benefit, 19 (22%) no benefit and 15
(17%) high benefit (Figure 9).
Figure 8
Accreditation and Faculty Members Experience
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Figure 9
Accreditation and Quality Students
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
As for enhanced student experience (quality of education, leadership, etc.), the
contribution of accreditation was assessed by the participants as follows:
37 (43%) participants perceived this benefit as moderate, 27 (31%) as low, 12 (14%) as high, and
11 (13%) as of no benefit to the undergraduate medical programs (Figure 10).
Figure 10
Accreditation and Student Experience
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Finally, this study assessed the perceived contribution of accreditation towards improving
student learning outcomes/SLOs (Graduation rates, licensure, internship & employment
opportunities, etc.). The results indicated the following ratings: 40 (47%) low benefit, 20 (23%)
no benefit, 18 (21%) moderate and eight (9%) high benefit (Figure 11).
Ranked Benefits. In ranking the top three benefits of accreditation of undergraduate
medical education programs, the benefit that coincided with the highest Likert score was selected
at each rank. According to the survey results, participants selected the provision of a structured
mechanism to assess their programs as the highest benefit of accreditation. They chose the role
of accreditation as a stimulus for program improvement as the second-ranked benefit. Finally,
participants selected two benefits, improved overall quality and a way to benchmark their
programs, as the tied third-ranked benefit. All of the top-ranked benefits had been identified
through the literature review of accreditation conducted by the researcher.
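The rule described above, selecting at each rank the benefit with the highest Likert score, can be sketched as follows. The weighting (no = 1 through high = 4) and the benefit names and response counts below are illustrative assumptions, not the study’s scoring or data:

```python
# Score each benefit by a weighted Likert mean: no=1, low=2, moderate=3, high=4.
weights = {"no": 1, "low": 2, "moderate": 3, "high": 4}

# Illustrative response counts per benefit (hypothetical items)
benefits = {
    "benefit A": {"no": 1, "low": 5, "moderate": 20, "high": 30},
    "benefit B": {"no": 2, "low": 10, "moderate": 25, "high": 15},
    "benefit C": {"no": 8, "low": 15, "moderate": 20, "high": 5},
}

def likert_score(counts: dict) -> float:
    """Mean Likert score for one benefit from its response counts."""
    total = sum(counts.values())
    return sum(weights[k] * n for k, n in counts.items()) / total

# Rank benefits from highest to lowest mean Likert score
ranked = sorted(benefits, key=lambda b: likert_score(benefits[b]), reverse=True)
print(ranked)
```

A tie in scores, as occurred for the third rank in the study, would appear here as two benefits with equal `likert_score` values.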
Figure 11
Accreditation and SLOs
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Total Benefits. Survey participants rated the total benefits of accreditation of their
medical education programs using a Likert scale that ranged from not beneficial and slightly
beneficial to moderately and highly beneficial. Overall, participants rated the benefits of
accreditation as slightly beneficial 35 (40%), moderately beneficial 30 (35%), highly beneficial
16 (18%), and not beneficial six (7%) (Table 21) (Figure 12).
For the inferential analyses, the perceived overall level of benefits was aggregated into
two categories: no-low benefit and moderate-high benefit. Survey results indicated that 41 (47%)
participants perceived the overall level of benefits as no-low, compared to 46 (53%) who
selected moderate-high benefit (Table 22).
Table 21
Total Overall Benefits
Level of benefit Frequency Percent
No benefit 6 6.9
Low 35 40.2
Moderate 30 34.5
High 16 18.4
Figure 12
Total Overall Benefit
[Bar chart of response frequencies: no, low, moderate, and high benefit.]
Table 22
Overall Level of Benefits
Level of benefit | Frequency | Percent
No-low benefit | 41 | 47.1
Moderate-high benefit | 46 | 52.9

While the responses regarding the overall level of benefits had thus far been analyzed on
the basis of one direct question in the survey, a different methodology was also applied to rate
the overall level of benefits and identify any further correlations. By applying the principal
component analysis extraction method to produce the rotated component matrix, where rotation
converged in three iterations, all 11 benefits identified in the literature review and listed in the
survey were divided into two factors: 1. impact on school accountability and credibility and
2. impact on program outcome. The first factor encompassed the ability to attract quality
students, improved SLOs, the ability to attract quality faculty members, benchmarking, improved
ranking/recognition, and improved access to funds. The second factor included improved overall
quality of the program, a stimulus for program improvement, provision of a structured
mechanism to assess the program, and improved student and faculty experiences (Table 23).
Under this factor-based methodology, the overall level of benefits was rated by 70 (81%)
participants as no to low benefit and by 17 (19.5%) as moderate to high benefit (Table 24). In
addition, 74 (85%) participants rated the first factor, impact on school accountability and
credibility, as no to low benefit and 13 (15%) as moderate to high benefit (Table 25). By
contrast, 50 (58%) of participants perceived the second factor, impact on outcomes, as no to low
benefit and 37 (43%) as moderate to high benefit (Table 26).
Additional correlational analyses were performed between several independent variables
and the dependent variable (overall level of benefits). These independent variables included
participation in accreditation, position of participants, program type, number of graduates, and
region of programs. No statistically significant correlations were identified.
Table 23
Rotated Component Matrix for 11 Benefits*
Factor name | Benefit | Component 1 | Component 2
Impact on school accountability and credibility:
| The ability to attract quality students | .814 | .123
| Improved recognition, ranking, etc. | .778 | .280
| The ability to recruit and retain quality faculty | .715 | .283
| Improved access to monetary resources | .658 | .267
| Improved student learning outcomes/SLOs | .611 | .567
| Provision of a way to benchmark your program | .569 | .386
Impact on outcome:
| Provision of a structured mechanism to assess your program | .087 | .821
| Enhanced student experience | .381 | .779
| A stimulus for program improvement | .269 | .758
| Improved overall quality of your program | .446 | .704
| Improved faculty experience | .508 | .597
Extraction Method: Principal Component Analysis
*Rotation converged in three iterations
Table 24
Accreditation and Overall Level of Benefits (Factor 1 and 2 combined)
Frequency Percent
No to low benefit 70 80.5
Moderate to high benefit 17 19.5
Table 25
Impact on School Accountability and Credibility (Factor 1)
Frequency Percent
No to low benefit 74 85.1
Moderate to high benefit 13 14.9
Table 26
Impact on Outcomes (Factor 2)
Frequency Percent
No to low benefit 50 57.5
Moderate to high benefit 37 42.5
Furthermore, the sum of each participant’s responses to all 11 questions about the
benefits of accreditation identified via the literature review was computed, and the mean of these
sums, the mean value score for benefits, was determined. By confining the level of benefits to
two categories, “mean or less” (low benefit) and “above the mean” (high benefit), 48 (55%)
participants perceived the overall level of benefits of accreditation as low, as opposed to 39
(45%) who perceived it to be high. Furthermore, no statistically significant correlations were
found between the mean value score of benefits and the independent variables: position of
participants, participation in the accreditation process, and region, type, and size of
undergraduate medical education programs.
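The mean value score computation described above, summing the 11 benefit items per respondent and splitting at the sample mean, can be sketched as follows (illustrative data; the 1–4 item coding is an assumption):

```python
import numpy as np

# Illustrative Likert responses: 6 respondents x 11 benefit items (coded 1-4)
rng = np.random.default_rng(1)
responses = rng.integers(1, 5, size=(6, 11))

# Per-respondent total benefit score and the sample mean of those totals
totals = responses.sum(axis=1)
mean_score = totals.mean()

# Dichotomize: "mean or less" (low benefit) vs. "above the mean" (high benefit)
group = np.where(totals > mean_score, "high benefit", "low benefit")
print(list(zip(totals.tolist(), group.tolist())))
```

Splitting at the sample mean guarantees two non-trivial groups for the regression analyses that follow, at the cost of discarding within-group variation in the scores.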
In addition, univariate logistic regression analyses were performed between the mean
value score (the overall benefit score) as the dependent variable and each of the independent
variables (position, participation, and program type, region, and number of graduates). The
confidence level was set at 95%. The survey results indicated a high magnitude of effect
(odds ratio) (Table 27).
Table 27
Mean Value Score for Benefits (Overall Benefit Score)
Effect | Odds Ratio Estimate | 95% Confidence Limits
Participation “No” vs. “Yes” | 2.128 | 0.818–5.534
Region “Northeastern” vs. “Others” | 1.284 | 0.497–3.321
Program type “Private” vs. “Public” | 1.800 | 0.743–4.360
Number of graduates “100 students or less” vs. “More than 100 students” | 0.915 | 0.307–2.730
Position of participants “More than one position” vs. “One position” | 0.729 | 0.254–2.097
(Event = “More than the mean value score”)
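For a binary predictor, the univariate odds ratio and its Wald 95% confidence interval can be computed directly from the 2x2 cross-tabulation. The counts below are a hypothetical reconstruction chosen to be consistent with the margins reported in this chapter (24 non-participants, 63 participants, 39 respondents above the mean score); with these assumed counts the calculation reproduces the participation row of Table 27:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & event, b = exposed & no event,
    c = unexposed & event, d = unexposed & no event."""
    estimate = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(estimate) - 1.96 * se)
    hi = math.exp(math.log(estimate) + 1.96 * se)
    return estimate, lo, hi

# Assumed counts: participation "No" (exposed) vs. "Yes" (unexposed)
# against scoring above the mean benefit score (event).
est, lo, hi = odds_ratio(a=14, b=10, c=25, d=38)
print(round(est, 3), round(lo, 3), round(hi, 3))
```

Because the interval spans 1.0, the elevated odds ratio is not statistically significant, which matches the chapter's finding of high effect magnitude without significance.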
Finally, a multiple logistic regression analysis was performed between the mean value
score (the overall benefit score) as the dependent variable and all other independent variables
(position, participation, and program type, region, and number of graduates). The confidence
level was set at 95%. The survey results indicated a high magnitude of effect (odds ratio) for
some independent variables (Table 28).
For participation, respondents who did not participate in the accreditation process had
approximately 2.4 times the odds (141% higher odds) of reporting high benefit (more than the
mean score) than those who participated, after adjusting for all other variables in the model
(Table 28). By program type, participants who worked in private programs had approximately
2.3 times the odds (130% higher odds) of reporting high benefit than participants in public
programs. By region, participants who worked in programs located in the Northeastern region
were approximately 28% less likely to report high benefit than participants in other regions.
Similarly, participants who held more than one position were approximately 19% less likely to
report high benefit than participants who held only one position. Finally, participants working in
programs with 100 graduates or fewer were approximately 10% less likely to report a high
benefit score (Table 28).
Table 28
Mean Value Score (Overall Benefit Score) and Independent Variables
Effect | Odds Ratio Estimate | 95% Confidence Limits
Participation “No” vs. “Yes” | 2.414 | 0.813–7.172
Region “Northeastern” vs. “Others” | 1.222 | 0.441–3.386
Program type “Private” vs. “Public” | 2.296 | 0.841–6.267
Position of participants “More than one position” vs. “One position” | 0.807 | 0.257–2.540
(Event = “More than the mean value score”)
Other Benefits. Additional benefits of accreditation of undergraduate medical education
programs by the LCME were identified by the participants in a qualitative, open-ended question
in the benefits section (A) of the survey (Appendix D). This optional question was answered by
50 participants, 18 of whom did not identify any other benefits. Of the remaining 32 responses,
three were not usable or related. Through content/text analysis, the responses of the 29
remaining participants were grouped into three categories: cultural change, medical education,
and self-analysis.
Cultural change was cited by participants as an additional perceived benefit of
accreditation. According to one participant, accreditation “Initiated the process of cultural
change”. Content analysis of the qualitative data identified two themes: 1. collaboration and
teamwork and 2. innovation. For collaboration/teamwork (1), two sub-themes were identified:
vertical (intra-) and horizontal (inter-) collaboration. According to the participants, accreditation
“Pulls faculty together in their work on a shared goal” and contributed to “building relationships
and sharing of ideas within faculty, staff and students” (intra-collaboration). Accreditation
“allowed faculty and students to exchange views on education,” helped “develop shared
approach to education,” and meant that “faculty and staff communicated and collaborated in
meaningful ways to prepare for accreditation” (inter-collaboration). The “development of
communication between departments to access and analyze the information” and “improved
collegiality across the school and campus” are further indications of inter-collaboration in the
undergraduate medical education programs. For innovation (2), participants alluded to
accreditation as a “means to overcome institutional inertia,” an “impetus to look at curriculum
change,” and “a strong impetus to modernize approaches.”
Medical education was cited by some participants as an additional benefit of accreditation
and three themes were identified by content analysis and categorization: 1. content, 2. quality
and 3. process. Responses related to content (1) included curriculum assessment and
change/revisions such as: “review our entire curriculum,” “Justification for curricular renewal”
and “relate all courses to competences (competencies).” Participants’ comments regarding
quality (2) were: “verification that the education provided is first-rate and appropriate,”
“Ensuring minimum standards leading to quality,” “indicator of success” and “graduates are not
subjected to obstacles in licensing.” Nevertheless, one participant questioned if “at this point in
the history of med(ical) schools, would the quality of education drop WITHOUT the LCME
process is unsure.” Lastly, for process (3), the participants mentioned standardization:
“homogenization of education,” “[accreditation] assisted in standardization of processes -
syllabus, lecture postings, lecture objectives, integration beyond [department-] based” and
“develop [ment of] shared approach to education” from the exchange of views between faculty
members and students. The homogenization process was perceived negatively by one participant
due to accreditation that “is an increasingly bureaucratic process.”
Self-analysis was another perceived benefit listed by some participants. Content analysis
identified two themes: 1. purpose and 2. outcome. For purpose (1), participants stated that
accreditation “allowed for self[-]identification of issues requiring attention and improvement”
and “allow[ed] faculty to see gaps and [redundancies].” As a result (2), accreditation “increased
emphasis on the need for faculty and professional development,” assessment of curriculum and
faculty members’ “teaching, learning, and assessment strategies.” One participant added that
self-assessment “forced our school to provide recreational facilities and more convenient
restrooms for students.” In brief, accreditation “merely served as a stimulas [stimulus] for us to
gether (gather) data, of use in reflecting on who we are and what we do.”
Research Question #2: The Costs of Accreditation
This research question identified and assessed the value of accreditation of all fully-
accredited undergraduate medical education programs in the US in terms of costs. Such
assessment was presented through the lenses of department chairs, assistant/associate department
chairs and others (assistant/associate deans, directors, ALOs, faculty members) in the
undergraduate medical education programs. Quantitative data will be presented first followed by
the qualitative data utilizing SPSS for descriptive and inferential statistics and content analysis
respectively.
The costs of accreditation were identified by the researcher through an intensive review
of the literature and listed in the first question of this section (Table 29). The participants
evaluated each cost utilizing a Likert scale that ranged from no and low cost to moderate and
high cost (Appendix D). The results of the study are presented (Table 29) and will be discussed
individually by cost.
The first variable identified through the literature review, direct financial/monetary expenses spent to achieve accreditation of the undergraduate medical education program, was rated by the participants. The survey results indicated that 42 (53%) of participants perceived this variable as a moderate cost, as compared to 21 (26%) high cost, 15 (18.5%) low cost and two (3%) no cost (Figure 13).

Table 29

Costs of Accreditation

Costs | No cost N (%) | Low cost N (%) | Moderate cost N (%) | High cost N (%)
Direct financial/monetary expenses spent to achieve accreditation of program | 2 (2.5) | 15 (18.5) | 42 (52.5) | 21 (26.3)
Total amount of time spent by faculty, administrators and staff to achieve accreditation of program | 1 (1.2) | 2 (2.4) | 27 (32.9) | 52 (63.4)
Total amount of effort spent by faculty, administrators and staff to achieve accreditation of program | 1 (1.2) | 0 (0.0) | 29 (35.4) | 52 (63.4)
Impact on morale: Resistance of faculty/administrators/staff to the accreditation process | 9 (11.3) | 24 (30.0) | 30 (37.5) | 17 (21.3)
Impact on academic freedom | 26 (31.7) | 29 (35.4) | 21 (25.6) | 6 (7.3)
The second variable, total amount of time spent by faculty members, administrators and staff to achieve accreditation of their program, was assessed by the participants. The responses clustered at the upper end of the scale, with 52 (63%) of participants rating this cost as high, 27 (33%) moderate, two (2%) low and only one (1%) no cost (Figure 14).
Similarly, the total amount of effort spent by faculty members, administrators and staff to
achieve accreditation of their program was assessed by the participants. The results indicated that
52 (63%) of participants perceived this variable as a high cost as compared to 29 (35%) moderate
and one (1%) low (Figure 15).
Figure 13

Accreditation and Direct Financial/Monetary Expenses

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]

Figure 14

Accreditation and Total Amount of Time

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]

Figure 15

Accreditation and Total Amount of Effort

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]
The impact of accreditation on morale in terms of resistance of faculty members,
administrators and staff to the accreditation process was evaluated by the participants. The
responses indicated that 30 (38%) of participants perceived this cost as moderate while 24 (30%)
rated it as low, 17 (21%) high and nine (11%) as no cost to their programs (Figure 16).
Finally, the impact of accreditation on academic freedom was identified through literature
review and rated by the participants of this survey. According to the results, 29 (35%) of
participants valued this cost as low, 26 (32%) no cost, 21 (26%) moderate and six (7%) as a high
cost (Figure 17).
Ranked Costs. In ranking the top three costs of accreditation of undergraduate medical
education programs, the cost with the highest Likert score was selected in each rank. Participants
selected the total amount of time spent by faculty members, administrators and staff to achieve
accreditation of their programs as the highest cost of accreditation. They chose the total amount
of effort spent by faculty members, administrators and staff to achieve accreditation of their
programs as the second and third ranked cost. All top three costs selected by participants were
identified through literature review of accreditation.
Figure 16

Accreditation and Impact on Morale

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]

Figure 17

Accreditation and Impact on Academic Freedom

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]
Total Costs. Participants of the survey rated overall level of cost of accreditation of their
medical education programs utilizing a Likert scale that ranged from no and low cost to
moderate and high cost. Overall, nearly all participants perceived the costs of accreditation to be moderate (47; 57%) or high (32; 39%), with only two (2%) and one (1%) participants rating the cost as low or no cost, respectively (Table 30, Figure 18). Combined, 79 (96%) of participants perceived cost as moderate to high while three (4%) perceived cost as no to low (Table 30, Figure 18).
For further analysis of the data, the principal component analysis extraction method was applied to produce the rotated component matrix, where rotation converged in three iterations. All listed costs were divided into two factors: 1. personnel cost and 2. program-related
costs. The first factor encompassed total amount of time and total amount of effort spent by
faculty members, administrators and staff to achieve accreditation of their programs (Table 31).
The second factor included: direct financial/monetary expenses spent to achieve accreditation of
program, impact on morale (resistance of faculty members/administrators/staff to the
accreditation process) and impact on academic freedom (Table 31).
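The factor assignment described above follows directly from the rotated loadings: each cost item belongs to the component on which it loads most heavily. The sketch below reproduces that assignment in Python, using the loading values transcribed from Table 31 (the abbreviated item names are this sketch's own shorthand).

```python
# Assign each cost item to the rotated component with the larger absolute loading.
# Loadings (component 1, component 2) are transcribed from Table 31.
LOADINGS = {
    "time spent by faculty/administrators/staff":   (0.929, 0.185),
    "effort spent by faculty/administrators/staff": (0.922, 0.157),
    "direct financial/monetary expenses":           (0.076, 0.861),
    "impact on morale":                             (0.239, 0.812),
    "impact on academic freedom":                   (0.487, 0.504),
}
FACTOR_NAMES = {1: "personnel cost", 2: "program-related cost"}

def assign_factor(loadings):
    """Pick the component with the larger absolute loading."""
    component = 1 if abs(loadings[0]) >= abs(loadings[1]) else 2
    return FACTOR_NAMES[component]

assignments = {item: assign_factor(l) for item, l in LOADINGS.items()}
```

Note that "impact on academic freedom" is a borderline case (loadings .487 vs. .504); the max-loading rule places it with the program-related factor, matching Table 31.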
Table 30

Total Overall Cost

 | Frequency | Percent
No cost | 1 | 1.2
Low cost | 2 | 2.4
Moderate cost | 47 | 57.3
High cost | 32 | 39.0
Valid total | 82 | 100.0
Missing | 5 |
Total | 87 |
Figure 18

Overall Total Cost

[Bar chart of participant counts by rating: no cost, low cost, moderate cost, high cost]
The results indicated that 49 (60%) of participants perceived the value of all costs as no
to low as compared to 33 (40%) moderate to high (Table 32). In addition, 79 (96%) participants
rated the first factor, personnel cost as moderate to high cost and three (4%) participants as no to
low cost (Table 33). By contrast, 49 (60%) of participants perceived the second factor, program-
related cost as moderate to high cost and 33 (40%) as no to low cost (Table 34).
Table 31

Rotated Component Matrix* for Five Costs

Factor name | Cost | Component 1 | Component 2
Personnel cost | Total amount of time spent by faculty, administrators and staff to achieve accreditation of their programs | .929 | .185
Personnel cost | Total amount of effort spent by faculty, administrators and staff to achieve accreditation of their programs | .922 | .157
Program-related cost | Direct financial/monetary expenses spent to achieve accreditation of program | .076 | .861
Program-related cost | Impact on morale: Resistance of faculty/administrators/staff to the accreditation process | .239 | .812
Program-related cost | Impact on academic freedom | .487 | .504

Extraction Method: Principal Component Analysis
*Rotation converged in three iterations
Table 32

Accreditation and Total Overall Level of Costs (Factors 1 and 2 combined)

Overall cost score | Frequency | Percent
“No” to “low cost” | 49 | 59.8
“Moderate” to “high cost” | 33 | 40.2
Valid total | 82 | 100.0
Missing | 5 |
Total | 87 |
Table 33

Impact on Personnel Cost Score (Factor 1)

Overall cost score | Frequency | Percent
“No” to “low cost” | 3 | 3.7
“Moderate” to “high cost” | 79 | 96.3
Valid total | 82 | 100.0
Missing | 5 |
Total | 87 |
Table 34

Impact of Program-related Cost Score (Factor 2)

Overall cost score | Frequency | Percent
“No” to “low cost” | 33 | 40.2
“Moderate” to “high cost” | 49 | 59.8
Valid total | 82 | 100.0
Missing | 5 |
Total | 87 |
Furthermore, the sum of all five questions about the different costs of accreditation identified via the literature review was computed, and the mean (the mean value score) for costs was determined. By confining the level of costs to two categories, “mean or less” (low cost) and “above the mean” (high cost), 33 (40%) participants perceived the overall level of costs of accreditation as low (“mean or less”) as opposed to 49 (60%) who perceived the overall level of costs to be high
(Table 35). Furthermore, no statistically significant correlations were found between the mean
value score of costs and the five independent variables (position, participation, region, type and
size of programs).
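The mean-split described above (summing the five Likert items per participant and dichotomizing at the sample mean of those sums) can be sketched as follows. The example scores are hypothetical, since individual-level responses are not reported here.

```python
# Sum each participant's five cost ratings (e.g., coded 0 = no cost ... 3 = high cost),
# compute the sample mean of the sums, and split the sample at that mean.
def dichotomize(sum_scores):
    """Label each summed score relative to the sample mean."""
    mean = sum(sum_scores) / len(sum_scores)
    return ["mean or less (low cost)" if s <= mean else "above the mean (high cost)"
            for s in sum_scores]

# Hypothetical summed scores for six participants:
scores = [6, 14, 11, 15, 9, 13]
labels = dichotomize(scores)
```

One design note: splitting at the sample mean (rather than the scale midpoint) makes the cut point data-dependent, which is worth keeping in mind when comparing the 33/49 split here with the raw frequency tables above.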
In addition, the univariate logistic regression analysis was performed between the mean
value score (the overall cost score) as a dependent variable and all other independent variables
(position, participation, type, region and number of graduates of program). The confidence limit
was selected at 95%. Odds ratio estimates are presented in Table 36; none of the associations reached statistical significance, as every 95% confidence interval included 1.
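For a single binary predictor, the univariate logistic regression odds ratio equals the cross-product ratio of the corresponding 2x2 table, and a Wald 95% confidence interval follows from the standard error of the log odds ratio. The sketch below shows that computation; the cell counts are hypothetical, since the underlying cross-tabulations are not reported.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        group 1: a events, b non-events
        group 2: c events, d non-events
    With one binary predictor, this equals the univariate logistic regression estimate.
    """
    or_est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_est) - z * se_log)
    hi = math.exp(math.log(or_est) + z * se_log)
    return or_est, lo, hi

# Hypothetical counts (not the study's raw data):
est, lo, hi = odds_ratio_ci(20, 15, 12, 25)
```

A confidence interval that straddles 1 (as all of those in Table 36 do) indicates that the association is not statistically significant at the 5% level.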
Table 35

Mean Value Score for Costs

 | Frequency | Percent
Mean Value Score or less | 33 | 40.2
More than the Mean Value Score | 49 | 59.8
Table 36

Mean Value Score (Overall Cost Score)

Effect | Odds Ratio Estimate | 95% Confidence Limits
Participation: “No” vs. “Yes” | 0.800 | 0.292 2.194
Region: “Northeastern” vs. “Others” | 1.615 | 0.620 4.210
Program type: “Private” vs. “Public” | 2.231 | 0.906 5.493
Number of graduates: “100 students or less” vs. “More than 100 students” | 0.713 | 0.233 2.182
Position of participants: “More than one position” vs. “One position” | 1.463 | 0.520 4.114

(Event = “mean score or less”)
Finally, the multiple logistic regression analysis was performed between the mean value
score (The overall cost score) as a dependent variable and all other independent variables
(Position, participation, type, region and number of graduates of program). The confidence limit
was selected at 95%. Odds ratio estimates are presented in Table 37; all 95% confidence intervals included 1, so none of the adjusted associations reached statistical significance.
Table 37

Mean Value Score (Overall Cost Score) and Independent Variables

Effect | Odds Ratio Estimate | 95% Confidence Limits
Program type: “Private” vs. “Public” | 1.677 | 0.597 4.712
Region: “Northeastern” vs. “Others” | 1.457 | 0.525 4.043
Number of graduates: “100 students or less” vs. “More than 100 students” | 0.912 | 0.274 3.035
Position of participants: “More than one position” vs. “One position” | 1.321 | 0.413 4.230

(Event = “mean score or less”)
For program type, participants who worked in private medical education programs were approximately 1.68 times (68%) more likely to report low cost (mean score or less) than participants who worked in public programs, after adjusting for all variables in the model (Table 37). By region, participants who worked in programs located in the Northeastern region were approximately 1.46 times (46%) more likely to report low cost than participants who worked in other regions (Table 37). Participants who held more than one position were approximately 1.32 times (32%) more likely to report low cost than participants who held only one position (Table 37). Finally, participants who worked in programs graduating 100 students or less were approximately 9% less likely to report low cost than participants who worked in larger programs (Table 37).
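The percentage phrasing used throughout this section ("X% more/less likely") is a direct transformation of the odds ratio. The small helper below makes that reading explicit, applied to the odds ratio estimates reported in Table 37; note that, strictly, an odds ratio describes relative odds rather than relative probability, so this phrasing is an approximation.

```python
def describe_odds_ratio(or_est):
    """Phrase an odds ratio as '% more/less likely', mirroring the text's reading."""
    if or_est >= 1.0:
        return "approximately {}% more likely".format(round((or_est - 1.0) * 100))
    return "approximately {}% less likely".format(round((1.0 - or_est) * 100))

# Odds ratio estimates transcribed from Table 37:
for or_est in (1.677, 1.457, 0.912, 1.321):
    print(or_est, "->", describe_odds_ratio(or_est))
```

For example, the estimate 1.677 reads as "approximately 68% more likely" and 0.912 as "approximately 9% less likely."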
Other Costs. Additional costs of accreditation of undergraduate medical education
programs by the LCME were identified by the participants in a qualitative/open-ended question
in Section B: Costs of the survey (Appendix D). This optional question was answered by 19
(22%) participants, two of whom made comments that will be discussed later on in this chapter.
Through content/text analysis, the remaining responses were divided into five categories:
change, diversion, restriction/loss of innovation, lack of knowledge and lack of reward.
Change was identified by participants as an additional cost of accreditation. Through
content analysis, two themes were recognized: positive and negative change. For positive
change, allocation of resources resulted in “making necessary investments to meet standarss
(standards).” Another participant added: “self study revealed facility improvements that were
needed for enhanced [student-enhanced] experiences.” As for negative change, one participant
stated that accreditation “led to changes that were NOT beneficial but were required to address
‘concerns’ of the LCME – [e.g.] changes in preclinical curriculum (that were not red flagged
themselves, and were worse after reorganization) to accommodate changes required/beneficial
to the clinical program.” Another participant added “it was not accreditation per se that was
costly. It was being told by collegiate leaders that LCME demands a particular type of
curriculum reform. The resulting curriculum reform process has been extremely fatiguing and
demoralizing of the people who do the most in medical education. It currently threatens to
increase educational costs, elevate the authority of non-content experts, and devalue educators in
departments.” Another participant cited a “loss of medical school applicants.” Other participants challenged change that is based on perception rather than evidence, stating that
“The fact that ‘standards’ are usually based on expert opinion, rather than meeting the
professional standards of being ‘evidence-based’ leads to some dissatisfaction.” Overall, one
participant added, accreditation “often feels as though aspects of the requirements do not benefit
[the] institution in a meaningful way.”
Diversion was specified as an additional perceived cost of accreditation by four
participants. Loss of productivity was mentioned by participants who stated: “time spent on
preparing for the LCME visit decreases the production of faculty members and staff with other
duties.” They added that it is “time taken away from research and teaching ‘learners’ and ‘other
priorities’ such as ‘innovation’ [and] ‘program improvements’.”
Restriction/loss of innovation was expressed by three participants. Lack of knowledge
about the cost was cited by one participant. Similarly, lack of reward was mentioned by one
participant who commented: “it [accreditation] also requires colleagues to pick up
the pieces for no or little reward.”
Research Question #3: Benefits vs. Costs of Accreditation
The third research question sought to rate perceived benefits as compared to perceived
costs of accreditation amongst department chairs and others in the fully-accredited undergraduate
medical education programs in the US. Responses were solicited through a quantitative multiple
choice question in Section C of the survey that afforded the participants one choice among three
options, benefits exceed costs, benefits equal costs and costs exceed benefits (Appendix D). The
second question was open-ended and probed for additional input from participants for the
purpose of increasing the benefits and decreasing the costs of accreditation of the
aforementioned programs.
According to the survey results, 36 (44%) participants perceived that total costs of accreditation exceed total benefits and 29 (35%) believed that benefits equal costs (Table 38).
Only 17 (21%) of participants perceived benefits to exceed costs and five participants elected not
to answer the question (Table 38). Additional statistical regression analyses were performed to identify correlations between cost-benefit (dependent variable) and participation in accreditation, program type, region and number of graduates. Due to the small sample size, no statistically significant correlations were identified.
Table 38

Total Benefits vs. Total Costs

 | Frequency | Percent
Benefits exceed costs | 17 | 20.7
Benefits equal costs | 29 | 35.4
Costs exceed benefits | 36 | 43.9
Valid total | 82 | 100.0
Missing | 5 |
Total | 87 |
In combining the categories of benefits vs. costs, two options emerged: benefits equal or exceed costs, and costs exceed benefits. Further testing of this dependent variable against the independent variables of participation, position, type of program, region and number of graduates still did not yield any statistically significant correlation. Nevertheless, the results indicated a statistically significant (p < 0.001) correlation between the overall benefit score and cost-benefit (Table 39). Twenty-eight (78%) of the participants who reported that costs exceed benefits also reported the overall benefit score to be low (mean score or less) (Table 39).
Table 39

Overall Benefit Score and Cost-Benefit

Overall benefit score | Benefits equal or exceed costs | Costs exceed benefits | p
Mean score or less, count | 19 | 28 | < 0.001
Mean score or less, % within cost-benefit | 41.3% | 77.8% |
More than the mean score, count | 27 | 8 |
More than the mean score, % within cost-benefit | 58.7% | 22.2% |
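The p < 0.001 result in Table 39 can be checked against the reported counts (19 and 28 at or below the mean benefit score; 27 and 8 above it) with a Pearson chi-square test of independence, here computed without a continuity correction. The sketch below computes the statistic and compares it to the df = 1 critical value for alpha = 0.001.

```python
# Chi-square test of independence for the 2x2 table reported in Table 39.
# Rows: overall benefit score (mean or less / more than the mean);
# columns: benefits equal-or-exceed costs / costs exceed benefits.
observed = [[19, 28],
            [27, 8]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)

CRITICAL_P001_DF1 = 10.828  # chi-square critical value, df = 1, alpha = 0.001
significant = chi2 > CRITICAL_P001_DF1
```

The statistic works out to roughly 11, just above the 10.828 threshold, which is consistent with the p < 0.001 reported in Table 39.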
Univariate logistic regression analysis of the data was conducted to identify any correlations between cost-benefits and the five independent variables (participation, position, type, region and number of graduates for the last academic year of the programs). Regarding position, participants holding more than one position were approximately 10% more likely to perceive benefits to exceed/equal costs (Table 40). By region, participants from the Northeastern region were approximately 16% more likely to rate benefits as exceeding/equaling costs. In contrast, by participation, respondents who did not participate in the accreditation process were approximately 29% less likely to report that benefits exceed/equal costs than respondents who participated in accreditation (Table 40). By number of graduates, participants employed at programs with 100 students or less were approximately 20% less likely to perceive the benefits to exceed/equal costs (Table 40). By type, participants working in private programs were approximately 8% less likely to rate the benefits as exceeding/equaling costs than those working in public programs (Table 40). Finally, participants with a low cost score (mean score or less) were approximately 36% more likely to perceive benefits to exceed/equal costs, while participants with a low benefit score were approximately 80% less likely to do so (Table 40).
Table 40

Cost-benefits and Independent Variables: Univariate Logistic Regression

Effect | Odds Ratio Estimate | 95% Confidence Limits
Participation: “No” vs. “Yes” | 0.714 | 0.268 1.905
Region: “Northeastern” vs. “Others” | 1.161 | 0.444 3.037
Program type: “Private” vs. “Public” | 0.917 | 0.383 2.194
Number of graduates: “100 students or less” vs. “More than 100 students” | 0.804 | 0.272 2.371
Position of participants: “More than one position” vs. “One position” | 1.100 | 0.390 3.104
Cost score: “Mean score or less” vs. “More than the mean score” | 1.361 | 0.556 3.333
Benefits score: “Mean score or less” vs. “More than the mean score” | 0.201 | 0.075 0.536

Cost-benefits (Event = “benefits exceed/equal costs”)
Multiple logistic regression analysis of the data was conducted to identify any correlations between cost-benefits and participation as an independent variable. The study results indicated that respondents who did not participate in the accreditation process were approximately 50% less likely to perceive benefits to exceed/equal costs (Table 41). Participants with a low cost score (mean score or less) were approximately 90% more likely to perceive benefits to exceed/equal costs (Table 41), while participants with a low benefit score were approximately 84% less likely to do so (Table 41).
Table 41

Cost-benefits and Independent Variables: Multivariate Logistic Regression

Effect | Odds Ratio Estimate | 95% Confidence Limits
Participation: “No” vs. “Yes” | 0.495 | 0.161 1.521
Cost score: “Mean score or less” vs. “More than the mean score” | 1.899 | 0.696 5.184
Benefits score: “Mean score or less” vs. “More than the mean score” | 0.155 | 0.053 0.450

Cost-benefits (Event = “benefits exceed/equal costs”)
Recommended changes. The participants were afforded the opportunity to share their
own opinion and experiences about increasing the benefits and decreasing the cost of
accreditation of undergraduate medical education programs in the US. Probing for additional
input/recommendations was achieved through an optional open-ended question in Section C,
benefits and costs of accreditation (Appendix D). Participants were repeatedly assured of absolute confidentiality in order to increase the response rate and, in turn, the validity and reliability of the data in this study.
Overall, 73 participants responded with several recommendations and 14 participants
elected not to respond to this question. Some responses cited a lack of knowledge about, or involvement with, accreditation, while others offered no recommendations because accreditation is “a fact of life” with “necessary time and effort,” where it is “hard to change the effort required” and “difficult to assess,” with ultimately “no option!”
positively and stated that “accreditation should be afforded the highest importance” and
“typically good site teams” visited, all other participants cast accreditation and its agency in a negative, adversarial role.
In applying content analysis to the remaining usable responses, three categories of recommended changes emerged: process, outcome and approach. Within each
category, several themes were identified (Table 42).
Table 42

Recommended Changes for Accreditation

Process | Outcome | Approach
Improved data collection/self-analysis | Improved outcomes related to medical education | Provision of proper guidance
Improved site visits | Improved outcomes related to medical students | Provision of clear information
Effective remediation | | Flexibility
Prolonged accreditation cycle | | Efficiency
 | | Fostering innovation
The first category, the process of accreditation was perceived negatively by the
participants. Participants recommended to “simplify the process significantly” and “cut the
process by 90%.” Content analysis of this category identified several themes that coincided with
the different stages and cycle of accreditation. These themes included: data collection/self-
analysis, site visits, remediation and the accreditation cycle.
For data collection/self-analysis, participants recommended to “reduce unnecessary
gathering of data” and “the amount of paperwork” that is “really of little value” in evaluating “a
medical school.” According to the participants, the LCME and medical education programs need
to “limit the areas reviewed” and conduct a “careful assessment of the data collected” in terms of
need, detail and relevancy. This will “decrease the paperwork that is submitted [to LCME]” and
make self-study “less burdensome.” The participants recommended ongoing “annual collection
of data and ongoing assessment of educational programs” “from ongoing metrics initially
established through the self-study process rather than a huge to-do [list] every 8 years.” In
addition, participants called for consolidating readily-available data sources through the “use of
electronic data bases to prepopulate the tables...the AAMC has all of the student, faculty, and
financial data already.” Lastly, they advocated for the LCME and medical programs/schools to
“codify the process” and “make it electronic.”
For site visits, the following sub-themes were identified: duration of site visits, frequency
of visits and team conducting the visit. For the first sub-theme, participants suggested shorter
visits that should “be focused mainly on areas identified by the school as deficient or
questionable.” For frequency of visits, participants urged for flexibility based on the status of
accreditation of the undergraduate medical education programs. They stated that “the frequency
of site visits for good programs seem frequent.” They recommended some flexibility from the
LCME to “minimize frequency of accreditation visits” and limit them to “only when red flags
appear.” Participants suggested having “a different process, such as an on-line monitoring
program, that could substitute for some of the functions that are currently taking place that would
be less expensive,” “not consume one large visit” and “occur longitudinally as not to disrupt
academic operations” or LCME from focusing on assisting programs that need “help and
revision.” Furthermore, some participants recommended that the LCME should “decrease the
number of people doing site visits, saving large expenses” and for consistency and efficiency.
Others demanded that teams should be “vetted to ensure there are no potential conflicts of
interest or personalities that overshadow the process.” In brief, participants expressed the need
for the LCME to provide “qualified reviewers without an agenda” for site visits to their
programs.
For remediation, a participant called for focus “mainly on areas identified by the school
as deficient or questionable.” Finally, for the accreditation cycle theme, participants suggested
“longer accreditation cycle” than seven years. A participant stated: “extend to 10 years the period
between accreditations.”
As for the second category of recommended change, the outcome of accreditation,
several participants advised the LCME to “Focus” and “work on outcomes,” defined by a
participant as “performance at the next stage” rather than “process,” “numbers per se” and
“adherence to standards that are at times meaningless/arbitrary.” Within this category, two
themes were identified: Outcomes related to medical education and medical students. For the
first theme, participants demanded that the LCME “stop looking at factors that have almost no
value in actual medical education,” but rather “look at the important things that produce good
medical education” and “outcomes of that education” and implement “more metrics for success”
that are “automatically collected” such as the student survey outcomes data. One participant even
believed that “the LCME seems to have an agenda that they are pushing that [supersedes]
educational experience.” He added that “duty hours and core competency in a surgical specialty”
are “impediments to education rather than aids.” For the second theme, outcomes related to
medical students, participants advised the LCME to have “less emphasis on student
satisfaction/mistreatment” and more emphasis on evaluating “student performance.” Others
added that “focusing on student satisfaction (by the LCME) leads to creation of a new
bureaucracy that does not value course content and that costs time and resources.”
The third category identified, the current approach to accreditation by the LCME was
perceived negatively by participants. Utilizing content analysis, five themes of recommendations
by participants to increase benefits and decrease cost were recognized. They are: provision of
proper guidance, provision of clear, concise, consistent, relevant, objective and evidence-based
information, flexibility, efficiency and promotion of innovations.
For the first theme, provision of proper guidance, participants strongly recommended that
“intimidation should be removed from the program.” They advised the accrediting agency to
“decrease the ‘gotcha’ nature of recent accreditation visits,” the “judgmental attitude and treating
everything being done as not enough - or the ‘wrong thing’ [as] if they are looking for certain
buzz words as proof [that] something isn't going as it should” and “as if this is some kind of
surprise inspection approached with the apparent assumption that everyone is failing in their
efforts, and they (site-visit team members) are simply looking for the proof.” According to a
participant who endured several site visits and hoped to retire before the next one, the approach
of the LCME to accreditation is “simply the academic medicine version of a ‘witch hunt’.”
Rather, the participants envisioned accreditation “like a consultation process, mutually agreed
upon by the parties, to benefit the institution.” They demanded “focus (by the LCME) on
assisting an institution with what they view as issues that need to be addressed with advice and
resources” and creating “quality programs.” Participants added that the agency needs to
“decrease the negative impact on the institution. Provide more substantive guidance rather than
the vagueness which sets anxiety. Work as partners with institutions.” Others commented: “make
it more collegial,” “recognize and appreciate our unique contribution to medical education” and
“make it an improvement process.”
The second theme, provision of clear, consistent, relevant, objective and evidence-based
information was identified by the participants of this study. They mentioned “if the outcome is
depended (dependent) on how I think we're doing, then of course I think we are doing great.
This is not the appropriate way to evaluate for accreditation.” Others added “I see little in terms
of true outcomes research supporting the importance of all of the requirements” and asked the
accrediting agency to “produce a more consistent interpretation of the standards and evidence to
achieve the standards.” Participants demanded from the accreditor to “be more clear and direct in
describing what the LCME wants.” They added that “many of the items dictated by the LCME to
our medical school were really not problems and there was no evidence provided that the LCME
suggestions would improve medical school education or would improve better educated
graduates from our school. The LCME seems to not believe in ‘evidence-based’ education.
Rather, it is what is the latest fad and fluff.” Other participants stated: “the recommendations by
LCME for accreditation amount to window dressing,” “many of the rubrics used are subjective
and have no real value” and concluded that the LCME needed to “[eliminate] the subjective
nature of the review.”
The third theme identified through content analysis was: flexibility. Recommendations
from participants encompassed leaving “the self-assessment to the institutions” not the accreditor
and stopping the LCME from “trying to extend the process into every aspect of running a
medical school and ‘penalizing’ institutions who choose to dismiss poorly performing students.”
After all, participants added that “this process [of accreditation] should look at the important
things that produce good medical education and look at outcomes of that education.”
Efficiency was recognized as yet another theme through text analysis of the participants’
responses, and two sub-themes emerged: internal (program) and external (the LCME). For
internal efficiency, participants advised that programs should “recruit/assign the best experienced
[and] most qualified team to supervise accreditation” and “[lose] the consultants” to “decrease
time, effort and cost.” For external efficiency, participants stressed the need to “cut the process
by 90%.” They observed a system that is “intrusive” and a process that is “too much
bureaucratic” at the LCME where “the number of regulations and ‘musts’ has grown
exponentially.” Participants advocated for “less intrusiveness” by and “less bureaucracy” at the
accrediting agency. They called upon the LCME to “simplify the process significantly,”
“streamline it,” “limit the areas reviewed,” have “less standards,” make regulations and musts
“more stream-lined and consistent” and “increase/more relevancy” “to future growth in
medicine.” They emphasized the need to “decrease the amount of data/work to enter needed,”
“codify the process,” “generate [an] institutional memory in the form of a formal timeline,”
“make it electronic” in order to decrease the “costs in terms of faculty time taken from other
important tasks.” These recommendations will “decrease time, effort and cost.”
Lastly, embracing innovation was identified through content analysis of responses as the
fifth theme of the approach to accreditation category. Some participants perceived this theme
negatively and demanded to “stop LCME from ‘encouraging innovations’ that ignore the
importance of students achieving a good score on the USMLE in order to gain a residency slot.”
Others requested the accrediting agency to “celebrate innovation,” “share the experiences of
accreditation visits not only in rewriting standards but [also offering] recommendations for best
practice[s]” and “what is appropriate for modern day teaching practices.” Furthermore, they
advised the LCME to instate an “on-line monitoring program for good programs” as well as an
“online FAQs [frequently asked questions] for administrators to reference.” They added that the
LCME’s “connections publications was a big improvement, but more is needed.”
In summary, whether it is the approach, process or outcome of accreditation with their
various themes and sub-themes, participants assessed the situation as follows: “medical schools
and universities are caught between a rock and a hard place, accreditation is essential, so you
must comply regardless of cost” and despite the great toll on medical education, outcomes and progress.
Chapter 5
Summary and Recommendations
At a time of limited resources, increased academic costs and competition and strong
globalization trends, undergraduate medical education programs, a significant component of
higher education in the US, face heightened scrutiny for a better return on investment while
maintaining high quality of medical education, graduates and services to the public. Hence, an
examination of specialized accreditation of these programs in terms of value through the lens of
recipients will greatly enhance our understanding of the process. Furthermore, it will provide us
with opportunities for an open dialogue between the accrediting agency and the recipients of
its services, increasing the benefits, decreasing the costs of such accreditation and maximizing
our academic, social and financial gains. This chapter will offer a summary of the study, its
purpose and research questions. It will apply the findings discussed in the previous chapter to
offer evidence-based recommendations for improving the approach, process and outcomes of
accreditation and end with a short conclusion.
Summary of Study
This study assessed the value of accreditation of undergraduate medical education
programs through two lenses, perceived benefits and costs, from the perspective of internal
stakeholders of the programs. To this end, the research questions were:
1. What are the perceived benefits of accreditation of undergraduate medical education
programs in the US?
2. What are the perceived costs of accreditation of undergraduate medical education
programs in the US?
3. What are the perceived benefits vs. perceived costs of accreditation of undergraduate
medical education programs in the US?
The measurement tool, an online survey with quantitative (closed-ended/multiple choice)
and qualitative (open-ended) questions (Appendix D), was distributed to 1,096 department chairs
and others. Others included: lead administrators (assistant/associate deans, directors, ALOs,
board member, research provost and curriculum committee chairmen) and faculty members. The
target sample consisted of: 1. a primary random cluster sample of department
chairs/assistants/associates in all 126 fully-accredited undergraduate medical education programs
leading to the MD degree in the US and 2. a secondary random cluster sample of others in a
random sample of the aforementioned programs. A total of 116 responses were received
electronically; however, only 87 responses were usable, placing the response rate at approximately 8%.
Of the study participants, 74% were department chairs/assistants/associates. By
number of positions held, approximately 77% held one position while 23% held more than one
position at the time of the study. Approximately 51% of participants worked in public vs. 49%
in private undergraduate medical education programs in the US. By number of graduates (MDs)
of last academic year, approximately 78% of participants worked in programs of “more than 100
students” vs. 22% in programs of “100 students or less.” By region, participants worked in
programs in all regions across the continental US in the following proportions: 31% Northeastern
region, 30% Southern area (Southeastern and Southern regions), 22% Central region and 16% in
the Western region. Finally, approximately 72% of respondents had participated in the accreditation
process of their four-year undergraduate medical education programs.
All other results of the study are organized by the aforementioned research questions.
The quantitative data will be presented and discussed first, followed by the qualitative data. This
mixed-method approach will ensure breadth and depth of the research findings.
Research Question 1: Perceived Benefits of Accreditation
The first research question sought to confirm, identify, rank and rate the perceived
benefits of accreditation of undergraduate medical education programs in the US. Identification
of the benefits of accreditation was accomplished through literature review as well as responses to an
open-ended question in the survey (Appendix D). There were 11 benefits identified through literature
review. They included: improved overall quality of your medical education program, provision
of a structured mechanism to assess your program, a stimulus for program improvement,
provision of a way to benchmark your program with other programs, improved access to
monetary resources, improved recognition and ranking, the ability to recruit/retain quality
faculty, improved faculty experience (e.g., teaching practices, curriculum development), the ability
to recruit/retain quality students, enhanced student experience (e.g., quality of education and
leadership) and improved student learning outcomes (SLOs) (e.g., graduation rates, licensure,
internship and employment opportunities). Additional benefits identified by the participants were:
self-analysis, medical education and cultural change.
By rank, participants perceived the provision of a structured mechanism to assess their
programs as the highest benefit of accreditation followed by the role of accreditation as a
stimulus for improvement of their programs. Participants selected two benefits, improved overall
quality and a way to benchmark their programs, as the third ranked benefit of accreditation. All
ranked benefits chosen by the participants were identified via literature review.
Rated based on responses to one direct Likert-scale question in Section A of the survey
(Appendix D), approximately 40% of the participants perceived the total overall level of benefits
(total benefits) as “low benefit” as compared to 35% for “moderate benefit”, 18% “high benefit”
and 7% “no benefit”. Further statistical analyses for combined responses to the 11 questions on
benefits indicated that approximately 81% of participants perceived total benefit of accreditation
to be “no” to “low” benefit as opposed to approximately 20% of participants who rated it as
“moderate” to “high” benefits. Because the boundaries between “no”, “low” and “moderate”
levels of benefit can be subjective and poorly demarcated, “high” benefit was also treated as a
distinctive category: the mean value score for responses to all 11 questions was divided into two
categories, “moderate” benefit or less (mean value score or less) and “high” benefit (more than
the mean value score). On this basis, approximately 55% of participants perceived the benefits of
accreditation as “moderate” or less and 48% as “high” benefit. While the study
sample was too small for statistical significance and the ratio between the combined levels of
total benefits fluctuated across tests, the fact remained that accreditation of undergraduate
medical education programs was perceived by the participants to be more toward the “lower”
end of the scale.
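The mean-split dichotomization described above can be sketched in a few lines of illustrative Python. The data below are hypothetical and are not from the study's survey; the sketch only shows the mechanics of splitting respondents at the sample mean:

```python
# Hypothetical 4-point Likert responses (1 = no benefit ... 4 = high benefit)
# for the 11 benefit items, one row per respondent (made-up data).
responses = [
    [2, 1, 2, 3, 2, 1, 2, 2, 3, 2, 1],
    [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3],
    [1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1],
]

# Each respondent's benefit score is the mean of their 11 item ratings.
benefit_scores = [sum(r) / len(r) for r in responses]

# The cut point is the mean of those scores across the whole sample.
cut = sum(benefit_scores) / len(benefit_scores)

# "High" benefit only when the score exceeds the sample mean;
# everything else falls into "moderate" benefit or less.
labels = ["high" if s > cut else "moderate or less" for s in benefit_scores]
print(labels)  # → ['moderate or less', 'high', 'moderate or less']
```

The same mean-split logic applies to the five cost items discussed later in this chapter; only the item list and labels change.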
Total overall level of benefits of accreditation of undergraduate medical education
programs in the continental US was correlated with several independent variables via chi square
(p value) for statistical significance and univariate and multiple logistic regression analyses for
the magnitude of effect (odds ratio). These variables included: participation in the accreditation
process, position of participants, region of program, type of program and number of graduates
(MDs) in the last academic year.
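For the univariate case, the odds ratios reported by such an analysis can be illustrated directly from a 2×2 contingency table, since the cross-product ratio of the table equals the odds ratio from a univariate logistic regression with a single binary predictor; the “% more likely” phrasing used in this chapter corresponds to (OR − 1) × 100 (e.g., an OR of 2.41 is about 141% more likely). The counts below are made up for illustration and are not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table:
                 outcome+   outcome-
    exposed         a          b
    unexposed       c          d
    """
    return (a * d) / (b * c)

def pct_more_likely(or_value):
    """Translate an odds ratio into the '% more likely' phrasing."""
    return (or_value - 1) * 100

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the same table
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rating accreditation a "high" benefit,
# non-participants (exposed) vs participants (unexposed).
a, b = 12, 10   # did not participate: high / not high
c, d = 15, 30   # participated:        high / not high

or_val = odds_ratio(a, b, c, d)
print(round(or_val, 2), round(pct_more_likely(or_val)))  # → 2.4 140
```

The multivariable odds ratios in the text additionally adjust for the other covariates, which this single-table sketch cannot capture; it only illustrates how an odds ratio maps onto the percentage interpretation used below.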
For the magnitude of effect (odds ratio) at a confidence limit of 95%, the mean value
score of benefits (benefit score) had a high correlation with some of the five independent
variables (participation in the accreditation process, position of participants, region of program,
type of program and number of graduates (MDs) in the last academic year). By “participation”,
people who did not participate in the accreditation process were approximately 2.4 times as
likely (141% more likely) to perceive accreditation as a “high” benefit (more than the mean
score) as people who participated in the process, after adjusting for all other variables in the
model. By “program type”, participants who worked in private programs were approximately 2.3
times as likely (130% more likely) to report “high” benefit as participants who worked in public
programs. By “region”,
participants who worked in programs located in the Northeastern region were approximately
28% less likely to report “high benefit” than participants who worked in other regions. Similarly,
participants who held “more than one position” were approximately 19% less likely to report
“high” benefit than participants who held only “one position”. Finally, participants working in
programs of “100 graduates or less” were approximately 10% less likely to report a “high”
benefit score of accreditation.
Research Question 2: Perceived Costs of Accreditation
The second research question sought to confirm, identify, rank and rate the perceived
costs of accreditation of undergraduate medical education programs in the US. Identification of
the costs of accreditation was accomplished through literature review as well as responses to an open-ended
question in the survey (Appendix D). A total of five costs were identified by the researcher
through literature review. They included: direct financial/monetary expenses spent to achieve
accreditation of program, total amount of time spent by faculty/administrators/staff to achieve
accreditation of program, total amount of effort spent by faculty/administrators/staff to achieve
accreditation of program, impact on morale: resistance of faculty/administrators/staff to the
accreditation process and impact on academic freedom. Additional costs identified by the
participants were: change, diversion, restriction/loss of innovation, lack of knowledge and lack
of reward.
By rank, participants perceived the “total amount of time spent” to be the “highest” cost
of accreditation of their programs. The “total amount of effort” was perceived by the participants
as the “second” and “third highest” cost of accreditation. All ranked costs chosen by the
participants were identified by the researcher via literature review.
Rated based on responses to one direct Likert-scale question in Section A of the survey
(Appendix D), approximately 57% of participants perceived the “total cost” (total overall cost) of
accreditation to be “moderate” and 39% of participants rated it as “high”. By contrast, only 2%
of participants perceived “total cost” as “low” and 1% of participants rated their accreditation as
“no” cost. Combined, then, 96% of participants perceived cost as “moderate” to “high” while
only 4% of participants perceived cost as “no” to “low”. Further statistical analyses indicated that
“personnel cost score” (total amount of time and total amount of effort spent by
faculty/administrators/staff) was rated as “moderate” to “high” cost to programs by 96% of
participants as compared to “no” to “low” cost by approximately 4% of participants. Such
congruence was not observed for “program-related factors” (direct financial/monetary expenses to
achieve accreditation of the program, impact on morale and impact on academic freedom).
Because the boundaries between “no”, “low” and “moderate” levels of cost can be
subjective and poorly demarcated, “high” cost was also treated as a distinctive category: the mean
value score for responses to all five questions was divided into two categories, “low” cost (mean
value score or less) and “high” cost (more than the mean value score). On this basis, results
indicated that approximately 60% of participants perceived the costs of accreditation as “high”
and 40% of participants as “low” cost. While the study sample was too small for statistical
significance and the ratio between the combined levels of total costs fluctuated across tests, the fact
remained that accreditation of undergraduate medical education programs was perceived by the
participants to be more towards the “higher” end of the scale.
Total overall level of costs of accreditation of undergraduate medical education programs
in the continental US was correlated with several independent variables via chi square (p value)
for statistical significance and univariate and multiple logistic regression analyses for the
magnitude of effect (odds ratio). These variables included: participation in the accreditation
process, position of participants, region of program, type of program and number of graduates
(MDs) in the last academic year. No statistically significant correlations were found.
For the magnitude of effect (odds ratio) at a confidence limit of 95%, the “cost score”
(mean value score of costs) had a high correlation with some of the five independent variables
(participation in the accreditation process, position of participants, region of program, type of
program and number of graduates (MDs) in the last academic year). For each variable, the
highest value for the variable will be presented. By “program type”, participants who worked in
“private” programs were approximately 2.2 times as likely (123% more likely) to report “low” cost
as participants who worked in “public” programs. By “region”, participants who worked in
programs located in the Northeastern region were approximately 1.6 times as likely (62% more
likely) to report “low” cost as participants who worked in other regions. By “number of positions”,
participants who held “more than one position” were approximately 1.4 times as likely (46% more
likely) to report “low” cost as participants who held only “one position”. For “participation”, people
who “did not participate” in the accreditation process were 20% less likely to report “low” cost
than people who participated in the process after adjusting for all other variables in the model.
Finally, participants working in programs of “100 graduates or less” were approximately 29% less
likely to report a “low” cost score of accreditation.
Research Question 3: Perceived Benefits vs. Costs of Accreditation
The third research question sought to rate perceived benefits as compared to perceived
costs of accreditation amongst department chairs and others in the fully-accredited undergraduate
medical education programs in the US. Responses were solicited through a quantitative multiple
choice question in Section C of the survey that afforded the participants one choice among three
options, benefits exceed costs, benefits equal costs and costs exceed benefits (Appendix D).
Rated based on direct responses to the above question, approximately 44% of the
participants perceived that “total cost” exceeded “total benefits” and 35% stated that “total
benefits” were equal to “total cost” (worth it). Approximately 21% perceived that “total benefits”
exceeded “total cost”.
Additional statistical analyses indicated a statistically-significant (p = 0.001) correlation
between “overall benefit score” (mean value score or less) and “cost-benefit”. Approximately,
78% of participants who perceived “overall benefit score” as “low” also believed that “costs
exceeded benefits” while 59% of participants who believed that “benefits were equal to or
exceeded costs” rated “overall benefit score” as “high”.
In addition, univariate and multiple logistic regression analyses indicated a moderate
magnitude of effect (odds ratio) between cost-benefit and some of the five independent variables
(participation, position of participants, type of program, region and number of student
graduates/MDs in the last academic year). Results indicated that respondents
who “did not participate” in the accreditation process were 50% less likely to perceive that
benefits “exceeded or were equal to” costs. By “number of graduates”, participants who were
employed in programs of “100 students or less” were approximately 20% less likely to perceive
that benefits were “equal to or exceeded” costs.
Finally, univariate and multiple logistic regression analyses indicated a high
magnitude of effect between “cost score”, “benefit score” and “cost-benefit”. For the “cost
score”, participants who perceived costs as “low” were 90% less likely to perceive benefits as
“equal or more than costs” than participants who rated costs as “high”. For the “benefit score”,
participants who perceived benefits as “low” were 85% less likely to perceive benefits as “equal
or more than” costs than participants who rated benefits as “high”.
Lastly, content/text analysis of qualitative data solicited through a direct open-ended
question to participants about improving the value of accreditation of their undergraduate
medical education programs generated several recommendations. These recommendations were
divided into three categories: process, outcome and approach. Further discussion of
recommendations will be offered in the next section.
Recommendations for Improving the Value of Accreditation of Undergraduate Medical
Education Programs in the US
It is imperative for any study or project that aims to improve outcomes to start with a
“needs assessment” of the target population that is directly or indirectly affiliated with, involved
in and affected by the project. Such an assessment will ensure that the “actual needs” rather than the
“assumed needs” of the internal stakeholders are addressed and the gap between them and the
accrediting agency is bridged. In this study, the recommendations voiced by the department
chairs and others in all fully-accredited undergraduate medical education programs in the US
clearly defined their “actual needs” and formed a “solid basis” and “framework” not only for this
section but for this entire study (Table 42). It certainly enhanced our evidence-based knowledge
about, assessment of and efficiency in accreditation of the aforementioned programs.
Table 42
Recommended Changes for Accreditation
Process: Improved data collection/self-analysis; Improved site visits; Effective remediation;
Prolonged accreditation cycle
Outcome: Improved outcomes related to medical education; Improved outcomes related to
medical students
Approach: Provision of proper guidance; Provision of clear information; Flexibility; Efficiency;
Fostering innovation
Continuity of Quality Control/Improvement. The recommendation from participants
to have some sort of continuity for the accreditation process can be addressed through instating a
new system or enforcing an already existing system of continuous monitoring of quality control
according to the standards of the LCME. This process can collect and provide relevant data and
guide the program in continuous improvement (Spangehl, 2012). With ongoing data entry and
updates, such a system will provide relevant information to the program and its internal
stakeholders about where they stand and what needs to be accomplished by the next accreditation
cycle. In addition, it will greatly alleviate the costs of accreditation in terms of time, effort, impact
on morale/resistance to accreditation, diversion from other duties as well as some financial
expenses. Furthermore, it can promote much-needed principles of governance. According to the
Higher Learning Commission (2002), institutions that “achieved a systematic approach to quality
improvement” (p. 2-4) were able to do so through upheld principles of “leadership”, “people”,
“focus”, “learning”, “agility”, “collaboration”, “involvement”, “foresight” and “information”. To
that end, implementation of a system of continuous monitoring of quality control and
improvement can address and resolve several concerns voiced by the participants in all three
categories; process, outcomes and approach (Table 50).
Fostering Innovation/Use of Technology. Technology can greatly enhance maintenance
of the system of continuous monitoring and improvement discussed earlier. Data collected and
gathered by such a system can decrease costs and increase benefits, not only internally in terms
of collaboration and team work amongst departments but also externally among hosting
academic institutions, other programs, the accrediting agency and the public. Furthermore,
access to diverse electronic sources of relevant information and data pools collected by other
agencies such as the AAMC can be easily incorporated to reduce redundancy and the duplication of
effort, time and funds. An efficient online system of information gathering, organization and
dissemination will undoubtedly carry a costly initial price tag, but that cost will be amortized
over time. It will increase consistency and efficiency in data collection, site visits and collaboration
and decrease monetary expenses, time, effort and other costs of accreditation.
Collaboration. According to Baron (2002), people can increase their outcome potential
by engaging in joint activities. Similarly, undergraduate medical education programs can
improve their outcomes through internal collaboration at all levels, vertically and horizontally, as
well as external collaboration across programs and with the accrediting agency. Best practices of
successful programs
can be shared with new and existing ones. Collaboration across disciplines and among different
quality control boards and agencies at the state and national levels can result in shared standards
and site visits that will greatly reduce the costs and enhance the benefits of accreditation at all
stages of accreditation. Such collaboration is especially valuable at a time of limited financial
resources, increased competition for available resources nationally and globally, and heightened
scrutiny and demand for accountability and return on investment (Madigan & Goodfellow, 2005;
Rosseter, 2005; Selingo, 2005; Unruh & Fottler, 2005).
Clarity of Information/Provision of Proper Guidance. Almost all participants voiced
their frustration with the current approach and process of the accrediting agency of their
programs. Hence, the LCME needs to conduct a self-analysis and reflect on its own practices in
order to move forward in accomplishing its mission more effectively and efficiently. To that end,
some of the recommendations offered by the target sample demanded the provision of: 1. clear,
concise and consistent information about the process, requirements and outcomes of
accreditation by the accreditor, 2. proper training for faculty members/administrators/staff to
raise awareness of and skills needed for accreditation, 3. proper guidance by the accreditor
throughout the process and 4. positive support in an atmosphere and culture of efficiency and respect
to/appreciation of the hard work of the internal stakeholders of the undergraduate medical
education programs. After all, the desire to improve needs to come from within and not be forced
upon programs/institutions. According to Lillis (2006) and Driscoll and Noriega (2006), the
forces motivating self-study are more effective when they coincide with the desire of the
institution to improve. Similarly, El-Khawas (2000) found that accreditation was not actually the
stimulus for change. Hence, the visiting team members need to fulfill a role of collegiality and
mentorship and be more supportive of the programs rather than exhibiting an intimidating,
authoritative and punitive ‘gotcha’ style.
Flexibility. For flexibility, the accrediting agency can streamline both the breadth and
depth of the data requested from and collected by the undergraduate medical education programs
for efficiency and cost reduction. Remediation can likewise be streamlined by limiting it to areas
that need to be brought up to par with the accreditor’s standards. For programs that have exhibited a
sustained level of success in maintaining their accreditation over an extended duration of time,
flexibility in the accrediting requirements and process can serve as a positive reinforcement for
serious commitment and diligence. For example, site visits can be grouped with other quality
control visits, shorter in duration, smaller in size and partially replaced by interim reports
submitted online. The accrediting agency can perhaps extend these programs an accreditation
cycle longer than seven years. These distinguished programs can also serve as role models and
“internal consultants” to other programs that need additional assistance and guidance through the
accreditation process. Finally, the accrediting agency should enhance its effort to foster
innovation by adopting technology. Moving forward, the LCME needs to explore the
possibility of report submission and meetings with the various programs via email and
videoconferencing.
Focus on Outcomes vs. Standards. Accreditation in undergraduate medical education
programs is highly focused on standards rather than outcomes as indicated by the participants. It
is the recommendation of the target sample and the researcher of this study that the accreditor and
programs shift focus to clear, concise and measurable outcomes that are objective and supported
by empirical research. In terms of outcomes related to medical education, implementation of
solid continuous mandatory training for faculty members in better teaching strategies will greatly
enhance the learning experiences and outcomes of students. Such opportunities can be greatly
enhanced through collaborating with existing medical education and higher education
departments in and across programs and participation in conferences such as “Innovations in
Medical Education/USC 2013” in person or online where great interaction and teaching/learning
experiences occur among colleagues. The medical programs can even host centers for academic
excellence in teaching/learning on their own or their hosting institutions’ campuses. Furthermore, medical
programs can instate measurable ‘metrics for success’ that are collected regularly and
‘automatically’ similar to the student surveys. In terms of outcomes related to medical students,
it is imperative that attention and focus are shifted to student learning and performance
rather than student satisfaction. Participants of this study clearly voiced their fear that by
focusing on satisfaction, the accrediting agency “leads to creation of a new bureaucracy that does
not value course content” and wastes much-needed resources.
Leadership. In terms of leadership, the medical program needs to assemble the most
qualified team to address the entire project of accreditation. The team leader (or ALO) and
members should be highly knowledgeable, experienced with the process and motivated. They
need to possess exceptional abilities in organization, in processing large quantities of data, in
public relations (Faye, 2012) and in collaboration across all levels of the medical programs,
horizontally and vertically. Furthermore, the team needs to embody the “Stockdale Paradox”
(Collins, 2001), defined as the possession of ‘Unwavering faith’ that they ‘can and will prevail’
while being humble enough to accept “The most brutal facts of reality” (p. 1, 13). Such a team can
succeed in its mission with the political will, support and leadership granted at the highest levels
of academic governance: the dean of the program and the president of the hosting academic
institution. The effectiveness of the president is a key determinant of the levels of resources,
visibility and influence with students, faculty members, administrators, trustees, politicians and
the public (McGoey, 2005; Michael, Schwartz and Balraj, 2001), all of which are crucial for empowering
the accreditation team and ensuring the success of the accreditation process.
Finally, leadership on the side of the accrediting agency needs to pay special attention not
only to standards and visits but rather to other numerous issues and concerns voiced by the
participants. The accreditor needs to be a source of development for the recipients through
raising awareness and knowledge about accreditation, promoting consistent measurable objective
outcomes of academic medicine and students and fostering the autonomy of and innovation in
programs. In addition, the agency needs to be current in terms of information and technology that
will facilitate the process and decrease the costs of accreditation. Furthermore, the value of
accreditation will greatly benefit from bringing the lessons home to the accreditor. During
numerous discussions between the researcher and medical students, faculty members, department
chairs and others both inside the US and internationally, it was apparent that other
approval/accrediting systems and agencies had great success in addressing the issue of
accreditation. One of these examples included the accrediting agency of graduate medical
education in the US. Perhaps then it is time for the LCME to be “Institutionally-responsive”
(Michael et al., p.54) in sensing and serving the changing needs of its constituents and establish
formal channels of communication and collaboration with the aforementioned agencies for the
sake of increasing the value of accreditation of undergraduate medical education programs in the
US.
Future Research. Although specialized accreditation in medicine led the way and
“maintained a status level that all occupations covet” (Folden, 1980, p. 37) when it
came to quality control in education, well-published peer-reviewed empirical research on the
subject matter remains scarce (Shibley & Volkwein, 2002; Reidlinger & Prager, 1993). The
need for such research is further intensified by new forces (globalization, massification and student
mobility across international borders) that have evolved in recent times. Hence, the participants of this
study as well as the researcher voiced strong recommendations for promoting additional research
on the process, outcomes and approach of accreditation of medical programs in the US and
138
abroad. Future research can re-attempt to assess the ‘actual’ costs of accreditation for
undergraduate medical education programs (Section D) of the survey of this study (Appendix D).
Research can further investigate how to increase the value of accreditation at each category of
internal stakeholders as well as external ones via interviews and focus groups. Such qualitative
data will add great depth to assessing the knowledge about accreditation, addressing the
obstacles/costs and enhancing the benefits of accreditation. Such assessment can also be looked
upon through the lens of gender of participants in order to shed light on differences in
perceptions about accreditation and its value across the gender divide. In addition, the results can
provide a specific ‘Job description guidelines’ for hiring efficient ALOs (Or consultants on an
extended basis) that can alleviate the high level of anxiety, stress and diversion and other costs of
accreditation experienced by internal stakeholders of the medical programs. Additional areas of
future research can focus on how to make the process of accreditation more efficient in terms of
use of technology, time, effort, financial expenses and positive interaction (respect, trust and
cooperation) within/between programs and the accrediting agency. Furthermore, additional
research can encompass investigations of areas of overlap among various processes and
standards of quality/approval bodies of medical programs and other disciplines nationally in
order to decrease the redundancy of information and costs of accreditation. Finally, future
research should grant special attention to international accrediting agencies and systems and
explore areas of collaboration and standardization in light of the current wave of globalization
and cross-border mobility of medical students and graduates. Based on attendance/observations
at international higher education quality control conferences, the researcher strongly
recommends that such exploration and collaboration should ensue in an atmosphere of mutual
139
respect and openness to ‘actual needs’ of other medical programs and accreditation systems
around the globe.
Conclusion
In conclusion, as this journey of leadership and striving for excellence in the medical arena
continues, accreditation needs to continue to evolve in light of challenges, some of which are old
and familiar, and others that are new and foreign. Whether the driver is dwindling financial resources or
globalization, these factors will continue to renew the call by external as well as internal
stakeholders for heightened accountability, improved transparency, and a better return on investment
in academe. The need for accreditation to rise to another level in the face of old and modern-day
challenges is further exacerbated by the "inch deep" (Ikenberry, 2009, p. 4) knowledge of and
support for accreditation that exists among various stakeholders. This study offered further
insight into the value of accreditation and extended solid recommendations to bridge the existing
gap, increase the benefits, and decrease the costs of accreditation in undergraduate medical
education programs in the US.
References
Association of American Medical Colleges/AAMC, (2011a). Directory of American Medical
Education 2011.
AAMC, (2012b). Data Book: Medical Schools and Teaching Hospitals by the Numbers.
AAMC, (2011c). Report on Medical School Faculty Salaries 2010-2011.
AAMC, (2011d). Number of US Medical Schools by Region. Retrieved from
https://www.aamc.org/download/251568/data/numberofschoolsbyregion.pdf
AAMC, (2011e). Table 9A: 2011 Benchmarking: Permanent Division/Section Chiefs and
Department Chairs. Retrieved from
https://www.aamc.org/download/305536/data/2012_table9a.pdf
AAMC, (2012f). Faculty Roster. Table 11: Distribution of Chairs by Departments, Gender and
Race. Retrieved from https://www.aamc.org/download/305548/data/2012_table11.pdf
Altbach, P. G., & Knight, J. (2009). The internationalization of higher education: Motivations
and realities. Journal of Studies in International Education, 11(3/4), 290-305.
Altbach, P. G., Reisberg, L., & Rubley, L. E. (2009). Trends in global higher education:
Tracking an academic revolution. Paris: UNESCO.
Altbach, P. G., Reisberg, L., & Rubley, L. E. (2010). Tracking a global academic revolution.
Change, 42(2), 30-39.
Astin, A. W., Keup, J. R., & Lindholm, J. A. (2002). A decade of changes in undergraduate
education: A national study of system transformation. The Review of Higher Education,
25(2), 141-162.
American Medical Association/AMA, (2012a). The Founding of the AMA. Retrieved from
http://www.ama-assn.org/ama/pub/about-ama/our-history/the-founding-of-ama.page?
American Medical Association/AMA, (2012b). Our Mission. Retrieved from
http://www.ama-assn.org/ama/pub/about-ama/our-mission.page?
American Medical Association/AMA, (2012c). 2011 Annual Report. Retrieved from
http://www.ama-assn.org/resources/doc/about-ama/2011-annual-report.pdf
Armstrong, L. (2007). Competing in the global higher education marketplace: Outsourcing,
twinning and franchising. New Directions for Higher Education, 140, 131-138.
Baron, R. (2002). Exchange and development: A dynamical, complex systems
perspective. New Directions for Child and Adolescent Development, 95, 53-71.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are?
New Directions for Higher Education, 145, 7-27. doi:10.1002/he.331
Chapman, C. B. (1974). “The Flexner Report” by Abraham Flexner. Daedalus, 103(1), 105-117.
Retrieved from http://www.jstor.org/stable/20024193
Council for Higher Education Accreditation/CHEA, (2012a). Informing the Public about
Accreditation. Retrieved from http://chea.org/public_info/index.asp
Council for Higher Education Accreditation/CHEA, (2012b). Important Questions about
Accreditation, Degree Mills, and Accreditation Mills. Retrieved from
http://chea.org/degreemills/default.htm
Council for Higher Education Accreditation /CHEA, (2012c). Degree Mills: An Old Problem
and a New Threat. Retrieved from http://chea.org/degreemills/frmPaper.htm
Council for Higher Education Accreditation/CHEA, (2012d). Accreditation Serving the Public
Interest. Retrieved from http://chea.org/pdf/chea_glance_2006.pdf
Council for Higher Education Accreditation /CHEA, (2012e). Recognition of Accrediting
Organizations Policy and Procedures. Retrieved from
http://chea.org/pdf/Recognition_Policy-June_28_2010-FINAL.pdf
Council for Higher Education Accreditation /CHEA, (2012f). Accreditation and Recognition in
the US. Retrieved from http://chea.org/pdf/AccredRecogUS_2011.12.05.pdf
Council for Higher Education Accreditation/CHEA, (2012g). An Overview of US Accreditation.
Retrieved from
http://www.chea.org/pdf/Overview%20of%20US%20Accreditation%2003.2011.pdf
Driscoll, A., & De Noriega, D.C. (2006). Taking ownership of accreditation: Assessment
processes that promote institutional improvement and faculty engagement. Sterling, VA:
Stylus Publishing, L.L.C.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education,
145, 79-86. doi:10.1002/he.337.
Eaton, J. S. (2011). An overview of U.S. accreditation. Washington, D.C.: Council for Higher
Education Accreditation. Retrieved from
http://www.chea.org/pdf/Overview%20of%20US%20Accreditation%2003.2011.pdf
El-Khawas, E. (1993). External Scrutiny, US Style: Multiple actors, overlapping roles. In T.
Becker (Ed.), Governments and Professional Education. UK: SRHE/Open University
Press.
El-Khawas, E. (2000). The impetus for organizational change: An exploration. Tertiary
Education and Management, 6, 37-46.
Ewell, P. T. (2008). U. S. Accreditation and the future of Quality Assurance: A Tenth
Anniversary Report from the Council for Higher Education Accreditation. Washington,
D.C.: Council for Higher Education Accreditation.
Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and Policy
Analysis, 2(2), 35-46. Retrieved from http://www.jstor.org/stable/1163932
Freitas, F. A. (2007). Cost-benefit analysis of professional accreditation: A national study of
baccalaureate nursing programs (Doctoral dissertation). Retrieved from ProQuest
Digital Dissertations. (AAT 3279200)
Harcleroad, F. F. (1980). Accreditation: History, process, and problems. Washington, D.C:
American Association for Higher Education.
Ikenberry, S.O. (2009). Where do we take accreditation? Washington, DC: Council for Higher
Education Accreditation.
Lederman, D., & Redden, E. (2007). Accountability and comparability. Inside Higher
Education. Retrieved from www.insidehighereducation.com/news/2007/01/31/compare
Liaison Committee on Medical Education/LCME, (2012a). Functions and Structure of a
Medical School. Retrieved from http://www.lcme.org/functions.pdf
Liaison Committee on Medical Education/LCME, (2012b). Overview: Accreditation and the
LCME. Retrieved from http://www.lcme.org/overview.htm
Liaison Committee on Medical Education/LCME, (2012c). Rules of Procedures. Retrieved from
http://www.lcme.org/rules_of_procedure.pdf
Liaison Committee on Medical Education/LCME, (2012d). Directory of Accredited Medical
Education Programs. Retrieved from http://www.lcme.org/directry.htm
Liaison Committee on Medical Education/LCME, (2012e). Frequently Asked Questions, Why
There Are Two LCME Offices? Which One Should I Contact? Retrieved from
http://www.lcme.org/faqlcme.htm
Liaison Committee on Medical Education/LCME, (2012f). Accreditation Procedures. Retrieved
from http://www.lcme.org/procedur.htm
Liaison Committee on Medical Education/LCME, (2012g). LCME Members. Retrieved
from http://www.lcme.org/members.htm
Liaison Committee on Medical Education/LCME, (2012h). Role of Students. Retrieved
from http://www.lcme.org/roleofstudents2011.pdf
Liaison Committee on Medical Education/LCME, (2012i). Connections. Retrieved
from http://www.lcme.org/connections.htm
Liaison Committee on Medical Education/LCME, (2012j). Submission Requirements. Retrieved
from http://www.lcme.org/submission_requirements.htm
Liaison Committee on Medical Education/LCME, (2012k). Contacts. Retrieved
from http://www.lcme.org/contacts.htm
Liaison Committee on Medical Education/LCME, (2012l). Publications. Retrieved
from http://www.lcme.org/pubs.htm
Liaison Committee on Medical Education/LCME, (2012m). LCME Accreditation Guidelines
for New and Developing Medical Schools. Retrieved from
http://www.lcme.org/newschoolguide.pdf
Lillis, D. (2006). Bar raising or navel-gazing? : The effectiveness of self-study programmes in
leading to improvements in institutional performance. Paper presented at the 2006
conference of the Dublin Institute of Technology. Retrieved from
http://arrow.dit.ie/scschcomcon/41
Lucas, C. J. (2006). American Higher Education: A History. New York: Palgrave Macmillan.
Madigan, E., & Goodfellow, M. (2005). The influence of family income and parents
education on digital access: Implications for first-year college students.
Sociological Viewpoints, Pennsylvania Sociological Society, 53-62.
Malandra, G. H. (2008). Accountability and learning assessment in the future of higher
education. On the Horizon. 16(2): 57-71.
Martin, J. C. (1994). Recent developments concerning accrediting agencies in postsecondary
education. Law and Contemporary Problems, 57(4), 121-149. Retrieved from
http://www.jstor.org/stable/1192059
McGoey, S. (2005). A comparison of institutional stakeholders’ perceptions of presidential
effectiveness. Doctoral dissertation, Kent State University.
Michael, S. O., Schwartz, M., & Balraj, L. (2001). Indicators of presidential effectiveness:
A study of trustees on higher education institutions. The International Journal of
Educational Management, 15(7), 332-346.
Michael, S. O., Holdaway, E., & Young, C. (1994). Institutional responsiveness: A study
of administrators’ perceptions. Educational Management and Administration,
22(1), 54-62.
National Center for Public Policy and Higher Education, 2004. Measuring up 2004: The National
Report Card on Higher Education, National Center for Public Policy and Higher
Education, San Jose, CA.
Orcher, L. T. (2007). Conducting a Survey: Techniques for a Term Project. Glendale, California:
Pyrczak Publishing.
Project on Student Debt, (2012a). Keeping College Within Reach. Retrieved from
http://projectonstudentdebt.org/
Project on Student Debt, (2012b). Poll: Young Adults Say Higher Education is More Important
but Less Affordable. Retrieved from
http://projectonstudentdebt.org/pub_view.php?idx=793
Project on Student Debt, (2012c). Student Debt and the Class of 2010. Retrieved from
http://projectonstudentdebt.org/pub_view.php?idx=791
Qiang, Z. (2003). Internationalization of higher education: Towards a conceptual framework.
Policy Futures in Education, 1(2), 248-270.
Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the U.S.
Quality in Higher Education, 2(1), 5-19. doi:10.1080/1353832960020102
Reidlinger, C.R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New Directions
for Community Colleges, 83, 39-47.
Rosseter, R. (2005). With enrollments rising for the 5th consecutive year, U.S. schools
turn away more than 30,000 qualified applications in 2005. AACN Press Release.
American Association of Colleges of Nursing. Retrieved July 18, 2006, from
http://www.aacn.nche.edu/Media/News/releases/2005/enrl105.htm
Salkind, N. J. Statistics for people who (think they) hate statistics (3rd ed.). Thousand Oaks,
CA: Sage Publications.
Schirlo, C., & Heusser, R. (2010). Quality assurance of medical education: A case study from
Switzerland. GMS Z Medical Ausbild. 27(2): Doc24. Doi: 10.3205/zma000661
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of
re-accreditation processes. Paper presented at the annual meeting of the Association for
Institutional Research, Toronto, Ontario, Canada.
Spangehl, S. D. (2012). AQIP and accreditation: Improving quality and performance. Planning
for Higher Education, 40(3), 6-7.
U.S. Department of Education/USDE, (2012a). Degree mills and accreditation-Accreditation.
Retrieved from
http://www2.ed.gov/students/prep/college/diplomamills/accreditation.html
U.S. Department of Education/USDE, (2012b). Degree mills and accreditation-diploma mills.
Retrieved from http://www2.ed.gov/students/prep/college/diplomamills/diploma-
mills.html
U.S. Department of Education/USDE, (2012c). A Test of Leadership: Charting the Future of
U.S. Higher Education. Washington, D.C.: U.S. Department of Education. Retrieved
from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
Unruh, L., & Fottler, M. (2005). Projections and trends in RN supply: What do they tell
us about the nursing shortage? Policy, Politics & Nursing Practice, 6(3), 171-182.
Woolston, P. J. (2012). The costs of institutional accreditation: A study of indirect and direct
costs. (Unpublished doctoral dissertation). University of Southern California, California.
Yuen, F. (2012). A cost benefit analysis of professional accreditation by the accreditation board
for engineering and technology (Unpublished doctoral dissertation). University of
Southern California, California.
Appendix A
Medical Education Programs Leading to the MD Degree Accredited by the LCME in the
US and its Territories
State/Province | Medical Education Program | City | Next Survey Date | Next Survey Type | Accreditation Status
Alabama | University of Alabama School of Medicine | Birmingham | 2013-2014 | Full Visit | Full Accreditation
Alabama | University of South Alabama College of Medicine | Mobile | Pending | — | Full Accreditation
Arizona | University of Arizona College of Medicine | Tucson | 2013-2014 | Full Visit | Full Accreditation
Arkansas | University of Arkansas for Medical Sciences College of Medicine | Little Rock | 2014-2015 | Full Visit | Full Accreditation
California | Loma Linda University School of Medicine | Loma Linda | 2015-2016 | Full Visit | Full Accreditation
California | Stanford University School of Medicine | Palo Alto | 2013-2014 | Full Visit | Full Accreditation
California | University of California - Davis School of Medicine | Davis | 2013-2014 | Full Visit | Full Accreditation
California | University of California - Irvine School of Medicine | Irvine | 2016-2017 | Full Visit | Full Accreditation
California | David Geffen School of Medicine at UCLA | Los Angeles | 2012-2013 | Full Visit | Full Accreditation
California | University of California - San Diego School of Medicine | San Diego | 2017-2018 | Full Visit | Full Accreditation
California | University of California - San Francisco School of Medicine | San Francisco | 2018-2019 | Full Visit | Full Accreditation
California | Keck School of Medicine of the University of Southern California | Los Angeles | 2017-2018 | Full Visit | Full Accreditation
Colorado | University of Colorado School of Medicine | Denver | 2016-2017 | Full Visit | Full Accreditation
Connecticut | University of Connecticut School of Medicine | Farmington | 2011-2012 | Interim Visit | Full Accreditation
Connecticut | Yale University School of Medicine | New Haven | 2015-2016 | Full Visit | Full Accreditation
District of Columbia | Georgetown University School of Medicine | Washington | 2018-2019 | Full Visit | Full Accreditation
District of Columbia | The George Washington University School of Medicine and Health Sciences | Washington | 2015-2016 | Full Visit | Full Accreditation
District of Columbia | Howard University College of Medicine | Washington | 2012-2013 | Interim Visit | Full Accreditation
Florida | The Florida State University College of Medicine | Tallahassee | 2018-2019 | Full Visit | Full Accreditation
Florida | University of Florida College of Medicine | Gainesville | 2014-2015 | Full Visit | Full Accreditation
Florida | University of Miami Miller School of Medicine | Miami | 2016-2017 | Full Visit | Full Accreditation
Florida | USF Health Morsani College of Medicine | Tampa | 2014-2015 | Full Visit | Full Accreditation
Florida | Charles E. Schmidt College of Medicine at Florida Atlantic University | Boca Raton | 2012-2013 | Provisional Visit | Preliminary Accreditation
Florida | Florida International University College of Medicine | Miami | 2012-2013 | Full Visit | Provisional Accreditation
Florida | University of Central Florida College of Medicine | Orlando | 2012-2013 | Full Visit | Provisional Accreditation
Georgia | Emory University School of Medicine | Atlanta | 2015-2016 | Full Visit | Full Accreditation
Georgia | Medical College of Georgia, Georgia Health Sciences University | Augusta | 2015-2016 | Full Visit | Full Accreditation
Georgia | Mercer University School of Medicine | Macon | 2012-2013 | Full Visit | Full Accreditation
Georgia | Morehouse School of Medicine | Atlanta | 2012-2013 | Full Visit | Full Accreditation
Hawaii | John A. Burns School of Medicine, University of Hawaii at Manoa | Honolulu | 2016-2017 | Full Visit | Full Accreditation
Illinois | Stritch School of Medicine, Loyola University of Chicago | Maywood | Pending | — | Full Accreditation
Illinois | The Feinberg School of Medicine, Northwestern University | Chicago | 2012-2013 | Full Visit | Full Accreditation
Illinois | Rush Medical College of Rush University Medical Center | Chicago | 2012-2013 | Full Visit | Full Accreditation
Illinois | Southern Illinois University School of Medicine | Springfield | 2014-2015 | Full Visit | Full Accreditation
Illinois | University of Chicago Division of the Biological Sciences, The Pritzker School of Medicine | Chicago | 2012-2013 | Full Visit | Full Accreditation
Illinois | Chicago Medical School at Rosalind Franklin University of Medicine and Science | North Chicago | Pending | — | Full Accreditation
Illinois | University of Illinois at Chicago College of Medicine | Chicago | 2017-2018 | Full Visit | Full Accreditation
Indiana | Indiana University School of Medicine | Indianapolis | Pending | — | Full Accreditation
Iowa | University of Iowa Roy J. and Lucille A. Carver College of Medicine | Iowa City | 2017-2018 | Full Visit | Full Accreditation
Kansas | University of Kansas School of Medicine | Kansas City | 2013-2014 | Full Visit | Full Accreditation
Kentucky | University of Kentucky College of Medicine | Lexington | 2018-2019 | Full Visit | Full Accreditation
Kentucky | University of Louisville School of Medicine | Louisville | 2012-2013 | Full Visit | Full Accreditation
Louisiana | Louisiana State University School of Medicine in New Orleans | New Orleans | 2017-2018 | Full Visit | Full Accreditation
Louisiana | Louisiana State University School of Medicine in Shreveport | Shreveport | 2013-2014 | Full Visit | Full Accreditation
Louisiana | Tulane University School of Medicine | New Orleans | 2018-2019 | Full Visit | Full Accreditation
Maryland | Johns Hopkins University School of Medicine | Baltimore | 2013-2014 | Full Visit | Full Accreditation
Maryland | Uniformed Services University of the Health Sciences, F. Edward Hébert School of Medicine | Bethesda | 2015-2016 | Full Visit | Full Accreditation
Maryland | University of Maryland School of Medicine | Baltimore | 2015-2016 | Full Visit | Full Accreditation
Massachusetts | Boston University School of Medicine | Boston | 2018-2019 | Full Visit | Full Accreditation
Massachusetts | Harvard Medical School | Boston | Pending | — | Full Accreditation
Massachusetts | Tufts University School of Medicine | Boston | 2013-2014 | Full Visit | Full Accreditation
Massachusetts | University of Massachusetts Medical School | Worcester | 2011-2012 | Full Visit | Full Accreditation
Michigan | Central Michigan University College of Medicine | Mount Pleasant | 2014-2015 | Provisional Visit | Preliminary Accreditation
Michigan | Michigan State University College of Human Medicine | East Lansing | 2014-2015 | Full Visit | Full Accreditation
Michigan | University of Michigan Medical School | Ann Arbor | 2011-2012 | Full Visit | Full Accreditation
Michigan | Oakland University William Beaumont School of Medicine | Rochester | 2012-2013 | Provisional Visit | Preliminary Accreditation
Michigan | Wayne State University School of Medicine | Detroit | 2014-2015 | Full Visit | Full Accreditation
Minnesota | Mayo Medical School | Rochester | 2018-2019 | Full Visit | Full Accreditation
Minnesota | University of Minnesota Medical School | Minneapolis | 2011-2012 | Full Visit | Full Accreditation
Mississippi | University of Mississippi School of Medicine | Jackson | 2011-2012 | Full Visit | Full Accreditation
Missouri | Saint Louis University School of Medicine | St. Louis | 2016-2017 | Full Visit | Full Accreditation
Missouri | University of Missouri School of Medicine | Columbia | 2015-2016 | Full Visit | Full Accreditation
Missouri | University of Missouri-Kansas City School of Medicine | Kansas City | 2011-2012 | Interim Visit | Full Accreditation
Missouri | Washington University in St. Louis School of Medicine | St. Louis | 2014-2015 | Full Visit | Full Accreditation
Nebraska | Creighton University School of Medicine | Omaha | 2019-2020 | Full Visit | Full Accreditation
Nebraska | University of Nebraska College of Medicine | Omaha | 2013-2014 | Full Visit | Full Accreditation
Nevada | University of Nevada School of Medicine | Reno | 2011-2012 | Interim Visit | Full Accreditation
New Hampshire | Geisel School of Medicine at Dartmouth | Hanover | 2012-2013 | Full Visit | Full Accreditation
New Jersey | UMDNJ - New Jersey Medical School | Newark | 2012-2013 | Full Visit | Full Accreditation
New Jersey | UMDNJ - Robert Wood Johnson Medical School | Piscataway | 2017-2018 | Full Visit | Full Accreditation
New Jersey | Cooper Medical School of Rowan University | Camden | 2013-2014 | Provisional Visit | Preliminary Accreditation
New Mexico | University of New Mexico School of Medicine | Albuquerque | 2017-2018 | Full Visit | Full Accreditation
New York | Albany Medical College | Albany | 2017-2018 | Full Visit | Full Accreditation
New York | Albert Einstein College of Medicine of Yeshiva University | New York | 2014-2015 | Full Visit | Full Accreditation
New York | Columbia University College of Physicians and Surgeons | New York | 2017-2018 | Full Visit | Full Accreditation
New York | Weill Cornell Medical College of Cornell University | New York | 2017-2018 | Full Visit | Full Accreditation
New York | Hofstra North Shore-LIJ School of Medicine at Hofstra University | Hempstead | 2012-2013 | Provisional Visit | Preliminary Accreditation
New York | Mount Sinai School of Medicine | New York | 2019-2020 | Full Visit | Full Accreditation
New York | New York Medical College | Valhalla | 2015-2016 | Full Visit | Full Accreditation
New York | New York University School of Medicine | New York | 2014-2015 | Full Visit | Full Accreditation
New York | University of Rochester School of Medicine and Dentistry | Rochester | 2015-2016 | Full Visit | Full Accreditation
New York | State University of New York, Downstate Medical Center, College of Medicine | Brooklyn | 2012-2013 | Full Visit | Full Accreditation
New York | University at Buffalo School of Medicine and Biomedical Sciences, State University of New York | Buffalo | Pending | — | Full Accreditation
New York | Stony Brook University School of Medicine | Stony Brook | Pending | — | Full Accreditation
New York | State University of New York Upstate Medical University College of Medicine | Syracuse | Pending | — | Full Accreditation (On Probation)
North Carolina | Wake Forest School of Medicine of Wake Forest Baptist Medical Center | Winston-Salem | 2015-2016 | Full Visit | Full Accreditation
North Carolina | Duke University School of Medicine | Durham | 2016-2017 | Full Visit | Full Accreditation
North Carolina | The Brody School of Medicine at East Carolina University | Greenville | 2011-2012 | Full Visit | Full Accreditation
North Carolina | University of North Carolina at Chapel Hill School of Medicine | Chapel Hill | 2011-2012 | Full Visit | Full Accreditation
North Dakota | University of North Dakota School of Medicine and Health Sciences | Grand Forks | 2013-2014 | Full Visit | Full Accreditation
Ohio | Case Western Reserve University School of Medicine | Cleveland | 2016-2017 | Full Visit | Full Accreditation
Ohio | University of Cincinnati College of Medicine | Cincinnati | 2019-2020 | Full Visit | Full Accreditation
Ohio | University of Toledo College of Medicine | Toledo | 2012-2013 | Full Visit | Full Accreditation
Ohio | Northeast Ohio Medical University College of Medicine | Rootstown | 2018-2019 | Full Visit | Full Accreditation
Ohio | Ohio State University College of Medicine | Columbus | 2013-2014 | Full Visit | Full Accreditation
Ohio | Boonshoft School of Medicine, Wright State University | Dayton | 2016-2017 | Full Visit | Full Accreditation
Oklahoma | University of Oklahoma College of Medicine | Oklahoma City | 2018-2019 | Full Visit | Full Accreditation
Oregon | Oregon Health & Science University School of Medicine | Portland | 2011-2012 | Full Visit | Full Accreditation
Pennsylvania | The Commonwealth Medical College | Scranton | Pending | — | Preliminary Accreditation (On Probation)
Pennsylvania | Drexel University College of Medicine | Philadelphia | 2012-2013 | Full Visit | Full Accreditation
Pennsylvania | Jefferson Medical College of Thomas Jefferson University | Philadelphia | 2014-2015 | Full Visit | Full Accreditation
Pennsylvania | Pennsylvania State University College of Medicine | Hershey | 2017-2018 | Full Visit | Full Accreditation
Pennsylvania | The Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania | Philadelphia | 2015-2016 | Full Visit | Full Accreditation
Pennsylvania | University of Pittsburgh School of Medicine | Pittsburgh | 2018-2019 | Full Visit | Full Accreditation
Pennsylvania | Temple University School of Medicine | Philadelphia | 2016-2017 | Full Visit | Full Accreditation
Puerto Rico | Universidad Central del Caribe School of Medicine | Bayamon | 2011-2012 | Full Visit | Full Accreditation
Puerto Rico | Ponce School of Medicine | Ponce | 2015-2016 | Full Visit | Full Accreditation
Puerto Rico | San Juan Bautista School of Medicine | San Juan | 2012 | Full Visit | Full Accreditation (On Probation)
Puerto Rico | University of Puerto Rico School of Medicine | San Juan | 2016-2017 | Full Visit | Full Accreditation
Rhode Island | The Warren Alpert Medical School of Brown University | Providence | 2012-2013 | Full Visit | Full Accreditation
South Carolina | Medical University of South Carolina College of Medicine | Charleston | 2012-2013 | Full Visit | Full Accreditation
South Carolina | University of South Carolina School of Medicine, Columbia | Columbia | 2016-2017 | Full Visit | Full Accreditation
South Carolina | University of South Carolina School of Medicine, Greenville | Greenville | 2013-2014 | Provisional Visit | Preliminary Accreditation
South Dakota | Sanford School of Medicine of the University of South Dakota | Vermillion | 2017-2018 | Full Visit | Full Accreditation
Tennessee | Meharry Medical College School of Medicine | Nashville | 2013-2014 | Full Visit | Full Accreditation
Tennessee | East Tennessee State University James H. Quillen College of Medicine | Johnson City | Pending | — | Full Accreditation
Tennessee | University of Tennessee Health Science Center College of Medicine | Memphis | 2013-2014 | Full Visit | Full Accreditation
Tennessee | Vanderbilt University School of Medicine | Nashville | 2012-2013 | Full Visit | Full Accreditation
Texas | Baylor College of Medicine | Houston | 2013-2014 | Full Visit | Full Accreditation
Texas | Texas A&M Health Science Center College of Medicine | College Station | 2011-2012 | Full Visit | Full Accreditation
Texas | Texas Tech University Health Sciences Center School of Medicine | Lubbock | 2016-2017 | Full Visit | Full Accreditation
Texas | University of Texas Southwestern Medical Center at Dallas, Southwestern Medical School | Dallas | 2017-2018 | Full Visit | Full Accreditation
Texas | University of Texas Medical Branch at Galveston | Galveston | 2014-2015 | Full Visit | Full Accreditation
Texas | University of Texas Medical School at Houston | Houston | 2011-2012 | Full Visit | Full Accreditation
Texas | University of Texas School of Medicine at San Antonio | San Antonio | Pending | — | Full Accreditation (On Probation)
Texas | Paul L. Foster School of Medicine, Texas Tech University Health Sciences Center | El Paso | 2012-2013 | Full Visit | Provisional Accreditation
Utah | University of Utah School of Medicine | Salt Lake City | Pending | — | Full Accreditation
Vermont | University of Vermont College of Medicine | Burlington | 2012-2013 | Full Visit | Full Accreditation
Virginia | Eastern Virginia Medical School | Norfolk | 2012-2013 | Full Visit | Full Accreditation
Virginia | Virginia Commonwealth University School of Medicine | Richmond | 2015-2016 | Full Visit | Full Accreditation
Virginia | Virginia Tech Carilion School of Medicine | Roanoke | 2011-2012 | Provisional Visit | Preliminary Accreditation
Virginia | University of Virginia School of Medicine | Charlottesville | 2014-2015 | Full Visit | Full Accreditation
Washington | University of Washington School of Medicine | Seattle | 2017-2018 | Full Visit | Full Accreditation
West Virginia | Marshall University Joan C. Edwards School of Medicine | Huntington | Pending | — | Full Accreditation (On Probation)
West Virginia | West Virginia University School of Medicine | Morgantown | 2014-2015 | Full Visit | Full Accreditation
Wisconsin | Medical College of Wisconsin | Milwaukee | 2018-2019 | Full Visit | Full Accreditation
Wisconsin | University of Wisconsin School of Medicine and Public Health | Madison | 2017-2018 | Full Visit | Full Accreditation
Source: The LCME, 2012d.
Appendix B
New and Developing Medical Education Programs in the United States and its Territories
(LCME, 2012m)
Applicant Schools
State/Province | Medical Education Program | City
California | California Northstate University College of Medicine | Rancho Cordova
California | University of California, Riverside School of Medicine* | Riverside
Connecticut | Frank H. Netter MD School of Medicine at Quinnipiac University | North Haven
Florida | Palm Beach Medical College | Palm Beach
Michigan | Western Michigan University School of Medicine | Kalamazoo
Virginia | King School of Medicine and Health Science Center | Abingdon
*The Department of Education requires all accreditors to provide a brief statement explaining the
bases for the accrediting body's decision to deny accreditation or withdraw accreditation. A
decision to deny preliminary accreditation to the University of California at Riverside School of
Medicine was based primarily on the LCME's assessment that the school had not demonstrated
sufficient financial resources to sustain a sound program of medical education. In addition, the
LCME found inadequate strategic planning, insufficient key personnel, and weak policies and
procedures related to student advancement and diversity. The LCME also found limited clinical
education opportunities in psychiatry and pediatrics. The school declined to appeal this decision.
NOTE: Applicant Schools are not accredited and may not recruit or advertise for
applicants or accept student applications.
Candidate Schools
State/Province | Medical Education Program | City
Arizona | University of Arizona College of Medicine-Phoenix | Phoenix
NOTE: Candidate Schools are not accredited and may not recruit or advertise for
applicants or accept student applications.
Preliminary Accreditation
State/Province | Medical Education Program | City
Florida | Charles E. Schmidt College of Medicine at Florida Atlantic University | Boca Raton
Michigan | Central Michigan University School of Medicine | Mount Pleasant
Michigan | Oakland University William Beaumont School of Medicine | Rochester
New Jersey | Cooper Medical School of Rowan University | Camden
New York | Hofstra North Shore-LIJ School of Medicine at Hofstra University | Hempstead
Pennsylvania | The Commonwealth Medical College (Preliminary Accreditation, On Probation) | Scranton
South Carolina | University of South Carolina School of Medicine, Greenville | Greenville
Virginia | Virginia Tech Carilion School of Medicine | Roanoke
Provisional Accreditation
State/Province | Medical Education Program | City
Florida | Florida International University College of Medicine | Miami
Florida | University of Central Florida College of Medicine | Orlando
Texas | Texas Tech University Health Sciences Center, Paul L. Foster School of Medicine | El Paso
New Medical Education Program Development Process
Step 1. Applicant School Status
A program obtains "Applicant School" status when all of the following are completed:
1. The program remits the $25,000 application fee to the LCME in order to begin the process of
applying for preliminary accreditation.
2. The LCME and CACMS Secretariat determine that the school meets the basic eligibility
requirements to apply for accreditation (i.e., a current or anticipated charter in the U.S. or
Canada and plans to offer the educational program in the U.S. or Canada).
NOTE: Applicant Schools are not accredited and may not recruit or advertise for
applicants or accept student applications.
Step 2. Candidate Schools
A program obtains "Candidate School" status when all of the following are completed:
1. The program submits a completed Modified Medical Education Database and self-study
document to the LCME for review.
2. Those documents are favorably reviewed by the LCME (and, for Canadian schools, also by
the CACMS).
3. The LCME approves the program to be granted a survey visit for preliminary accreditation.
NOTE: Candidate Schools are not accredited and may not recruit or advertise for
applicants or accept student applications.
Step 3. Preliminary Accreditation
A program may obtain preliminary accreditation when all of the following are completed:
1. The program obtains Candidate School status.
2. An LCME survey team completes a survey visit and prepares a report of its findings for
consideration by the LCME at its next regularly scheduled meeting.
3. The LCME reviews the survey team's report and determines that the program meets the
standards outlined in the LCME document, Guidelines for New and Developing Medical
Schools.
4. The LCME votes to grant preliminary accreditation to the program for an entering class in an
upcoming academic year.
Once preliminary accreditation is granted, the program may begin to recruit applicants and
accept applications for enrollment. If the program does not enroll a charter class within two
years of its receipt of preliminary accreditation, it must reapply for applicant school status and
pay a reapplication fee.
Step 4. Provisional Accreditation
A program may obtain provisional accreditation when all of the following are completed:
1. The program obtains preliminary accreditation.
2. The program submits a modified medical educational database and a self-study summary to
the LCME.
3. An LCME survey team completes a limited survey visit prior to the midpoint of the second
year of the curriculum to review implementation progress and the status of planning for later
stages of the program. The survey team prepares a report of its findings for consideration by the
LCME at its next regularly scheduled meeting.
4. The LCME reviews the survey team's report and determines that the program meets the
standards outlined in the LCME document, Guidelines for New and Developing Medical
Schools.
5. The LCME votes to grant provisional accreditation to the program.
Once provisional accreditation has been granted, students enrolled in the program may continue
into their third and fourth years of medical education, and the program may continue to enroll
new students.
Step 5. Full Accreditation
After obtaining provisional accreditation, a program may obtain full accreditation when all
of the following are completed:
1. The program obtains provisional accreditation.
2. The program submits a modified medical educational database and a self-study summary to
the LCME.
3. An LCME team completes a full accreditation survey visit that takes place late in the third
year or early in the fourth year of the curriculum, and prepares a report of its findings for
consideration by the LCME at its next regularly scheduled meeting.
4. The LCME reviews the survey team's report and determines that the program leading to the
MD degree fully complies with all LCME accreditation standards.
5. The LCME votes to grant full accreditation to the program for the balance of an eight-year
term that begins when the program was granted preliminary accreditation.
Appendix C
The Cover Letter of the Survey
[date]
Dr. [name]
Chair, Department of
[name] School of Medicine,
[city, state, zip code]
Dear Dr. ,
Dr. Keim, Director of Accreditation, USC School of Dentistry, and I would like to extend the
invitation to you to participate in this study that assesses the value of accreditation of your four-
year undergraduate medical education program leading to the M.D. degree by the Liaison
Committee on Medical Education (LCME).
Time needed is 15 - 20 minutes. Here is the link to the study survey: [link]
• The study evaluates accreditation through 2 lenses: Benefits and costs.
• Purpose of study: increase the benefits and decrease the cost of such accreditation.
• Your responses are absolutely confidential and anonymous.
• This survey is IRB-approved (IIR00001350)
• Section A (Benefits): 4 questions.
• Section B (Costs): 4 questions.
• Section C (Benefits/costs): 2 questions.
• Some quick demographics: 4 questions.
• There are optional questions if you are interested.
• The survey will close in 4-5 weeks.
If you have any comments or questions, please do not hesitate to contact me at your convenience.
With a background in medical sciences, public health, global health, cancer control health
program management, and academe (as a faculty member and administrator), I am currently
conducting this doctoral research through the higher education leadership program at the
University of Southern California, Los Angeles, California, and would like to extend my sincere
gratitude for your participation.
Best regards and many wishes for 2013!
Lead Researcher
Dr. Dee (Dalal) Muhtadi
BSc, MPH, Ed.D. Candidate
Medical Studies (overseas)
Dean's List 2010, 2011 & 2012
Tel: (949)394-9358
E-mail: dmuhtadi@usc.edu
E-mail: deeglobal100@gmail.com
Mail: P.O. Box: 5004, Newport Beach, CA. 92662
Chairman of Dissertation Committee
Robert Keim, D.D.S., Ed.D.
Director of Accreditation
Associate Dean for Graduate Studies
Ostrow School of Dentistry
Associate Professor of Education
Rossier School of Education
University of Southern California
Los Angeles, California
E-mail: rkeim@usc.edu
Follow-up Letter
Dear Department Chair:
Good afternoon to you.
Many sincere thanks if you have already completed the survey on the benefits and costs of
accreditation of your undergraduate medical education program leading to the MD degree by the
LCME. The response has been overwhelmingly positive.
Because the survey is completely anonymous, there is no way of identifying who has submitted
it. To be sensitive to the various and multiple demands on your time, I am writing to invite you
to participate in the study, if you have not yet had a chance to do so. This study will conclude in
2 weeks.
Time needed to complete the survey is: 15-20 minutes. Here is the link to the survey:
Please do not hesitate to contact me directly if you have any questions and/or comments at:
dmuhtadi@usc.edu or call me at: (949) 394-9358 at your convenience.
Thank you for your contribution to a greater understanding of how to increase the benefits and
decrease the costs of accreditation, as our journey in the medical arena continues unabated
towards excellence and global leadership.
Best regards.
Lead Researcher
Dr. Dee (Dalal) Muhtadi
BSc, MPH, Ed.D. Candidate
Medical Studies (Overseas)
Dean's List 2010, 2011 & 2012
E-mail: dmuhtadi@usc.edu
E-mail: deeglobal100@gmail.com
Mail: P.O. Box: 5004, Newport Beach, CA. 92662
Chairman of Dissertation Committee
Robert Keim, D.D.S., Ed.D.
Director of Accreditation
Associate Dean for Graduate Studies
Ostrow School of Dentistry
Associate Professor of Education
Rossier School of Education
University of Southern California
Los Angeles, California
E-mail: rkeim@usc.edu
Appendix D
The Survey Protocol
The Benefits and Costs of Accreditation of Fully-accredited Medical Education Programs
Leading to the MD Degree by the LCME
- This instrument assesses the costs and benefits of accreditation of medical education
programs leading to the MD degree by the Liaison Committee on Medical Education
(LCME).
- Your responses are critical to the success of this study and are absolutely confidential.
- Response time for this survey is estimated at 15 -20 minutes.
- The survey does not have to be completed in one sitting.
- If you have any questions, please contact Ms. Muhtadi at: E-mail: dmuhtadi@usc.edu,
tel.: (949)394-9358, and mail: P.O. Box: 5004, Newport Beach, California 92662.
Thank you for your participation!
Please answer the following questions using your medical education program leading to the MD
degree as the basis for the answers. The accreditation process for this survey refers only to the
program being accredited regardless of the number of campuses or sites within a system.
Participation in accreditation (Please check one of the following):
I._____ I have participated in the LCME accreditation of my medical education program.
(Please start at Current Position: Question III).
II. ____ I have not participated in the LCME accreditation of my medical education program.
Please do not continue with this survey and kindly provide the email information for the
person who is the most involved in the LCME accreditation process of the program:
______________________________________________________________________________
______________________________________________________________________________
Current position (Please check all that apply):
III. Dean______, IV. Associate/Assistant Dean______, V. Department Chair ______,
VI. Associate/Assistant Department Chair____, VII. Faculty____, VIII. Other (pls.
specify):_______________________________________________________________________
______________________________________________________________________________
******************************************************************************
SECTION A: BENEFITS OF ACCREDITATION
Benefits of accreditation identified by researchers are cited below. In relation to your program
only, how would you rate the following as benefits of your program’s accreditation? (Please
check one box only for each benefit).
Rating scale for each item: No benefit / Low benefit / Moderate benefit / High benefit
1. Improved overall quality of your medical education program
2. Improved access to monetary resources (financial aid, others)
3. Improved recognition, ranking, prestige, visibility, or competitiveness of your program
4. The ability to recruit/retain quality faculty
5. Improved faculty experience (teaching/learning practices, experimentation, professional development, buy-in, and shared governance)
6. The ability to attract quality students
7. Enhanced student experience (knowledge base, involvement, leadership, advocacy)
8. Improved student learning outcomes (SLO) (graduation rates, internship, employment, licensure, mobility)
9. With regard to your program’s most recent accreditation, which three benefits do you consider
most important? (Please rank them)
a. Most important______________________________________________________________
b. __________________________________________________________________________
c. Least important ______________________________________________________________
Can you rate the benefits of the different stages of the most recent accreditation of your
program? (Please check one box only for each stage)
Rating scale for each stage: No benefits / Low benefits to program / Moderate benefits to program / High benefits to program
10. Self-study stage (Database, self-study report, summary)
11. Site-visit (by LCME team)
12. Site-visit report by LCME
13. Final accreditation decision of LCME
14. As an administrator, what would you say was the OVERALL level of benefits for your most
recent accreditation? (Please check only one)
Total overall benefits: Not Beneficial / Slightly Beneficial / Moderately Beneficial / Highly Beneficial
15. Optional: Reflecting on the last accreditation of your program, are there any additional
comments you would like to make about the benefits of accreditation?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
*****************************************************************************
SECTION B: COSTS OF ACCREDITATION
Costs of accreditation of medical education programs leading to the MD degree by LCME are
examined through three lenses: Financial costs (direct), time and effort.
In your most recent accreditation by the LCME, please estimate the number of INTERNAL
stakeholders who participated in any way in all stages of the accreditation process and time spent
by each of them:
For each group, please estimate: Number (approximately) and Hours/person (approximately)
16. Faculty (full or part time)
17. Administrators (full or part time)
18. Staff members (secretaries, assistants; full or part time)
19. Other personnel (clerical, AV, etc.; full or part time)
In your most recent accreditation, please estimate the number of EXTERNAL stakeholders
(consultants, others) who participated in any way in all stages of the accreditation process, time
and fees paid:
For each group, please estimate: Number (approximately), Hours/person (approximately), Fee in dollars/hour paid (approximately), and Total fee paid in dollars (approximately)
20. Consultants
21. Others:
a. __________
b. __________
c. __________
d. ___________
22. Reflecting on the last accreditation process of your program, are there any other costs
(financial/direct, time, effort, other) of accreditation by the LCME that you can identify? (Please
list them)
a.____________________________________________________________________________
b.____________________________________________________________________________
c.____________________________________________________________________________
d.____________________________________________________________________________
23. With regard to your program’s most recent accreditation, which three costs in terms of
TIME (hours) do you consider the highest? (Please rank them)
a. Highest cost in time (hours) _____________________________________________________
b. ____________________________________________________________________________
c. Lowest cost in time (hours) _____________________________________________________
24. With regard to your program’s most recent accreditation, which three costs in terms of
EFFORT do you consider the highest? (Please rank them)
a. Highest cost in effort __________________________________________________________
b. ____________________________________________________________________________
c. Lowest cost in effort ___________________________________________________________
25. With regard to your program’s most recent accreditation, which three DIRECT
FINANCIAL COSTS (in dollars) do you consider the highest? (Please rank them)
a. Highest direct financial cost _____________________________________________________
b. ____________________________________________________________________________
c. Lowest direct financial cost _____________________________________________________
Can you rate the cost in the amount of TIME expended by the program and all stakeholders
during the different stages of the most recent accreditation of your program by the LCME?
(Please check one box only for each stage)
Rating scale for each stage: No time expended by the program / Low amount of time (in hours) / Medium amount of time (in hours) / High amount of time (in hours)
26. Self-study stage (Database preparation and compilation, self-study report and summary)
27. Site-visit stage
28. After site-visit stage
Can you rate the amount of EFFORT expended by the program and all stakeholders during the
different stages of the most recent accreditation of your program? (Please check one box only for
each stage)
Rating scale for each stage: No effort expended / Low amount of effort expended / Medium amount of effort expended / High amount of effort expended
29. Self-study stage (Database preparation and compilation, self-study report and summary)
30. Site-visit stage
31. After site-visit stage
Can you rate the DIRECT FINANCIAL COST (in dollars) incurred by the program and all
stakeholders during the different stages of the most recent accreditation of your program by the
LCME? (Please check one box only for each stage)
Rating scale for each stage (in dollars): No direct financial cost / Low direct financial cost / Medium direct financial cost / High direct financial cost
32. Self-study stage (Database preparation and compilation, self-study report and summary)
33. Site-visit stage
34. After site-visit stage
35. Overall, as an administrator, would you say the level of total cost (financial, time, effort,
other) for accreditation of your program by the LCME is: (Please check one only)
Total cost (financial, time, effort, others): No cost / Low cost / Moderate cost / High cost
36. Optional: Would you like to share any other comments about the costs of accreditation of
your medical education program by the LCME?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
37. In comparing the total benefits and total costs of accreditation of your program by the
LCME, do: (Please circle one best answer only)
a. Benefits exceed costs
b. Benefits equal costs
c. Costs exceed benefits
38. Optional: In your opinion, what changes, if any, would you recommend in order to increase
the benefits and decrease the costs of accreditation of your program by the LCME?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
Please check below if you are interested in having a summary of the results emailed to you:
_____YES _____NO
Thank you very much for your support and help in this survey
Abstract
This study assessed the value of accreditation of all 126 fully-accredited four-year undergraduate medical education programs leading to the MD degree in the US through two lenses: perceived benefits and costs, from the perspective of the leadership of internal stakeholders of the aforementioned programs. The online survey was sent to a random cluster sample of 1,096 department chairs/assistants/associates, faculty members and other lead administrators in the programs. The descriptive statistics of the survey indicated that approximately 74% of participants were 'department chairs, assistants or associates', 77% held only 'one position', 51% worked in 'public' programs and 78% worked in programs that graduated 'more than 100 students' in the last academic year. Participants worked in programs located in 'all five regions' of the continental US, and approximately 72% of them 'participated' in the accreditation process of their programs. Inferential statistical analyses including univariate and multivariate logistic regression were performed for the dependent variables of benefits, costs, and costs vs. benefits (cost-benefits) and five independent variables (participation in the accreditation process, number of positions per participant, program type, region and number of graduates in the last academic year). At a 95% confidence level and Type I error of 0.05, the reliability coefficient (Cronbach's alpha) had a high value of 0.913 for benefits and 0.758 for costs, with a high magnitude of effect (odds ratio). For benefits, results indicated that respondents who 'did not participate' in the accreditation process were approximately 2.4 times (141%) more likely to report 'high benefit' than people who 'participated' in the process after adjusting for all variables in the model. By 'program type', participants who worked in 'private' programs were approximately 2.3 times (130%) more likely to report 'high benefit' than those who worked in 'public' programs. 
For costs, participants who worked in 'private' programs were approximately 2.2 times (120%) more likely to report a 'low' cost of accreditation after adjusting for all variables in the model. Upon being asked to rate the 'overall level' of costs vs. benefits of accreditation, results indicated that participants who 'did not participate' in the accreditation process were approximately 50% less likely to report 'benefits exceed or equal costs' than participants who 'participated' in the process. Furthermore, a statistically significant correlation (p < 0.001) was found between the 'overall benefits score' and 'cost-benefit'. Approximately 78% of participants who reported 'costs exceeded benefits' also reported the 'overall benefit score' to be 'low.' Finally, in ranking 11 benefits of specialized accreditation, participants rated the 'provision of a structured mechanism to assess the medical education programs' as the 'highest' benefit of accreditation, followed by the role of accreditation as 'a stimulus for program improvement.' 'Improved overall quality of program' and 'benchmarking' were both in third place. For five costs of specialized accreditation, participants ranked the 'total amount of time' spent by internal stakeholders on accreditation as the 'highest' cost, followed by 'total amount of effort' in second and third places. Based on participants' opinions/perceptions (actual needs), this study offered significant recommendations to improve the approach, process and outcome of accreditation. They included: continuity of quality control and improvement, fostering innovation, use of technology, enhancing leadership, improving horizontal and vertical collaboration, as well as provision of clear information, proper guidance, focus on outcomes vs. standards, and flexibility by the accrediting agency. 
In a culture of trust, mutual respect, collaboration and open communication, these recommendations can enhance the value of accreditation by the LCME, promote excellence in the quality of medical education/programs and their ultimate mission of extending high standards of patient care nationally and globally.
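The abstract's headline statistics rest on two standard formulas: Cronbach's alpha for scale reliability and the odds ratio expressed as a percentage change in odds. The sketch below is illustrative only, using hypothetical data and function names of my own choosing; it is not the study's actual analysis code.

```python
# Illustrative sketch (hypothetical, not the study's code): the two
# standard formulas behind the reliability and effect-size figures.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def odds_ratio_as_percent(odds_ratio: float) -> int:
    """Translate an odds ratio into the '% more likely' phrasing used in the abstract."""
    return round((odds_ratio - 1) * 100)

# An odds ratio of 2.4 corresponds to roughly 140% greater odds,
# and 2.2 to roughly 120%, matching the abstract's rounded figures.
print(odds_ratio_as_percent(2.4))  # -> 140
print(odds_ratio_as_percent(2.2))  # -> 120
```

Note that when every respondent answers all items identically relative to one another (perfectly consistent items), the formula yields alpha = 1.0, which is the intuition behind reading 0.913 as "high" reliability.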
Asset Metadata
Creator: Muhtadi, Dalal J. (author)
Core Title: The benefits and costs of accreditation of undergraduate medical education programs leading to the MD degree in the United States and its territories
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 08/05/2013
Defense Date: 08/05/2013
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: accreditation, benefits of accreditation, costs of accreditation, Higher education, LCME, medical schools, OAI-PMH Harvest, USA, value of accreditation
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Keim, Robert G. (committee chair), Neish, Christine (committee member), Venegas, Kristan M. (committee member)
Creator Email: dee9481@gmail.com, dmuhtadi@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-316123
Unique Identifier: UC11295248
Document Type: Dissertation
Rights: Muhtadi, Dalal J.
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA