THE COSTS OF INSTITUTIONAL ACCREDITATION:
A STUDY OF DIRECT AND INDIRECT COSTS
by
P. J. Woolston
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2012
Copyright 2012 P. J. Woolston
ACKNOWLEDGMENTS
Like any dissertation this one owes an enormous debt of gratitude to innumerable
people. I am grateful to the wonderfully bright and intelligent people that have
surrounded me in this doctoral program over the course of the past several years: fellow
students and colleagues from my cohort, faculty who have challenged me to exceed my
best efforts, genuinely caring staff, and the members of my dissertation committee who
tirelessly gave me feedback in guiding this document.
I have also very much appreciated the energy and enthusiasm of the highly
capable accreditation professionals with whom I have worked on this study. These
participants, including those from the regional accrediting agencies who contributed
directly to this work, have taught me a great deal. This is an extraordinarily dedicated
group of experts who are committed to higher education, and their energy and creativity
are boundless. I admire them greatly and I am grateful for their examples.
Finally, I owe particular mention to two individuals who have been an immense
source of personal strength to me. Working with Phillip Placenti over the past several
years has been a profoundly positive experience, from his tremendous impact on my
career to his powerful influence on my personal and family life. Through our countless
conversations on numerous topics he also has made an important contribution to this
study. Most importantly, I am thankful to my wife, Rachelle. Her constant
and unwavering support since I first met her has continually made me a better person.
She has given me both the ability to succeed and the reason to do so.
TABLE OF CONTENTS
ACKNOWLEDGMENTS ii
LIST OF TABLES vi
ABSTRACT viii
CHAPTER ONE: OVERVIEW OF THE STUDY 1
Definition and Process of Accreditation 2
Goals of Accreditation 8
History of Accreditation 11
Regional Institutional Accreditation, Beginnings to 1920 13
Regional Institutional Accreditation, 1920-1950 18
Regional Institutional Accreditation, 1949-1985 20
Regional Institutional Accreditation, 1985-Present 22
Regional Institutional Accreditation in Future Periods 24
Argument for the Study 26
Statement of the Problem 26
Purpose of the Study 28
Significance of the Study 29
Definitions 31
CHAPTER TWO: LITERATURE REVIEW 36
The Development of Accreditation 37
Accreditation in the U.S. 37
The International Perspective of Accreditation 42
Critical Assessments of Accreditation 47
Criticisms of Accreditation 48
Accreditation within the Context of Accountability 56
Alternatives and Amendments to Accreditation 60
Effects of Accreditation 67
Accreditation and Student Assessment 68
Specialized Accreditation 73
Organizational Effects of Accreditation 74
Costs of Accreditation 77
Conclusion 83
CHAPTER THREE: METHODOLOGY 85
Research Design 88
Population and Sample 88
Instrumentation 91
Reliability 92
Validity 93
Data Collection and Analysis 94
Response 98
Gross Response Rate 99
Respondents 100
Quantitative Variability Demonstrated by Responses 103
Delimitations and Limitations 105
Delimitations 105
Limitations on Data Collection 106
Limitations on Quantitative Data 108
Limitations on Qualitative Data 110
Limitations on the Methodology Used for the
Monetization of Indirect Costs 111
CHAPTER FOUR: RESULTS 113
The Assigned Value of Accreditation 114
Formal Titles of ALOs 115
ALO Time Committed to Accreditation 119
The Perceived Direct and Indirect Costs of Institutional Accreditation 127
Reported Direct Costs of Accreditation 127
ALO Comments on Direct Costs 134
Reported Indirect Costs of Accreditation 139
ALO Comments on Indirect Costs 145
Direct and Indirect Costs Combined 151
The Justification of Institutional Accreditation Costs 156
The Benefits of Institutional Accreditation 157
Are Accreditation Costs Justified? 172
ALO Comments on the Justification for Accreditation 175
Summary 204
CHAPTER FIVE: DISCUSSION 208
Discussion of the Findings 209
Research Question #1: What costs are associated with
institutional accreditation and how do those costs
vary between and among types of institutions? 209
Research Question #2: How is financial commitment
toward institutional accreditation manifested, i.e.,
what are the perceived direct and indirect costs of
institutional accreditation? 218
Research Question #3: Do primary Accreditation Liaison
Officers believe that the perceived benefits
associated with institutional accreditation justify
the institutional costs? 227
Research Question #4: What kinds of patterns emerge in
accreditation commitment between types of
institutions? 236
Implications for Practice 244
Budget Implications 244
Accreditation Costs, Though Considered High, Are
Perceived as Justified 247
Other Implications 248
Implications: Conclusion 248
Future Research 249
Conclusion 252
REFERENCES 255
APPENDICES
APPENDIX A: HISTORY OF INSTITUTIONAL
ACCREDITATION TIMELINE 276
APPENDIX B: SURVEY INVITATION LETTERS 280
APPENDIX C: SURVEY INSTRUMENT 283
APPENDIX D: TITLE ANALYSIS 290
APPENDIX E: DIRECT COSTS 294
APPENDIX F: INDIRECT COSTS 299
LIST OF TABLES
Table 3.1: Number of Regionally-Accredited Institutions by
Carnegie Classification 90
Table 3.2: Gross Response Rate 100
Table 4.1: Formal Titles of ALOs by Accreditation Region 116
Table 4.2: Formal Titles of ALOs by Carnegie Classification 116
Table 4.3: Document Cost by Carnegie Classification 130
Table 4.4: Site Visit Cost by Carnegie Classification 130
Table 4.5: Combined Direct Costs by Carnegie Classification 133
Table 4.6: Total Number of People Involved with Accreditation 140
Table 4.7: Cumulative Hours Spent on Accreditation 140
Table 4.8: Test for Significant Difference between Means by
Accreditation Region 142
Table 4.9: Test for Significant Difference between Means by
Carnegie Classification 144
Table 4.10: Combined Direct and Indirect Costs of Accreditation
by Accreditation Region 155
Table 4.11: Combined Direct and Indirect Costs of Accreditation by
Carnegie Classification 155
Table 4.12: Whether Accreditation Costs Were Justified by
Accreditation Region 174
Table 4.13: Whether Accreditation Costs Were Justified by
Carnegie Classification 175
Table 5.1: Combined Direct and Indirect Costs of Accreditation by
Accreditation Region 222
Table 5.2: Combined Direct and Indirect Costs of Accreditation by
Carnegie Classification 222
Table 5.3: Total Cost of Accreditation to All Institutions by
Accreditation Region 224
Table 5.4: Total Cost of Accreditation to All Institutions by
Carnegie Classification 224
Table 5.5: Average Per-Year Cost of Accreditation by Accreditation
Region 226
Table 5.6: Average Per-Year Cost of Accreditation by Carnegie
Classification 226
ABSTRACT
This mixed methods study investigated the direct and indirect costs of
institutional accreditation and the differences in those costs between types of institutions.
Accreditation Liaison Officers (ALOs) at four-year institutions from three of the six
regional accrediting agencies were surveyed. Statistically significant differences were
discovered between the means of direct costs by Carnegie classification but not by
accrediting region, whereas statistically significant differences were discovered between
various means of the different categories of indirect costs by both Carnegie classification
and accrediting region. Indirect costs were monetized and combined with direct costs to
determine a cumulative cost. This yielded a per-institution average of $327,254 (when calculated by accreditation region) or $341,103 (when calculated by Carnegie Basic classification), for a total expense to the higher education community in excess of $660 million per seven- to 10-year review cycle. Additionally, indirect costs were found to be roughly four times the direct costs. Despite the high costs of accreditation, ALOs who believed the costs to be justified outnumbered those who did not by more than three to one. It was also possible to develop a general profile of the ALO as a highly committed and capable individual who is often overwhelmed professionally, and sometimes even personally, by the demands of accreditation. These findings have important implications for institutional administrators in determining what constitutes adequate support for this person, in both the money and the time made available for the execution of his or her responsibilities.
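As a rough illustration of how these summary figures relate to one another (the institution count used for the extrapolation is not restated in this abstract, so the round count below is an assumption for illustration only, not a figure from the study):

\[
\underbrace{\approx 2{,}000}_{\text{assumed institution count}} \times \underbrace{\approx \$330{,}000}_{\text{per-institution average}} \;\approx\; \$660 \text{ million per review cycle}
\]
\[
\text{and if } C_{\text{indirect}} \approx 4\,C_{\text{direct}}, \text{ then } C_{\text{direct}} \approx \tfrac{1}{5}\,C_{\text{total}} \text{ and } C_{\text{indirect}} \approx \tfrac{4}{5}\,C_{\text{total}}.
\]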
CHAPTER ONE: OVERVIEW OF THE STUDY
Accreditation in higher education is a kind of quality assurance mechanism, a
formal recognition by an external group of the maintenance of a certain minimum caliber
of education. Because accreditation has become the means by which the federal
government determines eligibility for federal fund distribution via students, the
acquisition and maintenance of institutional accreditation has become a standard
expectation and a practical necessity for most colleges and universities operating in the
United States. In the year 2009 a total of 7,435 institutions were accredited by regional
accrediting agencies operating in six geographically distinct U.S. regions, four national
faith-related accrediting agencies, and seven national career-related accrediting
organizations. A formal process generally spanning two years and requiring significant
coordination of resources compels institutions to prepare for an evaluative visit on a cycle
usually varying from seven to 10 years. Acquiring and maintaining accreditation is costly, however, and requires a significant institutional commitment (American Council of
Trustees and Alumni, 2007; American Council on Education, 2012; Dill, 1998; Doerr,
1983; Eaton, 2012a; Gillen, Bennett, & Vedder, 2010; Hartle, 2012; Kennedy, Moore, &
Thibadoux, 1985; Leef & Burris, 2002; National Advisory Committee on Institutional
Quality and Integrity, 2012; Reidlinger & Prager, 1993; Shibley & Volkwein, 2002;
Stoodley, 1985; Willis, 1994). From the perspective of the accreditors alone, 327
accrediting association employees rely on operating budgets of $44,259,903, and 34,705
volunteers provide support in the amount of $9,805,770, all in the name of maintaining
that recognition (Council for Higher Education Accreditation, 2010).
While informative, these all-inclusive numbers do not convey a sense of the cost
of accreditation to individual or like institutions, and it is plausible that different types of
institutions with a greater stake in the outcomes of accreditation are making a more
significant investment. This study seeks to investigate the cost of accreditation to
institutions in finer detail.
Definition and Process of Accreditation
The process of accreditation involves a value-oriented study used by institutions
to assess the quality of the education they offer (Stufflebeam & Webster, 1980). Ewell
(2008) described it as “a process of external quality review created and used by higher
education to scrutinize colleges, universities, and programs for quality assurance and
quality improvement” (p. 12). The accreditation of institutions (as opposed to the
accreditation of individual programs or disciplines) assures interested parties that there is
a basic reputable level of academic quality throughout the university as a whole, although
it does not vouch for any specific program of study at the institution (American Council
of Trustees and Alumni, 2007; Blauch, 1950; Clitheroe, 2010; Ewell, Wellman, &
Paulson, 1997; Leef & Burris, 2002; Lubinescu, Ratcliff, & Gaffney, 2001; Pfnister,
1971; Rhodes, 2012; Sibolski, 2012; Stoodley, 1985; Wergin, 2005). Specific programs
may also be separately and individually accredited; however, an overarching accreditation
at the institutional level is important:
It does not certify that every part is of equal quality, but it does indicate that none
of them are so weak as to undermine the educational effectiveness of the
institution and its services to its students and that the institution knows and is
working to strengthen its weaker areas. (Blauch, 1959, p. 44)
According to this holistic philosophy, “an institution as a whole is greater than a sum of
its parts” (Ratteray, 2008, p. 15).
Institutions use accreditation as a means of formally establishing legitimacy by
affiliating themselves with other highly regarded institutions of varying types that
maintain the same status of being accredited (Eaton, 2007; Eaton, 2011a; Ewell,
Wellman, & Paulson, 1997; Longanecker, 2011; Rusch & Wilbur, 2007; Stensaker &
Harvey, 2006). Walker (2010) adds, “Accreditation is an important part of education. In
the U.S. school system, it dictates what is considered a ‘real’ school and what isn’t. Thus
the power and responsibility of accrediting agencies is very important” (p. 2). The
inability of the current accreditation system to make distinctions beyond “accredited” and
“not accredited” is frequently regarded as one of the major weaknesses of accreditation
(Orlans, 1975). Arguably some institutions have greater legitimacy than the process of
accreditation itself (Bloland, 2001); however, the attainment of accredited status is an important institutional achievement (Asgill, 1976; Graffin & Ward, 2001). Similarly,
institutions use accreditation to establish the integrity of their degrees (Council of
Regional Accrediting Commissions, n.d.).
The accreditation process first involves the preparation of an institutional self-
study evaluating how well a university is meeting its institutional mission (Chernay,
1990; Council of Regional Accrediting Commissions, n.d.; Jackson, Davis, & Jackson,
2010; Zook & Haggerty, 1936). According to Michael (2005) this kind of self-evaluation
is critical to the institution:
For a higher education institution to be actively responsive, the leaders must have
in place within it a self-monitoring mechanism for institutional renewal and
transformation. A dynamic institution has an active sensory apparatus designed to
sense external and internal changes. Institutional transformation is only possible
where the institution is in tune with itself, conducts periodic self-examination and
utilizes the data for self-renewal. (p. 30)
Because of the vast diversity of institutions across the U.S. it is impossible to develop a
common set of standards by which all institutions might be measured (American Council
on Education, 2012; Ewell, 2008; Zook & Haggerty, 1936). Therefore, “the responsibility
for evaluating how well an institution is accomplishing its educational work can and
should rest exclusively with the institutions and/or the accrediting bodies” (National
Advisory Committee on Institutional Quality and Integrity, 2012, p. 2). This self-study is
then reviewed by a volunteer team of professional colleagues from peer institutions prior
to an actual visit by the team to the campus. The non-adversarial peer review of the self-
study and the site visit together allow the evaluating team to determine how well the
institution is meeting its stated objectives (Adelman & Silver, 1990; Bardo, 2009;
Brittingham, 2009; Chernay, 1990; Jackson, Davis, & Jackson, 2010; Van Vught &
Westerheijden, 1994; Winskowski, 2012).
The concept of peer review is critical as “peer review is the most widely accepted
and respected method of evaluation in all of higher education. This respect underlies the
use of peer evaluators in disciplinary and regional accreditation processes” (Banta &
Associates, 2002, p. 276). The theme of legitimacy is evident in the peer review as well.
Because the assessment is made by knowledgeable professionals who are the most
qualified to do such a review, the evaluations are credible, and therefore respected
(Lubinescu, Ratcliff, & Gaffney, 2001; National Advisory Committee on Institutional
Quality and Integrity, 2012; Zook & Haggerty, 1936). Accreditation is also a way that
best practices are spread as peer reviewers take what they learn from the experience both
on to the next campus they review and back to their home institutions (American Council
on Education; Crow, 2009; Ewell, 2008).
On the other hand this very concept underlies a fundamental concern to some
parties that are critical of accreditation: The concept of self-government through peer
review is “an inherent conflict of interest… because the accrediting agencies are
governed by people who represent the very institutions that are being accredited”
(American Council of Trustees and Alumni, 2007; Brittingham, 2008; Ewell, 2012;
Gillen, Bennett, & Vedder, 2010; Hartle, 2012; Kelderman, 2011, para. 15; Leef &
Burris, 2002; Longanecker, 2011; National Advisory Committee on Institutional Quality
and Integrity, 2012; Wergin, 2012). Peer review is also problematic because of the depth
of knowledge needed to evaluate an entire institution, the lack of training provided to
reviewers, and the “lack of consistency in the outcomes of reviews” (Ewell, 2012, p. 96;
Gillen, Bennett, & Vedder, 2010). However, accreditation was never intended as a means of policing institutions to ensure that they are meeting minimum standards, but rather as a means of assisting institutions with self-improvement (National Advisory
Committee on Institutional Quality and Integrity, 2012; Selden, 1960; Zook & Haggerty,
1936).
Institutional accreditation is granted by regional accrediting organizations
(accrediting institutions according to geographic region), national faith-related
accrediting organizations (accrediting religiously affiliated or spiritually oriented
institutions), and national career-related accrediting organizations (accrediting vocational
and professional institutions which are frequently nondegree and for-profit). The regional
accreditors are among the oldest accrediting organizations in the country and serve the
largest number of institutions (Council for Higher Education Accreditation, 2010). There
are eight higher education accrediting commissions within the six regional accrediting
agencies primarily responsible for granting institutional accreditation: the Middle States
Commission on Higher Education (MSCHE), the New England Association of Schools
and Colleges Commission on Institutions of Higher Education (NEASC-CIHE), the New
England Association of Schools and Colleges Commission on Technical and Career
Institutions (NEASC-CTCI), the North Central Association of Colleges and Schools The
Higher Learning Commission (NCA-HLC), the Northwest Commission on Colleges and
Universities (NWCCU), the Southern Association of Colleges and Schools Commission
on Colleges (SACS-COC), the Western Association of Schools and Colleges Accrediting
Commission for Community and Junior Colleges (WASC-ACCJC), and the Western
Association of Schools and Colleges Accrediting Commission for Senior Colleges and
Universities (WASC). Through accreditation these regional accrediting bodies strive to
ensure that the work performed at and by institutions of higher learning is adequate in
quality (Chernay, 1990; Council of Regional Accrediting Commissions, n.d.; Zook &
Haggerty, 1936).
This decentralized system came about as the direct result of the diversity that characterized higher education in the U.S., a diversity that stemmed directly from the governance of education at the state level rather than at the federal level (Eaton, 2011b;
Ewell, 2008; Ewell, 2012; Jackson, Davis, & Jackson, 2010; Middaugh, 2012; Ratteray,
2008; Wergin, 2012). Accreditation emerged as higher education in the U.S. aged,
spontaneously generated by like-minded professionals who were concerned about critical
issues such as the definition and standards of a college. The method proved useful and a
terminology of accreditation developed which could be held relatively common across
various regions of the U.S., thus uniting independent, simultaneous efforts around the
country.
As accreditation matured it proved to be such a valuable metric that other vastly
disparate entities (e.g., prospective students, state governments, the federal government) began relying increasingly upon it for their own unique needs (El-Khawas, 2001; Ewell,
2008; Finkin, 1973; Hartle, 2012; National Advisory Committee on Institutional Quality
and Integrity, 2012; Van Damme, 2000; Winskowski, 2012). Reliance on accreditation
was inexpensive and highly efficient for these other entities. The higher education
community had developed a highly functioning, self-reliant system of quality assurance
that relied upon an army of volunteers to evaluate institutional quality through a peer
review process commanding professional respect. Over time greater expectations formed
about what accreditation could represent thereby altering the purposes of accreditation
and confounding its use even among those in academe (American Council on Education,
2012; Banta & Associates, 2002; Brittingham, 2008; Ewell, 2008; Finkin, 1973; Gillen,
Bennett, & Vedder, 2010; National Advisory Committee on Institutional Quality and
Integrity, 2012). On the other hand, “Accreditation doesn’t accomplish most of the things
people outside academe would like it to accomplish because it was designed by and for
the relatively small group of people who understand the complexities of American higher
education” (Kelderman, 2011, para. 3) rather than with these other additional purposes in
mind.
Goals of Accreditation
The accreditation process currently has four primary purposes: it is a quality
assurance mechanism, it is an instrument for institutional quality improvement, it enables
and facilitates student mobility, and it provides access to federal funding. Because of
these uses, acquiring and maintaining institutional accreditation is an absolute necessity
for most institutions of higher education.
The first purpose, quality assurance, was the concept behind the original ideas that
ultimately evolved into the current system of accreditation (Blauch, 1959; Newman,
1996; Pfnister, 1971; Winskowski, 2012). The second purpose, quality enhancement, also
figures among its primary explicit purposes (Zook & Haggerty, 1936). There exists an
irresolvable tension between these two concepts and there have always been questions as
to whether accreditation can simultaneously provide both quality assurance and quality
enhancement in adequate fashion (Brittingham, 2008; Ewell, 2009; Ewell, 2012; Ewell,
Wellman, & Paulson, 1997; Gillen, Bennett, & Vedder, 2010; Jackson, Davis, & Jackson,
2010; National Advisory Committee on Institutional Quality and Integrity, 2012;
Provezis, 2010; Sibolski, 2012; Spangehl, 2012). Nevertheless accreditation does serve
both of these purposes to some extent. The demonstration of accredited status by an
institution indicates to the public at large that the institution maintains a minimum level
of quality (Eaton, 2009), and when implemented as intended accreditation is a vehicle for
institutional improvement (Ewell, 2008).
Thirdly from a practical standpoint, accreditation serves the purpose of enabling
student mobility by facilitating the transfer of credit between institutions (American
Council on Education, 2012; Blauch, 1959; Eaton, 2009; Eaton, 2011a; Eaton, 2011b;
National Advisory Committee on Institutional Quality and Integrity, 2012; Sibolski,
2012; Winskowski, 2012). Accredited status indicates that the quality of instruction of
one institution is similar in rigor to that of another institution and therefore can be relied
upon as being adequate to fulfill similar curricular requirements, thus allowing students to avoid repeating previously completed courses. This particular issue has become increasingly
important in the last decade as college students in general have become increasingly
mobile, earning credit from multiple institutions before completing a degree (Ewell,
2008).
Finally, accreditation is publicly known primarily as the means by which the
federal government determines institutional eligibility for federal fund distribution via
students. Although much of the power of accreditation as a means of promoting quality
assurance stems from the fact that it is non-governmental, the government has come to
rely upon it to serve a gatekeeper function in determining that eligibility (American
Council on Education, 2012; Brittingham, 2012; Chernay, 1990; Eaton, 2011a; Eaton,
2011b; Eaton, 2012a; Eaton, 2012b; Ewell, 2008; Gillen, Bennett, & Vedder, 2010;
Hartle, 2012; Leef & Burris, 2002; Middaugh, 2012; National Advisory Committee on
Institutional Quality and Integrity, 2012; Orlans, 1975; Sibolski, 2012; Spangehl, 2012).
This is presently one of accreditation’s most critical uses. “While this service may not
have been envisioned at the origin of accrediting agencies, accreditation nonetheless
provides a valuable function in this process, and is uniquely appropriate for that function”
(National Advisory Committee on Institutional Quality and Integrity, 2012, p. 2). With
the high stakes of significant governmental funding ($60 billion through federal grants
and loans, more than $63 billion from state governments, etc.), accreditation thus serves
as a “buffer against the politicizing of higher education” (Eaton, 2003b, p. 1).
In theory, accreditation is a voluntary process and institutions are not required to
seek it. However “the shibboleth that accrediting agencies are merely voluntary
membership organizations ignores the reality that the federal government has given
accreditors federal power to determine access of higher education institutions to federal
monies—indeed, in some cases, monopolistic power” (Finkin, 1994a, p. 130). In fact,
“accreditation represents one of the few avenues of direct influence – i.e., regulation –
that the federal government can exercise over colleges and universities” (Brittingham,
2012, p. 61). The strength of the relationship between these federal monies and
accreditation has served to distort the role of accreditation over time (Finkin, 1994a).
Along with the link between accreditation and the determination of eligibility for federal
funding comes “an array of consequences that are neither appropriate nor desirable”
(National Advisory Committee on Institutional Quality and Integrity, 2011, p. 4),
and therefore the link between the two is presently being questioned.
Accreditation began as a voluntary method of quality assurance; however, because accreditation has adopted other functions over the course of its evolution (in particular the role of determining eligibility for federal funding), it has become imperative for institutions of higher education to acquire and maintain it.
History of Accreditation
Concern about the cost of accreditation has been evident almost as long as
accreditation has existed (Ewell, 2008). As noted, many entities have come to rely upon
accreditation largely because of the relatively low cost of self-governance that relies
heavily upon such a large body of volunteers (Brittingham, 2008; Ewell, 2008; Ewell,
2012; Gillen, Bennett, & Vedder, 2010; National Advisory Committee on Institutional
Quality and Integrity, 2012). Still, the cumulative direct and indirect cost to institutions, composed most obviously of membership dues to accrediting organizations, the execution of a comprehensive self-study, site visit expenses, and the commitment of volunteers, is far from negligible (American Council of Trustees and Alumni, 2007;
American Council on Education, 2012; Dill, 1998; Doerr, 1983; Eaton, 2012a; Gillen,
Bennett, & Vedder, 2010; Hartle, 2012; Kennedy, Moore, & Thibadoux, 1985; Leef &
Burris, 2002; National Advisory Committee on Institutional Quality and Integrity, 2012;
Reidlinger & Prager, 1993; Shibley & Volkwein, 2002; Stoodley, 1985; Willis, 1994).
Additionally, as accreditation has matured the cumulative costs have mounted rapidly
(Orlans, 1975; Puffer, 1970). The following overview of the history of accreditation will
demonstrate how concern over its cost has grown since its beginnings.
Institutional accreditation in the United States has “developed through evolution,
not design” (Brittingham, 2009, p. 14) and “has always changed as higher education has
changed” (p. 17). The growth and rise in importance of voluntary accrediting associations
“grew out of the notion that the national government ought not to control educational
affairs, but that more control was necessary” (Hawkins, 1992, p. xi). Indeed, “the best
defense against excessive regulation is a trustworthy system of self-regulation. Self-
regulation works well with the involvement of creative, committed, and accomplished
members of the academy” (Brittingham, 2008, p. 36). According to the National
Advisory Committee on Institutional Quality and Integrity:
Ultimately, all regulation in an enterprise as complex and diverse as American
higher education is self-regulation, and it is necessary that member institutions be
sufficiently involved and invested in understanding the issues, arriving at self-
regulatory solutions, and establishing principles to ensure institutional
compliance. (National Advisory Committee on Institutional Quality and Integrity,
2012, p. 3)
The recognition of an “obligation to protect the public interest” (Finkin, 1973, p.
339) comes from the fact that accreditation is increasingly serving not just the higher
education community but society at large as well (Eaton, 2012a; Finkin, 1973; National
Advisory Committee on Institutional Quality and Integrity, 2012; Ratteray, 2008;
Sibolski, 2012; Winskowski, 2012). Elements of accreditation in the U.S. hearken back to
two starkly contrasting models for self-governance: the British model and the continental
European model (Clark, 1983; Commission of the European Communities, 1993; Selden,
1960; Van Vught & Westerheijden, 1994). Both of these models were used as
mechanisms for accountability in higher education. As early as medieval times the British
model was characterized by the self-governance exemplified by such institutions as
Oxford and Cambridge through external accountability to highly reputed peers. The
continental European model as characterized particularly by the Napoleonic tradition in
France and the Humboldtian tradition in Germany emphasized higher education in the
service of the nation with accountability to the government (Van Vught & Westerheijden,
1994).
Ewell (2008) identified four fairly distinct periods of development in institutional
accreditation in the U.S.: 1850 to 1920 when accreditation first emerged as a concept,
1920 to 1950 when regional accreditors began to develop roles for their operation, 1950
to 1985, which was widely considered a Golden Age for higher education marked by
increasing federal regulation, and 1985 to the present day when accountability has
become the issue of paramount importance. This review of the history of institutional
accreditation will broadly follow Ewell’s divisions, and a succinct timeline can be found
in Appendix A.
Regional Institutional Accreditation, Beginnings to 1920
The development of accreditation resulted from the “vigorous but unplanned
growth and development in U.S. postsecondary education” (Ewell, 2008, p. 28). This
growth was the result of a lack of direct federal oversight which caused a great diversity
of new institutions and eventually accrediting bodies as well (Bernhard, 2011; Blauch,
1959; Brittingham, 2009; National Advisory Committee on Institutional Quality and
Integrity, 2012; Winskowski, 2012). The historical use of accreditation has “provided the
conditions under which [diversity between institutions] has flourished” (Brittingham,
2008, p. 33), and ultimately it would be the consistency of this single concept serving that
vast array of institutions that would lend accreditation the power it holds today (Crow,
2009).
Initially a number of organizations undertook accrediting activities in unorganized
fashion. Harvard was not only the first college established in the United States, it was
also the first to initiate an external review of its programs, doing so in 1642 (Davenport,
2000). The oldest recognized accrediting body, however, is the New York Board of Regents, which was established in 1784 in the style of a European ministry (Orlans,
1975). It was the New York Board of Regents that would first define what a college was
in 1901 (Nevins, 1959). Shortly after its formation in 1882 the American Association of
University Women (AAUW) began visiting institutions and distributing lists of those it
approved (Nevins, 1959). The North Central Association (NCA) suggested that the era of
accreditation began in 1888 when Charles W. Eliot, president of Harvard University,
argued in a national forum that school organizations were not effectively addressing the
educational problems of the day, resulting in the formation of the Committee of Ten
(actually 100 people strong), which investigated topics that would remain at the heart of
accreditation throughout its evolution (Davis, 1945; Shaw, 1993).
At the turn of the century the movement for the standardization of education led
to a national desire for an institution that could coordinate the function of college and
university accreditation. The Carnegie Foundation was considered early as an institution
that might serve that purpose but only became involved in defining and listing colleges
through one of its original missions providing for the retirement funds of professors of
higher education. The Foundation eventually declined to be further involved with
accreditation (Orlans, 1975; Selden, 1960; Shaw, 1993; Winskowski, 2012; Zook &
Haggerty, 1936). The Association of American Universities (AAU) was also involved
early in creating lists of accredited institutions, but it was the eventual decision of the
AAU to withdraw from that practice that led to the agreement of regional accreditors to
adopt this role (Harcleroad, 1980).
The regional accrediting agencies were formed as similar institutions established
associations out of mutual interest and in response to growing concern about collegiate
standards. Colleges were developing in relatively rapid and uncoordinated fashion
causing a wide range in quality of instruction. Traditionally admission was determined by
an entrance examination; however the University of Michigan pioneered a certificate
system whereby admission was based on the secondary school diploma. The subsequent
standardized determination of reputable secondary schools granting such diplomas
became the forerunner of the modern day accreditation system. Additionally, the
intention of these newly formed regional associations included identifying institutions to
which graduating secondary school students could be sent, and standardizing the
requirements (admission standards) those students should meet to be successful there.
This meant a quantitative focus on inputs (rather than eventual student outcomes), some
of which were based more on custom than on precedent. Both of these foci would change
as accreditation evolved (Blauch, 1959; Ewell, 2008; Newman, 1996; Orlans, 1975;
Selden, 1960; Shaw, 1993; Zook & Haggerty, 1936).
Members agreed to abide by the principles they established and membership was
completely voluntary. At first accreditation was synonymous with membership.
Essentially accreditation was the answer to the question, “What is a college?” Institutions
were not re-reviewed after becoming members (Blauch, 1959; Brittingham, 2009; Ewell,
2008; Middle States Commission on Higher Education, 2009; Zook & Haggerty, 1936).
Decisions were made by “peer affirmation” which would later become “peer review”
(Ewell, 2008, p. 21).
While it had originally been established in 1867 for the collection of statistics and
data on schools and colleges, at the turn of the century the Department of Education was
becoming increasingly concerned about the adequacy of college standards (Blauch, 1959;
Warren, 1974). Therefore another early intention of these associations was the
minimization of federal intervention (Middle States Commission on Higher Education,
2009).
The homogenization of U.S. culture, largely as a result of the development of
transportation and media, has caused the erosion of many differences between regional
accrediting agencies. The agencies formed at a time when there were real and significant
differences between geographic regions of the U.S., however, and the structures remain
standard today despite a reduction of those differences (American Council on Education,
2012; Ewell, 2008; Hartle, 2012; National Advisory Committee on Institutional Quality
and Integrity, 2012; Newman, 1996; Pfnister, 1971; Sibolski, 2012). These associations
would become the regional accrediting agencies we know today and developed in this
order:
• 1885: New England Association of Schools and Colleges (NEASC)
• 1887: Middle States Association of Colleges and Secondary Schools (within
which the Middle States Commission on Higher Education or MSCHE deals with
collegiate education)
• 1895: North Central Association of Colleges and Schools (NCA)
• 1895: Southern Association of Colleges and Schools (SACS)
• 1917: Northwest Commission on Colleges and Universities (NWCCU)
• 1924: Western Association of Schools and Colleges (WASC)
Formal accreditation was pioneered by the College Entrance Examination Board
(CEEB) in 1901 (Stufflebeam & Webster, 1980), the creation of CEEB being a Middle
States Association effort to unify college entrance requirements (Blauch, 1959). Shortly
thereafter in 1909, the North Central Association published its first standards for colleges,
followed in 1916 by its first list of accredited colleges, 11 years after it had first begun
accrediting secondary schools (El-Khawas, 2001).
The U.S. Department of Education became briefly involved with accreditation in 1910, also through the attempt to define what a college was. Kendrick C. Babcock, the
first higher education specialist at the Department of Education, prepared a list
classifying colleges in groups based largely on the work done by the schools’ graduates.
The list was suppressed prior to publication by President William Howard Taft following public opposition. Subsequently, the Department of Education
determined to play an advisory and consultative role as fact gatherer and data recorder in
order to assist other associations in appraising colleges (as well as institutions at all levels
of education), and not to duplicate the efforts already being made by other organizations
(Blauch, 1959; Ewell, 2008; Orlans, 1975; Zook & Haggerty, 1936).
Regional Institutional Accreditation, 1920-1950
The second period of accreditation’s evolution saw the expansion of the definition
of college to include many more kinds of institutions such as vocational colleges and
community colleges. Two regional accreditors (NEASC and WASC) responded to the
variation by creating separate commissions for accrediting different types of institutions
(Ewell, 2008).
Accreditors began to establish firmer credibility by building a role for themselves.
The language formally changed from “approved” institutions to “accredited” institutions
during this period (Middle States Commission on Higher Education, 2009, p. 5). This
period also witnessed the shift from quantitative standards to qualitative standards, and
from minimum requirements to optimal requirements, modifications that were all
inspired by the expanding diversity of schools (Geiger, 1970). For similar reasons the
North Central Association began using a mission-oriented approach to determining
eligibility for accreditation, allowing each institution’s mission to drive the question of
quality (Brittingham, 2009; Christal & Jones, 1995; Newman, 1996; Zook & Haggerty,
1936). During this period regional accreditors began to revisit schools, but not in systematic fashion; they did so only if institutions demonstrated instability or difficulty in meeting their mission. Naturally these changes (i.e., qualitative review, school visits, mission-
based evaluation, etc.) also brought rising costs, but these were distributed among all
institutions largely through the contribution of volunteers (Ewell, 2008).
This period also saw the first development of an independent accreditation
oversight organization with a cross-regional or national influence. The Association of
Land-Grant Colleges and Universities and the National Association of State Universities
formed the Joint Committee on Accrediting in 1938 with the goal of approving
accrediting agencies as well as eliminating superfluous ones. Eventually the Association
of Urban Universities and the Association of American Universities would also become
involved with the Joint Committee on Accrediting; however, the abrupt decision by the Association of American Universities no longer to be involved with accreditation forced the members of the Joint Committee to reassess their work in 1948. The following year the Joint Committee on Accrediting became the National Commission on Accrediting
(Blauch, 1959).
An important moment in the legitimation of accreditation occurred as a result of
the Langer Case of 1938 when the North Central Association (NCA) was sued by
William Langer, governor of North Dakota, because North Dakota Agricultural College
was removed from the list of accredited institutions. The final decision favored the NCA,
formally and legally granting credibility to regional accreditation for the first time (Davis,
1945; Ewell, 2008; Newman, 1996; Selden, 1960). The courts similarly reaffirmed the
legitimacy of accrediting institutions as a result of the high-profile case of Parsons
College, which lost accreditation in 1967. The courts denied the appeal made by Parsons
College on the basis that the regional accrediting associations were voluntary bodies
(Geiger, 1970; Orlans, 1975).
Regional Institutional Accreditation, 1949-1985
Regional accreditors met in 1949 to coordinate their activities, creating the
National Committee of Regional Accrediting Agencies (NCRAA), an institution entirely
separate from the National Commission on Accrediting. At this point only four of the six
regional associations were formally accrediting to any degree, but it was hoped that nationally all six regional associations would become the primary institutional accreditors. Subsequent to a request by the National Commission on Accrediting (NCA) and the NCRAA, and partially to limit government intervention, the regional associations all agreed and formally commenced the accreditation of higher education institutions
(Blauch, 1959; New England Association of Schools and Colleges, 1986). Ironically, the New England Association of Schools and Colleges (NEASC), while the first regional accrediting agency to be established, was the last to formally begin accreditation of
higher education (New England Association of Schools and Colleges, 1986).
In 1964 the NCRAA became the Federation of Regional Accrediting
Commissions of Higher Education (FRACHE), which continued to co-exist uneasily with
the NCA. The regional accrediting associations contributed to the efforts of both groups
although the persistent effort to maintain individual regional agendas would ultimately
lead to the dissolution of FRACHE (Chernay, 1990; Ewell, 2008; Orlans, 1975). The
increasing effort to coordinate accreditation at the national level required a greater
commitment on the part of institutions, so in 1975 the NCA and FRACHE merged to
form the Council on Postsecondary Accreditation or COPA (Ewell, 2008).
During this third period of the evolution of accreditation, mass higher education
was realized as people began attending college at higher rates than ever before.
Consequently the federal government, now channeling federal funds to institutions
through students rather than to the institutions directly, needed a way to verify which
institutions should be eligible for those funds. The government therefore came to rely on,
or “seized” according to Arnstein (1979, p. 357), the system of accreditation already in
place despite the fact that it was never intended for this purpose (National Advisory
Committee on Institutional Quality and Integrity, 2012). This happened for several
reasons: Accrediting agencies were organizations that were both convenient and
apolitical through which the federal government could determine eligibility (Brittingham,
2009; Finkin, 1979; Geiger, 1970; Zook & Haggerty, 1936), and “without enlisting the
help of academic professionals, government [was] simply not in a position to examine
quality directly” (Ewell, 2008, p. 70). Most importantly, however, a complex, established system of accreditation was already in place, sparing the government the cost of setting up
and maintaining such a system. The federal government established the National
Advisory Committee on Accreditation and Institutional Eligibility (NACAIE) with the
specific intention of advising the Department of Education on accreditation policy and
maintaining the link between federal funding and institutional eligibility as determined by
accreditation status (U.S. Department of Education, n.d.).
The federal government first demonstrated its reliance on regional accreditors
when it called by law for the Commissioner of Education to publish a list of recognized
accrediting agencies deemed to be reliable authorities of quality training in individual
institutions as part of the Veterans Readjustment Act of 1952, a renewal of the GI Bill of
1944 (Harcleroad, 1990). “In sum, what seems to have been established was essentially a
structure intended to minimize federal involvement and upon which state agencies and
federal authorities could rely” (Finkin, 1973, p. 348). However with its unprecedented
investment in higher education (directly to institutions in the form of research grants as
well as through individual students) the government became increasingly concerned
about a conspicuous lack of accountability (Ewell, 2008; National Advisory Committee
on Institutional Quality and Integrity, 2012). Selden (1960) correctly predicted increased
federal involvement, and in 1963 the government enacted the Higher Education Facilities
Act actually requiring institutions that were receiving federal funding to be accredited
(Ewell, 2008).
The involvement of the federal government would accelerate the rate of change
that accreditation underwent (Brittingham, 2008). It was during this period that
accreditation as we know it today emerged, involving aspects such as the self-study, a site
visit by colleagues from peer institutions, and regular cyclical visits (Ewell, 2008). The
cost of accreditation was climbing steadily.
Regional Institutional Accreditation, 1985-Present
During the final period of the evolution of accreditation, accountability became
the primary driving issue. There was increasing criticism of accreditation, fueled by high
student loan default rates, the rising relative and absolute cost of higher education, a lack
of demonstrable student learning outcomes, and Congressional mistrust. Congress
“inserted itself” (Bardo, 2009, p. 49) into the accreditation process through such means as
the creation of the National Advisory Committee on Institutional Quality and Integrity
(NACIQI) as a replacement for the National Advisory Committee on Accreditation and
Institutional Eligibility (NACAIE) established almost 30 years previously. This new
entity was expected “to play a role in system review, monitoring, dialogue and exchange,
and policy analysis and recommendations to advise the Secretary” and attested to having
“the opportunity to provide greater leadership and perspective on the design and
effectiveness of the accreditation and quality assurance process” (National Advisory Committee on Institutional Quality and Integrity, 2012, p. 8).
In 1992 the federal government approved legislation creating State Postsecondary
Review Entities (SPREs) to review institutions, in theory effectively replacing the system
of regional peer-review based accreditation. The inability of the Council on
Postsecondary Accreditation (COPA) to anticipate and prevent the SPREs became the
breaking point that led to its dissolution, the formation of the Commission on
Recognition of Postsecondary Accreditation (CORPA) and the National Policy Board
(NPB) as interim measures, and the subsequent formation of the Council for Higher
Education Accreditation (CHEA). The measure establishing SPREs would be abandoned
after the 1994 elections when implementation proved too costly, directly and poignantly
illustrating how expensive accreditation had become (Amaral, Rosa, & Tavares, 2009;
Ewell, 2008; Rainwater, 2006). The Council for Higher Education Accreditation is the
current national body providing advocacy for, service to, and recognition of accreditation
(Bloland, 2001). As regional accreditors risked losing their authority and becoming an
arm of the federal government, all six adopted extensive requirements for documented
student learning outcomes in order to transition from standards-based to outcomes-based
accreditation (Bardo, 2009; Brittingham, 2009; Ewell, 2008; Hartle, 2012; Jackson,
Davis, & Jackson, 2010; Middaugh, 2012; Newman, 1996; Provezis, 2010; Wergin,
2012).
The changes implemented by regional accrediting agencies notwithstanding,
criticism of accreditation has continued. Most famously, the Spellings Commission (U.S.
Department of Education, 2006) accused accreditation of being both ineffective and a
barrier to innovation. By way of contrast, NACIQI has credited federal intervention with
the increased accountability demonstrated by the accreditation process, stating that:
“While some may consider that accreditation has not been sufficiently publicly
accountable, it is notable that, as a function of its engagement in the federal aid eligibility
process, the accreditation system has moved in the direction of greater accountability”
(National Advisory Committee on Institutional Quality and Integrity, 2012, p. 3).
Nevertheless, there are indications that federal intervention will only increase, causing
many in the accreditation community to be wary (Eaton, 2010; Eaton, 2012a; Gillen,
Bennett, & Vedder, 2010).
Regional Institutional Accreditation in Future Periods
With its emphasis on self-study, a reflective process which invites institutions to
continually improve, accreditation is a process that is inherently forward-looking. It will
continue to change and evolve as new challenges face higher education, and will need to
do so in order to remain viable (Brittingham, 2009; Brittingham, 2012; Eaton, 2012b).
Specifically accreditation will need greater advocacy and leadership (Eaton, 2012b).
Accreditation processes will need better coordination and collaboration, a more
consistent national voice, greater transparency, and the involvement of more stakeholders
(e.g., students, employers, public policymakers, etc.) in the peer review process (Crow,
2009; Dill, 1998; Ewell, 2012). Regional accrediting agencies will need to adapt to retain
relevance in guiding that evolution: Accrediting agencies need “a voice related to but
distinct from the voice of the higher education community” (Crow, 2009, p. 94). The
agencies need ways to creatively share the work and responsibilities more evenly among
themselves, stable financial resources to ensure continued operation, and more large-scale
collaboration, such as globally with international accreditors (Crow, 2009). If the
accreditation community does not take ownership in adapting to the evolving landscape
of higher education, there will continue to be increased federal regulation in the future
(American Council on Education, 2012; Eaton, 2012b; Hartle, 2012; Middaugh, 2012;
Winskowski, 2012).
These are the kinds of issues that the Council for Higher Education Accreditation
is exploring with the CHEA Initiative, “a multi-year national conversation on the future
of accreditation” (Council for Higher Education Accreditation, n.d., para. 1). The
initiative’s main purpose is to clarify accreditation’s role in the context of accountability
by balancing accountability with the original quality assurance purposes of accreditation.
Through a significant commitment to national dialogue CHEA identified eight issues, and
is currently working to finalize recommendations for action by stakeholders on each
issue.
On the other hand these efforts will continue to exert upward pressure on the costs
associated with accreditation (Ewell, 2008). Ewell (2012) points out that “a non-trivial
objection that can be made to adopting any of these enhancements to the review process
is that they would add cost” (p. 101). He suggests, however, that “some institutions appear to be willing to bear higher costs so long as they believe they are receiving accurate and reliable reviews conducted using more elaborate evidence-gathering procedures” (pp. 101-102).
Argument for the Study
This review of the evolution of accreditation has shown how it has grown in
importance for over a century to the point where accreditation is no longer even truly
voluntary. It has also demonstrated how the costs and commitment required of
institutions have grown relentlessly. Accreditation has steadily become more demanding
and more time consuming. With so many institutions undertaking the process it can
hardly be expected that every institution will or can make the same exact commitment to
acquiring and maintaining accreditation. For this reason it is important to develop a better
understanding of the institutional treatment of accreditation through empirical research.
Statement of the Problem
Accreditation is expensive, the result of both direct and indirect costs (National
Advisory Committee on Institutional Quality and Integrity, 2012; Willis, 1994). The
more obvious direct costs include those incurred over the course of the self-study, the
costs of the preparation of reports, logistical costs for the site visit, and dues to
accrediting organizations. Much more costly, however, are the indirect costs of the time committed to accreditation by various campus groups, which generally do not involve actual payment of funds (Longanecker, 2011; Willis, 1994). These costs are frequently represented by opportunity costs or “displaced dollar costs” and are therefore often difficult to ascertain (Parks, 1982, p. 4). There is a significant lack of empirical
research in general on these costs (Shibley & Volkwein, 2002) and an even more
pronounced dearth of quantitative research allowing for greater generalizability.
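One simple way to express such a monetization, given as a sketch only (the campus groups and wage rates below are illustrative assumptions, not the instrument used in this study), is:

\[
C_{\text{total}} = C_{\text{direct}} + C_{\text{indirect}}, \qquad C_{\text{indirect}} \;\approx\; \sum_{g \in G} h_g \, w_g,
\]

where \(G\) is the set of campus groups involved in the review (e.g., executives, faculty, administrative staff), \(h_g\) is the cumulative hours group \(g\) devotes to accreditation, and \(w_g\) is an estimated hourly compensation rate for that group. Direct costs (dues, report preparation, site visit logistics) are summed from actual expenditures, while the second term converts committed time into displaced dollars.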
Because accreditation plays a critical role as gatekeeper for access to federal
funding, it is essential to the financial survival of colleges and universities. Consequently
they regularly undergo an accreditation review on some sort of cyclical schedule (e.g.,
every ten years). Although a number of years may pass between evaluations, the time
leading up to the review (generally lasting about two years) requires a great deal of
planning and coordination. Eventually a wide variety of participants across the institution
contribute a substantial effort, in particular with the preparation of the comprehensive
self-study and the coordination of the on-site visit (Wolff, 2005). Notably this group will
include university executives (such as the president, the provost, and deans), faculty,
administrative staff, students, and others. Some person at the institution must necessarily
coordinate the effort to maximize the efficiency of the process and to ensure a successful
review. Because of the cyclical nature of this review the assignment of such a demanding
task can be problematic. When the time committed by these various parties is combined
with the actual financial expenditures, the cost of accreditation can be considerable.
The commitment to accreditation made by different institutions will vary
inevitably because of institutional capacity or because of institutional interest. Some
organizations will be able to dedicate greater personnel time and budgetary resources to
the process and some will have greater buy-in from the various campus constituencies.
Similarly some institutions will invariably create a culture that places greater emphasis on
the process. Even the assignment of the management of accreditation could be an
indication of an institution’s commitment, i.e., whether it is assigned to a provost or other
campus executive, to an accreditation officer dedicated specifically to such tasks, or to
someone else altogether. What is not known is whether this is in fact the case, and if
it is, how that commitment might vary across the higher education landscape or even
within groups of like institutions.
Purpose of the Study
The purpose of this study was to identify the costs of institutional accreditation
more clearly by ascertaining what kind of commitment colleges and universities are
making toward it, what form that commitment takes (i.e., what are the direct and indirect
costs), and how that commitment varies across the higher education landscape. The
specific costs associated with the accreditation review that this study investigates include
both the time and fiscal resources that are dedicated to the cyclical assessment. In order to
develop this understanding the study focuses on the following research questions:
• What costs are associated with institutional accreditation and how do those costs
vary between and among types of institutions?
• How is financial commitment toward institutional accreditation manifested, i.e.,
what are the perceived direct and indirect costs of institutional accreditation?
• Do primary Accreditation Liaison Officers believe that the perceived benefits
associated with institutional accreditation justify the institutional costs?
• What kinds of patterns emerge in accreditation commitment between types of
institutions?
Significance of the Study
Eaton (2012a) notes that “all of this activity—establishing an organization, setting
standards, self-review, peer review, and accreditation judgment—is funded and managed
by colleges and universities” (p. 9). Hartle (2012) adds:
While the basic elements of accreditation review have remained the same, the
proliferation of detail that now surrounds accreditation means that the amount of
time and money devoted to the effort has increased exponentially. This
development, in no little part the result of increased external pressures and
government regulations, has increased the cost and burden associated with
accreditation… Complaints about the institutional burden created by accreditation
are not new. However, in light of the budgetary challenges facing all institutions
of higher education, the time and money devoted to accreditation creates an
increasing source of tension. (p. 19)
The long period of time between accreditation reviews can pose challenges to
its goals (Kells, 1976; Longanecker, 2011; Wolff, 2005). Because a number of years will
have passed since the last review, many people at the school, including those who were
directly involved the previous time, may forget what was required for a successful
evaluation. On the other hand some form of continued commitment to accreditation is
critical because “no longer is institutional accreditation a decennial event; it has
developed into a relationship” (Brittingham, 2012, p. 60). It can be difficult to know what
constitutes adequate support. Complicating the question is the fact that those who will be
directly coordinating the actual review must make time for the process in a schedule
already filled with other professional responsibilities; the eventual completion of the
process will then free up a significant amount of time that must subsequently be filled.
This study will be useful to institutional leaders because it will
provide greater context on how colleges and universities manage the costs associated
with accreditation. Further, it will assist such leaders in gaining a better understanding of
the real costs of the time being committed to accreditation.
Accrediting agencies are eager to serve as a resource in making these kinds of
determinations, but there is a potential conflict of interest in asking the accreditor about
how much and what kind of support might be necessary for its accreditation review.
Unfortunately the literature available on the topic provides little elucidation. There is an
inadequate amount of empirical research on accreditation generally and on the costs
associated with accreditation specifically. Much of that research is so general that it
provides limited practical applications for organizations looking for standards related to
accreditation costs. This study contributes to the literature by providing contextual data
on the practice. Much of the research that is available is qualitative, making the
quantitative data that this study provides an important contrast. While the interpretation
of qualitative data can provide great depth of understanding, external validity is limited
due to the small sample size (Patton, 2002). This study uses a larger sample in an effort to
make the findings generalizable to like institutions.
This study will be of interest to campus executives as they periodically assess the
institutional commitment being made to the accreditation process. It will provide
and interpret data that can be used as a benchmark as leaders determine what kind of
support and activities are adequate and appropriate. This study will also be of interest to
academic officers involved with the coordination of accreditation, particularly those
serving as primary Accreditation Liaison Officers, by providing contextual guidance for
their efforts.
Definitions
Accreditation: A quality review process conducted by professional peers whereby
an institution or program is evaluated to determine whether it has a minimum level of
adequate quality.
Accreditation Liaison Officer (ALO): A designated institutional representative
who is chiefly responsible for coordinating the accreditation effort with the accrediting
agency.
Benefits of accreditation: The advantages an institution gains by having
accreditation.
Cost of accreditation: The institutional commitment in terms of budgetary
spending (direct costs) and time contributed (indirect costs) by the various campus
constituencies to the accreditation effort.
Council for Higher Education Accreditation (CHEA): The national body
coordinating advocacy efforts for accreditation and performing the function of
recognizing accrediting entities; CHEA reviews the effectiveness of accrediting bodies
in assuring academic quality and improvement within institutions.
Gatekeeper: The role of accreditation with respect to federal funding; in order for
an institution or program to qualify for the receipt of federal funds it must be accredited
by a recognized accrediting agency; thus accreditation serves as a gatekeeper for those funds.
Institutional accreditation: Recognition of a minimum level of adequate quality at
the institutional level and without respect to individual programs of study.
Middle States Commission on Higher Education (MSCHE): Regional accreditor
responsible for the institutional accreditation of schools in Delaware, Maryland, New
Jersey, New York, Pennsylvania, Puerto Rico, the U.S. Virgin Islands, Washington D.C.,
and select locations overseas.
National accreditation: Quality review at either the institutional level or the
programmatic level conducted on a national scope rather than on a regional or state
scope.
National Advisory Committee on Institutional Quality and Integrity (NACIQI): A
Congressionally-established committee providing advisement to the Secretary of
Education on matters relating to accreditation and institutional eligibility for federal
financial aid.
New England Association of Schools and Colleges (NEASC): Regional accreditor
responsible for the institutional accreditation of schools in Connecticut, Maine,
Massachusetts, New Hampshire, Rhode Island, Vermont, and select locations overseas.
North Central Association of Colleges and Schools (NCA): Regional accreditor
responsible for the institutional accreditation of schools in Arizona, Arkansas, Colorado,
Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, New Mexico,
North Dakota, Ohio, Oklahoma, South Dakota, West Virginia, Wisconsin, and Wyoming.
Northwest Commission on Colleges and Universities (NWCCU): Regional
accreditor responsible for the institutional accreditation of schools in Alaska, Idaho,
Montana, Nevada, Oregon, Utah, Washington, and select locations overseas.
Peer Review: The concept governing accreditation whereby the actual review of
the self-study is conducted by knowledgeable professionals from like institutions in order
to root the decision in legitimacy and credibility.
Programmatic accreditation (or specialized accreditation): Recognition of a
minimum level of adequate quality at the level of the individual program of study without
respect to the rest of the institution as a whole.
Regional accreditation: Quality review at the institutional level conducted on a
regional scope rather than on a national or state scope.
Self-regulation: A concept whereby entities agree to govern themselves and
establish mechanisms and processes to do so; accreditation exemplifies the concept of
self-regulation.
Self-study: A comprehensive review usually lasting approximately a year and a
half to two years resulting in a culminating document in which an institution or program
considers every aspect of its operation in order to determine whether it has adequate
resources at all levels to fulfill its clearly defined mission.
Site visit: Generally a two to three day period in which knowledgeable
professionals from like institutions visit an institution after reviewing its self-study to
ascertain the accuracy of the self-study and identify any concerns; subsequent to the site
visit the visiting team makes an accreditation recommendation to the accrediting body
after which the accrediting body announces a formal decision.
Southern Association of Colleges and Schools (SACS): Regional accreditor
responsible for the institutional accreditation of schools in Alabama, Florida, Georgia,
Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas,
Virginia, and select locations overseas.
Specialized accreditation (or programmatic accreditation): Recognition of a
minimum level of adequate quality at the level of the individual program of study without
respect to the rest of the institution as a whole.
U.S. Department of Education: The arm of the federal government concerned
with education quality and access nationally.
Voluntary association: An organization in which membership is optional;
accrediting bodies began as voluntary associations and, strictly speaking, continue to be
so classified, however because eligibility for federal funding is tied to accreditation many
professionals question whether accreditation is truly voluntary.
Western Association of Schools and Colleges (WASC): Regional accreditor
responsible for the institutional accreditation of schools in California, Guam, Hawaii, and
the Pacific Basin.
CHAPTER TWO: LITERATURE REVIEW
The formal accreditation of institutions of higher education in the U.S. began
about a century ago as institutions strove to create both a method by which colleges
could be defined and a minimum standard of quality that could be verified through a peer
review-based system (Ewell, 2008). The concept has evolved over time both in terms of
function and application; however, it has become so embedded in the culture of higher
education that it is generally considered irreplaceable (Crow, 2009). Unfortunately there
is a limited understanding of accreditation in the general public and in academe (Hardin
& Stocks, 1995; Haywood, 1974; Ikenberry, 2009; Ratcliff, Lubinescu, & Gaffney, 2001;
Volkwein, Lattuca, Caffrey, & Reindl, 2005; Young, 1983). The following review will
survey the body of literature that has been written on accreditation. The purpose of this
literature review is to contextualize the current construct of the cost of accreditation.
There are four fundamental areas to explore. The first section will explore the
literature written on how accreditation has developed over the course of the twentieth
century. It will then examine the view held internationally of accreditation in general and
as it is conducted in the U.S. specifically.
Over the course of history accreditation has been the subject of sharp disapproval,
and many improvements have come as a result of those concerns. The second section will
review critical assessments of the practice. Much of the change in accreditation over time
has been inspired by an increased demand for accountability; therefore, the review will
contextualize these changes within that broader topic. Next the review will report on
alternatives to accreditation that have been explored including amendments to the process
as presently practiced.
The third section will examine the effects of accreditation by focusing on studies
that have considered its effects on student assessment, the effects of specialized
accreditation, and its organizational effects. Lastly this review will evaluate the studies
that have been conducted on the institutional costs associated with accreditation in order
to frame the methodological planning for this study.
The Development of Accreditation
As accreditation has evolved over the last century there have been several
important works written by leaders in the field that have summarized and guided its
development, and these will be reviewed here. Other nations rely on similar
mechanisms for quality assurance of educational programs. Historically higher education
in the U.S. has been regarded as the strongest system in the world; however, many other
countries have been interested in the way accreditation is practiced in the U.S. (Hayward,
2001). This section will conclude with a review of the literature on accreditation from an
international perspective.
Accreditation in the U.S.
Most of the literature on accreditation takes the form of position papers, essays,
speeches, or resources and guides for institutions undergoing the process. As with the
literature on performance policies in higher education, “with few exceptions the literature
remains largely descriptive in nature, prescriptive in tone, and anecdotal in content”
(McLendon, Hearn, & Deaton, 2006, p. 3). There have, however, been some rigorous
studies which have had far-reaching effects. The first of these was the Flexner Report
(Flexner, 1910) reviewing the dismaying state of medical education in the U.S. and
Canada. The publication of the report resulted in a rapid and dramatic increase in the
quality of medical education nationwide. The report’s unprecedented success caused the
proliferation of programmatic accreditation agencies and propelled the development and
implementation of other accreditation efforts as well.
The North Central Association (NCA) has always played a strong leadership role
in the development of accreditation (Lee & Crow, 1998; Ratteray, 2008), and it was the
NCA which conducted the first systematic study of the process with the intention of
improving the practice. In 1929 the NCA appointed a committee which surveyed 57
institutions to identify ways to make accreditation less dependent on the strict
quantitative standards that were translating poorly between different types of institutions.
The results published in 1936 by Zook and Haggerty described a new method whereby
each institution would be evaluated on the adequacy with which its resources and
practices allowed it to meet its own stated mission. This new approach, much more
qualitative in practice, fostered an educational landscape that encouraged an even greater
diversification of higher education in the U.S. The approach in turn led to the
development of a self-study as the initial step in the accreditation review. These self-
studies encouraged leaders of institutions to consider ways in which they might improve
quality. The practice was so successful that other accrediting organizations adopted and
implemented the self-study concept, and this approach undergirds accreditation to this
day.
Blauch (1959) wrote a history of accreditation for the U.S. Office of Education
that defined and summarized the process in comprehensive fashion for the first time. Of
particular value was an historical summary of each of the regional accrediting
associations written by representatives from the agencies. He also compiled a summary
of each acknowledged specialized accreditor in existence. This work became a basic and
important resource for most scholarly writing about accreditation for several decades.
Another important history of accreditation was written by one of its long-time
critics only a year later. Selden (1960) expressed great concern about accreditation as a
system that perpetuated social standards rather than educational quality, and as he traced
its evolution he discussed how it had been used to further the purposes of the various
groups and governments promoting it. Because of the critical nature of his assessment
this work became the preferred scholarly and historical base for most of the criticisms
during the years that immediately followed.
The majority of the regional accreditors have undertaken the task of writing
histories of their associations. Among these the North Central Association (NCA) has
been a leader, with important monographs written by Davis (1945) reviewing the first 50
years of the NCA, Geiger (1970) reviewing the following 25 years, and Newman (1996)
considering the entire history to date. All three of these frequently cited works offer
insight into the professional and social development of accreditation.
Puffer (1970) directed an independent study on accreditation for the Federation of
Regional Accrediting Commissions of Higher Education (FRACHE). What became
known as the Puffer Report featured the results of a survey administered to university
executives. This report explored the institutional benefits and weaknesses of accreditation
and suggested improvements. It reported a strong consensus that the practice was
valuable to institutions, emphasized the need to limit the influence of specialized
accreditation, and called for a stronger national coordinating body to ensure its viability
in the future. The report led to a re-formed, stronger FRACHE, and shortly thereafter to a
new national coordinating body, the Council on Postsecondary Accreditation (COPA) in
1974.
Young (1983) undertook the compilation of resources on accreditation into a
work intended to provide insights on the process and appropriate uses for accreditation.
The comprehensive work also addressed the role played by institutions and institutional
members, accrediting bodies, federal and state governments, and beneficiaries of
accreditation. Each chapter was written by noted leaders and scholars from the higher
education community. Prior to this compilation, these topics were discussed in works
scattered throughout the literature and not collected in any one volume. The work served
as a primary reference and training tool on accreditation for decades.
A change in the national oversight of accreditation occurred in the mid-1990s as a
result of the dissolution of the Council on Postsecondary Accreditation, or COPA
(Bloland, 2001). In the void created by COPA’s failure the higher education community
rallied to organize a national body that could help maintain the role of the regional
accrediting associations and defend their work from government intrusion, ultimately
leading to the creation of the Council for Higher Education Accreditation (CHEA). In
forming CHEA the initial National Policy Board published what became known as the
Harvey paper, after its primary author (National Policy Board on Higher Education
Institutional Accreditation, 1994), inviting higher education executives to be involved in
the creation process, something previous national entities had not done. Others hoped that
the failure of COPA might lead to significant changes to accreditation as a concept:
Graham, Lyman, and Trow (1995) and Trow (1996) published works questioning
whether accreditation was adequate for the current educational climate and proposed the
adoption of the academic audit as an alternative (see discussion below). Bloland (2001)
followed this discussion for several years specifically as it related to the creation of
CHEA and the establishment of its legitimacy in the higher education community.
Through his involvement as an historical spectator and compiler of detailed notes for
each of the meetings he published a history of the formation of CHEA that explained the
role and process of accreditation and its social influence on institutions.
Most recently Ewell (2008) undertook writing a history of accreditation to
commemorate the tenth anniversary of CHEA. Similar to Blauch (1959) but with a wider
historical lens, Ewell detailed the evolution of accreditation and the way it dealt with
various challenges throughout its development. In his final chapter he placed the concept
in historical context as it related to the challenges it faces currently and the ways in which
it will need to address those challenges if it is to remain relevant.
These works provide a context for the evolution of accreditation over the course
of the twentieth century and into the twenty-first.
The International Perspective of Accreditation
Accreditation in the U.S. has been directly affected by the international
perspective that foreign universities have of it. In 1904 the University of Berlin
announced its intention to admit only graduate students holding degrees from one of the
fourteen institutions that were part of the Association of American Universities (AAU).
Other German institutions quickly adopted the same policy, causing great concern to the
AAU, which believed this was unjust to many other important U.S. universities. The
ensuing AAU pressure on the federal government to publish a classification of colleges
that would grant greater legitimacy internationally took years to accomplish and
contributed to the importance of accreditation in the U.S. (Blauch, 1959; Orlans, 1975;
Selden, 1960). In the meantime, to fill the gap until there was a mechanism recognizing
other institutions, the AAU published its own accreditation list, thereby unintentionally
becoming an accrediting body itself (Zook & Haggerty, 1936). Soon after, the AAU
discontinued the practice.
Accreditation in the United States is distinctly “American” (Brittingham, 2009, p.
7) because it is a direct reflection of the development of higher education in the U.S. into
a highly decentralized system of varying educational networks with multiple institutional
agents including public, private not-for-profit, and for-profit schools. Despite the
differences between these categories of schools many of their structures and goals are
ultimately very similar (Orlans, 1975), resulting in “a peculiar blend of autonomy and
uniformity, and something uniquely American” (Geiger, 1970, p. xxii).
43
While the U.S. approach is distinct, many other countries rely on similar
mechanisms for quality assurance of educational programs. The most obvious
difference internationally is in the way education is governed. In the U.S., education is
governed at the state level whereas internationally, education is generally governed
nationally by a ministry of education (Blauch, 1959; Chernay, 1990; Ewell, 2008;
Middaugh, 2012; Orlans, 1975; Selden, 1960; Shaw, 1993; Zook & Haggerty, 1936).
Consequently the way accreditation is practiced in the U.S. is a topic of great interest to
those involved with educational systems (Hayward, 2001). Several important reports
have been written that describe U.S. accreditation for the purpose of guiding the
development of accreditation structures in other countries (particularly Great Britain),
most notably those by Adelman and Silver (1990), Alderman and Brown (2005),
Amaral (1998), Brennan (1997), El-Khawas (2001), Van Damme (2002), Vaughn (2002),
and Wolff (1993). Similarly, Amaral, Rosa, and Tavares (2009) contrasted the federal
government’s increasing involvement and intervention with accreditation in the U.S. with
parallel developments in Europe. These reports generally summarize the strengths and
weaknesses of U.S. accreditation in an evenhanded fashion because of their intent to
inform the adaptation of the concept to a different educational system within a different
cultural context.
The Commission of the European Communities (1993) studied educational
accountability in 18 European countries and found a set of common elements that
resembles the U.S. system of higher education accreditation. These common elements
included a meta-level agent overseeing the quality management system (such as regional
accreditors and the national accreditation coordinating body in the U.S.), a review based
on self-evaluation, peer review, a mechanism for reporting evaluative results back to the
institution, and a relationship between accountability outcomes and governmental
funding. All five of these elements are fundamental to accreditation in the U.S.
Several important publications have considered various systems of accreditation
in countries throughout the world. The Global University Network for Innovation
(GUNI) published an extensive report that included “the latest knowledge, research
results, experiences and practices on accreditation” (Global University Network for
Innovation, 2007, p. xviii). Through a Delphi study based on expert opinion the report
examined variations in accreditation worldwide as well as the implications of interactions
between those various systems, and identified current and future trends in accreditation.
The collection included the results of original empirical research conducted
internationally on accreditation’s benefits and challenges. Notably, the issue of the cost of
accreditation was not among the concerns identified in the study’s findings; otherwise the
findings closely resembled those of empirical research on accreditation in the U.S.
The study also found a need for a method of international accreditation to enable the
comparison of institutions and programs across countries.
Bernhard (2011) investigated “the effects of higher education reforms concerning
quality assurance issues within national higher education systems” (p. 14) and conducted
a comparative analysis of such reforms in six countries (Austria, Germany, Finland, the
United Kingdom, the United States, and Canada). Similarly, Stensaker and Harvey
(2011) explored accreditation within the context of accountability historically in regions
throughout the world, and considered commonalities between systems. Stensaker and
Harvey found much in common between various systems including an increasing demand
for accountability and a reliance on forms of accreditation that varied from the practice as
it is conducted in the U.S. In particular they identified the academic audit as a frequently
used alternative. Schwarz and Westerheijden (2004) compiled an important collection of
chapters on quality assurance in European nations written by experts from each country.
A total of 20 countries are represented in this work.
Van Damme (2000) explored the growing internationalization of accreditation
resulting from increasing student mobility across national borders. He noted the particular
impact that this has on the U.S., “the largest receiving country of foreign students with 34
per cent of the OECD-total” (p. 3). He expressed concern over a “tendency of
convergence in international quality assurance systems” (p. 13) because of “powerful
historical and cultural differences between countries that can explain and justify variation
in quality assurance models” (p. 14). The author also raised as a primary issue the
efficacy and cost-effectiveness of any system, in particular as it relates to the burden on
personnel. He strongly recommended the development of “international mutual
accreditation networks” (p. 17) by the institutions themselves operating on “a kind of
reciprocity, interdependence and balance of interests and authority” (p. 17). Such a model
would allow the perpetuation of heterogeneity and diversification of higher education
internationally. Lenn (1996) also discussed the globalization of accreditation, framing it
in terms of multiple accreditations and trade agreements.
In conclusion, the U.S. system of accreditation has served as a model for self-
regulation in education worldwide, and variations of the practice are evident in other
countries. These international systems demonstrate many of the same benefits and are
held in similar regard by university executives. Many of the same problems and concerns
can also be found.
This section has provided an overview of the literature that has guided the
development of accreditation throughout the last century. It also considered the view of
accreditation held internationally, particularly as it relates to accreditation in the U.S. The
focus of the literature has changed over the course of time as the values and problems
associated with accreditation have changed. However, because of the monumental changes
that have taken place since the formation of the Council for Higher Education
Accreditation (CHEA) in 1996, the literature predating CHEA has become much less
relevant to the current practice of accreditation.
While there is an abundance of literature written about accreditation, one of the
greatest obstacles to the scholarship on accreditation is the lack of empirical research on
the topic (Fallon, 2012; Lillis, 2006; Selden, 1960; Troutt, 1981; Volkwein, Lattuca,
Harper, & Domingo, 2006; Volkwein, Lattuca, & Terenzini, 2008; Warner, 1977; Young,
2010). The research that has been done has been primarily qualitative, focusing on
specific institutions or agencies (El-Khawas, 1993; Peterson, 1974). Until only recently,
discussions of accreditation were not widely published and usually took the form of
conference notes. At various points throughout history the
answer to threats to accreditation as a system has been intense public outcry supporting
the concept rather than a focused effort to build a body of empirical research. This has
commonly been referred to as “circling the wagons.” Consequently there is a notable lack
of published peer-reviewed studies. This significant gap makes it difficult to ascertain
how the pursuit and attainment of accreditation affects both institutions and students.
This gap in the literature drives the current study, which seeks to address it by
providing additional, quantitative empirical research with broader external validity on the
costs associated with institutional accreditation.
Critical Assessments of Accreditation
This next section reports on critical assessments of accreditation, beginning with
the criticisms leveled against the concept. It will place those assessments within the
greater context of accountability and then explore the literature on alternatives and
amendments to accreditation practice. The concept of accreditation was conceived and
has evolved out of a critical approach to an acute need in education: As professionals
from similar institutions struggled with contemporary problems such as defining what a
college was, ensuring a minimum standard of quality in education, or facilitating the
articulation or transfer of students between institutions, they formed voluntary accrediting
agencies to develop best practices for such functions (Ewell, 2008).
As accreditation has evolved and as the demands on it have increased, criticism
has also grown. El-Khawas (2001) noted that “accrediting agencies are especially
vulnerable to the external scrutiny and the traditions of openness that are part of
American public life” (p. 119), although she acknowledged that such criticism is
inevitable in any public enterprise. Concerns about accreditation arise out of ambiguity in
the relationship between accreditation and accountability, especially as it relates to the
increasing demands on accreditation to fill that function (El-Khawas, 1998).
Criticisms of Accreditation
Accreditation has evoked emotional opposition since its inception and much has
been expressed in very colorful language. Accreditation has been accused of “[benefiting]
the small, weak, and uncertain” (Barzun, 1993, p. 60). It is a “pseudo-evaluative process,
set up to give the appearance of self-regulation without having to suffer the
inconvenience” (Scriven, 2000, p. 272). It is a “grossly unprofessional evaluation” (p.
271), and “it is scarcely surprising that in large areas of accreditation, the track record on
enforcement is a farce” (p. 272). Accreditors “[make] the accreditation process a high-
wire act for schools” (American Council of Trustees and Alumni, 2007, p. 12). The
system of accreditation is “structured in such a way as to subordinate the welfare of the
educational institution as an entity and of the general public to the interest of groups
representing limited institutional or professional concerns” (American Medical
Association, 1971, F-3). It has been stated that “accreditation standards have already
fallen to the lowest common denominator” (American Council of Trustees and Alumni,
2007, p. 16), and accreditation is responsible for the “homogenization of education” and
the “perseverance in the status quo” (Finkin, 1973, p. 369). “It is an impossible game
with artificial counters” which ignores the student (Learned & Wood, 1938, p. 69). It is
“a crazy-quilt of activities, processes and structures that is fragmented, arcane, more
historical than logical, and has outlived its usefulness” (Dickeson, 2006, p. 1). It “seeks
not only to compare apples with grapes, but both with camels and cods” (Wriston, 1960,
p. 329). “As a mechanism for the assurance of quality, the private voluntary accreditation
agencies are a failure” (Gruson, Levine, & Lustberg, 1979, p. 6). It is “to be tolerated
only as a necessary evil” (Blauch, 1959, p. 23). “While failing to protect the taxpayer and
the consumer from being ripped off by irresponsible institutions, it has also quashed
educational diversity and reform” (Finn, 1975, p. 26). At the same time (and according to
the same author) it constitutes a system of “sturdy walls and deep moats around…
academic city-states” (Carey, 2009, para. 28), and it is a “tissue-thin layer of regulation”
(Carey, 2010, p. 166). “The word ‘accreditation’ is so misunderstood and so abused that
it should be abandoned” (Kells, 1976). According to Gillen, Bennett, and Vedder (2010),
“the inmates are running the asylum” (p. i).
Longanecker (2011) asserts that “the current institutional accreditation process is
simply not up to the task” (p. 2). He goes on to say:
Our current institutional accreditation process is no longer a viable or credible
quality assurance process. In part this is because our higher education system has
outgrown its quality assurance process. But in great part it is because our
contemporary knowledge about how best to measure and assess quality has
expanded greatly, but the nature of higher education accreditation has not kept up
with these changes in the very nature of measuring quality. (p. 3)
Similarly, the American Council of Trustees and Alumni (American Council of Trustees
and Alumni, 2007; Leef & Burris, 2002) asserted that “accreditation has not served to
ensure quality, has not protected the curriculum from serious degradation, and gives
students, parents, and public decision-makers almost no useful information about
institutions of higher education. Accreditation has, however, imposed significant
monetary and non-monetary costs” (foreword). According to presidents, provosts and
program heads in the Association of Independent Colleges and Universities in
Massachusetts, “[specialized accreditation] is costly, cumbersome, and often unfair… for
institutions trying to refresh their focus on what the public needs and bolster public
confidence in higher education, the process offers little” (Dill, 1998, p. 21).
Selden (1957) famously enumerated “the six outstanding evils of accrediting
agencies” (p. 153), and in considering the work of the regional accrediting associations
Capen (1939) lamented receiving “seven devils in exchange for one” (p. 5). Elliott (1970)
declared unequivocally:
That the machinery of accreditation has outlived its usefulness, that voluntary
efforts are helpless in the face of today’s problems, that neither the society nor the
student is being protected from third-rate programs, and that this very same
accreditation machinery is now working to prevent flexibility and innovation
rather than to encourage new approaches. (p. 1)
Troutt (1978) expressed concern about how accreditation standards “do not
emerge out of empirical research” but rather “grow out of experienced educator’s [sic]
judgments as to what characteristics constitute a reputable institution” (p. 68). He pointed
out that while such a basis may not be entirely inappropriate per se, in this way
accreditation standards better resemble “criteria for club membership,” and while
“judgments of fitness for club membership may bring disappointment, they do not
threaten an entity’s survival. Judgments of fitness for survival should rest on both solid
empirical and philosophical grounds” (p. 69). Capen (1931) felt similarly that “all the
[accreditation] standards applied by these agencies are engineering standards or
organization standards or political standards” (p. 552), not standards appropriate for
measuring education.
As the federal government began to demonstrate increasing reliance on
accreditation as a means of determining eligibility for federal funds with the Veterans
Readjustment Act of 1952 and the Higher Education Facilities Act of 1963, the stakes
became larger and the tone of criticism became increasingly acerbic. In 1970 Koerner
delivered an address entitled “Who benefits from accreditation: Special interests or the
public?” as part of a national conference in which he strongly reproached the regional
accreditors. He blamed the process for stifling creativity among institutions and
preserving the status quo at the expense of educational innovation, most egregiously
while pretending to represent the public (Koerner, 1971; Newman, 1996).
Koerner’s scathing remarks marked the beginning of a period of particularly
intense criticism of higher education accreditation. Only a year later a task force was
commissioned by the Secretary of Health, Education, and Welfare to examine
accreditation and specifically the role it played in institutional eligibility for federal
funding. Chaired by Frank Newman, the committee convened twice, examining “how
well higher education was meeting the needs of society” (Department of Health,
Education, and Welfare, 1973, p. xi). The findings of the Committee echoed those of
Koerner and were critical of accreditation for its total lack of accountability to the public.
The committee called for greater federal involvement in determining institutional
eligibility even to the point of replacing accreditation as the gatekeeper for federal funds,
a conclusion not incongruous with that of many modern-day proponents of the return to
accreditation’s original purpose (Carey, 2010; Department of Health, Education, and
Welfare, 1973; Newman, 1996).
Orlans (1974) led a team on behalf of the U.S. Office of Education whose work
culminated in a summary of the widespread problems in accreditation. The exhaustive report was
distributed for commentary after which it was scaled back by half and published in 1975
despite a lack of total consensus on the content (Orlans, 1975). The final work was
unreserved and provocative, and to a certain extent the colorful language it employed
served to undermine its content and recommendations (Pfnister, 1977). Orlans (1975)
questioned the value added through accreditation when comparing accredited schools
with unaccredited or proprietary schools, maintaining that many strong institutions are
not in fact accredited and many accredited institutions are not very strong. Furthermore
Orlans was critical of the inability of accreditation to distinguish between levels of
quality (because an institution either has accreditation or it does not). He also expressed
concern about the evident unrestricted expansion of accreditation, as over the years
previously unaccreditable types of institutions had become accredited too. Orlans’ views
were similar to Newman’s before him. Both wished to see all types of postsecondary
education (not just university higher education) recognized by the government as
legitimate in their respective purposes and rendered eligible for public funding, and both
expressed support for greater competition among educational and accrediting institutions
(Department of Health, Education, and Welfare, 1973; Orlans, 1975), an idea that has
been championed very recently (American Council of Trustees and Alumni, 2007; Gillen,
Bennett, & Vedder, 2010; Leef & Burris, 2002).
Throughout all of these criticisms runs the theme of the high costs associated with
accreditation. At times the theme is overtly addressed (Orlans, 1975) whereas other times
it is treated in more nuanced fashion (Finkin, 1979; Geiger, 1970; Zook & Haggerty,
1936). Concern about whether the benefits of accreditation merit the costs of
accreditation is, however, ever present.
Eaton (2012a) points out that “critics tend to overlook the value of accreditation,
especially with regard to quality improvement, and to ignore the substantial contribution
of accreditation to the growth and development of the higher education enterprise” (p.
11). Indeed, studies conducted by accrediting organizations find general support for and
positive feelings about accreditation among university executives (Andersen, 1987; Council
for Higher Education Accreditation, 2006; Engdahl, 1981; Federation of Regional
Accrediting Commissions of Higher Education, 1970; Jackson, Davis, & Jackson, 2010;
Lee & Crow, 1998; Pigge, 1979; Puffer, 1970; Romine, 1975; Warner, 1977), and at
times dismiss negative opinions of accreditation (Romine, 1975). These studies tend to
acknowledge that there are aspects of the process that could be better implemented;
however, they demonstrate a strong consensus that the process is valuable and important.
On the other hand Finkin (1994b) found the opposite, that university executives are
among the harshest of critics and lament the ways in which accrediting agencies infringe
upon institutional freedom. This construct can also be found within the FRACHE and
CHEA studies but only as a minority view. Key in reconciling these opposing findings is
the opinion of university administrators who support accreditation but also recognize the
costs and time involved (Andersen, 1987; Ewell, 2008; Newman, 1996).
The renewal of the Higher Education Act in 1992 came during a time of
heightened government concern over increasing defaults in student loans. Again
concerned about the lack of accountability demonstrated by accreditation, Congress used
this legislation to establish a new kind of entity: the State Postsecondary Review Entity, or SPRE (Ewell,
2008). The creation of these agencies was intended to shift the review of institutions for
federal aid eligibility purposes from regional accreditors to state governments. This direct
threat to accreditation led to the dissolution of the Council on Postsecondary
Accreditation (COPA) and the proactive involvement of the higher education community
resulting in the creation of the Council for Higher Education Accreditation (CHEA). It
was the issue of cost that ultimately led to the abandonment of the SPREs when
legislation failed to provide funding for the initiative (Ewell, 2008). The governmental
concern did not dissipate, however, and in 2006 the U.S. Department of Education
released a report, known as the Spellings Commission report, which criticized accreditation for
being both ineffective and a barrier to innovation (Eaton, 2012b; Ewell, 2008).
Other concerns are evident. It is problematic when accreditation is considered a
chore to be accomplished as quickly and painlessly as possible rather than an opportunity
for genuine self-reflection for improvement, and institutional self-assessment is
ineffectual when there is faculty resistance and a lack of administrative incentive (Bardo,
2009; Commission on Regional Accrediting Commissions, n.d.; Driscoll & De Norriega,
2006; Rhodes, 2012; Smith & Finney, 2008; Wergin, 2012). One of the greatest stresses
on accreditation is the tension between assessment for the purpose of improvement and
assessment for the purpose of accountability, two concepts that operate in irresolvable
conflict with each other (American Association for Higher Education, 1997; Burke &
Associates, 2005; Chernay, 1990; Ewell, 1984; Ewell, 2008; Harvey, 2004; National
Advisory Committee on Institutional Quality and Integrity, 2012; Provezis, 2010;
Uehling, 1987b), although some argue that the two can be effectively coordinated for
significant positive results (Brittingham, 2012; El-Khawas, 2001; Jackson, Davis, &
Jackson, 2010; Walker, 2010; Westerheijden, Stensaker, & Rosa, 2007; Wolff, 1990).
Another concern involves the way that being held to external standards undermines
institutional autonomy, which is a primary source of strength in the American higher
education system (Ewell, 1984).
There have been increasing calls in recent years, particularly since the
Spellings report of 2006, to reform or altogether replace accreditation as it is currently
known (American Council of Trustees and Alumni, 2007; Gillen, Bennett, & Vedder,
2010; Neal, 2008). The American Council on Education (2012) recently convened a task
force composed of national leaders in accreditation to explore the adequacy of the current
practice of institutional accreditation. They recognized the difficulty of reaching a
consensus on many issues but nevertheless recommended strengthening and reinforcing
the role of self-regulation in improving academic excellence. They also recommended
that “a first step toward enhancing the cost-effectiveness of accreditation is to determine
more precisely what makes accreditation expensive” (p. 26); this current study seeks to
be a direct contribution toward that effort.
Within the last decade Ikenberry (2009) summed up succinctly the present
criticism of the cost associated with accreditation. With accountability being required and
coordinated at so many different levels (national, state, regional via accreditors, etc.) it is
inevitable that such an assessment process will take up considerable time and resources.
This concern has been evidenced in repeated attempts to coordinate programmatic
accreditation or even subsume it into regional, institutional accreditation (Bloland, 2001;
Ewell, 2008).
Accreditation within the Context of Accountability
There has been “a worldwide trend toward greater accountability and control of
higher education” (Michael, 2005). Concerns about accreditation have increased as the
demands on accreditation have grown beyond its original purpose and use (American
Council on Education, 2010; Carey, 2010; Eaton, 2001; Eaton, 2012b; Ewell, 1994;
National Advisory Committee on Institutional Quality and Integrity, 2012). A review of
the literature placing accreditation within the context of accountability provides a fuller
understanding of the concept as it was originally conceived.
The very use of the term “accountability” is problematic. According to Wilson
(2012) the word connotes distrust or suspicion. Assigning accountability can also
misplace attribution by implying that control or influence can be exerted when such may
not be the case. Additionally there is a real danger that the demand for accountability
could lead to standardization, quantification, and an overemphasis on that which is
measurable and a marginalization of that which is not. “Thus, it can distort rather than
enhance, constrain rather than enable” (p. 41). Ewell (2008) reinforced the specificity of
accreditation’s narrow role in accountability, pointing out that the original intention of
this self-regulation was quality assurance between professionals in the academic
community. He further emphasized that this kind of regulation has always been
characterized by trust, good will, and cooperation (Ewell, Wellman, & Paulson, 1997).
In theory accreditation is a voluntary process and institutions are not required to
seek it. Because of the role accreditation plays in defining academic legitimacy and in
establishing eligibility for federal funds, however, the process “has never been fully
voluntary” (American Council of Trustees and Alumni, 2007; Christal & Jones, 1995;
Dill, 1998; Ikenberry, 2009, p. 8; Leef & Burris, 2002; Lubinescu, Ratcliff, & Gaffney,
2001; Middaugh, 2012; Orlans, 1975; Selden, 1960). It is doubtful that so many
institutions would maintain accreditation without that association (Ewell, 2008). Thus
“nongovernmental” is a more accurate descriptor than “voluntary,” and is generally what
is intended by those who use the term voluntary to describe it (Dickey & Miller, 1972, p.
2). Because of federal reliance on the process a more appropriate description might be
quasi-governmental (Finkin, 1979; Finkin, 1994b).
In their assessment of the accountability role of accreditation, Lubinescu, Ratcliff,
and Gaffney (2001) described the review as an ongoing process undergoing constant
adaptation to changing institutional conditions. The actual attainment of accredited status
should not be the definitive goal. The self-study in particular plays a fundamental role in
the assessment of an institution (Chernay, 1990; Council of Regional Accrediting
Commissions, n.d.; Zook & Haggerty, 1936). The wide diversity of institutions across the
U.S. makes the self-study the most effective metric against which to measure the
performance of an institution because it is impossible to identify a common applicable
quantitative standard. Eaton (2003) voiced great fear on behalf of institutions and
accreditors alike that high-stakes testing in the style of the No Child Left Behind Act
might be forced on higher education,
effectively undermining institutional diversity by enforcing a common standard of
accountability.
There is evidence that colleges are using the self-study as it was intended. Lillis
(2006) found in a qualitative review of several institutions in Ireland that the intention
behind engaging in the self-study was identifying opportunities for improvement and that
it therefore does enhance quality. She concluded that the self-study is most effective
when improvement, rather than the acquisition of accreditation, is the genuine
motivation.
The quality review aspect of accreditation involves two facets: institutional
quality improvement which is a private function, and quality assurance which is a public
function (Brittingham, 2008). Brittingham recognized that accreditation has been
successful in providing the environment in which higher education can flourish and in
improving the quality of institutions individually. She also maintained that as a measure
of accountability and in its role of providing quality assurance more broadly it has been
far less successful and must be more effective in the future. Wolff (2005) stated that
“accreditation as an agent of accountability has shifted significantly from one of process
(that is, everyone goes through a review) to one of outcomes, with an increasing focus on
student success and learning” (p. 102).
Accreditation is therefore one of many tools to be used in providing
accountability and not a complete form of accountability in and of itself. It is this key
point that underlies Eaton’s (2007) warning against the way the federal government is
currently questioning accreditation. She called for a new relationship between the federal
government, accrediting agencies, and institutions, where all three “increase their
investment in accountability” (p. 18). This would mean that the government would defer
to institutions the oversight of quality, to accrediting agencies the verification of that
oversight, and that both institutions and accrediting agencies would take more seriously the
demand for increased accountability. She further points out that this new relationship
would actually be a return to the way the three entities functioned in the past. The author
also agrees that accreditation must be more accountable to the public in an era where the
public questions the increasing cost of higher education both in terms of time and money.
Accordingly, the role that accreditation plays for other entities (e.g., the public, the
federal government, the state government) should be very precise.
when other constituencies, whether individuals (such as students) or institutions (such as
governments), interpret that specific role more broadly than was intended at its inception.
It is this reinterpretation that has caused the re-imagining of the accreditation concept.
Accreditation is not meant to answer the call for accountability all by itself but rather to
assist in answering that call (Ketcheson, 2001). “Is accreditation accountable? Yes, it is.
However, what it means to be accountable is often in the eye of the beholder” (Eaton,
2003b, p. 19).
Alternatives and Amendments to Accreditation
Throughout the century as concern has been expressed about accreditation, two
kinds of approaches have been explored to improve current practice: alternatives to the
system, and amendments to the way it is presently being conducted.
Many of the proposed alternatives have been characterized either by increased
involvement on the part of the state government (such as the State Postsecondary Review
Entities) or the federal government (Department of Health, Education, and Welfare,
1973). Orlans (1975) wished to see accreditation efforts focused less on the establishment
of reputable institutions and more on promoting access to postsecondary education in
whatever form it would most benefit various groups of students. He called for the
establishment at the national level of a Committee for Identifying Useful Postsecondary
Schools where the focus would shift to any kind of postsecondary training including
technical education and study at proprietary schools. He also wished to see greater
competition among accreditors, feeling that such competition could only benefit the field
of education. This latter idea had been promoted as early as 1914 by Samuel Capen of the
U.S. Bureau of Education (Newman, 1996) and it remains current today. Trivett (1976),
relying heavily on the work of Orlans, also argued that the federal government should
play more of a role in establishing eligibility for federal funds.
Harcleroad (1976; 1980), in a much more conservative approach to change,
considered six possible futures: the present system unchanged, the present system
modified, three alternatives involving expanded responsibility for state agencies, and one
involving expanded responsibility by the federal government. He believed modifications
to the current system (for instance increasing the staff of regional accreditors) along with
stronger state oversight were the most likely to be achieved.
Concern about accreditation as a system of accountability stems directly from the
way such a system reflects an inherent lack of trust (Finkin, 1978; Trow, 1996).
In this view, accreditation stifles rather than encourages intellectual development, and as
long as it remains the primary mechanism used for accountability in higher education any
other possible mechanism will be neglected.
A frequently referenced alternative involves the use of academic audits. The
academic audit is an internal review not subject to imposed external motivations, and its
proponents consider it a more effective method for driving both external accountability and
internal quality. The
concept of the academic audit is based on the principles governing financial audits
(Harcleroad, 1976; Harcleroad & Dickey, 1975; Hardin & Stocks, 1995), although
admittedly “the educational audit of a college or university constitutes a much broader
review than the traditional financial audit” (Troutt, 1978, p. 100).
Graham, Lyman, and Trow (1995) argued that the current practice of
accreditation intrudes on institutional autonomy without producing real improvement in
academic programs. In building a case for the use of academic audits they explained that
the use of external audits of internal reviews would avoid pitting internal accountability
against external accountability. Their work came out of the vacuum created by the
dissolution of the Council on Postsecondary Accreditation (COPA) when the future of
accreditation was considered to be tenuous and the time seemed ideal to create a new
system for quality assurance. On the other hand, they acknowledged only one of the
weaknesses accompanying a system of audits by name ("epistemic drift," Graham,
Lyman, & Trow, 1995, p. 20) and did not address such weaknesses in depth. Their
ideas were so important and the audit concept so popular, however, that they created
an oft-cited base for future criticisms and explorations. During the early years of the
Council for Higher Education Accreditation (CHEA) some members strove to
incorporate those ideas into accreditation practice (Bloland, 2001). Later work continued
to explore the strengths and weaknesses of auditing in greater depth (Burke & Associates,
2005; Dill, Massy, Williams, & Cook, 1996; Western Association of Schools and
Colleges, 1998; Wolff, 2005). Even very recently audits have been a part of discussions
on the future of accreditation (Bernhard, 2011; Ewell, 2012; Ikenberry, 2009).
Amaral (1998) was opposed to the use of audits, questioning whether they could
be effective at evaluating quality. He cited his experience with European institutions as
evidence that audits might not be as effective as had been suggested. The Council for
Higher Education Accreditation (Ewell, 2001) echoed Amaral, maintaining that the
primary focus of audits “is not learning outcomes per se, but rather the adequacy of the
processes that the institution employs to assure the academic integrity of its credentials”
(p. 16), a description arguably very similar to the goal of accreditation.
The most firmly established alternatives to traditional accreditation are programs
developed by regional accreditors as enhancements to accreditation as presently
conducted; they are the most firmly established largely because they are actually being
implemented. The Higher Learning Commission, one of two independent commissions of
the North Central Association of Colleges and Schools (NCA), developed a program as
an alternative institutional assessment for already accredited institutions seeking to
maintain accredited status: the Academic Quality Improvement Program or AQIP (as
opposed to the NCA's more traditional accreditation mechanism, the Program to Evaluate
and Advance Quality or PEAQ). The process is designed to
collect and return data on institutional processes in continuous fashion to guide
improvement efforts (Spangehl, 2012). Because this program is relatively new, little
empirical research has been conducted on it; the vast majority of what exists takes the
form of graduate dissertations. Some important criticisms have surfaced. Wellman (2000)
was concerned about the erosion of accreditors’ already weak capacity in both setting and
enforcing minimum standards, and wondered whether the new focus on teaching would
dilute the quality of higher education. Edler (2004) considered AQIP to be a diluted
version of Total Quality Management.
The Southern Association of Colleges and Schools is another example of a
regional accreditor that has developed a model for enhancement, the Quality
Enhancement Plan or QEP (Jackson, Davis, & Jackson, 2010; Southern Association of
Colleges and Schools, 2007). The QEP was designed to enhance the institutional
assessment by guiding the institution through a study focusing on a single specific aspect
of the enhancement of student learning.
The Western Association of Schools and Colleges wished to effectuate a complete
transformation of the accreditation process from a regulatory assessment that occurred
only once every 10 years to one that provided multiple points of feedback on a shorter
term (Smith & Finney, 2008). Reasoning that a purely regulatory process cannot adapt to
the future, WASC amended the accreditation process with the intention of
allowing institutions of higher learning to become more adaptive learning institutions.
This was motivated by concerns about reports that were calling on institutions to spend
more money without connecting those recommendations to improved student learning
outcomes. By dividing the review process into separate stages occurring at different times
in order to focus separately on institutional capacity and educational effectiveness,
WASC hoped to change the attitude of institutions from one of grudgingly meeting
minimum standards to one of enthusiastically creating positive institutional change
(Ewell, 2008; Smith & Finney, 2008). Although only recently implemented by a regional
accreditor, this concept has been around for many years (Uehling, 1987a). This regional
accreditor has also made other important efforts to improve accreditation by moving
“accreditation from its current reliance on assertion and description toward a reliance on
demonstration and performance” (Accrediting Commission for Senior Colleges and
Universities Western Association of Schools and Colleges, 2002, p. 6) through a greater
use of documented and demonstrable evidence. Current practices make each accreditation
activity a “variation of a high-stakes activity” (Crow, 2009, p. 91) which serves as an
impediment to innovation and change (Crow, 2009; Western Association of Schools and
Colleges, 1998). By way of contrast WASC has recognized that regional accrediting
agencies ultimately need to create more “low-stakes, high-return opportunities for
interaction with their colleges and universities” (Crow, 2009, p. 90).
In light of the inevitable question of cost, it is notable that the most successful
attempts at changing accreditation to date are not new systems of quality review but
amendments to current processes, which could minimize new costs. Finkin (1994a)
asserts that “no completely satisfactory solution to the eligibility problem exists” (p. 148).
On one hand, accreditation has become so embedded in the culture of higher education in
the U.S. that even its harshest critics have been hard pressed to identify or invent a
suitable alternative (American Council on Education, 2012; Brittingham, 2008; Crow,
2009; Ewell, 2008; Gillen, Bennett, & Vedder, 2010; Orlans, 1975; Trivett, 1976; Young,
1983). On the other hand there is a strong consensus that, despite its weaknesses,
accreditation is a better system than any that external regulation would impose on higher
education in its absence, and that it is the best system that can be had for the complexity
involved with higher education in the U.S. (Ewell, 2008; Hawkins, 1992; Orlans, 1975).
In 2012 the National Advisory Committee on Institutional Quality and Integrity
(NACIQI) made policy recommendations to the Secretary of Education on the role of
accreditation and other related matters in preparation for the reauthorization of the Higher
Education Act. The document provided a succinct overview of the historical development
of the role accreditation plays with respect to determining eligibility for federal funding
as well as the disadvantages of that arrangement. In its original draft (National Advisory
Committee on Institutional Quality and Integrity, 2011) the document proposed three
primary options to consider for accreditation going forward: retaining the role
accreditation currently plays as is, separating accreditation from federal aid eligibility,
and modifying the relationship between the two. In the final report however (National
Advisory Committee on Institutional Quality and Integrity, 2012) the committee
recommended the retention of “accreditation in the institutional eligibility process” (p. 2).
The document elaborated on 25 important observations and options to consider in light of
this recommendation. These options were grouped into considerations concerning the
role and scope of the three primary actors in educational quality assurance (the federal
government, state governments, and accrediting agencies), the relationships among
the three actors, the use of data in quality assurance, and the role that NACIQI should
play. Most striking among these options was one calling “for a system of accreditation
that is aligned more closely with mission or sector or other educationally relevant
variable, than with geography” (p. 5), and one reminiscent of the recommendations of
Orlans (1975) affording institutions “greater opportunity to choose among accreditors”
(p. 5). The document concluded by recommending a continued role for NACIQI in
oversight. In immediate response several organizations (e.g., the American Council on
Education, the American Association of State Colleges and Universities) wrote to
NACIQI in defense of accreditation, asserting that, while imperfect, the accreditation
system is generally functioning as it should. They further maintained that accreditation is
continually evolving to respond to the demands being placed upon it.
Despite outspoken voices against it, there is general consensus that accreditation
is critical to academe and accomplishes its goal of institutional quality assurance
(Bloland, 2001). To dissenters, Banta and Associates (2002) offer a powerful refutation:
Though accrediting standards may seem so generic as to allow all sorts of wiggle
room, though self-studies and visiting teams vary in quality and the rhythm of
fifth-year reports, and decennial reaffirmation may seem so sluggish as to be
utterly ineffectual, the influence of accreditation on campus assessment has been
powerful over the last ten or fifteen years, like the flow of a glacier. Glaciers do
move, albeit imperceptibly, and in their path they transport boulders, scour
valleys, and carve new river beds. Related to that epic pace, there is the constancy
of accreditation. While administrators, legislators, or state boards may become
distracted by other issues, accreditation keeps coming back, even if it does take
five or ten years to do so. (p. 253)
As the cost of accreditation has increasingly been cited as a preeminent concern, it
becomes ever more important to attain a sound understanding of that cost. The research
conducted here is intended to provide an understanding specifically as it relates to various
types of institutions. Many of the criticisms cited in this section lack specificity when
expressing concern about the rising costs associated with accreditation. Consequently it is
difficult to articulate the opportunity costs associated with the role accreditation is
currently playing in the context of accountability to the many constituencies that are
relying upon it. A better understanding of cost will also greatly enhance further
discussion of alternatives or amendments to accreditation because it will provide a critical
framework in which to consider them.
Effects of Accreditation
Next this review will consider the effects of accreditation as demonstrated by
studies conducted specifically on how accreditation and student assessment are related,
the outcomes associated with specialized accreditation, and the way accreditation affects
organizations as a whole. As noted there is a significant gap in the literature on
accreditation particularly as it relates to empirical research. The most substantial body of
literature on accreditation deals with the effect that it has on the assessment of student
learning, “the only sound basis for ascertaining the quality of a degree” (Ewell, 2008, p.
128). The federal government, state governments, and the private sector all rely on
accreditation to ensure that academic quality is high (Chernay, 1990; Council for Higher
Education Accreditation, 2010; Eaton, 2012b; Ewell, 2008); however, even though it has
been part of higher education in the U.S. for over a century, accreditation has only been
actively involved with assessment since the 1990s (Baker, 2002; Beno, 2004;
Brittingham, 2009; Ewell, 2008; Smith & Finney, 2008; Stensaker & Harvey, 2006).
Programmatic accreditation has been a critical part of higher education since the
extraordinary success of the Flexner Report (Flexner, 1910). The increasing institutional
cost associated with the requirements of a growing number of accreditation reviews has
been problematic for universities ever since (Blauch, 1959; Ewell, 2008). Accreditation
also has real effects organization-wide, and campus executives must be aware of these in
considering institutional involvement or non-involvement with various accreditations.
The studies reviewed here serve as an empirical base for the study described in chapter
three.
Accreditation and Student Assessment
Focusing on student assessment was initially not considered to be a pragmatic
purpose for accreditation. Zook and Haggerty (1936) commented in their seminal study
that the assessment of student outcomes as part of accreditation is "scarcely…
feasible" (p. 118). Over three decades later Astin (1968) and Huffman and Harris (1981)
concluded that having accreditation did not contribute to student outcomes. They found
that traditional indices of accreditation (e.g., the quality of facilities, faculty, etc.) “have
little or no measurable impact on student intellectual outcomes” (Huffman & Harris,
1981, p. 28). Given that accreditation is supposed to be a quality assurance mechanism,
this finding called into question the very purpose it was meant to serve. Regional accrediting
associations have come to acknowledge this issue, and this has contributed to a shift in
priority. Ruppert (1994) conducted case studies in 10 states to consider the relationships
between various systems of accountability and respective performance indicators. He
concluded that because of the way accreditation now evaluates student learning it can be
considered a partner in assessment (Ruppert, 1994). Banta and Associates (2002)
concluded that the assessment movement re-legitimized accreditation.
Since this shift in focus there is evidence that the relationship between
accreditation and student outcomes has changed significantly. Kuh and Ikenberry (2009)
surveyed chief executives at all regionally accredited institutions granting undergraduate
degrees and found student assessment to be driven more by accreditation than by external
pressures (such as the demand for accountability). They further suggested that
accreditation is a primary driver of student outcomes assessment activity and therefore
institutional quality. The authors acknowledged an important selection bias in working
with data that were provided by institutions already more engaged in assessing student
learning; however, the findings are encouraging.
The relationship between accreditation and assessment has always been strained
because of the tension between assessment for the sake of improvement and assessment
for the sake of accountability (Crow, 2009; Ewell, 2009; Ewell, Wellman, & Paulson,
1997; National Advisory Committee on Institutional Quality and Integrity, 2012;
Provezis, 2010). When much is at stake because assessment is conducted for the purposes
of accountability, identifying opportunities for improvement can be perceived as risky.
Accreditation and other external demands for accountability have helped to fuel the
interest in and the demand for assessment (Eaton, 2012b); however, to be an effective tool
assessment must be removed from those external demands (Wolff, 2005). Lillis (2006)
found this to be the most effective approach in a case study of three self-study programs
spanning an eight-year period. One of the primary downsides of the process discovered at
these institutions was the high cost in terms of overhead and time involved.
In a comprehensive case study of a single institution in California, Driscoll and De
Noriega (2006) identified similar positive results from decoupling assessment from
external demands for accountability, but also described the significant investment made by
institutional members campus-wide.
The Council for Higher Education Accreditation (Council for Higher Education
Accreditation, 2003; Eaton, 2008) maintained that accreditation has aggressively tackled
the need for assessment evaluation. Accrediting agencies take advantage of the great
autonomy inherent in the system of American higher education to encourage individual
institutional efforts to determine appropriate assessment measures (Hunt, 1990). Over
time, these accrediting agencies have amended their evaluation processes and standards
to make the focus on student learning outcomes a central aspect of the accreditation
review process, thereby improving student learning through accreditation (Bers, 2008;
Council of Regional Accrediting Commissions, 2003; Council of Regional Accrediting
Commissions, n.d.; Crow, 2009; Haviland, 2009; Ryan, 2005). Through a review of the
standards of each regional accreditor and interviews with representatives from those
agencies, Provezis (2010) found that the regional accrediting agencies now expect the
explicit definition and assessment of learning outcomes through multiple measures, and
the use of these tools for institutional improvement.
When done as intended by accreditors, accreditation guides colleges and
universities in linking academic content to institutional mission (Denoya, 2005). Peterson
and Augustine (2000) found institutional dynamics and the region of accreditation to be
primary influences on the ways in which institutions approached student assessment.
They observed that the North Central Association, the Southern Association, and the
Middle States Association are frequently cited as leaders in terms of student assessment,
but they also recognized that these associations have been doing this kind of assessment
the longest and therefore might reasonably be expected to lead in this fashion.
Cabrera, Colbeck, and Terenzini (2001) investigated classroom practices and their
relationship with the gains in professional competencies developed by students. In their
study involving 1,250 students from seven universities they found that accrediting
agencies may be encouraging more effective instructional techniques on the part of faculty
through these expectations. In a remarkable collection of case studies of both
institutions and regional accrediting agencies, Banta and Associates (2004) provided
several examples of ways that specific institutions are meeting accreditation’s demands to
include assessment in their activities. In five case studies, Jung (1986) linked the direct
involvement of accrediting agencies to improvements in higher education. In a study of
top administrators on a mid-sized campus, Underwood (1991) identified the accreditation
of specific programs as an important assessment activity in and of itself.
Studies by Volkwein, Lattuca, Harper, and Domingo (2006) and by Volkwein,
Lattuca, and Terenzini (2008) measured changes in student outcomes in engineering
programs following the implementation by the Accreditation Board for Engineering and
Technology (ABET) of a new accreditation focus. They found evidence connecting the
changes to improvements in undergraduate education. They also found accreditation to be
a primary driver in engaging faculty and in improving student outcomes leading to the
conclusion that accreditation has played an important role in engineering education. Both
sets of authors qualified their findings by stating that other influences outside of
accreditation have also contributed to the resultant changes. They also suggested that
the findings may be generalizable to other programs and to institutions more broadly.
Indeed Brittingham (2012) recognized quantitative disciplines with programmatic
accreditation such as engineering as leading the way in the assessment of student
learning.
Acquiring accreditation requires faculty involvement, and Barak and Breier
(1990) illustrated the importance of tying accreditation to program review to make it
valuable. The link between accreditation and the assessment of student learning is
strongest when the process relies on faculty. Driscoll and De Noriega (2006) also
illustrated how important faculty commitment is to the entire process, and the institution
they studied reinforced that importance by linking assessment with accreditation as an act
of scholarship. In sum, accreditation has helped to increase assessment activity at more
institutions (Banta & Associates, 1993).
Specialized Accreditation
Other than the specificity of the endorsement that specialized accreditation
provides, the defining characteristic of programmatic accreditation is that it is granted
and monitored by national organizations rather than by geographic region
(Adelman & Silver, 1990; Eaton, 2009; Hagerty & Stark, 1989). Consequently there is a
distinct tension between the communal demands placed on an institution by the pursuit of
institutional accreditation and the specific, often overlapping demands placed on an
institution by the specialized accreditation of individual programs (Bloland, 2001). As the
work published by the Global University Network for Innovation (2007) noted, “for
institutional accreditation to be effective, it cannot ignore academic programmes, just as
programmatic accreditation cannot ignore whether the broader institutional environment
is meeting its objectives. Both are complementary” (p. 10). In theory it can be cost
effective to overlap accreditation efforts where possible (see for example good practices
suggested by the Western Association of Schools and Colleges, 2009; Shibley &
Volkwein, 2002), but coordinating institutional assessment with any number of
programmatic accreditation reviews can become complicated.
Specialized program accreditations are a distinctly important aspect of
institutional quality assurance, often because variations between individual programs
within a single institution can be greater than variations between entire institutions. The
strength of program accreditation is that its review can be much more focused because it
is carried out by colleagues from peer institutions who are specialists in specific
disciplines (Ratcliff, 1996).
Research on program accreditation suffers from the same lack of volume and
rigor as research on institutional accreditation. Some studies link individual program
accreditation to stronger faculty instruction (Cabrera, Colbeck, & Terenzini, 2001;
Daoust, Wehmeyer, & Eubank, 2006), while others find that program accreditation does
not emphasize student outcomes explicitly (Hagerty & Stark, 1989). Developing a fuller
body of empirical research on specialized accreditation will be important because the
effects of program accreditation are significant: Program accreditation defines the
parameters of professional education (Ewell, Wellman, & Paulson, 1997; Hagerty &
Stark, 1989) and therefore national professional standards (Bardo, 2009; see for example
American Accounting Association, 1977; Floden, 1980; Raessler, 1970).
Organizational Effects of Accreditation
It is evident that accreditation has an effect on educational institutions (Asgill,
1976; Banta & Associates, 2004; Barak & Breier, 1990; Bardo, 2009; Casile & Davis-
Blake, 2002; Dillon, 1997; Driscoll & De Noriega, 2006; Graffin & Ward, 2010; Kis,
2005; Kuh & Ikenberry, 2009; Larsen & Vincent-Lancrin, 2002; Provezis, 2010; Rusch
& Wilber, 2007; Wiley & Zald, 1968; Young, 2010). Accreditation is an important
mechanism by which the social control of institutions of higher education is necessarily
guided (Zook & Haggerty, 1936) and by which academic standards are controlled
(Selden, 1960). Accreditation itself is often the explicit institutional goal, but the process of
acquiring it can have other positive effects. For example, El-Khawas (1998) found that
when accreditation is tied to performance funding, achieving it may be the stated goal but
better student assessment measures can result.
Driscoll and De Noriega (2006) and Lillis (2006) found independently that the self-
study is most effective when the forces motivating its genesis coincide with a genuine
institutional desire to improve. El-Khawas (2000) similarly found that accreditation was
not the actual impetus for change at 30 institutions ranging in size from 1,000 to 32,000
students across 20 states and five accrediting agencies. The author studied external
pressures and internal processes shaping change at those institutions, although she
acknowledged a selection bias inherent in the self-selection by these generally successful
campuses electing to participate in a competition celebrating change.
In a study published on the internet and with potentially limited external validity
(focusing on agricultural programs in Australia), Dillon (1997) investigated the
credentialing of professional development activities for teachers. This unique
investigation into the role of accreditation in making credentialing programs attractive to
professionals discovered that a lack of accreditation had a strongly limiting effect on
institutions. This finding is potentially important if it can be replicated on a broader scale.
Similarly, Asgill (1976) found that regional accreditation was “highly prized by chief
administrators” (p. 289) regardless of the type of administrator or institution, a finding
later echoed by Graffin and Ward (2010).
Casile and Davis-Blake (2002) explored how a change in accreditation standards
would affect organizational responsiveness. They found that “schools were more likely to
seek accreditation if they had been exposed to the benefits and criteria for accreditation
via membership in the [American Assembly of Collegiate Schools of Business]" (p. 191);
however, they also found this responsiveness to be related to the competitiveness of the
environment in which those schools were operating.
Kells and Parrish (1979) studied the effect of multiple accreditation relationships
on institutions. They sampled purposively from all six regions to have a wide
representation of geography, diversity of state population density, and institution type. As
part of their study they also conducted a longitudinal analysis of institutions in the Middle
States region from 1970 to 1978. They had difficulty relating their data “in a meaningful
way to the perceived levels of institutional distress about duplication of effort, cost, and
the other complaints about multiple accreditation relationships" (p. 15); however, they did
find strong correlations between the number of accrediting relationships an institution
held and both institutional size and the advanced degrees offered. In the longitudinal study
they saw the number of accrediting relationships increase from 2.45 to 2.96 between 1970
and 1978, but found that this increase applied primarily to institutions offering more
advanced degrees. They also found
specific specialized accreditors to be consistently dominant across all institutions
considered. The authors did not consider the cost of accreditation specifically but rather
the stress on an institution. This study is a good model for future research. In a follow-up
study (Kells & Parrish, 1986) the authors used the same methodology to find that
accreditation activity was growing rapidly, having increased by one-fourth in just seven
years.
Finally, the process of acquiring accreditation can also have a profound effect on
individuals working within an institution, shaping the perception of those individuals
(Procopio, 2010), forcing changes in institutional culture (Wiedman, 1992), and
generally exerting social control on participants (Wiley & Zald, 1968).
In reviewing the body of empirical studies on the effects of accreditation it is
evident that the cost associated with accreditation review is a consistent, although at
times implicit, theme running through this literature. Studies on how accreditation affects
student assessment, the effects of specialized accreditation, and the organizational effects
of accreditation have been frequently compelled to consider costs in various ways. The
final section of this review will consider the studies that deal specifically and directly
with the costs of accreditation.
Costs of Accreditation
Curiously, despite the constant presence of the concern over the costs of
accreditation, research specifically on the costs of accreditation is largely absent from the
literature (Shibley & Volkwein, 2002). Confounding the issue of cost is the difficulty of
disentangling the monetary costs of accreditation from the non-monetary but more costly
commitment of time. Reidlinger and Prager (1993) suggested two reasons that "rigorous cost-based
analyses of accreditation” (p. 39) have not been pursued. The first reason is the traditional
belief that voluntary accreditation is far preferable to governmental control and the
ensuing assumption that accreditation at any price is worth the cost. The second reason is
the “methodological difficulty of relating accreditation’s perceived benefits to real dollar
costs” (p. 39); essentially, “everyone is counting by different rules” (p. 42). This section
will demonstrate both the paucity of literature on the costs of accreditation and the
difficulties of evaluating those costs.
The Council for Higher Education Accreditation (CHEA) has published an
almanac every two years since 1997 which provides an overview of all accreditation as
practiced in the U.S. The almanac is a valuable reference tool giving cost data as they
relate to the number of volunteers, the number of employees, and unit operating budgets,
but these data are aggregate and describe the operating budgets of the regional accreditors
rather than those of individual colleges and universities, thereby giving little insight into
the cost of accreditation to institutions.
As part of a comprehensive self-study in 1998, the North Central Association of
Colleges and Schools explored the perception of accreditation costs (Lee & Crow, 1998).
The self-study found that the majority (53%) of respondents considered the benefits of
accreditation to outweigh the costs and a third (33%) of respondents considered benefits
and costs to be equal. Only 13% rated the costs to be greater than the benefits. More
significantly the study found these responses to vary somewhat by institutional type, most
notably with research and doctoral institutions being less inclined to indicate that benefits
outweighed costs and responding less positively about the effectiveness and benefits of
accreditation generally. The study suggested that these institutions might already have
processes in place internally to serve the purposes intended by accreditation, in which
case an audit system might be an appropriate alternative to traditional accreditation,
particularly with well-established institutions. Similarly Warner (1977) and Pigge (1979)
each authored a study (for the Western Association of Schools and Colleges and for the
Council on Postsecondary Accreditation, respectively) in which respondents
acknowledged the cost as a significant burden of accreditation although they did not find
it to be excessive. The majority of respondents indicated that the benefits of accreditation
exceeded its costs. The Warner (1977) study also considered how accreditation affects
budget allocations, finding that about a third of responding institutions had changed
allocations based on accreditation results, although the study did not explore in detail
how.
Wood (2006) drew upon her experience with accreditation to build a model
describing three stages of preparation for the process of accreditation. Her assessment of
the costs of accreditation included the release time necessary for the various coordinators
of the accreditation review as well as the monetary costs of training, staff support,
materials, and the actual site visit of the accreditation team. Willis (1994) explored many
of these same costs, distinguishing between direct costs (including accreditor fees,
operating expenses specifically pertaining to the accreditation process, direct payments to
individuals involved, self-study costs, travel costs, and site visit costs) and indirect costs.
He identified indirect costs as “probably many times greater than the direct costs due
mainly to the personnel time required at the institution” (p. 40). He also cautioned against
underestimating these costs: Individuals with accreditation responsibilities are not
performing other tasks that must inevitably be assigned to someone else. In his survey of
accreditation issues, Andersen (1987) similarly found a strong consensus that
accreditation took too much time and that the dollar cost was too high.
Kennedy, Moore, and Thibadoux (1985) went a step further in an attempt to
establish a methodology for cost determination. They covered principally the span of
time beginning with the initial planning of the self-study through the presentation of the
study, a period of approximately 15 months. The team also recognized the opportunity
costs in terms of time spent by all significant campus parties as well as the cash outlays,
and assigned a median salary to time spent on accreditation in order to monetize the
total cost. Time invested was calculated through the distribution and collection of time
logs for which they had a high return rate (79% for fully completed logs, 93% total).
They found that the time spent by faculty and administrative staff accounted for a striking
94% of the total cost of the accreditation review, over two-thirds of which time was
attributable to the administrative staff, demonstrating that the time invested in
accreditation is easily the most significant cost involved. Finally the team concluded that
the cost was not excessive, especially in light of the fact that the total cost could
effectively be spread out over the seven-year span following the review until the next
self-study had to be prepared.
Doerr (1983) examined actual costs. He used the context of a case study to explore
whether the cost of accreditation merited the benefit received from it, given the
pressure from university executives to acquire additional programmatic accreditations.
He examined both the financial costs and the opportunity costs of the institutional
accreditation granted by SACS and four programmatic accreditations maintained by the
University of West Florida in a single year (1981-1982) by assigning an average wage
per hour to faculty and secretarial work and adding material supplies. He estimated a total
cost of $50,030.71 for those reviews alone, and considered inevitable additional costs for
subsequent years, specifically those associated with maintaining membership in
accrediting organizations and those associated with preparing for additional specific
programmatic reviews. He concluded by considering the total fiscal value of the
opportunity costs, considering ways this money might otherwise have been spent.
A study conducted by Kells and Kirkwood (1979) in the Middle States region
specifically on the costs of the self-study found that in terms of direct costs, the self-study
did not involve significant expense. Almost half of the respondents spent under $5,000 on
the self-study. The study also found a practical, evident upper limit of between 100 and
125 people directly involved in the self-study with a greater proportion of faculty (41-
50%) than of staff (21-30%), and relatively few students. The authors felt that
institutional size was possibly a significant factor as size seemed to dictate the number
and constitution (in terms of faculty) of committees and the cost of the self-study process.
In a case study of a public institution in the Middle States region with multiple
accrediting relationships, Shibley and Volkwein (2002) compared the costs and benefits
of re-accreditation in 2000 with the costs of re-accreditation in 1991. Simultaneously they
evaluated the benefits of a joint accreditation effort (combining institutional review with
programmatic review). As Willis (1994) had suggested and consistent with the studies
cited previously, they found that the “true sense of burden arose from the time
contributed to completing the self-study process rather than from finding the financial
resources to support self-study needs” (Shibley & Volkwein, 2002, p. 8). They found the
costs of the later re-accreditation to be lower. The separate accreditation processes had
more benefits for individuals than the joint effort; however, the joint process was less
costly and the sense of burden was reduced.
Several other studies looked into the expense of accreditation relative to the
benefits. Bitter, Stryker, and Jens (1999) and Kren, Tatum, and Phillips (1993) conducted
studies concerning specialized accreditation for accounting, and both found that non-
accredited programs believed that the costs of accreditation were too high and
outweighed the benefits. Bitter, Stryker, and Jens (1999) additionally noted that
respondents from non-accredited programs held this belief despite feeling that their
accounting faculty valued accreditation and thought that accreditation would improve the
reputation of the program. Britt and Aaron (2008) surveyed radiologic programs without
specialized accreditation and found the expense of accreditation to be the predominant
reason behind the decision not to acquire it. The required time commitment was also
frequently cited. Respondents for many of these programs noted that
decreasing the expense would encourage them to seek accreditation in the future. The
study also found that several programs followed programmatic accreditation standards
and guidelines despite not being formally accredited, although the authors noted that
without the verification of accreditation it was not possible to determine how closely the
programs were adhering to those standards.
The Florida State Postsecondary Education Planning Commission (1995) found
that definitions of cost varied widely among respondents to its inquiry, with only some
including monetary assignments for the indirect costs of time committed. Cost was frequently listed as a
primary concern in terms of resources, time, and energy spent, and was the primary
reason for not seeking accreditation by those institutions lacking it. The study classified
benefits into three groups: benefits to students, benefits to the department, and benefits to
the institution. The Commission recommended balancing the direct and indirect costs of
accreditation against potential derived benefits to each group before deciding whether to
pursue it. Schermerhorn, Reisch, and Griffith (1980) found the personnel time committed
to preparing for accreditation to be among the most significant
shortcomings of the process.
The cost of accreditation to institutions is significant but is more exacting in terms
of time than money. A review of these studies focusing specifically on the costs of
accreditation demonstrates the lack of research on the costs of accreditation in general
and on the costs of institutional accreditation specifically, as well as the lack of
quantitative research providing broad external validity. This gap in the literature
motivates the present study.
Conclusion
This chapter has provided an overview of the literature beginning with a review of
the literature on accreditation as practiced in the U.S. and as it has developed over the
course of history, as well as the view held of accreditation internationally. It then
considered the critical assessments that have been made and placed accreditation within
the broader context of accountability, after which it explored alternatives to the current
process. Next it reviewed studies considering the effects of accreditation on student
assessment, the effects of specialized accreditation, and the organizational effects of
accreditation. The review concluded with a survey of the literature on the cost of
accreditation specifically. As has been noted, the issue of cost runs as a common thread
throughout all of it.
Crow (2009) speculated that, because accreditation is so embedded in the culture
of American higher education, it will continue for quite some time. He viewed this as an
invitation to accreditation to rise to contemporary challenges. Without addressing the
issue of cost, accreditation will be hard pressed to do so. Ikenberry (2009) pointed out
that even today “knowledge of and support for accreditation remains a mile wide but an
inch deep” (p. 4). It is the purpose of this study to assist in deepening that knowledge.
CHAPTER THREE: METHODOLOGY
Accreditation emerged naturally at the turn of the twentieth century as a logical
part of the evolution of higher education in the U.S. As it became important to formally
define what constituted a college, multiple groups of educators within the various regions
of the country joined together to form voluntary associations to address common
concerns. Quickly they adopted the role of institutional quality assurance and over the
course of the next half century assumed a comparable vocabulary of accreditation. The
self-regulatory nature of these groups enabled them to operate largely without
government intervention.
Two factors contributed to a rapid escalation in the importance of accreditation.
First, accreditation promptly proved to be an effective quality assurance mechanism (both
for institutions and for individual professional disciplines) leading to an increased interest
in the practice and a proliferation of accrediting associations at both the institutional and
programmatic levels. Second, in the mid-1900s federal legislation significantly increased
the amount of funding directed to higher education and almost
simultaneously tied that funding to accreditation as an already-established and therefore
cost-effective means of quality verification for the government. This arrangement was
cost effective for the government, however, only because those costs were borne by the
institutions, and as the stakes rapidly rose the institutional costs associated with
accreditation climbed as well.
The purpose of this study was to investigate the institutional costs of
accreditation. This study examined the following research questions:
• What costs are associated with institutional accreditation and how do those costs
vary between and among types of institution?
• How is financial commitment toward institutional accreditation manifested, i.e.,
what are the perceived direct and indirect costs of institutional accreditation?
• Do primary Accreditation Liaison Officers believe that the perceived benefits
associated with institutional accreditation justify the institutional costs?
• What kinds of patterns emerge in accreditation commitment between types of
institutions?
Much of the literature on the costs of accreditation also considers the benefits that
result from the practice. This study focused on accreditation costs although some
discussion of benefits was inevitable. Studies on the costs of accreditation are
methodologically challenging for several reasons, primarily the simultaneous subjective
and objective nature of cost measures, the inconsistency with which costs are accounted,
and the “difficulty of relating accreditation’s perceived benefits to real dollar costs”
(Reidlinger & Prager, 1993, p. 39; Stoodley, 1985). In other words, "there are many
methodological problems in quantitatively assessing qualitative outcomes” (Reidlinger &
Prager, 1993, p. 39). Further complicating the issue is the difficulty of determining a
value for indirect costs (Parks, 1982). It is for this reason that so few studies of
accreditation costs have been conducted to date (Reidlinger & Prager, 1993).
In order to address the methodological difficulty of cost studies of accreditation it
was necessary to define the term “cost” operationally. For the cost analysis in their case
study, Kennedy, Moore, and Thibadoux (1985) established a method and a rationale that
evaluated costs incurred “from initiation of self-study planning… through preparation
and dissemination of the final self-study report” (p. 177). Similarly this study focused on
the direct costs and indirect costs associated with institutional accreditation during that
same time period, approximately two years. Direct costs were defined as the amount of
money spent compiling the self-study document and the amount spent on the site visit.
The fees paid to accrediting organizations were excluded because these fees are
standardized within regions.
In terms of indirect costs, Shibley and Volkwein (2002) observed that the time
commitment required by accreditation constitutes a greater burden than the direct fiscal
costs. Freitas (2007) noted that personnel time generally comprises the majority of an
institutional budget. Relatively speaking then the indirect costs should constitute a much
greater portion of the actual financial cost of accreditation to institutions than do the more
easily measured direct costs. Despite the fact that these costs do not have an inherent
monetary value they are critical to consider because they are essentially “displaced dollar
costs” (Parks, 1982, p. 4). This study therefore explored the magnitude of the cost of the
cumulative time commitment. Because of the varying value of time among the different
campus constituencies, this study differentiated the time spent on accreditation
accordingly, distinguishing between the primary accreditation liaison officer, campus
executives (such as the president or provost), faculty, administrative staff, students, and
others.
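To illustrate how these operational definitions combine, the total institutional cost of a
review can be summarized in a simple model; the notation and the sample figures below
are hypothetical and serve only as a sketch, not as results drawn from the survey data.

\[
C_{\text{total}} = C_{\text{direct}} + \sum_{g} H_{g} \, w_{g}
\]

Here C_direct is the money spent compiling the self-study document and supporting the
site visit, H_g is the number of hours devoted to accreditation by constituency group g
(the accreditation liaison officer, campus executives, faculty, administrative staff,
students, and others), and w_g is a representative hourly salary for that group. Under
assumed figures of, say, 400 faculty hours at $60 per hour and 900 staff hours at $30 per
hour, the monetized indirect cost alone would be (400)(60) + (900)(30) = $51,000.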
Research Design
The methodology of this study was a survey administered via the internet with
subsequent descriptive and inferential statistical analyses. The study used frequencies,
chi-square tests, and ANOVA and other comparison of means tests to gain understanding
about the cost of accreditation at four-year institutions. A researcher-developed survey
enabled the collection of data on the costs of accreditation that have not before been
systematically considered (i.e., time committed in addition to fiscal costs). It was
expected that this process would provide a large data set with robust external validity that
could explore which institutional characteristics correspond with levels of institutional
commitment to the accreditation process. There has been a limited amount of quantitative
empirical research on accreditation generally and very little of it has focused on
accreditation costs (Reidlinger & Prager, 1993; Shibley & Volkwein, 2002). This study
was intended to assist in filling a significant gap in the literature.
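To make the planned analyses concrete, the following sketch (written in Python using the
pandas and SciPy libraries) illustrates how responses might be tabulated and tested; the
file name and the column names (carnegie_class, benefits_exceed_costs, alo_hours) are
hypothetical placeholders rather than elements of the actual instrument or data set.

# A minimal sketch, assuming a CSV export of survey responses with
# hypothetical columns for Carnegie classification, a yes/no judgment on
# whether benefits exceed costs, and estimated ALO hours for the last review.
import pandas as pd
from scipy import stats

responses = pd.read_csv("alo_survey_responses.csv")

# Frequencies: number of responses in each Carnegie classification.
print(responses["carnegie_class"].value_counts())

# Chi-square test of independence: does the judgment that benefits exceed
# costs vary by institutional type?
table = pd.crosstab(responses["carnegie_class"], responses["benefits_exceed_costs"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# One-way ANOVA: do reported ALO hours differ across institutional types?
groups = [grp["alo_hours"].dropna() for _, grp in responses.groupby("carnegie_class")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")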
Population and Sample
The population for this study was all regionally accredited institutions of higher
education granting baccalaureate degrees in the United States. The study focused on this
population because of its relationship to accreditation’s origins and initial purpose of
assuring the quality of liberal arts programs. Other colleges and universities such as
community colleges and trade institutions have had the option of pursuing
accreditation only relatively recently (Orlans, 1975; Ratteray, 2008). The existence of
separate commissions for these kinds of institutions within two of the six regional
associations (the New England Association of Schools and Colleges and the Western
Association of Schools and Colleges) is further evidence of this fact.
A search was conducted through the Integrated Postsecondary Education Data
System (IPEDS) Data Center for data from the year 2009-2010 (the most recent year
available) to identify institutions from each of five categories in the Carnegie Basic
classification system: research/doctoral institutions, master’s institutions, baccalaureate
institutions, special focus institutions, and tribal colleges. A separate search was
conducted for each of the six regions to identify colleges and universities from states
within those regions: the Middle States Commission on Higher Education (including the
states of Delaware, Maryland, New Jersey, New York, Pennsylvania, and Washington
D.C.), the North Central Association Higher Learning Commission (including the states
of Arizona, Arkansas, Colorado, Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota,
Missouri, Nebraska, New Mexico, North Dakota, Ohio, Oklahoma, South Dakota, West
Virginia, Wisconsin, and Wyoming), the New England Association of Schools and
Colleges (including the states of Connecticut, Maine, Massachusetts, New Hampshire,
Rhode Island, and Vermont), the Northwest Commission on Colleges and Universities
(including the states of Alaska, Idaho, Montana, Nevada, Oregon, Utah, and
Washington), the Southern Association of Colleges and Schools (including the states of
Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South
Carolina, Tennessee, Texas, and Virginia), and the Western Association of Schools and
Colleges (including the states of California and Hawaii). The list was then compared to
the directory of institutions for each of the regional accreditors. Where institutions were
listed multiple times (such as for multiple campuses) they were counted separately only if
separately accredited by the regional association. Accordingly the population of
regionally accredited institutions was identified as noted in Table 3.1.
Table 3.1: Number of Regionally-Accredited Institutions by Carnegie Classification

Region   Doctoral/Research   Master's   Baccalaureate   Special Focus   Tribal   Total
MSCHE                   53        136             122              77        0     388
NCA                     82        184             225             153       23     667
NEASC                   22         49              65              37        0     173
NWCCU                   17         29              25              15        8      94
SACS                    76        151             169              66        0     462
WASC                    28         45              27              43        0     143
Total                  278        594             633             391       31    1927
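As an illustration of the selection procedure described above, the sketch below (again in
Python, using pandas) shows how an IPEDS extract might be mapped to accrediting
regions and tabulated by Carnegie Basic classification before comparison with the
accreditors' directories; the file name, the column names, and the partial state-to-region
mapping are hypothetical and are shown only to convey the logic of the tabulation.

# A minimal sketch of the tabulation behind Table 3.1, assuming an IPEDS
# extract with hypothetical columns for state and Carnegie Basic class.
import pandas as pd

REGION_STATES = {
    "NEASC": {"CT", "ME", "MA", "NH", "RI", "VT"},
    "WASC": {"CA", "HI"},
    # the remaining four regions would be mapped from their member states
    # as listed in the text above
}

def region_for(state):
    # Return the regional accreditor whose territory includes the state.
    for region, states in REGION_STATES.items():
        if state in states:
            return region
    return "OTHER"

ipeds = pd.read_csv("ipeds_2009_2010_extract.csv")
ipeds["region"] = ipeds["state"].map(region_for)

# Cross-tabulate institutions by region and Carnegie Basic classification,
# mirroring the layout of Table 3.1 (prior to directory verification).
print(pd.crosstab(ipeds["region"], ipeds["carnegie_basic"], margins=True))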
The survey was distributed to individuals serving as the primary regional
Accreditation Liaison Officer (ALO) for each four-year degree granting college or
university from participating accrediting regions. Two of the six regions (MSCHE and
WASC) provide the name of the ALO for each institution in a directory online as a matter
of public record. An official from MSCHE invited the researcher to use this directory to
contact ALOs from each institution. Two regions (WASC and SACS) elected to
participate directly in the study and assisted in identifying contact information for the
relevant ALOs. Therefore ALOs from these three regions were included in the study. The
other three regional associations were also contacted and invited to participate in the
same manner but were unable to participate in the study at that time. Officials from
NEASC participated in the study at a later point; however, the results of ALO responses
could not be included with the results reported here and will be reported separately.
Officials from the other two regional accreditors expressed interest in participating at a
later date.
There were 11 institutions from MSCHE and SACS that were ultimately not
included because of invalid or unidentifiable email addresses (including three
doctoral/research institutions from MSCHE, one master’s institution from MSCHE, two
special focus institutions from MSCHE, one doctoral/research institution from SACS,
two master’s institutions from SACS, one baccalaureate institution from SACS, and one
special focus institution from SACS). Accreditation Liaison Officers from 982 of the
1,927 institutions (51.0% of the population) were therefore included in the survey.
Instrumentation
Dillman, Smyth, and Christian (2009) identified surveys as "a remarkably useful
and efficient tool for learning about people’s opinions and behaviors” (p. 1). A survey
was the best means for collecting data to reveal trends and patterns on the costs of
accreditation. In order to maximize survey response, efforts were made to establish trust,
to increase the benefits of participation, and to decrease the costs of participation.
Participants (ALOs) were purposefully selected so that the questions would be relevant to
their professional expertise. The survey was delivered to the specific individual serving
as ALO (rather than to the institution in hopes that the right person would eventually get
it) in order to maximize response rate. To establish trust the researcher honestly identified
himself by institution and by credential (as a doctoral student conducting research at the
University of Southern California), and communicated the importance of the task by
illustrating the value of the results of quantitative research on accreditation costs and by
appealing to the professional expertise of the ALOs. The researcher ensured the
confidentiality and security of collected data by describing the steps that would be taken
to maintain confidentiality. To establish the benefits of participation the researcher
provided information about the survey by describing how the results would contribute to
increased knowledge on the topic. The researcher showed positive regard toward ALOs
and explicitly thanked them, providing contact information for follow up questions. The
researcher offered to share survey results as an incentive for survey participation. In
follow-up emails encouraging response the researcher provided social validation by
updating invitees on the level of participation generated thus far. To increase
participation the researcher made it convenient to respond by providing the survey link
directly in the invitation email, avoided subordinating or confusing language, made the
survey relatively short and easy to finish, identified the specific number of minutes
completion of the survey was expected to take, and minimized questions seeking personal
or sensitive information (Dillman, Smyth, & Christian, 2009).
Reliability
Quantitative reliability tests whether the measure of a study is consistent and
accurate. The smaller the error in a test the more reliable it is (Robinson Kurpius &
Stafford, 2006; Salkind, 2011). The instrument was designed to reduce four kinds of
survey error identified by Dillman, Smyth, and Christian (2009). Sampling error was
minimized by surveying ALOs from the entire population of four-year degree granting
institutions in participating accrediting regions. Coverage error was minimized by
identifying specific professionals as ALOs directly through the regional accrediting
associations. Non-response error was minimized by designing the survey as described
above to motivate as many of those surveyed as possible to respond. Measurement error
was minimized by wording the survey questions simply and clearly so that respondents
could understand and answer what was being asked. Qualitative reliability is a reflection
of consistency throughout a study (Creswell, 2009). To maximize qualitative reliability,
the same invitation and follow-up communications were used for all three participating
regions, and the same survey instrument was used with the exception of a final, additional
question asked of ALOs from WASC as described below.
Validity
Quantitative validity is a measure of how well a study does what it is designed to
do (Robinson Kurpius & Stafford, 2006; Salkind, 2011). Validity was maximized by
carefully constructing the survey instrument such that the only questions being asked
were directly related to the research questions. Each question was carefully and simply
worded to avoid confusion and to maximize respondent participation. The survey was
then given to several individuals with experience in institutional accreditation to assess
face validity. Specifically, feedback was provided by faculty from the researcher’s
program, the experienced ALO of the researcher’s home institution, and senior officials
from two of the regional accrediting agencies. The survey was then amended according to
the recommendations of these individuals to establish content validity. Qualitative
validity is a reflection of measures that are taken by the researcher to ensure accuracy of
the findings (Creswell, 2009). Throughout the data collection and coding process for this
study the researcher used a concurrent embedded strategy for triangulation to enhance
validity, was cognizant of the importance of verifying the accuracy of data, and guarded
against “a drift in the definition of codes, a shift in the meaning of the codes during the
process of coding” (Creswell, 2009, p. 190).
Data Collection and Analysis
Following approval by the Institutional Review Board (IRB) the survey was
distributed via email to the list of ALOs. The survey was distributed in early October
2011 in order to arrive shortly after activity associated with the beginning of the new
academic year had subsided. It was expected that this would increase the likelihood of
response. The procedure began with the invitation letter in which the link to an online
survey hosted on SurveyMonkey was embedded (see Appendix B). A PDF copy of the
survey was attached to the email invitation to facilitate task-sharing in completing the
survey as desired by the ALO. The letter employed key phrases from the accreditation community in order to increase the credibility of the request and make it more appealing for ALOs to participate. The letter indicated that responses would be kept confidential, and it offered to provide the results to anyone interested. The
researcher followed up twice, at intervals of one to two weeks after each previous
notification, in order to re-invite and encourage non-respondents to complete the survey.
The survey instrument (see Appendix C) was developed through a process of
collaborative feedback between the researcher, faculty in the USC Rossier School of
Education, and accreditation experts as noted following the guidelines of Dillman,
Smyth, and Christian (2009). Each question in the survey was designed to contribute to
the answer of the research questions. Questions were crafted to be technically accurate, to
ask a single question at a time in as few words as possible, to use specific and familiar
words that specified the concepts clearly, to use complete sentences with correct
grammar and simple sentence structure, and to be sure the question indicated the
appropriate response task (Dillman, Smyth, & Christian, 2009). Recognizing that open-
ended, descriptive questions require more time to complete and could potentially act as a
deterrent, the survey limited these and designated them as optional, including them at the
end of each section where respondents would be less likely to abandon the survey without
submitting it. Taking such steps to make the survey “respondent-friendly” (Dillman, Smyth, & Christian, 2009, p. 156) was
an effort to maximize response yield. By collecting quantitative and qualitative data
simultaneously the researcher employed a concurrent embedded strategy for triangulation
in which the results of the quantitative data were ultimately embedded in the results of
the qualitative data (Creswell, 2009).
The survey consisted of four sections: demographic information pertaining to the ALO answering the survey and the institution that he or she represented, questions on the direct costs of accreditation, questions on the indirect costs of accreditation, and an open-ended section exploring explanations for the costs. The first section, collecting demographic information, asked for the accreditation region and Carnegie Basic classification of the institution being represented and the official title of the person filling out the survey, and it clarified whether this person was the actual ALO for the institution. The
title of the person filling out the survey (and whether that was the ALO) pertained to how
accreditation responsibilities are assigned and addressed the research question asking in
what ways institutional commitment toward accreditation is manifested. The survey then
asked for the month and year of the last institutional accreditation review in order to put
the rest of the survey in perspective (i.e., answers might vary somewhat depending on
whether an institution was under review at the time, had recently completed a review, or
had not been under review for some time). Next it asked for an estimation of the
percentage of time an ALO devoted specifically to institutional accreditation both when
the school was preparing for a formal review and when it was not. The final item in this
section gave respondents an opportunity to make any additional comments they felt
would be appropriate and necessary for the interpretation of the data provided. It was
important to provide this kind of opportunity to respondents in case they wished to
qualify their previous responses. The section was explicitly labeled as optional in order
not to deter respondents from completing the survey if they did not have additional
comments to make. The data provided by answers to this final item served to triangulate
the previous quantitative results. The subsequent comparative analysis helped establish
the “degree of convergence” of the various data (Patton, 2002, p. 559). Descriptive
statistics identified the number of schools for which a response to the survey had been
provided by accreditation region and institution type, and described a profile for the
respondents. The formal titles of ALOs were also analyzed descriptively.
The second section of the survey asked the ALO to indicate the direct costs (fiscal
costs of the preparation of the self-study document and the site visit) associated with
accreditation. The study described and compared the means of these direct costs among
the various groups with the appropriate tests.
The third section of the survey asked the ALO to estimate the indirect costs (time
spent) associated with accreditation. These questions asked about the number of people
involved from and the total number of hours spent by six distinct campus groups: the
ALO, executives, faculty, staff, students, and others (along with a chance to identify who
was in the final group). The study again described and compared means among the
various groups with the appropriate tests. Because it is both desirable and “indispensable to quality education” (Parks, 1982, p. 4), the study then assigned a monetary value to the time spent in order to monetize the indirect costs.
The fourth and final section of the survey consisted of three optional, open-ended
questions. The first asked the ALO what the most important institutional benefits of going through the accreditation process were. Answering this question created a context for the second question, which asked whether the costs of accreditation were justified. The third and final question, posed only to ALOs for WASC institutions, explored how
much of the accreditation commitment was spent on compliance-driven activities as
opposed to enhancement-driven activities. These questions treated the research question
exploring whether ALOs believe that the perceived benefits associated with institutional
accreditation justify the costs. The primary question in this section was essentially a “yes” or “no” question on whether costs were justified; however, respondents were given the opportunity to comment in greater depth if they so wished. A chi-square test was
conducted to test for significance of difference between accreditation regions or Carnegie
classifications in whether ALOs felt that costs were justified, and the comments were
qualitatively analyzed.
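As a rough illustration of how such a test of independence can be computed, the sketch below runs a chi-square test with SciPy on a hypothetical region-by-response contingency table. The counts, variable names, and the use of SciPy are assumptions for illustration only, not the study's actual data or software.

    # Minimal sketch (assumed, not the study's actual code or data):
    # chi-square test of independence between accrediting region and
    # whether the ALO judged accreditation costs to be justified.
    from scipy.stats import chi2_contingency

    # Hypothetical counts of "costs justified?" responses by region.
    observed = [
        # yes, no
        [60, 30],   # MSCHE
        [110, 55],  # SACS
        [35, 18],   # WASC
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
    # A p-value below 0.05 would indicate a significant association between
    # region and the judgment that costs are justified.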
Like the final items in each of the previous sections these open-ended questions
invited more detailed responses that yielded rich, qualitative data. Responses were
examined inductively to identify common patterns about the responsibilities of ALOs,
about direct and indirect costs, and about why the benefits of accreditation did or did not
merit the costs according to the ALOs. Recurrent themes were coded and areas of
convergence and divergence were identified as they emerged from the data. These
qualitative data provided great insights into the issues explored by this study, and the
mixed-methods analysis greatly enhanced the strength of the findings.
The primary independent variables that defined the study were the accreditation
region and Carnegie classification for the institution of each responding ALO. The
primary dependent variables were the direct costs (the document cost and the site visit
cost) and the indirect costs (total number of senior administrators, faculty, staff, students,
and others involved, as well as cumulative hours contributed by the ALO, senior
administrators, faculty, staff, students, and others).
Response
As noted above, 982 survey invitations were distributed, representing 51.0% of the population of ALOs: 382 invitations to MSCHE institutions, 457 invitations to SACS institutions, and 143 invitations to WASC institutions. By Carnegie Basic classification,
invitations were distributed to 153 doctoral/research institutions (55.0% of the
doctoral/research population), 329 master’s institutions (55.4% of the master’s
population), 317 baccalaureate institutions (50.1% of the baccalaureate population), and
183 special focus institutions (46.8% of the special focus population). Because no tribal
colleges are accredited by MSCHE, SACS, or WASC, none could be included in this
study.
Gross Response Rate
A total of 345 surveys were completed online yielding a gross response rate of
35.1%. Table 3.2 shows response rates by region and by Carnegie classification. Omitted
in this table are one submission from a respondent in the NCA (a special focus
institution) and two submissions from respondents in the NEASC (a master’s institution
and a baccalaureate institution). These responses were submitted despite no invitations
being sent to ALOs from either of these regions. These respondents might have
participated in the survey despite not having received an invitation because of turnover
between institutions (a common theme treated below) or because of professional
collegiality where one ALO from a targeted region may have shared the survey link with
an ALO from a non-targeted region. Additionally three respondents did not indicate the
accreditation region to which they belonged (two baccalaureate institutions and an
institution for which no Carnegie classification was indicated), and four respondents did
not indicate a Carnegie classification (two from MSCHE, one from SACS, and one for
which no accrediting region was indicated). These nine total responses were omitted in
all subsequent quantitative analyses because of the treatment of accreditation region and
Carnegie classification as independent variables; however they were included in the
qualitative analyses. (One institution listed neither accreditation region nor Carnegie
classification. This respondent did not provide any comments in open-ended sections so
this response did not bias qualitative analyses either.) Because none of the questions on
the survey were forced-response questions a number of respondents elected to complete
only certain portions of the survey, thus addressing only certain topics. Net response rates
are reported by section in chapter four.
Table 3.2: Gross Response Rate

Region                        Doctoral/Research   Master's   Baccalaureate   Special Focus    Total
MSCHE                                        18         34              27              18       97
  % within region                         18.6%      35.1%           27.8%           18.6%   100.0%
  % within classification                 21.4%      33.0%           25.5%           41.9%    28.9%
  % of total                               5.4%      10.1%            8.0%            5.4%    28.9%
SACS                                         51         56              65              11      183
  % within region                         27.9%      30.6%           35.5%            6.0%   100.0%
  % within classification                 60.7%      54.4%           61.3%           25.6%    54.5%
  % of total                              15.2%      16.7%           19.3%            3.3%    54.5%
WASC                                         15         13              14              14       56
  % within region                         26.8%      23.2%           25.0%           25.0%   100.0%
  % within classification                 17.9%      12.6%           13.2%           32.6%    16.7%
  % of total                               4.5%       3.9%            4.2%            4.2%    16.7%
Total                                        84        103             106              43      336
  % within region                         25.0%      30.7%           31.5%           12.8%   100.0%
  % within classification                100.0%     100.0%          100.0%          100.0%   100.0%
  % of total                              25.0%      30.7%           31.5%           12.8%   100.0%
Respondents
Because the invitation to complete the survey was sent to ALOs via email, a great
deal of information could be gathered about the respondents from responses to and
questions about the survey invitation. For those who indicated by email that they did not wish to participate (rather than by simply not completing the survey), the three most commonly cited reasons were: turnover since the last review, which resulted in the most knowledgeable person or people no longer being available to contribute; concern about the amount of time it would take to complete the survey; and the amount of time that had elapsed since the last review. (As one participant articulated:
“Our institutional memory for the 2004 self-study has evaporated to a great extent, and
even this limited information was extremely difficult to obtain.”)
Another reason that was mentioned less frequently for not participating in the
survey concerned the privacy of the information being requested. Some participants took
the matter under consideration, one expressing the need “to discuss this matter with the
university executive team and… our accreditation organization to see if I can expose the
information to you,” and another pointing out institutional policy according to which
approval was required from the responding institution’s Institutional Review Board
(IRB), rather than the researcher’s IRB. One declination stated, “Being a private
university that does not generally wish to share information such as this, it is not
something we can do.”
There was no timeframe in which a survey invitation such as this could have been distributed that would have been ideal for all institutions. Several participants asked for clarification on the timing allowed for completion of the survey because they were heavily involved at that moment with an actual site visit from the regional accreditor, a specialized accreditor, or even both simultaneously. The response via email, however, was positive and supportive of the research being undertaken, even from many who were
declining to participate. One referred to the lack of literature mentioned in chapter two:
“Accreditation is an area with scant literature so I look forward to [this study’s]
contribution to the field.” There were also several offers of assistance if needed, one
coming from a self-professed “accreditation junkie.” A substantial number wished to
receive a copy of the findings once the study had concluded.
Of the 342 respondents who answered whether they are currently the ALO, the
vast majority, a total of 309 (90.4%), were currently serving in that capacity (85.4% of
respondents from MSCHE, 91.2% of respondents from SACS, and 96.4% of respondents
from WASC; 90.4% of respondents from doctoral/research institutions, 85.4% of
respondents from master’s institutions, 95.3% of respondents from baccalaureate
institutions, and 90.9% of respondents from special focus institutions). It was intended
that ALOs would respond to the survey as they are the most knowledgeable about the
subject matter (particularly as accreditation costs pertain to their specific institutions)
thereby contributing to the face validity of the responses and lending credence to the
qualitative analyses that follow. It was clear from various individual responses to the
survey invitation that the task of information collection was delegated in many instances,
which also contributed to the strength of content validity.
On the other hand, there were only 171 positive responses out of 333 total (51.4%) to the question, “Were you the Accreditation Liaison Officer (ALO) at the time of the last institutional accreditation review?” (comprising 50.0% of respondents from MSCHE,
55.1% of respondents from SACS, and 63.0% of respondents from WASC; 51.2% of
respondents from doctoral/research institutions, 49.5% of respondents from master’s
institutions, 49.0% of respondents from baccalaureate institutions, and 59.5% of
respondents from special focus institutions). The turnover either in university personnel
or assignment or both is noteworthy, and supports the turnover theme identified above
from respondents to the email invitation. This turnover was also evident even in the brief
period preceding distribution of the survey. Contact information was collected by the
researcher in June 2011 for the ALOs of the two accrediting regions that list them online as part of a public directory; however, just three months later, when this information was checked immediately prior to the actual distribution of the survey, a number of entries (approximately five percent) had already changed. The fact that WASC updates its list of ALOs monthly is another indication of the need to manage the regular turnover among this group.
On average, just under four years and six months had passed since the last full
accreditation review was conducted for all responding institutions. The average was
highest for MSCHE (five years one month), lower for SACS (four years nine months),
and lowest for WASC (just under three years). The variability was much lower by
Carnegie classification: four years seven months for doctoral/research institutions, four
years eight months for master’s institutions, four years seven months for baccalaureate
institutions, and four years one month for special focus institutions.
Quantitative Variability Demonstrated by Responses
There was significant variability in the responses throughout the various
categories of direct and indirect costs. For example the calculated standard deviations
often exceeded the means, and the skewness and kurtosis values were frequently quite
high. The variability for indirect costs was higher than the variability for direct costs.
Respondents commented at length on the difficulty of reporting indirect costs in
particular, and the exhibited variability illustrates this well. All of these values are
recorded in Appendices E and F.
While such high variability in a quantitative study is certainly not ideal, the data
are still quite valuable and provide great insight into the question of accreditation costs.
Beyond the biases usually associated with self-reported data in general, the topic of accreditation costs is particularly prone to this problem, and high variability in response is characteristic of the few studies that have been done on the topic previously (Florida State
Postsecondary Education Planning Commission, 1995; Freitas, 2007; Kennedy, Moore,
& Thibadoux, 1985; Parks, 1982; Reidlinger & Prager, 1993). More importantly, the high
variability in accreditation costs is likely representative of the variation between
institutions within the various operationally defined categories. One would expect great
variability within the categories of accrediting regions because each of these categories
would include institutions from all four very different Carnegie classifications. Even
within the categories of Carnegie classifications however there is great variation between
institutions. For example some doctoral/research institutions have a multi-billion dollar
budget while others have a budget only in the low millions. This variation will inevitably
affect the costs of accreditation at the different institutions and therefore the variability of
responses to the survey.
As noted earlier, steps were taken to maximize the reliability of the data collected
for this study by reducing error where possible (to maximize quantitative reliability) and
maintaining consistency (to maximize qualitative reliability). Therefore all respondents who
participated in the study were making the same kinds of estimates and assumptions about
both direct and indirect costs for submission on the same instrument. In this way the data
are relatively accurate. The resultant data set that was collected is one of only a very few
on accreditation costs ever collected, and thereby constitutes one of the best sources of
information on this topic. Additionally, the data set is large enough to even out any
inconsistencies in reporting, especially with a few additional steps that were taken (e.g., removing extreme outliers, transforming data, etc.) that are described in the next chapter.
The findings of this study comprise a sort of best estimate of accreditation costs to date,
and while it is a limitation of the study, the high variability of responses does not
undermine the results or their interpretation.
Delimitations and Limitations
Several important delimitations and limitations must be considered. This section
will address delimitations first, after which it will address limitations. The limitations it
will treat include limitations on data collection, limitations on the quantitative data,
limitations on the qualitative data, and limitations on the methodology used for the
monetization of indirect costs.
Delimitations
Three delimitations narrowed the scope of this study. First, because of the smaller number of organizations providing institutional accreditation and because of the more comprehensive nature of institutional accreditation reviews, this study focused on that review rather than on programmatic accreditation. Second, because of the special interests represented by the national institutional accreditors, this study focused on institutions accredited by the six regional accrediting agencies (MSCHE, NCA, NEASC, NWCCU, SACS, and WASC). Finally, because of the relatively recent development of two-year institutions (within the context of the history of U.S. higher education) and their even more recent commencement of accreditation, this study surveyed only four-year degree granting institutions. Including both types of institutions would have broadened the population significantly and was outside the scope of this study.
Limitations on Data Collection
Two types of limitations affected the collection of data for this study: the identification of ALOs who should complete the survey and the collection of appropriate
data from them. In terms of identifying ALOs, the study was limited by the researcher’s
ability to identify the ALO for each institution included in the sample. Some of the
regional accreditors provide access to this information online whereas others do not make
it evident, in which cases the researcher was compelled to rely upon the regional
accrediting agencies directly. The study also risked being limited by non-response error,
or error resulting from significant differences between the group of ALOs who
responded to the survey and the group of ALOs who did not respond to the survey
(Creswell, 2009; Dillman, Smyth, & Christian, 2009). In order to minimize this limitation
the study surveyed the entire population of ALOs from four-year, degree-granting
institutions in the three accrediting regions that were surveyed.
With respect to data collection, the study was also limited by the willingness or ability of ALOs to share this information candidly. The questions attempted to condense a significant amount of effort into relatively succinct data. In other words, it may have been
difficult for the ALOs to compute or estimate a single total amount for either of the direct
costs, and it was likely even more difficult for the ALOs to combine into a single number
the total number of people involved and the total number of hours spent for each
operationally-defined group (indirect costs). It was anticipated that respondent ALOs
would be busy professionals with demanding schedules for whom finding the time to
gather this information and subsequently fill out a survey such as this might have been
difficult. Additionally because of the degree of confidentiality maintained particularly at
private institutions, it was expected that there might be some reluctance about sharing
detailed information on this topic particularly with respect to institutional budgets and
despite the assurance of confidentiality. Providing a PDF copy of the survey attached to
the invitation email to facilitate task sharing as desired by the recipient ALO was one
way of addressing this limitation. Finally if an institution being surveyed was (or recently
had been) on probation with its regional accrediting association, the willingness of the
ALO representing that institution or the tenor of the ALO’s response may have been
affected.
An additional limitation on data collection, related to the operational definitions of the study, was the broadness of the categories of professionals for which indirect costs were to be noted. The classification of senior administrators (e.g., president, vice-presidents, provost, vice-provosts, deans, etc.) comprises a broad range of university responsibilities and, subsequently, remuneration. Similarly, the classification of faculty does not differentiate between faculty ranks (i.e., full professor, associate professor, assistant professor, etc.), introducing another broad range of responsibility and remuneration. Asking for too much specificity within these categories would have made completion of the survey much more difficult and would have adversely affected the response rate. The monetization of committed time was consequently more approximate than precise, requiring the application of an established average salary despite the wide variability.
Limitations on Quantitative Data
Many of the limitations on quantitative data were mentioned by the respondents
as they answered the survey. As was appropriate, most respondents commented on the
reliability of the numbers they were providing. Many were careful to note how they felt
that the numbers they were sharing for indirect costs in particular were either difficult to
estimate or inestimable altogether because of the difficulty of trying to ascertain after the
fact (sometimes well after the fact) the number of people involved and the total hours contributed by an
immense number of campus representatives. In his study on the costs of health program
accreditation, Parks (1982) cautioned that “it is not unusual for administrators to
overestimate the amount or cost of ‘extra’ work” (p. 6), although at least one respondent
to the survey for this study felt that he or she was underestimating based on the feedback
provided by colleagues.
As noted above the numbers reported by respondents also demonstrated relatively
high variability. A study on accreditation costs conducted by the Florida State
Postsecondary Education Planning Commission (1995) similarly exhibited a “wide
disparity of total costs as reported by the respondents… which may be the result of self-
reporting inconsistencies and individual differences in the interpretation of accreditation
costs, particularly in the calculation of the indirect costs of the accrediting process” (p.
12). Similar inconsistencies and individual differences of interpretation likely affected
responses for this study, and as with the Florida study, “the interpretation of the median
totals should be done with caution” (p. 12). Freitas (2007) also addressed this kind of
high variability, commenting that “the cost of accreditation in time and personnel was the
most difficult for respondents to address” (p. 99).
Another limitation on the quantitative data provided by respondents was the
amount of turnover since the last accreditation review as evidenced both by the question
directly addressing this (“Were you the ALO at the time of the last institutional
accreditation review?”) and by open-ended comments that specifically mentioned it. Respondents often were still able to provide data, but the data they submitted were not always informed by personal experience.
Consequently, most of the quantitative data analyzed in this study were not normally distributed, and this constitutes another limitation of the study. Non-parametric
tests were still possible and the richness of the qualitative data provided by respondents
allowed a concurrent embedded strategy for triangulation enabling the quantitative and
qualitative analyses to work together.
A post hoc power analysis calculated with G*Power 3 (Faul, Erdfelder, Lang, &
Buchner, 2007) of the parametric analyses that were conducted revealed that for the
ANOVA on combined direct costs by Carnegie classification the study had a power of
0.8 to detect a medium effect size of 0.23. For the chi-square test on the justification of
accreditation costs the study had a power of 0.8 to detect a medium effect size of 0.22
when testing by accrediting region and a medium effect size of 0.24 when testing by
Carnegie classification. It is therefore possible, with a larger sample or with a more
robust response, that significance could be detected where it was not previously or that an
even smaller effect could be detected.
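For readers who wish to reproduce this kind of post hoc calculation, the sketch below solves for the detectable effect size at a power of 0.8 using the statsmodels library rather than G*Power. The sample sizes, group counts, and degrees of freedom shown are assumed placeholders rather than the study's exact inputs.

    # Minimal sketch (assumed inputs; the study used G*Power 3, not this code).
    from statsmodels.stats.power import FTestAnovaPower, GofChisquarePower

    # ANOVA on combined direct costs by Carnegie classification (4 groups).
    anova_power = FTestAnovaPower()
    f_detectable = anova_power.solve_power(
        effect_size=None, nobs=210, alpha=0.05, power=0.8, k_groups=4
    )  # nobs=210 is an assumed total sample size
    print(f"Detectable ANOVA effect size f ~= {f_detectable:.2f}")

    # Chi-square test on cost justification by accrediting region
    # (3 regions x 2 outcomes -> 2 degrees of freedom -> n_bins = 3).
    chi_power = GofChisquarePower()
    w_detectable = chi_power.solve_power(
        effect_size=None, nobs=300, alpha=0.05, power=0.8, n_bins=3
    )  # nobs=300 is an assumed number of usable responses
    print(f"Detectable chi-square effect size w ~= {w_detectable:.2f}")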
In Parks’ (1982) study on accreditation costs the author acknowledges the
limitations of a relatively simple survey to accurately convey “the innumerable variations
in cost conventions” in use among all the surveyed institutions (p. 12). Consequently for
this study, just as Parks noted, “Numbers cited in this report are extrapolations from a
sample of estimations provided by institutions. Values reported must not be cited as
precise or definitive. They should serve to stimulate more informed dialogue on the
issues raised” (p. 12).
Limitations on Qualitative Data
As with any qualitative data collection it is important to recognize that the act of
measurement and the presence of the measurer may have an effect on the data being
collected (Bogdan & Biklen, 2007; Creswell, 2009; Patton, 2002). In this case the
opportunity to comment on the various aspects of accreditation costs may have
influenced ALOs who felt particularly strongly about the topic to contribute. Indeed
many of the responses were enthusiastic or emotional. Additionally the presence of an
extra question for ALOs from the WASC region (the final question in the final section:
“What percentage of the reported costs was incurred solely from meeting the
requirements of accreditation, and what percentage was incurred by initiatives the
institution would have undertaken anyway (but which used the accreditation process as a
vehicle for improvement)?”) may have affected responses from WASC ALOs,
particularly with respect to that theme in the benefits section.
Limitations on the Methodology Used for the Monetization of Indirect Costs
Several limitations on the methodology used for the monetization of indirect costs
are noteworthy. A 40-hour work week and a 52-week year (resulting in a 2,080-hour
work year) were used as the work standard for calculating hourly salaries. It is widely
understood that many professionals work more than 40 hours a week, and a few of the
open-ended comments specifically addressed this. On the other hand, as a nationally accepted standard for work-week hours, these numbers provide a requisite benchmark for
these calculations.
Another limitation for this methodology was the averaging of salaries necessary
to make such a calculation. Clearly some individuals within each group will have very
high salaries (for instance the university president in the senior administration group)
while others will have much lower salaries (for instance an associate or assistant dean in
the same group). The various levels of faculty rank will also exhibit salary variability, albeit in a narrower range. To account for this as much as possible, a
weighted average was taken from the Administrative Compensation Survey for the 2010
to 2011 academic year published by the College and University Professional Association
for Human Resources, or CUPA-HR (College and University Professional Association
for Human Resources, 2011a), from the Report on the Economic Status of the Profession
by Category, Affiliation, and Academic Rank for 2010 to 2011 published by the
American Association of University Professors (Thornton, 2011), and from the Mid-
Level Administrative and Professional Salary Survey for the 2010 to 2011 academic year
published by CUPA-HR (College and University Professional Association for Human
Resources, 2011b) for the relevant categories. Multiplying the resultant average hourly
salary by the means of reported total number of hours contributed to accreditation
resulted in a reasonable estimate that can be used in further discussions on the total
combined direct and indirect costs of accreditation.
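The underlying arithmetic can be summarized in a short sketch. All salary and hour figures below are hypothetical placeholders; the actual weighted averages were drawn from the CUPA-HR and AAUP surveys cited above.

    # Minimal sketch of the monetization arithmetic (all figures hypothetical).
    HOURS_PER_YEAR = 40 * 52  # 2,080-hour work year used as the standard

    # Assumed weighted-average annual salaries by group (placeholders only).
    avg_salary = {
        "senior_administrators": 150_000,
        "faculty": 80_000,
        "staff": 55_000,
    }

    # Assumed mean total hours reported per group for one accreditation cycle.
    mean_hours = {
        "senior_administrators": 400,
        "faculty": 1_200,
        "staff": 900,
    }

    # Indirect cost = sum over groups of (hourly rate x mean hours contributed).
    indirect_cost = sum(
        (avg_salary[g] / HOURS_PER_YEAR) * mean_hours[g] for g in avg_salary
    )
    print(f"Estimated indirect cost: ${indirect_cost:,.0f}")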
A final limitation on the monetization of indirect costs was the exclusion of costs
as reported by extreme outliers (a single respondent institution that reported a site visit
cost that was greater than three times the next highest reported site visit cost, and
respondent institutions that reported an extreme value in more than one category of
indirect costs). This exclusion was executed to maximize the objectivity of the results.
CHAPTER FOUR: RESULTS
The purpose of this study was to investigate the costs of regional accreditation as
reported by Accreditation Liaison Officers (ALOs). This chapter will present the results
of the survey distributed to ALOs in three of the six regional accrediting agencies. The
chapter will begin by re-stating the research questions and reviewing the survey
instrument, and will list the variables that were used. Following this, the chapter will
review the responses submitted by ALOs as grouped by research question.
The study was designed to address the following research questions:
• What costs are associated with institutional accreditation and how do those costs
vary between and among types of institution?
• How is financial commitment toward institutional accreditation manifested, i.e.,
what are the perceived direct and indirect costs of institutional accreditation?
• Do primary Accreditation Liaison Officers believe that the perceived benefits
associated with institutional accreditation justify the institutional costs?
• What kinds of patterns emerge in accreditation commitment between types of
institutions?
The survey instrument can be found in Appendix C. The survey was designed to
elicit quantitative responses from ALOs on topics such as the total direct and indirect
costs of accreditation activities. Because of the imprecise nature of estimating financial
costs and time committed after the fact and the high variability of responses, the data
tended not to be normal in distribution. Ample opportunities were given for open-ended
comments, and the resultant responses proved to be appropriate for qualitative analysis.
This study was originally intended to be a mixed-methods study with the qualitative
analyses embedded within the quantitative analyses; however the resulting data lent
themselves best to an analysis using the qualitative methods as the primary analytic
methods and the quantitative methods in support of them. Where qualitative analysis was
appropriate, patterns or themes were allowed to emerge in dynamic fashion (Bogdan &
Biklen, 2007; Creswell, 2009; Patton, 2002). These themes were categorized as data
converged around meaningful recurrences in the comments, and the ensuing categories
were defined such that real differences existed between categories. This inductive
analysis was then used to identify patterns between accreditation regions and types of
institutions. Comments about non-response and data decisions will be made in the relevant sections below. In the interest of anonymity, the names of specific
institutions or regional accreditors have been omitted from comments where they were
mentioned specifically.
The Assigned Value of Accreditation
The time spent on accreditation processes represents a significant proportion of
the total cost of accreditation. The vast majority of comments made by ALOs
acknowledged this; however, determining the value of that time is treacherous. Different institutions assign the responsibility to various positions according to the structure or need of those institutions; however, the monetary cost in salary and the burden on time clearly vary among the various titles of professionals serving as ALOs. For instance, it
will be much more expensive for a president or a vice president with many institutional
duties to fulfill accreditation responsibilities than for a staff member at a director level
who is tasked with a more narrow range of assignments. Such a staff member will have
fewer demands competing for his or her time and the cost in salary remuneration for the
hours spent on accreditation will be much lower. For this reason, an analysis of
institutional titles by accreditation region and by Carnegie classification can give some
indication of both the cost of accreditation and the level of commitment of different
institutional categories, i.e., where a more highly-recompensed individual is serving as
the ALO, the institution is making a greater financial commitment to the accreditation
process. This section will consider the cost of accreditation in terms of the formal titles of
ALOs as well as the percentage of time committed to the accreditation by the formally
designated ALOs.
Formal Titles of ALOs
One possible way of considering how institutional commitment to accreditation is
manifested is by investigating the formal titles of those professionals serving as ALOs.
Are ALOs typically senior level administrators (such as a Vice President or a Provost),
faculty members, administrative staff, or something else? The Middle States Commission
(MSCHE) lists as part of its institutional directory the name and title of ALOs for each
accredited institution. Through their direct participation in this study both SACS and
WASC provided a list of formal titles of ALOs. There was some overlap between title
categories due to multiple titles being assigned to a single individual (e.g., Vice President
and Provost, Assistant Provost and Dean, etc.). Eight categories were created: President
(including Chancellor and Rector), Vice President (including Vice Chancellor and Vice
Rector), Provost, Dean, Faculty (including Chair), Accreditation Liaison Officer (where
this was explicitly a part of the title), Director (when this was listed specifically as the
title), and Staff (all other non-faculty and non-senior administration positions). Only one
occurrence of one title did not clearly fit into any of these categories, that of “Functional
Lead-Student,” so this was included in the Staff category for the applicable region. Table
4.1 shows the percentage of titles in each category by region and Table 4.2 shows the
percentage of titles in each category by Carnegie classification.
Table 4.1: Formal Titles of ALOs by Accreditation Region

Region    President   Vice President   Provost    Dean   Faculty    ALO   Director   Staff
MSCHE          5.2%            40.2%     33.9%   24.0%      5.0%   1.0%      10.2%    5.0%
SACS           0.7%            45.4%     26.1%   15.0%      3.9%   1.3%      18.5%    3.7%
WASC           0.0%            33.8%     35.9%   23.2%     10.6%   4.2%      10.6%    5.6%
Total          2.3%            41.7%     30.6%   19.7%      6.2%   1.6%      14.1%    4.5%

Table 4.2: Formal Titles of ALOs by Carnegie Classification

Carnegie classification   President   Vice President   Provost    Dean   Faculty    ALO   Director   Staff
Doctoral/Research              1.3%            28.4%     54.8%    7.7%      5.2%   0.7%      15.5%    3.2%
Master's                       1.8%            49.3%     36.6%   11.1%      7.2%   1.2%      12.6%    0.6%
Baccalaureate                  1.9%            42.5%     21.9%   27.0%      6.0%   6.4%       8.6%    2.2%
Special Focus                  5.0%            37.9%     13.7%   33.0%      5.5%   1.1%      14.8%    1.7%
Total                          2.3%            41.7%     30.6%   19.7%      6.2%   1.6%      14.1%    4.5%
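To illustrate how the title-to-category mapping described above might be automated, a hypothetical keyword-based sketch follows; the rules and their ordering are illustrative assumptions, not the procedure actually used, which relied on review of the full titles.

    # Illustrative keyword-based categorization of ALO titles (assumed rules).
    import re

    def categorize_title(title: str) -> str:
        t = title.lower()

        def has(*words):
            return any(re.search(rf"\b{w}\b", t) for w in words)

        # Order matters: "vice president" must be checked before "president",
        # and explicit ALO titles take precedence over everything else.
        if "accreditation liaison officer" in t or has("alo"):
            return "ALO"
        if has("vice president", "vice chancellor", "vice rector"):
            return "Vice President"
        if has("president", "chancellor", "rector"):
            return "President"
        if has("provost"):
            return "Provost"
        if has("dean"):
            return "Dean"
        if has("professor", "chair", "faculty"):
            return "Faculty"
        if has("director"):
            return "Director"
        return "Staff"

    print(categorize_title("Vice Provost for Institutional Effectiveness"))  # Provost
    print(categorize_title("Associate Vice President and ALO"))              # ALO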
In the MSCHE region specifically, 16 titles (4.2%) had the term “accreditation”
within the title, 85 (22.2%) had “accreditation” or a related term (e.g., assessment,
evaluation, institutional effectiveness, etc.), and 296 (77.3%) had “accreditation” or a
related term or a loosely related term (e.g., academic affairs, student development,
curriculum and instruction, etc.) in the title. Overall a Vice President was most frequently
listed as the ALO (40.2%), most often within master’s institutions and least often within
doctoral/research institutions. The next most frequent ALO was a Provost (33.9%), most
frequently in doctoral/research institutions and least frequently in special focus
institutions. While infrequent, the President served as ALO in 20 cases, most frequently
in special focus institutions and least frequently in doctoral/research institutions.
Doctoral/research institutions had the heaviest occurrence of Director and other Staff
positions serving as ALO, but the smallest occurrence of Dean and Faculty positions.
Every explicit occurrence of ALO as part of the title occurred in baccalaureate
institutions. Baccalaureate institutions also had the heaviest occurrence of Deans serving
as ALO. By Carnegie classification, doctoral/research institutions usually had a Provost
serving as ALO, while master’s, baccalaureate, and special focus institutions usually had
a Vice President serving as ALO. Of all three regions, MSCHE had the greatest
occurrence of ALOs serving with double titles.
In the SACS region, 11 titles (2.4%) had the term “accreditation” within the title,
with 312 titles (67.3%) containing “accreditation” or a related term, and 372 (80.9%)
containing “accreditation” or a related term or a loosely related term. As in MSCHE a
Vice President was most frequently listed as the ALO (45.4%), again least frequently for
the doctoral/research institutions. The provost position was the next most frequently
occurring ALO (26.1%), most often for doctoral/research institutions and least often for
special focus institutions. The President served as ALO in three cases (0.7%). The Dean
served as the ALO most frequently for special focus institutions (27.3%) and least
frequently for doctoral/research institutions (4.0%). A Director served as the ALO most
frequently for doctoral/research institutions, however there were no occurrences of
another administrative staff position serving as ALO for doctoral/research institutions.
The six occurrences (1.3%) of ALO explicitly within the title occurred in baccalaureate
and master’s institutions. As with MSCHE, doctoral/research institutions usually had a
Provost serving as the ALO (55.3%) while master’s, baccalaureate, and special focus
institutions usually had a Vice President serving as the ALO.
For WASC, eight titles (5.6%) had the term “accreditation” within the title, with
87 titles (61.3%) containing “accreditation” or a related term, and 111 (78.2%) containing
“accreditation” or a related term or a loosely related term. A Provost was most frequently
listed as the ALO (35.9%) with a Vice President serving as the second most frequent
ALO (33.8%). There were no occurrences in WASC of a President formally serving as
ALO. A Dean served as ALO most frequently at special focus institutions (46.3%) and
least frequently at master’s institutions (6.5%). A faculty member filled the function of
ALO most often at baccalaureate institutions (14.3%) and least often at special focus
institutions (7.3%). There were six occurrences of ALO explicitly within the title (4.2%)
spread throughout the four Carnegie classifications. There were no instances of a Director
serving as ALO in doctoral/research institutions, and another administrative staff member
served as ALO most frequently at baccalaureate institutions. As with the other two
regions, doctoral/research institutions usually had a Provost serving as the ALO (66.7%),
with a Vice President serving as ALO most frequently for master’s and baccalaureate
institutions. For special focus institutions the role of ALO was most frequently filled by a
Dean (46.3%).
For all three regions the professional most frequently serving as ALO was either a
Vice President or a Provost. These two positions accounted for 61.4% of all ALOs in
MSCHE, 62.0% of all ALOs in SACS, and 62.0% of all ALOs in WASC (a total of
61.7% of all positions). This is an extraordinary demonstration of consistency between
accreditation regions. There was wider variability between Carnegie classifications. A
Vice President or Provost served as ALO in 72.2% of all doctoral/research institutions,
70.6% of all master’s institutions, 56.2% of all baccalaureate institutions, and 44.5% of
all special focus institutions. For baccalaureate institutions the majority of ALO positions
were distributed relatively evenly across Vice President, Provost, and Dean positions. For
special focus institutions however, the position of Dean served most frequently as the
ALO, in all cases more frequently than the Provost.
ALO Time Committed to Accreditation
While the designation of a particular position to serve as ALO can be viewed as a
manifestation of institutional commitment to accreditation, one way to ascertain the
individual commitment demonstrated by each ALO is to examine the percentage of time
committed to the accreditation process. ALO respondents indicated the percentage of
time that they spent on accreditation both during the period when preparing for and
including the full institutional review and during the period when not preparing for the
full institutional review. They also commented on that time. Four of these comments
indicated that the time demands were reasonable. The vast majority of comments
however focused on how difficult it was to meet these demands on both a professional
and a personal level.
There were four instances where respondents indicated hours per week rather than
percentages. These values were re-calculated as percentages based on a traditional 40-
hour work week. While arguably the 40-hour work week is not the norm it is generally
considered the standard. (As one ALO commented: “This was pretty much a full-time job
for me for the last 6 months of the process. If we assume that ‘full time’ means 40
hours/week (which, of course, it doesn’t) that gives us 1,000 hours.”) Where respondents
gave a range of percentages the mean of that range was taken.
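As a purely illustrative sketch of these two adjustments (the values below are assumed, not taken from any respondent):

    # Assumed example: converting reported hours per week to a percentage of a
    # 40-hour work week, and taking the midpoint of a reported range.
    hours_per_week = 30
    percent_of_week = hours_per_week / 40 * 100   # -> 75.0 percent

    reported_range = (20, 40)                      # e.g., a response of "20-40%"
    midpoint = sum(reported_range) / 2             # -> 30.0 percent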
Neither the 299 responses on time spent during the review nor the 308 responses
on time spent outside of the review were distributed normally, so a parametric test was not appropriate. Instead a non-parametric test, the Independent-Samples Kruskal-Wallis test, was used. On average, ALOs spent 54.3% of their time on accreditation during the period
preparing for and including full institutional review. ALOs averaged 50.5% of their time
at MSCHE institutions, 58.8% of their time at SACS institutions, and 46.3% of their time
at WASC institutions; 55.4% of their time at doctoral/research institutions, 52.2% of their
time at master’s institutions, 57.1% of their time at baccalaureate institutions, and 50.1%
of their time at special focus institutions. The Independent-Samples Kruskal-Wallis test
of difference between region resulted in a significance of p = 0.002. The same test of
difference by Carnegie classification resulted in a significance of p = 0.431.
Consequently the difference in percentage of time spent on accreditation by ALOs as
reported on this survey was significant between regions but not between Carnegie
classifications.
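For readers wishing to see how such a comparison is typically computed, the following is a minimal sketch using SciPy's implementation of the Kruskal-Wallis test on hypothetical percentage data; the study itself reported only the resulting p-values, and the values below are invented for illustration.

    # Minimal sketch (hypothetical data): Kruskal-Wallis test of ALO time
    # percentages during review, grouped by accrediting region.
    from scipy.stats import kruskal

    msche = [50, 40, 65, 55, 30]   # assumed percentages for MSCHE respondents
    sacs  = [70, 60, 55, 80, 45]   # assumed percentages for SACS respondents
    wasc  = [40, 35, 50, 60, 45]   # assumed percentages for WASC respondents

    statistic, p_value = kruskal(msche, sacs, wasc)
    print(f"H = {statistic:.2f}, p = {p_value:.3f}")
    # p < 0.05 would indicate a significant difference in the distribution of
    # time percentages across regions, as was found in the study (p = 0.002).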
On the other hand ALOs spent on average 21.1% of their time on accreditation
when not in the period preparing for full institutional review. By region this was 20.6%
of ALO time for MSCHE institutions, 21.5% of ALO time for SACS institutions, and
19.7% of ALO time for WASC institutions; 22.4% of ALO time for doctoral/research
institutions, 19.0% of ALO time for master’s institutions, 22.1% of ALO time for
baccalaureate institutions, and 20.2% of ALO time for special focus institutions. The
Independent-Samples Kruskal-Wallis test of difference between region resulted in a
significance of p = 0.420, and the same test of difference between Carnegie classification resulted in a value of p = 0.556, so the null hypothesis could not be rejected in either instance. Significant differences were not found by either accreditation region or Carnegie classification in the percentage of ALO time spent on accreditation outside of the period preparing for the full institutional review.
ALOs commented amply in the survey on the percentage of time they spent on
accreditation. About a third of these comments were made to clarify what they listed as
the percentage of time committed to accreditation. One ALO pointed out that “the work
ebbs and flows. When you’re busy, the work takes 100% of your day or several days.
When there’s nothing happening (no new, changed, closed programs, no new sites), it’s
minimal.” Another echoed: “Accreditation work is episodic and peak-load oriented; [it is]
difficult to derive an on-going overall estimate.” Said a third, “It is really hard to estimate
the percentage of time during the reaffirmation process—you just work until the job is
done.” Other responses focused on how the work is “continuous rather than episodic,”
requiring constant monitoring. One ALO expressed difficulty about estimating time
devoted to accreditation because:
It’s never really over. Even after you finish the process, there are usually interim
reports, and now [the accreditation region] is redesigning its process to require
reports on specific datasets every three years. This is good for accountability
purposes, but will [require] a lot of resources for universities.
Similarly another ALO said, “I have not yet experienced my position at a time when we
were not preparing for formal review.” In other words, the presence of accreditation is
perennial.
In 10 instances ALOs commenting on this topic focused on the context in which
work on accreditation is done: “To be effective, I think of accreditation implications with
every decision we make so it is difficult to split the time.” Another ALO said: “My goal
is to make these processes part of day-to-day functions and not just to be done at
accreditation time.” Other ALOs also said that every decision is made with respect to
how it will affect accreditation because “everything I do has some bearing on
accreditation reviews.” Therefore “it is hard to know what is part of institutional
accreditation and what is not.” By way of illustration, one ALO said that “most
institutional research work indirectly impacts accreditation review work. For example,
when we conduct a survey or prepare a report six years before a formal review, it may
(and often should) end up in an accreditation report.” These comments further reveal the
continuous presence of accreditation at the institution and demonstrate how the
level of demand varies according to a multitude of factors. They also reveal the
commitment many ALOs have not only to completing the necessary tasks but also to
taking advantage of the process to further other university interests.
Only a few of the comments explicitly stated that the time demands of
accreditation were reasonable. Said one ALO, “In principle, it should not be
overwhelming, if planning is done properly and assessment data is in place. Of course,
those are major challenges.” Other comments focused on how previous experience with
accreditation review enabled staff to prepare by ensuring that adequate time and
resources were in place to support the review, even where this had necessitated the
creation of new positions or a new office focused on accreditation exclusively.
Most respondents however were not so positive about the requirements of
accreditation. These respondents also thought of accreditation in all-encompassing terms
but not as generously, for example: “There is a constant and looming presence of
accreditation regarding much of what we do.” The strongest theme by far running
through these comments was the feeling of frustration or being overwhelmed by the
demands of accreditation work. Comments were made about accreditation requiring “way
too much [time]” or “far, far too much” or “an inordinate amount of time.” Another ALO
blamed the accrediting region, saying that “the demands placed upon us by [the region]
are completely unreasonable and unsustainable.” Two separate ALOs spoke of the
percentage of time spent on accreditation using the same extreme number, one saying, “I
am not sure that 110% of time would be adequate!” and the other, “The worry factor
would have brought the percentage during our last cycle up to 110 percent.” Others
described accreditation as a “costly ridiculous process” or as “cumbersome and overly
elaborate,” and many expressed concern that the demands of accreditation were only
increasing. Because accreditation work is perpetual it often becomes overwhelming.
Thus a professional responsibility that is constantly overwhelming risks having a
personal effect on individuals as well. Most striking about the comments made by ALOs
on the percentage of their time spent on accreditation was the number of comments that
were made about the toll it was taking on them personally:
There was no release from my regular IR and/or assessment duties and…
preparation was an extremely time-consuming add-on, culminating in a literal
24/7 sprint from May to September (maybe 19/7 since I did have to sleep about 5
hours a day)!
Other ALOs talked about the need to work extra hours: “Given the demands of the dean’s
office the above percentages are [essentially] done in an overtime capacity, that is during
nights and weekends.” Similarly:
Although I listed 80% of my time devoted during the preparation and formal
accreditation time frame—my work week for the 18 months prior to the on-site
visit was approximately 70+ hours a week, 6 days a week. My normal week is
about 45-55 hours.
Extra time was required to meet the various needs of regional accreditation as illustrated
by this ALO:
Very hard to estimate amount of time actually allocated. The larger issue is the
timing of the time demands, coming in often unpredictable spurts when least
expected—[our region] representative suddenly [made a] “request” [for] more
information requiring more days of preparation. Eg: Just Monday this week I
found an e-mail sent to me on Sunday (!) requiring information before Tuesday!
Result, I had to work till nearly 10 PM Monday night to satisfy the accreditation-
related “request.”
The demands of this extra time also took a toll on other professional responsibilities: “It
[the percentage of time required for accreditation] is increasing all the time and threatens
my ability to take care of other duties.”
ALOs who also served as faculty commented on the impact that accreditation
responsibilities sometimes had on their teaching. While some were able to teach less (or
unable to maintain the same teaching load), others had to make room in their schedule:
You have no choice but to contribute all the time. During that time I did still teach
and do my regular job, but [the accrediting region] took almost [all] of my
designated workweek. I would have to stay late or come on weekends to catch up
on my regular position’s responsibilities.
More than one ALO spoke of “drowning in accreditation work.” For some, the
quantification of percentages committed to accreditation was difficult:
These percentages… are not very meaningful because they presuppose that the
cost to the university can be measured in the liaison’s personal involvement.
While that is a costly feature of accreditation, it is the very tip of the ice berg. The
number of people (full-time, part-time, and on a consulting basis) we have
devoted to satisfying the accreditation expectations has grown substantially in ten
years. And the amount of time we are now demanding from faculty and staff to
handle accreditation tasks is also growing substantially. It feels like “all
accreditation all the time.” The tail is virtually wagging the dog.
This language is reminiscent of the way Banta & Associates (2002) compared the
influence of accreditation on assessment to the inexorable power of a glacier.
Another ALO also pointed out succinctly that accreditation demands were more
than just professionally overwhelming: “It takes over my life—not just my work life, but
my whole life.” The personal toll of this constant professional focus on many very
capable individuals is very real and undeniable. This frustration was prevalent throughout
the survey responses, continually re-surfacing among most of the survey sections, not just
those treating the percentage of time spent on accreditation. From the strong language
and emotion that were constantly evident in these comments it became clear that this is a
sensitive issue. While the results of the survey are not statistically generalizable, because
of the persistent recurrence of this theme it seems reasonable to conclude that the
personal cost of managing accreditation is a concern for non-respondents as well.
It is clear from these data that accreditation carries significant costs at both an institutional and a personal level. Institutionally, the responsibilities are generally assigned to high-level positions (i.e., Vice President or Provost). The assignment of accreditation oversight
to these positions means a higher cost to the institution since these salaries are higher, and
a higher cost to the person filling the assignment since these positions tend to have
oversight of many other institutional priorities. At the same time the personal cost is often
high because the accreditation responsibilities are often so demanding as to make an
imposition on the personal life of the ALO. On the other hand while these comments
demonstrate the high toll such demands can take, the fact that these ALOs are meeting
those demands even at such personal cost is a striking testament to their commitment to
the institution if not to the accreditation process.
The Perceived Direct and Indirect Costs of Institutional Accreditation
While the determination of value can be problematic, the determination of cost is
much more objective. Fiscal costs can be directly added, and the indirect costs of time
can be monetized. This section will explore the perceived direct and indirect costs of institutional accreditation, beginning with the direct costs as reported by ALOs and then turning to the indirect costs. Each analysis includes both a quantitative and a qualitative component. The section will conclude by considering the combined fiscal value of direct and indirect costs, assigning a monetary value to the indirect costs and adding it to the direct costs.
Reported Direct Costs of Accreditation
Survey respondents reported the direct cumulative financial cost of the
preparation of the self-study document (including the cost of materials, copying, printing,
mailing, fees for professional services, etc.), and the direct cumulative financial cost of
the site visit (including the cost of travel, accommodations, food, stipends/honoraria,
etc.). In seven instances respondents listed a range, in which case the mean of that range
was taken. In the four cases where respondents gave values that included cents (e.g.,
$4,957.39; $5,521.29; $11,071.60; $29,932.87) the figures were rounded to the nearest
dollar. One responding institution reported a site visit cost that was greater than three
times the next highest reported cost. The data from this institution (a master’s institution
from SACS) were suppressed from the direct cost analyses as an extreme outlier.
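For illustration only (this sketch is not part of the study's instrumentation, and the entries shown are hypothetical rather than actual survey responses), the data-preparation steps just described could be expressed in Python as follows:

    # Illustrative cleaning of reported direct-cost values (hypothetical entries).
    # A reported range is replaced by its mean; a value with cents is rounded to
    # the nearest dollar, mirroring the steps described above.
    def clean_cost(raw):
        raw = raw.replace("$", "").replace(",", "").strip()
        if "-" in raw:  # a reported range, e.g. "10000-20000"
            low, high = (float(part) for part in raw.split("-"))
            return round((low + high) / 2)
        return round(float(raw))

    examples = ["$5,000", "10000-20000", "$4,957.39"]
    print([clean_cost(value) for value in examples])  # [5000, 15000, 4957]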
A total of 214 respondents (63.9%) reported a direct cost for the document. The
mean of the document cost was $50,979 with a standard deviation of $99,253 and a 95%
confidence interval of $37,605 to $64,353. The mode was $5,000 and the median was
$12,000. The lowest reported document cost was $0 and the highest reported document
cost was $600,000. The data had high variability and were not normally distributed (skewness of 3.3, kurtosis of 12.4), so steps were taken to determine whether the data could be transformed to approximate a normal distribution. An analysis of the open-ended comments provided by respondents who submitted outliers did not identify any differentiation between these responses and others, so the outliers were removed. Even after removing the outliers and applying square-root and base-10 logarithmic transformations, however, the data still resisted a normal distribution. Therefore the removed outliers were restored to the data set and the Independent-Samples Kruskal-Wallis test, a nonparametric test, was conducted to determine whether there was
significant difference between the reported means of document cost by accreditation
region or Carnegie classification. The value of the Independent-Samples Kruskal-Wallis
test on accreditation region was p = 0.560. The value of the Independent-Samples
Kruskal-Wallis test on Carnegie classification however was p = 0.001. Therefore the
difference in mean reported document cost was significant by Carnegie classification but
not by accreditation region. Table 4.3 lists the descriptive statistics of document cost by
Carnegie classification. The reported document cost was highest for doctoral/research
institutions, followed by master’s, baccalaureate, and special focus institutions.
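The nonparametric comparison described above can be sketched in Python; the grouped values below are placeholders rather than the survey data, and scipy is used here only as a stand-in for the statistical software actually employed:

    # Independent-Samples Kruskal-Wallis test on document cost grouped by
    # Carnegie classification (placeholder values, not the survey data).
    from scipy import stats

    costs_by_classification = {
        "Doctoral/Research": [32000, 88000, 5000, 150000, 60000],
        "Master's": [12000, 5000, 60000, 9000, 25000],
        "Baccalaureate": [10000, 2500, 25000, 7000, 15000],
        "Special Focus": [10000, 1000, 4000, 15000, 8000],
    }

    statistic, p_value = stats.kruskal(*costs_by_classification.values())
    print(f"Kruskal-Wallis H = {statistic:.2f}, p = {p_value:.3f}")
    # A p-value below 0.05 indicates a significant difference across the groups,
    # as reported above for Carnegie classification (p = 0.001) but not for
    # accreditation region (p = 0.560).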
A total of 210 respondents (62.7%) reported a direct cost for the site visit. The
mean of the site visit cost was $20,591 with a standard deviation of $19,656 and a 95%
confidence interval of $17,917 to $23,265. The mode was $20,000 and the median was
$15,000. The lowest reported site visit cost was $3,000 and the highest reported site visit
cost was $150,000. These data also demonstrated high variability and were not normally distributed (skewness of 3.1, kurtosis of 13.5), so steps were again taken to determine whether they could be transformed to approximate a normal distribution. An analysis of the open-ended comments provided by respondents who submitted outliers again did not identify any differentiation between these responses and others, and only two respondents who provided outlier data for site visit cost also provided outlier data for document cost; the outliers were therefore removed. Even after removing the outliers and applying square-root and base-10 logarithmic transformations, however, the data still resisted a normal distribution. Therefore the removed outliers were again restored to the data set and the nonparametric Independent-Samples Kruskal-Wallis test
was conducted to determine whether there was significant difference between the
reported means of site visit cost by accreditation region or Carnegie classification. The
value of the Independent-Samples Kruskal-Wallis test on accreditation region was p =
0.245. The value of the Independent-Samples Kruskal-Wallis test on Carnegie
classification however was p = 0.002. Therefore the difference in mean reported site visit
cost was significant by Carnegie classification and not by accreditation region. Table 4.4
Table 4.3: Document Cost by Carnegie Classification
Doctoral/Research: N 43; mean $88,271; SD $125,232; mode $5,000 (a); median $32,171; min $200; max $600,000; 95% CI $49,730 to $126,812; skewness 2.3; kurtosis 5.9
Master's: N 71; mean $53,528; SD $104,660; mode $10,000; median $12,000; min $300; max $600,000; 95% CI $28,755 to $78,300; skewness 3.3; kurtosis 12.3
Baccalaureate: N 67; mean $33,777; SD $75,353; mode $1,000 (a); median $10,000; min $250; max $534,900; 95% CI $15,397 to $52,157; skewness 5.1; kurtosis 30.7
Special Focus: N 33; mean $31,827; SD $80,214; mode $1,000; median $10,000; min $0; max $400,000; 95% CI $3,384 to $60,270; skewness 3.9; kurtosis 15.5
(a) Multiple modes exist. The smallest value is shown.
Table 4.4: Site Visit Cost by Carnegie Classification
Doctoral/Research: N 44; mean $28,401; SD $26,480; mode $15,000; median $17,250; min $4,500; max $120,000; 95% CI $20,350 to $36,451; skewness 1.7; kurtosis 2.9
Master's: N 70; mean $22,895; SD $20,544; mode $25,000; median $20,000; min $4,000; max $500,000; 95% CI $17,997 to $27,794; skewness 3.8; kurtosis 21.0
Baccalaureate: N 64; mean $15,991; SD $14,198; mode $20,000; median $14,000; min $3,000; max $100,000; 95% CI $12,444 to $19,538; skewness 3.8; kurtosis 19.7
Special Focus: N 32; mean $14,013; SD $9,807; mode $20,000; median $12,000; min $3,000; max $40,000; 95% CI $10,477 to $17,549; skewness 1.1; kurtosis 0.7
lists the descriptive statistics for site visit costs by Carnegie classification. The reported
site visit cost was highest for doctoral/research institutions and this was roughly double
that of the cost reported by baccalaureate institutions and special focus institutions.
For the purpose of this study, the document cost and site visit cost when both
present and combined constitute the total direct financial cost of institutional
accreditation. A total of 204 respondents (60.9%) reported both costs. The mean
combined cost was $73,591 with a standard deviation of $109,713 and a 95% confidence
interval of $58,445 to $88,736. The mode was $40,000 and the median was $35,000. The
lowest combined cost was $4,500 and the highest combined cost was $620,000. The data
were again not normally distributed (skewness of 2.9, kurtosis of 9.0), so steps were taken to determine whether these data could be transformed to approximate a normal distribution. Outliers (those with a combined cost greater than $250,000) were removed and a square-root transformation was applied, after which the data were still not normally distributed. The data did, however, prove to be normally distributed when transformed by taking the base-10 logarithm. A one-way ANOVA was conducted to test
for significance of difference between accreditation regions and a second one-way
ANOVA was conducted to test for significance of difference between Carnegie
classifications. The one-way ANOVA on the base-10 logarithm of the means of
combined direct costs by accreditation region yielded a significance value of p = 0.497,
showing that there was not a significant difference in means of combined direct costs
between accreditation regions. The one-way ANOVA on the base-10 logarithm of the
means of combined direct costs by Carnegie classification yielded a significance value
less than p = 0.001, showing that there was a significant difference in means of combined
direct costs between Carnegie classifications. In each case the p-value for Levene’s test was not significant, demonstrating that the data had homogeneity of variance.
Table 4.5 shows the means of combined direct costs by Carnegie classification
(excluding outliers with combined cost greater than $250,000; note that means for
combined direct costs by Carnegie classification including these outliers but excluding
the extreme outlier as indicated above can be found in Appendix E). The Tukey Honestly
Significant Difference test showed significant differences in means between
doctoral/research institutions and baccalaureate colleges, between doctoral/research
institutions and special focus institutions, and between master’s institutions and special
focus institutions.
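The sequence applied to the combined direct costs (base-10 logarithmic transformation, Levene's test for homogeneity of variance, one-way ANOVA, and the Tukey Honestly Significant Difference test) can likewise be sketched; the values below are placeholders, and scipy and statsmodels stand in for the statistical package actually used:

    # Log-transform combined costs, then test homogeneity of variance, overall
    # group differences, and pairwise differences (placeholder values).
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    groups = {
        "Doctoral/Research": [63000, 51000, 120000, 45000, 90000],
        "Master's": [47000, 35000, 90000, 28000, 55000],
        "Baccalaureate": [39000, 25000, 60000, 18000, 30000],
        "Special Focus": [26000, 20000, 35000, 12000, 22000],
    }
    logged = {name: np.log10(values) for name, values in groups.items()}

    print(stats.levene(*logged.values()))    # homogeneity of variance
    print(stats.f_oneway(*logged.values()))  # one-way ANOVA on logged costs

    values = np.concatenate(list(logged.values()))
    labels = np.repeat(list(logged.keys()), [len(v) for v in logged.values()])
    print(pairwise_tukeyhsd(values, labels)) # Tukey HSD pairwise comparisons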
Only 61.3% of survey respondents reported on one of these direct costs of
accreditation. There are a few reasons why ALOs may have submitted the survey without
indicating direct financial costs. As noted above, concern over the private nature of these
data caused at least some invited ALOs to decline completing the survey. This same
concern may also explain the reluctance of ALOs to give detailed cost information.
Respondents may have wished to participate in the study or support it, and therefore
elected to make a qualitative contribution while still withholding some of the requested
figures in deference to institutional preference or policy. Another primary reason may be
limited access to summary data of this nature. As one respondent noted: “I am really
sorry but we simply did not track the costs very closely—the rationale being, I believe,
Table 4.5: Combined Direct Costs by Carnegie Classification
Doctoral/Research: N 34; mean $63,749; SD $52,704; mode $60,000; median $51,000; min $9,500; max $215,000; 95% CI $45,360 to $82,138; skewness 1.3; kurtosis 1.4
Master's: N 63; mean $47,558; SD $39,690; mode $40,000; median $35,454; min $8,500; max $185,000; 95% CI $37,562 to $57,554; skewness 1.8; kurtosis 3.6
Baccalaureate: N 60; mean $39,639; SD $41,228; mode $20,000; median $25,354; min $5,557; max $200,000; 95% CI $28,988 to $50,289; skewness 2.3; kurtosis 5.6
Special Focus: N 30; mean $26,090; SD $23,982; mode $35,000; median $20,250; min $4,500; max $120,000; 95% CI $17,135 to $35,045; skewness 2.4; kurtosis 7.5
that it makes no difference because we have to do what needs to be done anyway
regardless of cost.”
All three direct cost measures follow the same pattern: The costs are highest for
doctoral/research institutions, less for master’s institutions, even less for baccalaureate
institutions, and lowest for special focus institutions. This could simply be an indication
of institutional budget, reflecting for instance the large, multi-faceted nature of the
budgets of doctoral/research institutions and the much smaller, more focused nature of the
budgets of special focus institutions.
ALO Comments on Direct Costs
ALOs were given the opportunity to comment on the direct costs of institutional
accreditation and these comments were analyzed qualitatively. The three primary themes
emerging from an analysis of these comments were the limited reliability of reported
costs, the reasonableness of the costs, and the high magnitude of the costs.
About half of the comments were clarifying comments. Many respondents
specified that the estimates they provided were rough in nature because they had limited
or no access to the relevant financial records at the time of completing the survey
(frequently because they were kept by another campus office), because no or limited
records had been kept by the institution (sometimes because keeping track of such
specific costs constituted too much work), or because the respondents had not
participated in the previous review. The ultimate effect of this turnover is an increased
cost to the institution both in terms of time and money as each new ALO must learn how
to manage the accreditation process at that school for the first time during the next
review.
Two direct cost items that were consistently mentioned, though not suggested in the survey, were software for the document and gift or token items for
visitors (“swag” according to two respondents) for the site visit. Another direct cost item
that was intentionally excluded from the survey (because it falls outside the scope of this
study) was the fee assessed by the regional accreditor. This cost was deliberately
excluded because all institutions pay such a fee to the regional accreditor, and specifically
addressing (and excluding) it was an effort to further standardize the direct costs reported.
At least one respondent disagreed with this strategy, opining:
Fees paid to accrediting organizations… are substantial and are directly borne by
the institution undergoing the site visit. They are not optional and including them
as direct costs will more accurately convey the real costs of the site visit to the
institution.
There were also several comments that alluded to the indirect costs of accreditation by
referring to how salary expenses constituted the greatest financial cost, although the
survey specifically asked that these costs be excluded from the direct costs reported.
These comments illustrate how ALOs were acutely (and often painfully) aware of all of
the costs of institutional accreditation.
Only five respondents indicated in some way that they found the costs to be
reasonable. Some had adequately anticipated the direct costs or did not consider them to
be overly high, especially in light of the return on the investment. Said one respondent:
I am aware of the occasional argument that the costs are too high for the
demonstrable return. However, institutions need to be accountable to someone
outside their own governance structures and state organizations for quality control
purposes. The “peer review” approach is far superior to state or federal review
processes, which are the only realistic alternatives. If you distribute the costs out
over time, they are actually quite reasonable and cost effective from a variety of
perspectives.
Another respondent focused more on what the institution got out of the process:
The cost of accreditation is minimal in relationship to the benefit of accreditation.
If an institution uses it properly, the self-study highlights the areas in need of
improvement and permits the institution to address these. Without accreditation,
many institutions would never examine their effectiveness. “It is not what you
expect that gets done, it is what you inspect!”
This idea was present even when another respondent acknowledged that the expense
involved was high: “It was costly. However, without this process, our university would
not be looking very carefully at student learning or the engagement of students, or
changes in student and faculty demographics, or graduation and retention of students.” A
fourth respondent emphasized the institution’s responsibility to manage the costs: “I
don’t think schools need to make these reports a costly venture. We were judged on the
content of our report, not the format.” There is an inevitable cost associated with
accreditation; these respondents focused on how the institution benefitted from paying
that cost.
About twice as many comments on this topic however were about how the costs
were in fact too high. Respondents described direct costs using such language as “a
burden,” “way too high,” “unsustainable,” and “a financial nightmare.” One respondent
said, “The burden only negatively increases with the costs. Also there is an unspoken
perception that costs are what it is about. There is never a waived fee or an opportunity
for support.” Some respondents elaborated on very specific examples of why the costs
were so high:
Costs of visits are, in my view, totally outrageous. [Our accrediting region] insists
that visitors are kept, not just in good but in the very BEST hotel in town and
insist on dining in the MOST expensive restaurants in town. (None of us live in
this life style in our own real lives and our institution won’t let any of us travel on
company business in this lavish style.) Even in relatively small things, little regard
is shown to the impact on an institution’s budget. On our last visit, the visiting
team and [accreditor] liaison insisted on the institution twice busing the entire
cabinet and most of the administration across town for one hour meetings at the
hotel when they could have easily met at the college saving the need for renting a
huge bus—and the need to interrupt half the day for the whole administration. The
accrediting organization holding to these standards has led peer reviewers to
expect to be treated in this fashion—so no institution dares object in real time.
Another response discussed the normalization of escalating costs:
We provided whatever amenities and arrangements the chair of the visiting
committee wanted (he was pretty specific about it) and also consulted with peers
about what they provided. There is definitely a competitive element of “keeping
up with the Jones” in putting on the onsite visit. No one wants to appear to be
cheap in making the reviewers feel comfortable.
This sentiment was even expressed by one respondent who felt that the costs were in fact
justified:
Yes [the costs are justified]. However, we seem to be in a sort of arms race in
entertaining the on-site committee. The expectation of excellent food and drink,
top notch accommodations, gift baskets, etc. almost feels like bribery. We spent a
lot on the actual visit. We did all of the prep in-house, so there was very little
direct cost in producing the self-study document.
These comments illustrate how some respondents felt that the direct costs and the nature
of those costs were simply unreasonable.
The direct costs of the accreditation process are real and calculable. That cost is far from negligible, and it varies more significantly by Carnegie classification than by accreditation region, probably because there is greater variation between types of institution than between locations. Respondents’ comments offered extensive clarification on how they counted that cost because they wished to convey an accurate portrayal of the process, further testimony to their professional commitment. The topic is emotionally charged, however: respondents who felt that costs were too high used strong language to say so, and even those who felt the costs were reasonable were sometimes defensive.
Reported Indirect Costs of Accreditation
The calculation of indirect costs introduces far more variables than the calculation of direct dollar costs, which are more easily counted. Survey respondents reported the
indirect costs of institutional accreditation in terms of time spent on the process. Survey
respondents reported the total number of people involved with the accreditation review at
some point and the cumulative total of hours contributed by each group in each of five
categories: senior administration such as president, vice presidents, provosts, deans, etc.
(195 responses on total number and 176 responses on cumulative count of hours), faculty
(193 responses on total number and 175 responses on cumulative count of hours),
administrative staff (194 responses on total number and 176 responses on cumulative
count of hours), students (189 responses on total number and 170 responses on
cumulative count of hours), and others such as trustees, alumni, etc. (172 responses on
total number and 160 responses on cumulative count of hours). They also reported the
cumulative total of hours contributed by the ALO himself or herself (178 responses). In some instances respondents listed a range, in which case the mean of that range was taken.
Where respondents listed a number of years (e.g., “25 hours per week for 1.5 years”), a
52-week year was used to compute the total.
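As a simple illustration of that conversion, the rate-based response quoted above would be tallied as follows:

    # "25 hours per week for 1.5 years" converted to cumulative hours
    # using a 52-week year.
    hours_per_week = 25
    years = 1.5
    cumulative_hours = hours_per_week * 52 * years
    print(cumulative_hours)  # 1950.0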
In a review of the data, the reported indirect costs for several institutions stood out
starkly as extreme outliers that were significantly skewing the costs. (In two cases this
highest reported value was more than double the amount of the second highest value.) A
review of the top five extreme values for each of the six categories of cumulative hours
contributed to accreditation revealed eight respondent institutions that reported an
extreme value in more than one category. The data from these eight institutions (two
baccalaureate institutions from MSCHE, two baccalaureate institutions from SACS, two
master’s institutions from SACS, one doctoral/research institution from SACS, and one
baccalaureate institution from WASC) included all of the most extreme values and were
suppressed from the indirect cost analyses. Tables 4.6 and 4.7 show descriptive statistics
for the reported indirect costs of institutional accreditation.
Table 4.6: Total Number of People Involved with Accreditation
Senior Admin: mean 11.2; SD 11.7; mode 5; median 8; min 1; max 115; 95% CI 9.5 to 12.9; skewness 4.6; kurtosis 34.0
Faculty: mean 38.3; SD 38.1; mode 50; median 29; min 0; max 250; 95% CI 32.8 to 43.8; skewness 2.3; kurtosis 7.7
Staff: mean 18.3; SD 27.9; mode 2 (a); median 10; min 0; max 250; 95% CI 14.2 to 22.3; skewness 4.5; kurtosis 28.7
Students: mean 30.6; SD 98.5; mode 10; median 10; min 0; max 1,000; 95% CI 16.2 to 45.1; skewness 7.1; kurtosis 59.3
Others: mean 10.4; SD 14.3; mode 10; median 6; min 0; max 100; 95% CI 8.2 to 12.5; skewness 3.9; kurtosis 20.6
(a) Multiple modes exist. The smallest value is shown.
Table 4.7: Cumulative Hours Spent on Accreditation
ALO: mean 1,408.7; SD 1,610.2; mode 2,000; median 800; min 0; max 10,000; 95% CI 1,164.2 to 1,653.2; skewness 2.0; kurtosis 5.4
Senior Admin: mean 934.9; SD 1,443.6; mode 100; median 400; min 10; max 9,750; 95% CI 714.4 to 1,155.5; skewness 3.1; kurtosis 12.3
Faculty: mean 1,842.0; SD 5,602.8; mode 1,000; median 600; min 0; max 63,750; 95% CI 983.4 to 2,700.6; skewness 8.8; kurtosis 92.3
Staff: mean 1,647.1; SD 3,926.2; mode 100 (a); median 500; min 0; max 35,000; 95% CI 1,047.3 to 2,247.0; skewness 6.1; kurtosis 44.7
Students: mean 271.0; SD 831.1; mode 100; median 60; min 0; max 8,000; 95% CI 141.7 to 400.4; skewness 6.5; kurtosis 51.0
Others: mean 72.1; SD 148.8; mode 50; median 30; min 0; max 1,200; 95% CI 48.1 to 96.0; skewness 4.7; kurtosis 26.7
(a) Multiple modes exist. The smallest value is shown.
The reported indirect costs for institutional accreditation had higher variability
than the reported direct costs, and none of the indirect costs were distributed normally.
Consequently the nonparametric Independent-Samples Kruskal-Wallis test was
conducted to determine whether the difference in means between the various reported
indirect costs by accreditation region and by Carnegie classification was significant.
Table 4.8 shows the significance values for the test by accreditation region. Significant
difference was discovered between the means by accreditation region for total number of
senior administrators involved (13.3 for MSCHE, 11.3 for SACS, and 8.7 for WASC),
total number of faculty involved (43.0 for MSCHE, 32.3 for SACS, and 48.0 for WASC),
total number of staff involved (25.5 for MSCHE, 16.2 for SACS, and 15.0 for WASC),
cumulative faculty hours contributed (1,840.7 for MSCHE, 2,288.9 for SACS, and 681.5
for WASC), and cumulative staff hours contributed (1,226.0 for MSCHE, 2,215.6 for
SACS, and 662.4 for WASC).
Table 4.9 shows the significance values for the test by Carnegie classification.
Significant difference was discovered between the means by classification for total
number of senior administrators involved (14.2 for doctoral/research institutions, 14.3 for
master’s institutions, 8.2 for baccalaureate institutions, and 6.2 for special focus
institutions), total number of faculty involved (46.3 for doctoral/research institutions,
40.4 for master’s institutions, 37.4 for baccalaureate institutions, and 24.5 for special
focus institutions), total number of staff involved (23.9 for doctoral/research institutions,
25.1 for master’s institutions, 11.4 for baccalaureate institutions, and 8.7 for special focus
institutions), total number of students involved (21.7 for doctoral/research institutions,
Table 4.8: Test for Significant Difference between Means by Accreditation Region (Independent-Samples Kruskal-Wallis Test)
The distribution of cumulative hours spent by ALO is the same across categories of accreditation region: p-value = 0.141; significance: FALSE
The distribution of total number of senior administrators is the same across categories of accreditation region: p-value = 0.038; significance: TRUE
The distribution of cumulative hours spent by senior administrators is the same across categories of accreditation region: p-value = 0.128; significance: FALSE
The distribution of total number of faculty is the same across categories of accreditation region: p-value = 0.031; significance: TRUE
The distribution of cumulative hours spent by faculty is the same across categories of accreditation region: p-value = 0.042; significance: TRUE
The distribution of total number of staff is the same across categories of accreditation region: p-value = 0.007; significance: TRUE
The distribution of cumulative hours spent by staff is the same across categories of accreditation region: p-value = 0.011; significance: TRUE
The distribution of total number of students is the same across categories of accreditation region: p-value = 0.338; significance: FALSE
The distribution of cumulative hours spent by students is the same across categories of accreditation region: p-value = 0.197; significance: FALSE
The distribution of total number of others is the same across categories of accreditation region: p-value = 0.801; significance: FALSE
The distribution of cumulative hours spent by others is the same across categories of accreditation region: p-value = 0.158; significance: FALSE
48.6 for master’s institutions, 29.6 for baccalaureate institutions, and 9.1 for special focus
institutions), total number of others involved (8.6 for doctoral/research institutions, 14.4
for master’s institutions, 9.9 for baccalaureate institutions, and 6.0 for special focus
institutions), cumulative faculty hours contributed (1,185.6 for doctoral/research
institutions, 2,538.5 for master’s institutions, 2,240.4 for baccalaureate institutions, and
584.1 for special focus institutions), cumulative staff hours contributed (3,489.8 for
doctoral/research institutions, 1,947.2 for master’s institutions, 616.9 for baccalaureate
institutions, and 762.3 for special focus institutions), cumulative student hours
contributed (383.6 for doctoral/research institutions, 497.1 for master’s institutions, 73.4
for baccalaureate institutions, and 49.3 for special focus institutions), and cumulative
hours contributed by others (90.6 for doctoral/research institutions, 99.5 for master’s
institutions, 39.3 for baccalaureate institutions, and 50.5 for special focus institutions).
There are more statistical differences in indirect costs between Carnegie classifications
than there are between accreditation regions. Summary tables of descriptive statistics of
indirect costs can be found in Appendix F.
Between 47.6% and 58.0% of survey respondents reported on some aspect of the
indirect costs of accreditation. The reasons why ALOs may have submitted the survey without indicating indirect costs are similar to those for direct costs: most probably privacy concerns and limited access to the data. In the case of indirect costs, however, the latter more likely accounts for the missing data because of the difficulty of tracking such information, especially after the fact, as noted above.
While it can be difficult to draw conclusions about the differences in time committed between accrediting regions and between Carnegie classifications, significant differences exist consistently in both categories in the number of senior administrators involved in the process, the number of faculty and faculty hours, and the number of staff and staff hours. The
difference between Carnegie classifications in the number of senior administrators
involved can likely be explained by the assigned value of accreditation as demonstrated
Table 4.9: Test for Significant Difference between Means by Carnegie Classification (Independent-Samples Kruskal-Wallis Test)
The distribution of cumulative hours spent by ALO is the same across categories of Carnegie classification: p-value = 0.106; significance: FALSE
The distribution of total number of senior administrators is the same across categories of Carnegie classification: p-value = 0.000; significance: TRUE
The distribution of cumulative hours spent by senior administrators is the same across categories of Carnegie classification: p-value = 0.148; significance: FALSE
The distribution of total number of faculty is the same across categories of Carnegie classification: p-value = 0.000; significance: TRUE
The distribution of cumulative hours spent by faculty is the same across categories of Carnegie classification: p-value = 0.024; significance: TRUE
The distribution of total number of staff is the same across categories of Carnegie classification: p-value = 0.004; significance: TRUE
The distribution of cumulative hours spent by staff is the same across categories of Carnegie classification: p-value = 0.002; significance: TRUE
The distribution of total number of students is the same across categories of Carnegie classification: p-value = 0.007; significance: TRUE
The distribution of cumulative hours spent by students is the same across categories of Carnegie classification: p-value = 0.000; significance: TRUE
The distribution of total number of others is the same across categories of Carnegie classification: p-value = 0.020; significance: TRUE
The distribution of cumulative hours spent by others is the same across categories of Carnegie classification: p-value = 0.043; significance: TRUE
in the title analysis section above. Larger institutions and institutions with larger budgets
(e.g., doctoral/research institutions as opposed to special focus institutions) appear to be assigning this responsibility to a greater number of senior administrators. The question of budget may
also explain the difference between Carnegie classifications in the involvement of staff
and the number of staff hours: Institutions with larger budgets can afford to assign more
staff and more staff time in support of the process. Similarly the difference between
Carnegie classifications in the involvement of faculty and the number of faculty hours
may have more to do with the inter-category difference in faculty expectations than
anything else. The differences in each of these categories between accrediting regions
may be a function of the difference in emphasis demonstrated by the regional accreditors
through their accreditation oversight and follow up.
ALO Comments on Indirect Costs
As with the survey section on direct costs, respondents were given the opportunity
to provide open-ended comments on the indirect costs they reported. As mentioned
above, respondents commented on the time spent as being the most significant cost of
accreditation both in terms of the amount of actual time and the financial value of that
time. One respondent succinctly summarized this idea: “The cost in time is much more of
a burden than the financial cost.” The primary themes that emerged from the qualitative
analysis of these comments followed a pattern very similar to that of the direct costs
comments analysis: over three-quarters of respondents commented principally on the
limited precision of the indirect cost numbers they reported, while 16 commented on the
value of the activities undertaken in fulfilling accreditation requirements. A very few (in
this case three) commented on the justifiability of these activities, while the rest deemed
the indirect costs to be too high.
In terms of clarification, many respondents were careful to note that they
considered the requested numbers to be either difficult to estimate or inestimable
altogether because of the “variability involved in the time allocations” or the “hierarchies
of involvement: Department chair involves other faculty, senior administrators involve
other administrators and assistants. Very few are not involved in some way.”
Commenting on the difficulty of putting together such estimates one respondent said:
It would be an immense effort to attempt to gather information on indirect costs of
the many who contributed to the effort. Even the direct personnel costs would be
virtually impossible to calculate. I worked on the project for probably two years in
varying amounts of time, and could not begin to estimate my time.
Another respondent offered a striking analogy:
This is like my asking you, how many total hours, counting courses, library study
time, commuting, and such, have you spent completing all your graduate work
since your last year of college—but don’t count anything in the last year.
Whatever number you or I could give to these questions, is completely a wild
guess. I understand why you would like it, but trying to reconstruct this 7-8 years
later, when I was not the ALO, is futile.
Another reason respondents consistently mentioned for providing estimates rather
than more accurate numbers was the amount of time that had lapsed since the last review.
Said one respondent: “We did not track these costs during our preparations for our
accreditation review nor for the monitoring period thereafter. To estimate them now
would require substantial costs and time.” Another echoed: “We could potentially track
time in preparing for our next self-study but it would be impossible to reconstruct that
timeline in hindsight.” As mentioned in the justification for this study, having a
benchmark for reference in future years could greatly facilitate subsequent iterations of
the process.
One final common reason used in cautioning the researcher about the accuracy of
reported indirect costs was the amount of personnel change that had happened since the
last review took place. For example one respondent said, “I did not participate in this last
visit. The person who was responsible for it retired right after the visit so these kind of
details retired with her.” As noted for direct costs, this kind of turnover results in
increased cost to the institution as management of the process must be entirely re-learned
by the individual newly assigned to it.
Another response illustrated the dynamic way that personnel change and topics
such as accreditation interact:
I don’t think I can quantify this with any accuracy. In response to the survey
questions above, I’ve reported only the direct costs of the accreditation visits and
report preparation. It’s very difficult to quantify the indirect costs associated with
accreditation. For example, we hired a number of faculty and staff before, during
and after our accreditation cycle whose job descriptions included tasks related to
accreditation initiatives. I think we would have filled most of all of these positions
anyway (since the university was and is in growth mode), but we may have
prioritized the hires differently as a result of the accreditation process, both by
hiring in anticipation of accreditation requirements and by hiring in response to
recommendations from the visiting teams.
Some respondents wished to point out that the actual numbers were probably
higher than what was being reported: “This is probably an underestimate, both in
numbers of people and in numbers of hours,” and, “If anything, these estimates are on the
low side.” The difficulty that respondents expressed in estimating indirect costs would
certainly help explain the high variability and lack of normal distribution for the data.
As noted above, 16 respondents took the opportunity to comment directly on how
worthwhile the time spent on accreditation was. A small number of commenters felt that
the time spent on accreditation was justified despite the costs that it incurs. For example:
It is difficult to separate accreditation from continuous institutional effectiveness.
Our college has a cycle of program reviews, surveys, pre-post tests, etc. for
measuring and determining institutional effectiveness. We use all this data for our
accreditation. Completing the report is the act of pulling all this data together
according to the Core Requirements and Comprehensive Standards. If
institutional effectiveness is a continuous process, [then] accreditation is a
compilation of data with a narrative.
Another respondent specified the deliberate nature with which costs were accepted: “My
institution felt it was important to include many associates in the process despite the
significant time commitment required to do so.” The following comment focused on why
exactly accreditation is so important: “Yes, I am convinced that without accreditation
universities would not focus on the quality of academic program, faculty credentials, etc.
as they do.” It seems apparent that some institutions focused on how the accreditation
process supported already existent efforts despite its costs.
Far more comments however expressed concern about just how high the cost of
accreditation is. The indirect costs were “[a] tremendous drain on the institution.”
Another comment mentioned how “massive numbers of manpower hours across campus
are devoted each round,” while a separate respondent said, “These costs were WAY
above what we originally estimated.”
For some respondents the indirect costs of accreditation were too high in terms of
opportunity costs: “There is no way to measure the opportunity costs (i.e., the things that
we could have been achieving had the accreditation visit not taken precedent).”
Commented another: “The $500.00 I included earlier does not include my salary nor the
time spent by some 85 faculty members and university administrators. The man hours
spent that could not be devoted to other tasks.” Yet another said: “Once again, this is time
taken away from our primary mission.”
The topic of the personal toll that accreditation responsibilities can take surfaced
again:
Three people did the majority of the work. I had two faculty members who were
appointed co-chairs. They helped organize the faculty at large into committees to
begin addressing the sections (on wiki). Once these were complete (after a year), I
stripped them all out into Word. From there, we split the writing, editing, and
reviewing responsibilities equally among the three of us. Documentation was
coordinated using a Google Docs spreadsheet, which was invaluable. I typed the
entire document myself, along with the majority of the edits, all the formatting,
TOC, pagination, covers, etc. It was very much like a dissertation project for me.
It was an exhausting process. And, in the end, the accolades from the President
(who was on sabbatical 2/3 of the year) and a provost who might as well have
been with him, went to the [accreditation project] leaders and the two faculty co-
chairs. The job of [accreditation region] liaison is thankless.
For these respondents, the costs were indeed very high. The preceding comment offered a
very specific example of how this was so. One final illustrative comment concerned the
toll taken by the indirect costs associated with the assignment of accreditation duties
systemically:
Accreditation is a constant concern in organizing, hiring, curriculum
development, establishment of branch campuses, and substantial changes.
Presidents and system administrators do not budget for this time spent on
accreditation duties. Therefore, accreditation activities are added on top of an
already heavy workload. It is critical, but not properly accounted for in terms of
hundreds and in some cases thousands of hours required in preparing and
documenting the meeting of standards.
The last comment illustrates that, according to at least some respondents, accreditation costs were unreasonable because adequate support is not provided to ALOs to manage the process and its attendant costs.
The indirect costs of the accreditation process are much harder to quantify than
the direct costs. It is difficult to account for time spent on two levels: a great many people are involved to a greater or lesser extent, and so much time is spent on accreditation over the multiple years of a review that it is virtually impossible to
tally all of that time. As with the direct costs, respondents commented thoughtfully and
carefully on the subject so as to represent this complex issue as accurately as possible,
again demonstrating a profound work ethic. Emotions ran at least as strongly for indirect
costs as for direct costs, probably because of the very personal way in which time spent
affects individuals.
Direct and Indirect Costs Combined
One important way to explore the total costs of accreditation is by combining the
direct and indirect costs into one sum total. In order to determine a total financial cost to
institutions the indirect costs must be monetized and added to the direct costs. Because
respondents to the survey indicated the cumulative number of hours contributed to the
accreditation process by each group of participants, this monetization can be
accomplished by calculating an average hourly salary rate for each group and multiplying
that rate by the cumulative number of hours.
An average hourly salary rate was determined for each of the six primary groups
involved with accreditation. The average hourly salary for ALOs was calculated from the
Administrative Compensation Survey for the 2010 to 2011 academic year published by
the College and University Professional Association for Human Resources, or CUPA-HR
(College and University Professional Association for Human Resources, 2011a).
“Positions covered in the survey are selected on the basis of an analysis of administrative
positions found at most higher education institutions” (p. 5). As per the findings of the
analysis of ALO titles discussed earlier in this study (that 61.7% of all ALO positions
were filled by a Vice President or Provost), the average annual salary of all Vice
President and Provost positions was taken from the categories Senior Executive and
Chief Functional Officers, Academic Affairs, and Student Affairs in the table
“Unweighted Median Salary by Carnegie Classification—All Institutions.” This average
was calculated by summing the multiples of median salary and number of occurrences for
each provostial and vice presidential position, and dividing the total of the salaries by the
total number of occurrences. The average salary according to this methodology was
$136,059.91. The average salary divided by 2,080 (the standard number of work-hours in
the year: 52 weeks times 40 hours) yielded an hourly rate of $65.41 per hour.
The average hourly salary for senior administrators (presidents, vice presidents,
provosts, deans, etc.) was also determined from the CUPA-HR Administrative
Compensation Survey for the 2010 to 2011 academic year. For this average the median
salaries for all positions (excluding Director positions, which were included with staff
positions in the parameters for the analysis of ALO titles discussed earlier in this study)
were taken from the categories Senior Executive and Chief Functional Officers
(presidents and other chief administrators), Academic Deans, Associate/Assistant
Academic Deans, Academic Affairs (provosts and other chief officers), and Student
Affairs (vice presidents). This average was also calculated by summing the multiples of
median salary and number of occurrences for each position, and dividing the total of the
salaries by the total number of occurrences. The average salary according to this
methodology was $134,484.96. The average salary divided by 2,080 work-hours in the
year yielded an hourly rate of $64.66.
The average hourly salary for faculty was determined from the Report on the
Economic Status of the Profession by Category, Affiliation, and Academic Rank for 2010
to 2011 published by the American Association of University Professors (Thornton,
2011). According to the survey methodology:
This figure represents the contracted salary excluding summer teaching, stipends,
extra load, or other forms of remuneration. Department heads with faculty rank
and no other administrative title are reported at their instructional salary (that is,
excluding administrative stipends). Where faculty members are given duties for
eleven or twelve months, salary is converted to a standard academic-year basis by
applying a factor of 9/11 (81.1 percent) or by the institution’s own factor,
reflected in a footnote to the appendix tables of this report. (American Association
of University Professors, 2011, para. 2)
The average salary of all faculty ranks combined was $81,009 per year. Since 81.1% of the 2,080 work-hour year (excluding summer as noted above) is 1,686.9 hours, dividing the annual salary by the number of yearly faculty work-hours yielded an average hourly rate of $48.02.
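Restating that arithmetic:

    0.811 \times 2{,}080 = 1{,}686.9 \text{ work-hours}, \qquad \frac{\$81{,}009}{1{,}686.9} \approx \$48.02 \text{ per hour}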
The average hourly salary for staff was calculated from the Mid-Level
Administrative and Professional Salary Survey for the 2010 to 2011 academic year
published by CUPA-HR (College and University Professional Association for Human
Resources, 2011b). “Positions covered in the survey are… selected on the basis of an
analysis of mid-level administrative and professional positions found at most higher
education institutions” (p. 5). The average annual salary of all positions was taken from
the categories Academic Affairs, Business and Administrative Affairs, and Student
Affairs in the table “Unweighted Median Salary by Carnegie Classification—All
Institutions.” This average was calculated by summing the multiples of median salary and
number of occurrences for each position, and dividing the total of the salaries by the total
number of occurrences. The average salary according to this methodology was
$48,556.50. Dividing by 2,080 annual work-hours yielded an average hourly rate of
$23.34.
For the average hourly salary for students, the federally established minimum
wage of $7.25 per hour effective July 24, 2009 was used (U.S. Department of Labor,
n.d.).
Survey respondents gave total number of Others involved as well as the
cumulative hours contributed by Others. Respondents were given the opportunity to
comment on their answers for this group and most commented on who they included in
this category, usually alumni, members of the Board of Regents or the Board of Trustees,
campus visitors, community members, employers, or other volunteers. The same federally established minimum wage rate of $7.25 per hour was used for the category
of Others involved in the accreditation process as a kind of token or honorarium payment.
This relatively low payment rate also served to minimize extreme differences between
institutions that reported a high involvement of others and institutions that did not.
The average hourly salary for each group of participants in the accreditation
process as determined by this methodology was multiplied by the average total number of
cumulative hours for each of the three participating regions and for each of the four
Carnegie classifications excluding the nine extreme outliers noted above (one for direct
costs and eight for indirect costs). These monetized indirect costs were then added to the
Table 4.10: Combined Direct and Indirect Costs of Accreditation by Accreditation Region
MSCHE: total costs $345,591; total direct costs $60,764; total indirect costs $284,827; 17.6% of total cost from direct costs; 82.4% from indirect costs
SACS: total costs $405,481; total direct costs $78,829; total indirect costs $326,652; 19.4% of total cost from direct costs; 80.6% from indirect costs
WASC: total costs $230,690; total direct costs $67,464; total indirect costs $163,226; 29.2% of total cost from direct costs; 70.8% from indirect costs
Overall average per institution: total costs $327,254; total direct costs $69,019; total indirect costs $258,235; 22.1% of total cost from direct costs; 77.9% from indirect costs
Table 4.11: Combined Direct and Indirect Costs of Accreditation by Carnegie Classification
Doctoral/Research: total costs $414,586; total direct costs $112,062; total indirect costs $302,524; 27.0% of total cost from direct costs; 73.0% from indirect costs
Master's: total costs $431,810; total direct costs $78,149; total indirect costs $353,661; 18.1% of total cost from direct costs; 81.9% from indirect costs
Baccalaureate: total costs $312,146; total direct costs $51,633; total indirect costs $260,513; 16.5% of total cost from direct costs; 83.5% from indirect costs
Special Focus: total costs $205,871; total direct costs $46,365; total indirect costs $159,506; 22.5% of total cost from direct costs; 77.5% from indirect costs
Overall average per institution: total costs $341,103; total direct costs $72,052; total indirect costs $269,051; 21.0% of total cost from direct costs; 79.0% from indirect costs
average direct costs for each of the seven categories (three accrediting regions and four
Carnegie classifications) for a per-institution average. These figures are noted in Tables
4.10 and 4.11 along with the percentage of this average derived from direct and indirect
costs.
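The per-institution combination can be sketched as follows; the hourly rates are those derived above, while the cumulative-hour and direct-cost figures shown are hypothetical placeholders rather than the survey averages reported in the tables:

    # Monetize indirect costs (hourly rate x cumulative hours per group) and add
    # the average direct cost to obtain a combined per-institution figure.
    hourly_rates = {
        "ALO": 65.41, "Senior Admin": 64.66, "Faculty": 48.02,
        "Staff": 23.34, "Students": 7.25, "Others": 7.25,
    }
    average_hours = {   # hypothetical cumulative-hour averages for one category
        "ALO": 1400, "Senior Admin": 900, "Faculty": 1800,
        "Staff": 1600, "Students": 270, "Others": 70,
    }
    average_direct_cost = 69000  # hypothetical average direct cost

    indirect_cost = sum(hourly_rates[group] * hours
                        for group, hours in average_hours.items())
    total_cost = average_direct_cost + indirect_cost
    print(f"Indirect: ${indirect_cost:,.0f}; Total: ${total_cost:,.0f}")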
In reviewing these tables the reader is reminded that the ALO-reported direct and indirect costs were not normally distributed, and caution should therefore be taken when interpreting these results. A notable difference is evident both between regions
(where the per-institution average for SACS is almost double that for WASC) and
between Carnegie classifications (where the per-institution average for master’s
institutions is greatest and more than double that for special focus institutions). The
higher average for master’s institutions over the average for doctoral/research institutions
is a contrast to the comparison of direct costs where doctoral/research institutions
consistently had the highest costs. Of special note is the fact that on average, indirect
costs account for almost 80% of the total per-institution average of institutional
accreditation costs regardless of the independent variable whereas the direct financial
costs constitute a distinct minority of the total per-institution average.
In considering the perceived direct and indirect costs of accreditation it is evident
that both are high. The numbers are notable in and of themselves and in particular it is
striking that the combined cost is approximately $327,000 or $341,000 per institution on
average. Whether this average is high in relative terms will be considered more fully in the next chapter; it is clearly not a sum that can be considered casually.
Similarly the open-ended comments illustrated how high the indirect costs can be in
terms of personal and professional cost to the officials fulfilling accreditation
responsibilities. Institutions that neglect to budget for these costs either in terms of the
fiscal budget or in terms of the toll on personnel will likely experience negative
repercussions.
The Justification of Institutional Accreditation Costs
One of the purposes of this study was to determine whether Accreditation Liaison
Officers believe that the benefits associated with institutional accreditation justify the
institutional costs. The final question of the survey therefore asked ALOs this very thing.
In order to adequately and fairly prepare respondents to make such an evaluation in
context, the question immediately preceding this one asked ALOs what were the most
important benefits to their institutions of going through the accreditation process.
Respondents commented abundantly on these final two questions providing a wealth of
rich, qualitative data. This section will provide a qualitative analysis of the benefits
associated with institutional accreditation as described by survey respondents. Next this
section will include a quantitative analysis of whether respondents felt that the costs were
justified, after which open-ended comments concerning the overall justification of costs
will be qualitatively analyzed as well.
The Benefits of Institutional Accreditation
The 220 responses on the benefits associated with institutional accreditation were
representative of the overall sample: 56 from MSCHE, 117 from SACS and 44 from
WASC (with three additional responses from other regions); 53 from doctoral/research
institutions, 69 from master’s institutions, 65 from baccalaureate institutions, and 32 from
special focus institutions (with one additional response from an unlisted Carnegie
classification). Through inductive analysis the comments were reviewed and coded as
they converged around common themes. The most frequently cited benefit of
accreditation was the resultant institutional self-evaluation, with university improvement
mentioned the next most frequently. The next themes to emerge more distantly were the
opportunity to use accreditation as a vehicle for improvement, the resultant campus unity,
and the opportunity to have an institutional review conducted by an outside entity. Other
topics that emerged were the ability to offer financial aid, the simple value of being
accredited (or re-accredited), and the reputation boost provided by having accreditation.
Finally the opportunity to share best practices, the chance to celebrate institutional
accomplishments, and fear were also mentioned. Each of these topics will be illustrated
here by respondent comments.
Benefits of accreditation: Self-evaluation and university improvement.
In listing institutional self-evaluation as a benefit of accreditation, it was clear that
respondents were referring specifically to the opportunity provided by deliberate self-
reflection to make a genuine institutional self-assessment, and not necessarily the self-
study in and of itself per se. Over a third of respondents identified this as a benefit of
accreditation and it was most frequently listed by respondents from WASC institutions
and from master’s institutions. One respondent said: “Without accreditation, I am quite
sure that we would rarely dig as deep in thinking about who we are and what we do as an
institution.” Another respondent considered the value to be in “having a ‘conversation’
about the institution’s strengths, weaknesses, challenges, and plans for the future.”
Thanks to accreditation various institutions were able to “[revise and clarify] this
institution’s Mission Statement,” and “clarify the core commitment we have to the liberal
arts across disparate traditional, nontraditional, professional, and non-professional
programs.” One commenter concluded, “While unpleasant and indirect, the benefit was
the need to truly defend the work that the institution does and believe in. Not to take
anything for granted. Not to believe that academic status or accolades provide any
meaning.” These respondents appreciated the compelling opportunity provided by the
accreditation requirement to ask hard questions and conduct an authentic review.
It is important to note however that this benefit of self-evaluation is a separate
topic from actual university improvement as illustrated by this comment:
The [self-assessment] allowed us to assess where the institution stands on all…
measures of institutional health. However, I do not feel that the University has
taken full advantage of the benefits of having gathered, analyzed, and presented
this data following the exercise.
Another respondent felt similarly, although this person blamed the regional accreditor
rather than the institution, citing as a benefit: “the self evaluation process—which would
have been more helpful had the overall accreditation process been more open to [or]
focused on real institutional improvement than just trying to meet the standards.”
On the other hand, university improvement, or actual action taken toward
ameliorating the institution, was listed separately by over a quarter of respondents as
another benefit and was the second most cited benefit of institutional accreditation,
highest among respondents from SACS institutions and doctoral/research institutions.
Some examples were very specific, such as the “restructuring of this institution’s General
Education Program,” or the “initiation of Assessment Committee,” or the “reorganization
of Faculty Senate,” or even, “we cleaned out some deadwood.” One respondent said:
“The process truly changed us—as a matter of course, outcomes assessment has become
a way of doing business. For new programs, needs assessment begins with asking: ‘how
does this fit within the accreditation principles’.” Similarly for another institution, “The
process resulted in a renewed focus on program excellence rather than a task that was
necessary, but had little practical value.” These comments on university improvement are
different in nature from the comments focusing on self-evaluation as a benefit of
accreditation. While it is likely that real, positive change and improvement were effected
at many of the institutions where “self-evaluation” was the focus of this response, the
different nature of the comments illustrates two contrasting ways in which accreditation
benefits were considered.
These two benefits, self-evaluation and university improvement, were the most
widely acknowledged benefits of the accreditation process and it therefore seems
apparent that ALOs are cognizant of the different yet related nature of the two concepts.
Benefits of accreditation: Using accreditation as a vehicle for improvement.
The next most frequently cited benefit of accreditation was the opportunity to use
accreditation as a vehicle for improvement. Of all the described benefits, the use of
accreditation as a vehicle for improvement demonstrated the greatest depth of comments.
For every other category respondents frequently only listed the benefit or described it in a
few words, whereas for this topic many chose to write at some length. The accreditation
process was intentionally used for a variety of purposes. This theme was most frequently
mentioned by WASC institutions and by master’s institutions.
One response described the benefit of accreditation as “the requirement of an
externally imposed self-review/assessment process using external standards within the
context of a particular institutional mission. Colleges might never engage in this activity
otherwise.” For another institution:
The process required that the faculty, administration and Trustees define
institutional goals in a more systematic way. Moreover, it required that the
institution consider how it can measure progress towards those goals. The
accreditation process was the impetus for many of these conversations that would
otherwise likely not have happened.
Another comment agreed with the idea that the accreditation requirement served a useful
purpose: “[The accreditation process] caused us to take stock, identify significant issues
and challenges, devote a considerable amount of time to addressing these (which needed
to be done). We might not otherwise have gotten around to it.”
Some respondents described the way the accreditation process was used to change
campus culture, as illustrated here:
Our institution purposefully used our reaccreditation process as an opportunity to
engage in an in-depth self study, and to focus on academic and co-curricular
issues that were, and continue to be, of high priority to the college. The fact that
we were able to engage more than 80% of our faculty in this made the
reaccreditation process more meaningful. It allowed us to reintroduce the value
and utility of academic assessment to our faculty in a manner that was both useful
to our institutional scholarship and in compliance with [our regional accreditor’s]
expectations.
Another example of changed campus culture came when accreditation:
Provided the impetus to create an academic and administrative assessment
program that has sustained. There had been some administrative efforts, but this
was largely new territory for academic units that were not already engaged in
specialized accreditation with the like of ABED, AACSB, etc. In addition, the
accreditation process served a catalytic function in terms of experimentation with
new multi-disciplinary curricular delivery mechanisms.
Another respondent stated:
Most significant benefit—driver to get what we really do want to get done done
faster than we probably would otherwise do. Made us focus, acted as an
incentive/tool to force us to accomplish what we need to accomplish… Having an
outside engine to ensure we stay on target has proven to be a very helpful
motivator.
For these institutions, improvement evolved naturally and organically out of the
accreditation process.
Several respondents appreciated the “pressure” of accreditation, listing as a
benefit the “pressure to critically scrutinize current practice and evaluate results [which]
led to good ideas for change.” Another said: “Going through the accreditation process is
healthy for institutions and ours is no exception… Our relationship with [the regional
accreditor] is very good and we appreciate their assistance in keeping us focused on good
educational outcomes.” Another respondent also appreciated the way the regional
accreditor “frequently pushes us to do what we should be doing as an institution.”
Accreditation also served as an “incentive” for some institutions, such as for the
respondent who said that accreditation “incentivizes paying attention to things that would
otherwise be ignored. It fosters strategic planning, record keeping, and top down
institutional review. It gives us opportunities to improve the way we do business.”
Similarly:
Institutions MUST take the time to examine everything. Invariably, you find
policies or processes that are not being followed, are out of date, etc. The
accreditation process allows you the time to look at things you don’t normally
take time to look at. And, the incentive to correct things that need correcting in a
timely manner.
This incentive can be powerful, as described by this comment: “The last full review
resulted in the institution being placed on sanction. This was a catalyst for important
changes to occur in the institution.”
In several cases the actions taken as a result of the accreditation process were not
very different from actions that would have been taken anyway, but accreditation
improved upon the outcome. For instance:
[Accreditation] focused our attention on capacity and educational effectiveness
issues in a rigorous, comprehensive way. Although we would have done most of
this in any event, the pressure of the target and process assured that we did not
postpone or procrastinate given other demands on our time and resources. While
we would have undertaken evidence-based decision-making and evaluation, we
probably would [have] documented less formally, completely or compactly as we
did for the accreditation process.
Consequently, accreditation could be, and frequently was, used to advance an
institutional agenda. Comments gave specific examples of how this was done, including
“[using] the self-study document… as an ideal stepping stone for the next strategic plan,”
“using the accreditation process to prompt specific reforms desired by administration,”
and even “cleaning up house a bit.” Another respondent said, “We used the ‘accreditation
scare’ to shape up some areas that needed it.” A very specific example of using
accreditation as a vehicle for institutional improvement went as follows:
The senior leadership team chose to use the accreditation process to leverage
institutional change in needed areas (shared governance, technology, advising and
its impact on academic progress, etc.). Using the four standards and the concept of
accreditation allowed this to happen. Entire campus community was invited to
participate, learn about the changes, and take part. The senior leadership team
chose to use the accreditation process to leverage institutional progress in new
initiatives. The themes selected—social justice, diversity, and shared inquiry—are
directly related to institution’s three traditions (Catholic, Lasalian, liberal arts). In
this way, the accreditation process allowed us to conduct authentic and
comprehensive inventory of our status in these areas, and to promote progress on
the new initiatives.
At several institutions where improvement was not necessarily a natural and organic
result of the accreditation process, campus executives were still able to purposefully
employ the process to that end anyway.
The idea of using accreditation as a vehicle for institutional improvement was of
particular interest to officials from WASC. In agreeing to participate directly in the study,
officials from WASC asked that an additional question be included in the survey at the
end of the final section on the benefits and justification of accreditation: “What
percentage of the reported costs was incurred solely from meeting the requirements of
accreditation, and what percentage was incurred by initiatives the institution would have
undertaken anyway (but which used the accreditation process as a vehicle for
improvement)?” Thirty-six of the 56 WASC respondents addressed this question directly.
According to their responses an average of 67% of the reported costs was incurred solely
from meeting the requirements of accreditation (66% for the seven doctoral/research
institutions, 67% for the nine master’s institutions, 76% for the eight baccalaureate
institutions, and 63% for the 12 special focus institutions).
Said one WASC respondent, “It would have been very difficult to do this
important work without the external incentive of reaccreditation. So probably 100% was
incurred for accreditation, but only because we couldn’t have done it otherwise.” Another
respondent was “not sure [the] institution would have undergone self study if
accreditation had not required it.” Similarly another commenter said that “not having
external leverage might have allowed us not to address challenges,” while a different
respondent maintained that the “accreditation process gave structure and… targets for
improvement.”
On the other hand, one WASC respondent did not appreciate the external
motivation:
Almost all of the costs were for fulfilling the requirements of accreditation. We
can make improvements quickly and reasonably. The data produced by all the
accreditation reports is not useful. Being a small school, our faculty know the
students and interact with them closely and monitor their intellectual progress.
They base their decisions on this kind of one-on-one interaction more than data
from a study.
Others found it difficult to separate the costs incurred solely from meeting the
requirements of accreditation and those incurred by initiatives the institution would have
undertaken anyway. One respondent said, “It is hard to separate the two, in that using
accreditation as a vehicle for improvement motivated campus community to actually
undertake the change.” Likewise another commented, “It’s impossible to separate these
out. I have to say that we just accept any costs in time, money, and energy as part of the
process of ‘continuous improvement’.”
Clearly this is a complicated question. A WASC respondent from a special focus
institution summarized:
Of course, this question is not easy to answer. Institutions certainly are always
interested in ways to improve. That does not change because of accreditation. But
I would say that development of assessment, learning outcomes, and program
reviews, in their eventual form, were largely jump started by our work on
accreditation.
The ability to use the accreditation process as a vehicle for institutional
improvement was a commonly cited benefit of accreditation. Institutional authorities
including senior administrators (not just ALOs) used the process sometimes incidentally,
but often purposefully and deliberately. Since the accreditation process certainly can lead
to institutional improvement, university leaders are trying to employ the process as
productively as possible, particularly since it is required anyway.
Benefits of accreditation: Campus unity and outside review.
The campus unity that resulted from the accreditation process and the opportunity to have
an institutional review conducted by an outside entity also figured relatively prominently
in the open-ended responses, being mentioned 32 separate times. ALOs appreciated the
chance to unify various campus constituencies through the accreditation process:
The process necessarily requires teamwork from across the institution. No one
person knows everything that needs to go into a compliance certificate. The
process forces you to work with others and create something collaboratively. You
get to KNOW your institution… all parts of it… better [sic]. In reviewing the
assessment sections, you gain an appreciation for people doing jobs that you don’t
normally observe. You have some “Aha” moments (e.g., “I didn’t know we did
that!”). :-) Also, it’s gratifying to get compliments on your work putting things
together for the committee, but more importantly on how well your institution
runs.
For other respondents accreditation became an opportunity for “mobilizing the campus
for improvement” and “to bring together a large group of faculty, staff, and
administrators to complete the self-study.” For another institution, pursuant to the review,
“The campus community is more aware of the accreditation process and terms such as
culture of evidence and quality assurance.” One respondent “found the process personally
to be very enlightening. I learned a lot and enjoyed being involved in the reaccreditation
process.”
In at least one instance, increasing campus unity had important long-term effects
as well, as illustrated here:
Though cumbersome at times, we engaged a very broad group of contributors to
the committee structure for both the reaffirmation self-determination of
compliance and the development of [improvement projects]. In hindsight, that
was a very sound investment in that we did not fall into the mode of “[The
regional accreditor] is gone—we can get back to doing things our way.”
Another commenter described using the accreditation process to increase campus unity:
“We use the self-study process as a way of having the community fully understand our
strengths and weaknesses.” At these institutions going through the accreditation process
was a positive experience precisely because it involved so many people from throughout
the school, and participation became a part of the campus culture. Successfully executing
such a monumental task brings people together.
Those who lauded the value of a review conducted by individuals or groups
outside of the institution described it as “invaluable” and “among the best assessment
strategies.” One respondent appreciated the group attending the site visit because “the
team represents a low-cost consulting body that provides useful and impartial
assessment.” Without accreditation such an opportunity would almost certainly be more
expensive, and would likely not be as readily available. This topic was addressed by 25
different respondents.
Benefits of accreditation: The ability to offer financial aid, having
accreditation, reputation.
The next three most cited benefits of accreditation, the ability to offer financial
aid, the simple fact of having accreditation, and the reputation associated with being
accredited, each garnered 20 unique comments. Those addressing financial aid listed the
ability to offer access to these funds as a perspective given “from a pragmatic point of
view,” although according to another commenter it is “hard to argue that the ability to
offer financial aid is a benefit when that is critical to the survival of an institution.”
Others mentioned the mere fact of having accreditation (or reaccreditation) as a
benefit of going through the accreditation process. According to one respondent, “The
process is necessary for the university to retain its accredited status; without this, it’s
virtually impossible to function within American higher education.” Similarly the
reputational value or level of credibility provided by having accreditation was mentioned
with equal frequency, most often by representatives from special focus institutions.
Having accreditation provides “name recognition” and “legitimization of the institution,”
and assures the “street value of our degree.” One commenter said, “While this sounds like
a no-brainer, it means that students know that they are getting a quality degree and they
have access to federal loans.” For another respondent the very fact of having
accreditation provides “the opportunity to demonstrate our stability and prosperity to
internal and external constituencies.” As a standardized, national recognition,
accreditation provides a certain measure of credibility. These three more practical
benefits were the last to be listed consistently by ALOs from various accreditation
regions and Carnegie classifications.
Benefits of accreditation: Sharing best practices, celebration, fear.
Three other benefits were mentioned in the open-ended comments that are worth
noting here. Eight comments discussed the value of learning or sharing best practices
with other institutions through the accreditation process. According to one response, “We
are an aspiring institution; we want to become better at what we do with and for students.
The accreditation standards and criteria for review have given us direction and
benchmarks.” Another perspective on the same topic added that through accreditation,
“we ‘keep up with the Joneses’ of our peers.” The accreditation process serves as one
way to measure against important benchmarks.
Four responses focused on the opportunity that accreditation provided to celebrate
the work the institution had already done: “The self evaluation process not only brought
out opportunities for improvement but helped us celebrate our accomplishments.”
Another commenter said, “The accreditation process definitely imparts a sense of
academic and institutional renewal.” For a third, accreditation provides an “institution
wide sense of accomplishment.”
Finally, three comments considered accreditation fearfully:
As we prepare for the next visit in 3 years I am finding that much of this is
necessary but not in the detail that [the accrediting region] demands. So much of
what we do seems to be in response to fear of [the region] or the US Dept. of
Education.
The second comment echoed that sentiment: “There seems to be a nature to the effort that
it is built not to necessarily better the institution or provide insight, but to fulfill the
accreditation obligation/threat.” A final respondent said, “[Accreditation] also protects us
from further intervention from the DOE.” Federal intervention is considered warily and
this theme of fear or apprehension became more apparent in the responses to the question
on whether accreditation costs are justified, and will be treated below.
Benefits of accreditation: Conclusion.
The purpose of asking ALOs to identify the benefits of accreditation was to
prepare them to adequately and fairly assess whether accreditation costs were justified;
however, in reviewing the frequency of occurrence of the benefits listed, several things
became apparent. Institutions are clearly using accreditation for the quality enhancement
purpose for which it was originally intended over a century ago: Accreditation provides
an honest, institution-wide self-assessment with the intention of leading to university
improvement. These benefits far outweighed every other listed benefit combined in terms
of the frequency with which they were identified. The next most frequent group of
benefits, improved campus unity, the opportunity to use accreditation as a vehicle for
improvement, and the accompanying outside review, are also benefits that ostensibly
improve the quality of the school. The group of benefits not directly related to quality
improvement, the ability to offer financial aid, the simple fact of having accreditation and
the reputation of having accredited status, the opportunity to celebrate institutional
accomplishments, and the fear associated with not having accreditation, represented a
distinct minority among listed benefits. Approximately 84% of benefits described related
directly to the improvement of institutional quality whereas only about 16% of benefits
described could not be directly tied to quality. This is a clear demonstration of the
motivation inspiring ALOs in the fulfillment of their responsibilities.
Are Accreditation Costs Justified?
At the end of the survey ALOs were pointedly asked: Considering the benefits
and in your opinion, were the costs of accreditation justified? Many respondents
answered with a simple yes or no answer. Some answers were enthusiastically positive
and included: “Absolute YES!!” and “Definitely!” as well as “Oh, yes, very justified,”
“More than our expectations,” and “Yes, worth it in every category.” Other answers that
left no room for misinterpretation included: “Completely well justified. A very small
price to pay for significant improvements, many of which would not have occurred
without the [regional accreditor] looming,” and “Without a doubt—the investment is
important and justified.” Even some respondents who acknowledged how high the costs
were remained unequivocal in their feeling that the costs were justified: “I have no way to
recoup the costs of accreditation. But I believe they are worth every penny and dollar.”
Other answers contained equal clarity but were emphatically negative, saying: “NO!” and
“Good heavens, no. The hoops we jumped through were ridiculous,” as well as: “I do
NOT think they were justified.” Other respondents answered with a clear yes or no but
also elaborated on their answer. On the other hand, some respondents answered with
comments on whether they felt accreditation costs were justified but without giving a
clear yes or no answer, using language such as “Marginally,” “Somewhat,” “Tough to
say,” or “To some extent but not totally justifiable.”
Each of these responses fell into one of four categories. Those that could be
clearly interpreted as either “yes” or “no” were coded as such; those that answered but
could not be clearly interpreted as either “yes” or “no” were categorized as “did not
specify”; and those who did not answer the question constituted the fourth category.
Through this quantification of qualitative data, an analysis was conducted to determine whether there
was significant difference between accreditation regions or between Carnegie
classifications in how ALOs felt about whether the costs were justified. The open-ended
comments were also analyzed qualitatively and recurrent themes were identified.
Out of 336 survey responses, 150 respondents (44.6%) felt that accreditation costs
were justified, 43 respondents (12.8%) felt they were not justified, 22 respondents (6.5%)
answered the question without providing a clear yes or no answer, and 121 respondents
(36.0%) did not answer the question. Tables 4.12 and 4.13 show the percentages of
respondents who felt the costs were justified by accreditation region and by Carnegie
classification.
A chi-square test was run to determine whether the difference between
accreditation regions in whether ALOs felt that accreditation costs were justified was
large enough to be statistically significant. The test yielded a p-value of less than 0.001,
indicating that there is in fact a difference by region in whether ALOs felt that costs were
justified. A separate chi-square test was run to determine whether the difference between
Carnegie classifications in whether ALOs felt that accreditation costs were justified was
large enough to be statistically significant. That test yielded a p-value of 0.308, indicating
that the difference between Carnegie classifications in whether ALOs felt costs were
justified was not significant.
Table 4.12: Whether Accreditation Costs Were Justified by Accreditation Region

                        Yes, costs   No, costs    Did not    Did not
Region                  were         were not     specify    answer     Total
                        justified    justified
MSCHE                       43            7           4         42        96
  % within region         44.8%         7.3%        4.2%      43.8%    100.0%
  % within category       28.7%        16.3%       18.2%      34.7%     28.6%
  % of total              12.8%         2.1%        1.2%      12.5%     28.6%
SACS                        81           17          17         68       183
  % within region         44.3%         9.3%        9.3%      37.2%    100.0%
  % within category       54.0%        39.5%       77.3%      56.2%     54.5%
  % of total              24.1%         5.1%        5.1%      20.2%     54.5%
WASC                        26           19           1         11        57
  % within region         45.6%        33.3%        1.8%      19.3%    100.0%
  % within category       17.3%        44.2%        4.5%       9.1%     17.0%
  % of total               7.7%         5.7%        0.3%       3.3%     17.0%
Total                      150           43          22        121       336
  % within region         44.6%        12.8%        6.5%      36.0%    100.0%
  % within category      100.0%       100.0%      100.0%     100.0%    100.0%
  % of total              44.6%        12.8%        6.5%      36.0%    100.0%
Therefore, the accreditation region in which an institution was located had
more effect on whether ALOs felt that accreditation costs were justified than did the
Carnegie classification to which the institution belonged. This could possibly be a
reflection of how ALOs in each accreditation region feel about the different approaches
to accreditation and the accreditation process taken by the different regional accreditors.
On the other hand institutions from different Carnegie classifications appear to be
similarly affected by the accreditation process. These differences will be more fully
explored in the next chapter.
Table 4.13: Whether Accreditation Costs Were Justified by Carnegie Classification

                             Yes, costs   No, costs    Did not    Did not
Carnegie classification      were         were not     specify    answer     Total
                             justified    justified
Doctoral/Research                34            9           9         32        84
  % within classification      40.5%        10.7%       10.7%      38.1%    100.0%
  % within category            22.7%        20.9%       40.9%      26.4%     25.0%
  % of total                   10.1%         2.7%        2.7%       9.5%     25.0%
Master's                         51           10           7         35       103
  % within classification      49.5%         9.7%        6.8%      34.0%    100.0%
  % within category            34.0%        23.3%       31.8%      28.9%     30.7%
  % of total                   15.2%         3.0%        2.1%      10.4%     30.7%
Baccalaureate                    44           15           4         43       106
  % within classification      41.5%        14.2%        3.8%      40.6%    100.0%
  % within category            29.3%        34.9%       18.2%      35.5%     31.5%
  % of total                   13.1%         4.5%        1.2%      12.8%     31.5%
Special Focus                    21            9           2         11        43
  % within classification      48.8%        20.9%        4.7%      25.6%    100.0%
  % within category            14.0%        20.9%        9.1%       9.1%     12.8%
  % of total                    6.3%         2.7%        0.6%       3.3%     12.8%
Total                           150           43          22        121       336
  % within classification      44.6%        12.8%        6.5%      36.0%    100.0%
  % within category           100.0%       100.0%      100.0%     100.0%    100.0%
  % of total                   44.6%        12.8%        6.5%      36.0%    100.0%
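For readers who wish to check the chi-square results reported above, the following is a minimal sketch (not part of the original study, which does not name the software used) that re-runs the tests of independence in Python on the counts shown in Tables 4.12 and 4.13. It assumes the tests were computed on the full four-category contingency tables (yes, no, did not specify, did not answer), which the text does not state explicitly; under that assumption it yields a p-value well below 0.001 by region and a p-value of roughly 0.31 by Carnegie classification, consistent with the values reported.

from scipy.stats import chi2_contingency

# Counts from Table 4.12; rows: MSCHE, SACS, WASC
# Columns: yes justified, not justified, did not specify, did not answer
by_region = [
    [43, 7, 4, 42],
    [81, 17, 17, 68],
    [26, 19, 1, 11],
]

# Counts from Table 4.13; rows: Doctoral/Research, Master's, Baccalaureate, Special Focus
by_carnegie = [
    [34, 9, 9, 32],
    [51, 10, 7, 35],
    [44, 15, 4, 43],
    [21, 9, 2, 11],
]

for label, table in (("accreditation region", by_region),
                     ("Carnegie classification", by_carnegie)):
    # Pearson chi-square test of independence on the observed counts
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"By {label}: chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")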
ALO Comments on the Justification for Accreditation
As noted, the majority of ALOs who gave a clear answer felt that accreditation costs were
in fact justified. The data provided in this section, however, proved to be the most
expansive of any in the survey. Through an inductive analysis of these open-ended
comments, greater subtlety emerged beyond a simple yes or no response. One-third of the comments described
why accreditation costs were justified while a second third of the comments described
how the costs were justified but acknowledged that they were still very high. One-fifth of
the comments focused on why accreditation costs were not justified. Other comments
discussed how there is really no option but to have accreditation, and a final minority
focused on the way that accreditation is better than any alternative system that would
operate in its absence. This section will address each of these themes in turn.
Identified primary theme: Accreditation costs are justified.
A total of 42 comments spoke glowingly of accreditation. Three comments in
particular described at length why the commenters felt accreditation costs were justified:
No question about it—Absolutely. The benefits to our institution were enormous
in terms of the improvements that the self-review generated in preparation for the
visit and as a result of the self-analysis. The changes would have been a long time
coming without this review. I count the accreditation review as the most
important aspect of the institutional assessment processes because of the
qualitative maintenance requirements. The periodic requirements that make it
difficult to back slide, and the external control factor makes it difficult for faculty
and administrators to postpone or avoid difficult, awkward, or sensitive issues or
concerns. I have never encountered a president that did not admit that the
accreditation activity resulted in important institutional changes and was of
benefit to the college or university. Folks are not always happy with the
accreditation outcomes, but the institution is better for the self-analysis
experience, external review, and the resulting institutional improvements.
Similarly:
Generally, yes. The real cost was people's time and that investment has paid
dividends. People involved, either directly or indirectly, gained a sense of
ownership of the process, and with it a commitment to succeed. As well, the
process helped us to see many brutal facts about ourselves and our programs. We
ended up with a few areas that required additional reporting (but not of the serious
nature as to warrant sanction), but we knew before the verdict that that was likely.
As a result of the broad community effort to be part of the process, there really
were no surprises and I credit that to the depth and honesty of the self-assessment.
Our self assessment of not fully in compliance in several areas matched quite well
with the visiting committee's comments and recommendations.
A third comment acknowledged that the costs were high while explaining why they were
still justified:
It was clearly costly, but it was an important institutional investment… because…
[sic] while [our regional accreditor] was the impetus, [the university] went about
this with the mentality that this was going to be in the institution's best interests.
In that light, it helped the institution to forge better faculty-student interfaces,
created more accessible repositories of data and information that had previously
been scattered hither and yon, and also helped to build better alliances between
faculty and staff. Put another way, the vast majority of what went into preparing
for the last accreditation review has been subsequently used in ways that
otherwise benefited the institution.
Accreditation is supposed to benefit the institution directly and many respondents felt that
it in fact did.
Other comments focused on the benefits of accreditation to the institution from
the personal perspective the ALO had gained, as demonstrated by this response:
Most academics are not overly interested in accreditation activities. Fortunately, I
have been involved in these activities as faculty member, administrator, volunteer
chair and committee member (many times), and a professional accreditation
officer. Thus, I have a very different perspective on the value and usefulness of
the process, products, and impact on institutions and the academy. The time
devoted is well spent from a personal, professional, and institutional standpoint.
One commenter talked briefly about the amount of work involved in mobilizing all the
different constituencies involved in the accreditation review (students, faculty, board
members, alumni, staff, etc.), and then added, “As the ALO, this is directly part of my
work assignment, so while I don’t always relish it, I don’t resent the work hours
dedicated to the process.” Likewise another commenter said, “I firmly believe in the
value of accreditation and give as much time as possible serving on accreditation
committees for other institutions.” Participating in the accreditation process thus brought
immediate personal and professional benefits to the individual ALO.
Several of the comments defending the value of accreditation and its
corresponding costs articulated how it is ultimately the responsibility of those at the
institution to make the accreditation process work for them, for instance:
I think the cost is a tricky issue. If the institution is committed to making changes,
then the cost is potentially worth it because the external review may raise interest
in needs that might otherwise go unmet. If the culture is such that nothing will
change, and the accreditation box is checked off, then the costs are necessary and
the failure to justify falls on the institution.
A different ALO had learned from experience the importance of actively and deliberately
taking ownership of the process:
Yes, at times it felt like we were jumping through hoops and going through the
motions, but once I realized that we could own the process and drive it in ways
that were beneficial to our institution, I found it extremely useful. Yes
[accreditation costs are justified], though the challenge is sustaining engagement
in ways that support institutional renewal rather than treating accreditation
episodically.
Thus accreditation appears to be more beneficial to institutions when it is undertaken
positively and proactively.
According to two respondents, much of whether the costs are justified is
determined by what happens after the accreditation review has been completed. Said one:
“The accreditation process resulted in continuous improvement that has [impacted] our
institution deeply and permanently.” The other remarked: “The process is definitely
worth it. Our self-study was used and referred to for years afterward. It was a great way
to tell our story.” Another commenter pointed out that the costs are justified because the
accreditation standards are “not important because [the regional accreditor] says they are,
but important because they lead to high quality, effective higher education.” In succinct
summary another comment said, “I think that the costs can be justified if you go into the
process with the right attitude and approach.”
There were also abundant specific examples of why ALOs felt that the costs of
the accreditation process, “the cost of doing business” as more than one respondent called
them, were justified. These included how accreditation “provides a detailed and big
picture look at ourselves as an institution… the best operational and planning data you
can get!” One institution found that “the process and costs make us… much better in
achieving student learning,” and for another, “the accreditation process was a key
component of our success in improving graduation and retention rates.” Yet another ALO
stated that the accreditation costs had been worth it “in the case of our most recent
review... [because] the college was in dire need of process clarification and strategic
focus.” According to a separate comment:
We discovered areas where we could make improvements; we implemented
changes where appropriate. We learned so much about the university. We tried to
use the process as a positive opportunity. We are currently ongoing review—our
documents are due next year (2012) and the site visit is scheduled for April 2013.
Once again we are using this as a positive opportunity to make a difference on our
campus.
Other comments mentioned how the costs that institutions had experienced in
previous reviews would serve as “a foundation that will help reduce some of the costs
that will be incurred in future accreditation activities,” and that future visits “will cost us
less $$ and time as we prepare for the visit.” This anticipated reduction in costs included
the indirect costs of time as well:
For what it may be worth, the proportions of time that will be displayed will be
higher than what I will suspect be the case going forward because now that there
is a base to build on it will not require so much of my time as the ALO. That said,
there are now others who have responsibilities that were the responsibility of the
ALO previously.
Therefore additional institutional benefits were anticipated in terms of better efficiency
during future accreditation reviews.
In conclusion, one respondent summarized the value behind accreditation costs by
saying:
I believe that no matter what the cost in personnel time and institutional dollars
for the team to come, that the time invested in self-study and being required to
take a hard look at one’s own institution is an amazing and profitable process.
Then, because of accreditation, the institution cannot stop doing what it should
always be doing, which is measuring its outcomes and goals to see if they are
being met and consistently documenting self-improvement. The accreditation
process also protects institutions from getting into situations which would
otherwise harm them. If my institution were not regionally accredited, I could
form any partnerships, build on in any way, and add any program. But with
accreditation, I have to work with my accreditor to obtain permission to begin
programs, form alliances. By the time I work through all of the requirements, I
have a better idea of what it will really cost and what will really be the end
results. Can you tell that I am all for regional accreditation?
While a full two-thirds of the comments made indicated that accreditation costs were
justified, half of those, as illustrated here, focused on how and why that was so. These
comments essentially validate the costs of accreditation and reflect how more than three
times as many respondents felt that costs were justified (77.7% of dichotomous yes or no
responses) as felt they were not justified (22.3% of dichotomous yes or no
responses).
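For reference, these dichotomous percentages follow directly from the counts reported above (150 “yes” responses and 43 “no” responses); a brief worked computation:

\[
\frac{150}{150 + 43} = \frac{150}{193} \approx 77.7\%, \qquad
\frac{43}{150 + 43} = \frac{43}{193} \approx 22.3\%
\]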
Identified primary theme: Accreditation costs are justified, but the costs are
high.
Although the vast majority of comments expressed unequivocally that
accreditation costs were justified, half of those also acknowledged ways that the costs
were “painful” or “pretty darn expensive.” For one ALO it was the site visit in particular
that compromised the value: “Accreditation is very expensive. It seems that a much
smaller team could do the same job.” One comment that demonstrated conflicting
feelings reported:
We were able to use current structures and time and direct it toward accreditation.
It did put other things on hold. My response is mixed. I think it’s really good for
the entire institution to step back and take a look at itself, but it does take over
12+ months of our work. I guess bottom line I’d say yes.
These commenters indicated that accreditation costs were justified but also expressed
reservations in doing so.
Several comments focused on how the true high costs of accreditation are in the
time committed rather than the direct fiscal expenditures:
Costs are high and the institution pays for everything, but what you should glean
from this survey is that not only are the costs a component but the time
individuals spend on the process because our other work is put aside while in
accreditation mode and this shouldn’t be.
Another agreed:
The financial costs are reasonable and generally small. The cost in time for those
with real responsibility for the process are considerable… The money is not really
the issue. It’s getting all on board to believe that the accreditation process and
assessment and voluntary committees are really important.
Another wrote:
The direct costs of the process not burdensome. Indirect costs, hard to estimate, is
where the real cost lies: data gathering, analysis, cross institutional relationship
building, professional development of faculty, strengthening of technical
infrastructure, and countless hours of writing and analysis. No question that the
University gained insight and improved its commitment and capacity for
assessment and educational effectiveness, however, the time on task to "herd cats"
is huge.
Simply put, the costs of time committed were much greater than the direct costs. As
noted earlier, according to this study indirect costs are approximately four times greater
than direct costs.
One ALO spoke specifically in more business-related terms by discussing
accreditation with respect to return on investment:
As for “return,” I would assess it as not a particularly good return on investment.
Things that the institution needed to do—assess where it wanted to go over the
next decade in preparation for the presidential search, for example—it could have
(and would have) done with a lot less time and effort spent on dealing with
accreditation standards that really weren't (and shouldn't have been) the focus of
our attention. While the self-study we produced was good, I was a bit
disappointed by the site team itself, and its report, in terms of it “helping” the
institution... I can't but help thinking there is a more efficient way to accomplish
that.
A separate respondent described in great detail one reason accreditation costs were
perhaps higher than they needed to be:
One more thing I might add. I respect the need for accreditation. However, some
things are more important to evaluate than others. Right now my staff is having to
collect and organize 500+ CV’s and job descriptions and to respond to one
standard that asks us to demonstrate that we have qualified administrators, when
we already have a very thorough search and selection policy and multiple layers
of reviews of appointments that go all the way up through our state system board
for many of these positions. This is truly busy work. It is taking time away from
working on putting together sufficient evidence to show… that we are in
compliance with institutional effectiveness standards, which is much more
complicated, subjective, and important to prove to external reviewers. The
accrediting bodies should do some serious cost/benefit analysis on these
requirements. Standards should remain high, but expectations for documenting
compliance should reflect the actual need for reviewers to examine detailed data
in order to make an assessment.
These respondents recognized the importance of what the accreditation process was
intended to do; however, they questioned why the cost had to be so high.
Other reflections on whether accreditation costs were justified referenced mixed
feelings on the same campus: “I believe that most at our institution would say ‘yes’ here,
although they would also assert that the process itself is stretched out over too long a
period of time,” and, “I would have to say, ‘Yes.’ Others in administration may not,” as
well as, “Many here feel [the accreditation costs are] a necessary burden that should have
long term benefits.”
Many respondents directly referenced the extremely high costs while
acknowledging specific reasons that those costs were or were not justified. For example:
[Accreditation costs are] somewhat [justified]. The administrative expenses that
go directly to our accrediting agency are outrageous and, as a general rule, they
border on being simply not worth it. However, the dollars invested in our hands-
on engagement in the self-study process, and in bringing our visiting team to the
campus twice… was worth it, as they provided much-needed external and neutral
perspective and consultation on the issues we chose to address in our self study.
Another said that whether costs were justified was directly related to the benefits:
The costs are extraordinary, but to the extent that the process created momentum,
movement, and campus-wide engagement, yes. To the degree the institution
achieved real change due to issues that materialized during an external
accreditation visit, yes. The institution has been forced to confront long-standing
challenges because of external accreditation.
For another, costs were justified “perhaps not in strict financial terms. But given the
importance of initiatives begun as a result of our accreditation and the conversations
begun on behalf of it, I am still glad we are doing it.” Someone else pointed out that:
Cost of accreditation vary by institution depending on the scope of [improvements
undertaken]. In our case, all Academic units with Baccalaureate degree programs
were required to review curriculum, or propose new courses. Thus faculty time
and effort was required for this planning and implementation. As a result of the
reaccreditation process, the institution achieved a major review and revision of the
curriculum. This is always a time consuming and costly process, but results in a
major change at the institution.
These comments reflect the sentiment expressed by this ALO: “The accreditation process
was extremely beneficial to our campus. This partially, but not fully, justifies the costs.”
These respondents also recognized the importance of accreditation, but for many of them
the process was too costly in too many ways to allow for a simple yes or no answer.
Other respondents who referred directly to the extremely high costs focused on
ways in which those costs were becoming less justified. For example one commenter
said, “It is good for an institution to go through accreditation. However, the increasing
requirements for particular points of information are now making the accreditation
process [too] overwhelming for an institution.” Concerns about the site visit included:
“There were too many people on our site visit team. The work (developing the self study)
was worth it, but the visit was not as useful and cost too much money for the benefit we
got.” And similarly:
The problem with the way [our accrediting region] is set up, the visiting team
focuses only on whether the institution is in compliance with the… standards of
excellence. We did a great self-study that was broader than [those] standards. We
could have benefited from getting input and insights from the visiting team.
However, that never happened.
One response in particular conveyed these feelings especially well:
Yes [the costs were justified], because the benefits of accreditation are the sine
qua non of institutional existence under current U.S. higher education law and
public policy. However, this is not to say that each aspect of the accreditation
process provided benefits to the institution or that a more beneficial process, or
even a more beneficial instance of the existing process, could not have been
imagined. In my opinion, cost-benefit analyses of the current system of regional
accreditation need to be conducted at a fairly granular level. In other words, a
general question like “Was it worth the effort to have your accredited status
reaffirmed?” will always elicit a positive response. Meanwhile, a more granular
question like “Was it worth the effort to demonstrate that your institution's
administrative services regularly follow some (prescribed) model of outcomes
assessment?” might elicit much more negative responses. Depending on the
number of standards being applied, accreditation is a struggle on 100 or more
fronts.
Clearly the question of whether accreditation costs are justified is very complex.
As this section has demonstrated, while a strong majority (two-thirds) of
comments illustrated how ALOs did consider costs to be justified, almost exactly half of
that majority consisted of comments that also recognized and even lamented how high
the costs were. Additionally, some of those comments described the cost of accreditation
in terms of opportunity costs, expressed grave concern about the way those costs
appeared to be rising, or made specific recommendations on cost areas that needed to be
addressed, as the following three subsections will illustrate.
The opportunity costs associated with accreditation.
Some ALOs who answered the survey spoke of the opportunity costs of what was
not being done instead with the time and resources devoted to the accreditation process.
One person said, “Accreditation visits disrupt the normal flow of the university, which is
costly in itself.” Another answered, “The process strains limited staff resources, diverts
those resources from other important initiatives, and has a tremendous negative impact on
the university budget.” One ALO who felt very positively about accreditation still felt
compelled to acknowledge the opportunity costs involved:
Marginally yes [costs are justified]. Certainly, accreditation is vital to the
university. So in that sense, the costs were obviously warranted. But in all
seriousness, anytime one is asked to self evaluate there can be benefits. In this
case, the members of our institution were asked to assess our teaching and
learning model in a relatively new way. On the other hand, a significant
component of the cost involved assembling materials and working with poorly
articulated, sometimes overlapping “compliance” with standards matters that
detracted from focusing time and energy on the really significant matters
Similarly, a different ALO said:
I am strongly concerned that accreditation is becoming ever more intrusive into
academic institutions, consuming enormous resources that (1) could be put to
creative discovery and social/cultural advancement, and (2) the results of which
are of limited value both to the institution and to external constituents.
The following comment offered this reason as to why the opportunity costs were so high:
At a certain point, the whole exercise became an artificial distraction away from
our own planning and goal setting. So much politicking and energy on the
Provost’s Office part goes into getting people to swallow the bitter pills of the
accreditation mandates. It stirs mistrust and makes us look like the “evil other.”
Another respondent provided this thoughtful answer given by a colleague on campus:
Yes, the self-study provided some useful results. At a minimum, it informed our
presidential search and the way we thought about what the next 10 years in the
life of the University should look like. But that’s quite different than saying that
there was an “appreciable return on investment.” That is, do I know if the benefits
to the University exceeded the costs of getting there? I really don’t know. I’d
want to see how many hours everyone else spent on the process and then go back
to see if I could figure out how much the University really benefited. It is possible
that any changes we made for the better are changes that we would have made
even without having gone through the self-study. Or it could be even worse than
that… perhaps the recommendations from the self-study led to some bad policy-
making. I hope not, but it’s possible. Very sorry, but I guess that’s what you get
for asking a policy analyst. I can’t answer your question in good conscience
without doing a cost-benefit analysis. With any luck, some of the other members
of this group have a clearer way to think about your question than I do. Perhaps
there’s a lawyer or a physicist who can give you a “yes” or “no.” I can only tell
that we did it because we had to, and given that we had to, we made it as valuable
for ourselves as we could.
These responses clearly demonstrated the sentiment that the accreditation process,
while perhaps important or even valuable, was exacting a toll on the institution by
distracting limited resources from other important (or even more important) activities. In
some cases (as with the excessive politicking referenced above) limited capacity was
even redirected toward counterproductive activities.
Growing accreditation costs.
Another theme that permeated the various comments submitted by ALOs was that
the costs of accreditation are continually rising. While costs might always be expected to
rise somewhat as the result of inflation, the following comments highlight ways that
ALOs felt the costs were rising unreasonably. One respondent said:
I am a firm believer in accreditation but also believe the cost of accreditation is
getting out of hand. Part of this is due to the pressure on accrediting agencies by
the department of education. In many cases the expenses could be used to fund
[students’] education through scholarships, or simply reduce the cost of education
(or slow down the escalating cost) as the increased costs are of course passed on
to students.
Another commenter wrote:
It's too expensive. I have no idea how smaller and less well-endowed institutions
manage it. We gained much, but at great expense. I wondered often if the review
itself was compromising institutional quality. Considering myself alone, I had to
put aside numerous projects to focus on this for five years, with almost exclusive
focus in the two years prior to the review. Some of the training and infrastructure
will benefit us the next time around, but I still think it's too expensive in terms of
time and money.
A separate respondent answering the question of whether costs were justified said, “In
theory yes based on the opportunity for reflection. In practice it is all consuming if not
kept up with annually.”
One person suggested that such rising costs might be offset by increased support
at the institutional level:
Yes [costs are justified], but a reasonable (reduced) level of financial support
needs to be provided on a continuing basis for the steady state efforts to keep
rolling. Due [to] this global economic crisis which has affected all institutions,
such a level of support is now virtually non-existent. Consequently, this has had
an adverse effect on the momentum level of all activities undertaken or planned,
with the ultimate effect of having to close the Office of Continuous Improvement
and Assessment.
This comment demonstrates how at least one ALO feels unsupported or even undermined
in the accreditation effort.
Other comments focused on the costs in terms of the timing of the accreditation
cycle. One respondent said that costs were “Absolutely [justified]! Given that we were
conducting a 10 year accreditation review and visit. This process doesn’t happen that
often.” A less positive response in the same vein said about whether costs were justified:
“I think so—but I truly would not want it to come around more often than every 10
years.” Similarly someone else wrote, “Absolutely. Not looking forward to the 5th year
report, though.” These last few comments show how at least some ALOs framed the
question of cost in a long-term context. Taken together, the comments cited in this
section illustrate not only an acute awareness some ALOs had of ways in which these
costs were rising but also a growing dismay.
Streamlining accreditation costs.
Among those comments that supported accreditation while recognizing how high
the costs were, some considered the possibility or necessity of reducing the costs. Several
comments said the process “could be streamlined,” indicating that the respondents saw
areas where this was possible. For one respondent, costs were justified “on the whole,
yes. Although the process [and all the separate] reviews and visits, are too cumbersome,
expensive and overbroad for the goals to be achieved.” Commenting on whether
accreditation costs were justified, an ALO wrote:
Yes, although a less costly process might be possible ([our accrediting region] is
currently in process of redesigning process with this in mind). Have to take into
account not only the direct benefits to the institution but also the need to respond
to (legitimate, in my opinion) calls for accountability from external
constituencies.
Another wrote about the justification:
Yes and no. I believe that some of the monetary costs should be lessened so as to
not be a burden on a small institution such as ours. So no in that regards. Yes,
though in the sense that we did gain tremendously from the experience.
Streamlining costs or otherwise helping to make the process more efficient would make a
real difference for the ALOs cited here.
One institution had a particularly bad experience with the review:
As you can see, the costs were considerable. Much of the cost and agony was due
to bias on the part of the accreditation liaison from the accreditor. Because of this,
we had to demand a change in liaison and basically start over with a new and
unbiased liaison. At the end of the day, we were granted a full ten years
reaccreditation, but the costs of this were shameful because of accreditor
incompetence.
Not dissimilarly:
The one cost I was not certain was well-spent were the funds for the team visit.
While certain members of our team took the process seriously, one member
seemed to be just resume building and going through the motions and contributed
nothing of use in her section of our report. She arrived unprepared and left early
and in between annoyed all with whom she met.
The site visit in particular was a regular target of complaint in this regard.
Suggested one respondent:
I believe the process can be significantly streamlined. When the IRS audited us
this past year (we were randomly selected), one person came. To send a visiting
team of six people, to me, seems a bit much. I believe one representative from
each body (regional and local) could accomplish the task.
Another commenter expressed: “We have ‘hosted’ three Special Visits since the 2009
Commission decision. These visits are very time consuming and very expensive
(approximately $5K for each visit). Much of the information could have been gathered by
phone or skype more efficiently.” Another remarked: “I’m not certain how the costs
could be lowered unless the number of site visitors is lowered and/or mandates for
computer-based compliance audits. This requirement forced us to hire professionals at
considerable expense.” Whether because of a specific incident or the entire experience
taken altogether, these ALOs felt strongly that the costs could and should be lessened.
Further, they often felt that there were obvious ways that this could be done.
Identified primary theme: Accreditation costs are not justified.
The third and final primary theme identified through an analysis of the open-
ended comments on whether accreditation costs were justified was that the costs were not
in fact justified. Various respondents said that the costs were “disproportional to the
benefits” and “too high… process is too complex.” Others said that “the process was way
too long and the benefits did not justify the time on task,” and that “this was a very
onerous process, time consuming and expensive for very little value added.” Another
commenter lamented the site visit: “The visit was a monumental disappointment because
of the dysfunction of the team chair and several team members. We got a ‘clean’ report
but the visit itself was not helpful because of inexperienced team members.” Another felt
that they “could have employed the best consultants available and gained much better and
more specific advice.” In answer to the question on whether accreditation costs were
justified, one ALO said: “Not really. The [regional accreditor] has gone nuts with
reviewing!” Another answered:
No. The burden of regional accreditation becomes higher and higher each year.
We are large enough that we can at least keep up with it in terms of staffing and
resources but I can't imagine how small colleges are going to survive it.
Furthermore, it's totally inefficient. Our professional schools have numerous
professional accreditations and these should be recognized by the regional
accrediting body which could then focus on those programs/schools within our
University without an external accreditor. I don't think it actually adds any value
to the process. We know of schools that are about to go under and they get
virtually no sanctions so what does it mean that we all have the same
accreditation? So it doesn't tell anyone anything about actual quality of the
institution.
A less dichotomous answer expressing the same kind of sentiment went as follows:
It is valuable to learn about best practices at comparable institutions and to read
the literature about reforms in higher education. It helps the faculty articulate
learning outcomes and measure student performance. In our eight-year history, we
have received… [an] endowment, student housing, and a new, modern building. It
is possible that having to be accountable to an accrediting body helped us get
these improvements, though they were never [the accreditor’s] recommendations.
But I would say that perhaps only 10% of the accreditation process yielded value
to the institution or to our student learning.
Clearly some respondent ALOs believed that accreditation costs were just not worth it.
A separate answer to the question of whether costs were justified revealed fear of failing to secure reaccreditation:
No and yes. No in terms of the cost of the benefits I cited in the previous question.
We could have identified the positives and learned what we might further enhance
about our institution in a much less stressful and costly evaluation process; the
reaccreditation process seemed like extreme overkill. Yes in terms of having
obtained reaccreditation without any conditions or sanctions. The financial cost of
the reaccreditation exercise relative to the benefits obtained is nothing compared
to the cost to the institution of a public failure and all the negative consequences
that would come with it.
Arguably the cost was justified for this institution because of its reliance on accreditation
for financial and other reasons. However, except for the fact that accreditation is
practically required, this ALO did not believe that the costs were justified.
One commenter relayed a more personal reason that accreditation costs were not
justified:
Two of the faculty members of the self study team here got seriously ill from all
the pressure of the visit. One has since retired. The cost in money and people time
and usage was not worth the cost. This can be done without all the minute
attention to detail that [our regional accreditor] and the US DOE demands.
This comment again illustrates the personal toll that managing the accreditation process
can take, not only on the individual managing the effort but on colleagues who are
involved as well.
Two comments framed the question in terms of limited resources: “No, especially
in a period of constricted resources. [Our regional accreditor] has recognized that their
process requires an excessive allocation of resources.” The other said: “We had no
choice but to do it. Economically we are not in the position to be putting money out for
these things when we cannot give raises or meet expectations as part of our mission
statement.” ALOs from these institutions did not feel that accreditation costs were
justified because they could not afford the costs.
Several of the comments expressed particular dissatisfaction with the regional
accreditor. Said one, “We spent 10’s of thousands of dollars that added no value because
of the debacle made of this by the accreditor itself.” One suggested:
[Accreditors] shouldn’t demand so much in a report, they should be able to make
a process much more simplified with rather than a complete text report a checklist
sort of meeting the required elements of accreditation and then validated by a site
visit.
In a similar vein another said: “There are WAY too many superfluous data requests with
too many specifically-tailored data requirements. There should be a MUCH better way to
get this done.” Even more drastically another said, “As a small institution the demands—
both fiscal and personnel—are unsustainable. We are considering dropping our [regional]
accreditation because of this fact.” This ALO went to great lengths to describe
dissatisfaction with the regional accreditor:
I am concerned that regional accreditors are self-serving, trying harder to sustain
their own existence than to find simple ways to support student learning. They are
trendy and naive, claiming that now we understand cognitive sciences and have to
redesign education accordingly. One of our faculty likens their efforts to the "new
math" of the 1950s or 1960s. My region frequently changes the standards and
expectations, and they cannot define what they expect from schools until their site
visit teams apply the standards, so it is a moving target. I am an intelligent person
and attend all the accreditation meetings and I have had a hard time understanding
the processes and expectations. Our faculty members find the language and
requirements incomprehensible.
Another ALO agreed: “Organizational confusion from accrediting agency changing
standards caused more hours in preparation than were justified.” These individuals did
not feel that accreditation costs were justified because too much was required throughout
the process.
Another frequently expressed concern had to do with the requirement of multiple
site visits. One example went as follows:
The costs were far too high because the process was too long. There were two
major site visits when one would have been sufficient. The process should have
been shortened by at least half as it consumed almost five years of time.
Another echoed:
In my humble opinion, the costs were higher than they needed to be, since
[requiring two visits] effectively doubled the cost of the previous approach to
accreditation. [Our regional accreditor] is in the process of redesigning its
accreditation model; the current draft (which is likely to be approved later this
year) returns to a single site visit, and it clarifies and simplifies some of the other
requirements involving report preparation and offsite review of documents.
This adjustment in process developed by the regional accreditor in question was
mentioned frequently in the comments: “[Our regional accreditor] has recognized [the
requirement for multiple reports and site visits] as excessive and is changing their
process.” Those who expected to be affected by the change anticipated relief: “In theory,
this new process will be less time-intensive for institutions,” and, “If that plan goes into
effect, I imagine the percentage [of my time as ALO committed to accreditation] will be
lower, as the burden will be lighter.”
Two comments described the requirements of regional accreditors as overly
formulaic: “Too much of the process is focused on checking the boxes in the way the
accrediting body wants, vs institutions really looking [at] how they can take some risks to
improve their teaching and reaching students.” The other said: “The accreditors are far
too prescriptive. Everything is moving toward standardization and thus ‘dumbing
down.’” Representatives of these schools did not believe that accreditation costs were
justified because the process did not allow institutions to use it in a way that would
specifically or individually benefit them.
While the majority of open-ended comments provided by ALOs on whether accreditation costs were justified indicated that they were, a significant number of comments specified that costs were not justified. The language these respondents used is of particular note, as it often employed emotionally charged words and expressions and fervid commentary. While ALOs overall clearly felt that these costs were justified, those who did not feel that way tended to display even more passionate feelings about the topic.
Identified secondary theme: There is no option but to have accreditation.
Two other themes became apparent in an analysis of the comments on whether accreditation costs were justified, although they were brought up with less frequency than the primary themes discussed above. The first addressed the way that there really is no option but to maintain institutional accreditation; it is “not a choice, you have to do it.” In
reflecting on whether accreditation costs were justified, one respondent said, “It is
questionable, but it is mandatory in order to insure transfer of credits and to qualify for
federal and state funds. That is to say that it really isn’t optional.” Another commenter
used “virtually” the same language: “It depends. The benefits identified above could
certainly be obtained in cheaper ways, but one questions whether universities would
undertake the task in absence of (virtually) mandatory accreditation.” For these
respondents the question of whether accreditation costs were justified was moot because being accredited is required rather than optional. In fact the question is essentially moot for all institutions, since accreditation has become functionally mandatory; however, these ALOs responded to the question without answering it. The incongruous nature of the question may be the reason that an additional 121 respondents (36.0% of all responses) did not address the question at all, although without feedback from those non-respondents this remains speculation.
Many comments followed this very pragmatic thread. Accreditation costs are
justified because “our school would not be able to continue to exist without the financial
benefit of student loans which would be unavailable without such or similar
accreditation.” Similarly:
Over 80% of our students participate in some type of federal financial aid
program. Without the opportunity to apply for those funds, our students would
have to go elsewhere. Therefore, the costs of accreditation are justified because
our university could not survive without support for our students.
Other respondents looked beyond the financial aid eligibility issue but still framed the matter in terms of institutional survival:
In a fundamental sense, the college would be unable to function without regional
accreditation which is essential for financial aid eligibility, for recognition of
credits, and for the future success of our students and graduates, both in terms of
their ability to compete in the job market and for entry into graduate schools.
Another respondent said: “Without accreditation we would significantly limit the
opportunities for our students in future education options, and in some cases in obtaining
jobs. It also would likely hamper new enrollment.” Simply put, “Without transferability
of courses, recognition of degrees, and access to federal financial aid, we would have to
close our doors.”
In all there were only 11 comments that addressed this secondary theme, relatively few compared with the other topics identified above. The pattern is nevertheless important to note because of its recurrence and because of the relevance of the topic to the broader question of accreditation costs. In one sense, however, these respondents did not actually address the question, essentially claiming that because accreditation is functionally required the question of whether the costs are justified is irrelevant.
Identified secondary theme: Accreditation is better than any alternative.
The other secondary theme that arose out of the qualitative analysis of open-ended comments was an apprehension about the costs of whatever might replace institutional accreditation. There was general acknowledgment that there had to be some kind of oversight of higher education. According to several comments, a
mechanism different from the one currently in place would be more expensive: “If the
institution paid to out sources what we have done in the accreditations process the cost
will be a lot more.” In similar fashion another comment read:
The costs have to be weighed in the broader context of a system that provides
evidence to the Department of Education and the Congress and the broader public
that higher education is providing a good return on the investment of public
funds; direct benefits to our school probably outweigh the aggregate costs. If not
within the context of accreditation assessment we would have to accomplish
improvement and benchmarking via an alternative means, say the use of outside
consultants. This likely would be more expensive.
Other responses were downright fearful of alternatives: “The bottom line in answering
this question is, if we don’t participate in a voluntary accreditation process, then we will
have an external force controlling it, which is close [to] the worst nightmare.” Another
respondent wrote:
Yes [accreditation costs are justified], because regional accreditation staffed by
peers is so… preferable to national accreditation dictated by the Margaret
Spellings and George W. Bush's of the world. Regional accreditation is perhaps
best characterized as a necessary evil. Like capitalism, it is probably the best of a
series of undesirable choices.
Another comment summarized:
It's impossible to know what would happen if there were no external body to
which we are responsible. In a few policy arenas, it is helpful to be able to raise
the specter of accreditation. But generally, my answer [to the question on whether
accreditation costs are justified] is “no.”
Like the other secondary theme, this one occurred with relatively low frequency (only seven unique comments); however, it arose often enough, and is relevant enough to the overall question of accreditation costs, to merit note here. Both of these secondary themes, the idea that not being accredited is not a real option for many institutions and the lack of any realistic alternative, may be the primary motivation for many schools to go through the accreditation process.
Both the quantitative and the qualitative analyses clearly show that, generally speaking, ALOs believe that accreditation costs are justified. At the same time, however, there is a vigorous acknowledgment that the costs are indeed very high. Some ALOs even took the time to specify that these costs are high not only in terms of direct fiscal and indirect time costs but also in terms of opportunity costs, and that the costs appear to be rising or should be streamlined. A minority of respondents did not believe that costs were
justified and expressed this conviction staunchly. ALOs from both sides of the debate
spoke ardently, demonstrating a thorough grasp of the concept and the process as well as
a firm commitment to their accreditation-related responsibilities.
Summary
The Accreditation Liaison Officers who participated in this survey responded
thoughtfully and passionately about the subject matter. Thanks to their participation, a better understanding was gained of who the institutional ALO is across the participating accreditation regions. It became apparent that the person serving as ALO is a committed and focused individual, although one generally overwhelmed professionally and personally by the demands of accreditation. Additionally, the turnover in this position between accreditation reviews could be considered high.
Respondents identified specific direct fiscal costs of accreditation, including costs associated with the preparation of the self-study document, but placed particular emphasis on the costs associated with the site visit. They discussed at greater length the indirect costs of the time spent on accreditation, and frequently and explicitly acknowledged that these costs were high largely because of the sheer number of people involved and the high salaries of institutional representatives, particularly the ALO, who individually spends the most time on the
accreditation process. Overall, institutions manifested a strong commitment to
accreditation by giving it a high priority as demonstrated by the level of professionals
assigned the task (most frequently Vice President or Provost) and the importance ascribed
to the process. Statistically significant differences were identified in the direct costs of
institutional accreditation (document cost, site visit cost, and combined costs) between
Carnegie classifications but not between accreditation regions. Statistically significant
differences were also identified for some measures of indirect costs (the number of
people from various groups involved in the accreditation review and the cumulative total
number of hours contributed by various groups) between both accreditation regions and
Carnegie classifications. Through a qualitative review of open-ended comments it was
determined that the magnitude of both direct and indirect costs is considered to be high.
Indirect costs were identified as much higher than direct costs.
The primary benefits of accreditation identified by responding ALOs were the
opportunity for institutional self-reflection and the subsequent university improvement.
Secondary benefits included the possibility of using the accreditation process as a vehicle
for institutional improvement, improved campus unity, a review provided by an entity
from outside the institution, the ability to offer financial aid, the mere fact of having
accreditation, and the reputation provided by having accreditation. Other cited benefits
were the chance to share best practices and the opportunity to celebrate institutional
accomplishments, and finally the fear of not having accreditation was also mentioned.
Despite the high accreditation costs indicated by respondents, ALOs generally felt that these costs were justified. Roughly 45% of respondents felt that accreditation costs were justified, as opposed to only about 13% who felt
they were not. There was a statistically significant difference between accreditation
regions but not between Carnegie classifications as to whether ALOs felt that costs were
justified. The primary themes identified in a qualitative review of open-ended comments
on the justification for accreditation costs were that the costs were justified, that the costs
were justified although they were high (with additional notes that the opportunity costs
were high, costs were growing, and costs should be streamlined), and that the costs were
not justified. The secondary themes that emerged from this analysis were that there is no
option but to go through the accreditation process, and that the current system of
accreditation is better than any alternative. Chapter five will include further discussion of
the results presented in this chapter as well as an identification of patterns between types
of institutions.
CHAPTER FIVE: DISCUSSION
The cost of institutional accreditation, comprised of both direct fiscal costs and
indirect personnel costs, is significant and is continually rising (Blauch, 1959; Ewell,
2008). At the same time because of the number of different groups that rely on
accreditation for sundry reasons, not the least of which is the practical necessity of
continued fiscal support from the federal government, it is essential to maintain
accredited status (Chernay, 1990; Ewell, 2008; National Advisory Committee on
Institutional Quality and Integrity, 2012; Orlans, 1975).
There is an acute lack of empirical research on the costs of institutional accreditation generally (Shibley & Volkwein, 2002) and an even more noticeable paucity of research that is quantitative in nature. This gap was apparent to many Accreditation Liaison Officers who participated in this study, several of whom commented on it. The substantial number of study participants and the extensive nature of many of the responses to open-ended questions were evidence of the desire that many ALOs have to contribute to the knowledge on this topic. There was also a balanced representation of accrediting regions and Carnegie classifications alike, further attesting to a wide interest
in this topic. Conducting research on the subject is important particularly because, as
Ewell (2008) notes, “accreditation may be entering one of the most eventful periods in its
long history” (p. 161).
This study investigated the direct and indirect costs of institutional accreditation
and the varying levels of commitment between types of institutions. The difficulty of
ascertaining accreditation costs with any degree of certainty cannot be overstated (Parks,
1982; Reidlinger & Prager, 1993) and this difficulty is a primary culprit for the lack of
literature. The studies that do address accreditation costs acknowledge high variability in
kinds of costs summarized, value attributed, and totals reported (Florida State
Postsecondary Education Planning Commission, 1995; Kennedy, Moore, & Thibadoux,
1985; Reidlinger & Prager, 1993). This chapter provides further discussion on the
findings detailed in chapter four. It will also consider the study’s implications for practice
and opportunities for future research.
Discussion of the Findings
Four research questions guided the methodology behind the study and are
addressed in this section. The findings are summarized and discussed by research
question.
Research Question #1: What costs are associated with institutional accreditation
and how do those costs vary between and among types of institutions?
This research question examined actual direct costs and indirect costs of
institutional accreditation as reported by institutions participating in the study. The survey
instrument suggested many of the obvious direct and indirect costs, but also allowed
respondents the opportunity to make open-ended comments on additional costs not
directly suggested by the survey.
Direct costs associated with institutional accreditation.
The direct costs of accreditation include the costs associated with the production
of the self-study document and those associated with the site visit. Significant differences in these costs were discovered between mean institutional costs by Carnegie classification. The document cost was highest for doctoral/research institutions, followed by master’s institutions and baccalaureate institutions, and was lowest for special focus
institutions. This progression can be viewed in terms of institutional complexity where
increased complexity is generally ascribed to schools that place a greater “emphasis on
research or service” or to institutions exhibiting “an increasing complexity within mission
components” (Johnstone, 2001; Leslie & Rhoades, 1995, p. 195). As noted previously,
the pattern of decreasing mean costs demonstrated here is likely a reflection of the greater
complexity and (generally) the greater size of doctoral/research institutions relative to
institutions in the other classifications, special focus institutions in particular tending to
be smaller and more specialized. Also as might be expected, the variability between
institutions within the same Carnegie classification as demonstrated by the standard
deviation generally decreased as the mean decreased. The differences between
classifications are striking however. Doctoral/research institutions spend on average more
than the average amount spent by the next two classifications (master’s and
baccalaureate) combined, and more than twice the average of either baccalaureate or
special focus institutions. Doctoral/research institutions also have the most reliable mean
as demonstrated by a distribution that was closest to normal of any of the four categories.
It would be difficult to quantify institutional complexity or to argue precisely how much more complex doctoral/research institutions are than institutions in the other classifications, but it is this very complexity that might lead one to expect doctoral/research institutions to have lower costs than they actually report. The greater complexity and larger size of institutions in this classification might provide an existing infrastructure into which accreditation costs could be folded, as well as greater repute that makes the institution less reliant on the credibility provided by accreditation status. At the same time, that complexity, along with a generally larger budget, may be the reason that these more complex institutions are able to spend more on accreditation.
Relative complexity may also account for the similar means discovered between
baccalaureate and special focus institutions, the two most similar groups of the four
surveyed. On the other hand complexity alone could hardly account for the magnitude of
these differences, and it is clear from the consistency of the open-ended comments
between all institutions that differences in amount spent are not a reflection of differing
magnitudes of priority between Carnegie classifications. The precise reasons for the
difference in mean cost amount between institutional types fell outside the scope of this
study and would be fertile grounds for additional research.
An identical pattern was discovered in the means of site visit costs by Carnegie classification, although the means were much closer and the
variability was lower. Doctoral/research institutions still had the greatest average cost
with master’s institutions next, baccalaureate institutions following, and special focus
institutions averaging the lowest cost. Again doctoral/research institutions averaged more
than twice the mean cost of special focus institutions, and baccalaureate and special focus
institutions had very similar means, but the costs were much lower for the site visit than
for the preparation of the document. In the case of site visit costs, special focus
institutions reported data that were the most normally distributed demonstrating
remarkable consistency in institutional cost.
Two reasons that reported costs for the document exceed reported costs for the
site visit stand out. First, the site visit is a narrowly defined period characterized by actual
event dates, even though additional time is usually budgeted to prepare for and debrief
after those dates. The preparation of the document on the other hand may take years, and
the “points” of beginning and end might be difficult to define and somewhat subjective.
The site visit costs then are much more easily identified and more standardized because
they are associated with a single, short-term event. Consequently these costs are easier to
define and manage.
The second reason site visit costs are lower than document costs is that these data report site visit costs to the institution being reviewed and exclude the costs borne by agents outside the institution (i.e., those representing the regional accreditor directly or volunteers from peer institutions making up the visiting team), groups that would have their own costs to report but that fall outside the institutional budgets examined in this study. Examination of the actual total cost of an accreditation site visit (including
costs incurred by all parties) would also be an opportunity for additional research. These
are likely the kinds of costs reflected in the Council for Higher Education Accreditation’s annual almanac, which most recently reported fiscal support of $9,805,770 provided by 34,705 volunteers (Council for Higher Education Accreditation, 2010).
All direct cost means are summarized in Appendix E. These tables include mean
costs by Carnegie classification as well as mean costs by accreditation region, although
differences between means of accreditation regions were not statistically significant.
Indirect costs associated with institutional accreditation.
The indirect costs associated with accreditation consisted of the time spent on the
accreditation process by various campus constituencies. This study examined the total
number of people involved and the total number of hours spent by people in six
categories: the Accreditation Liaison Officer (ALO), senior administration (e.g.,
president, vice-presidents, provosts, deans, etc.), faculty, administrative staff, students,
and others. For all reporting institutions taken together, faculty made the greatest
commitment to accreditation averaging the highest number of people for the review (38.3
on average) as well as the highest number of hours (1,842.0 on average). The original
purpose of accreditation was, and continues to be, quality assurance and quality
enhancement (Blauch, 1959; Ewell, 2008; Newman, 1996; Pfnister, 1971; Zook &
Haggerty, 1936) and discovering empirically that an institution’s faculty are making the
most significant contribution to the accreditation effort is therefore a reassuring finding.
The next largest group to participate was students (30.6 on average), although students contributed the lowest number of hours (271.0 on average) other than those from the miscellaneous category of “Other.” Students make up the most fleeting campus
constituency as each one is there for a brief number of years and then moves on; however
they are also the most important campus constituency as the institution exists for the
purpose of providing them instruction. This finding on student involvement demonstrates
that they are being given a voice in the process and an opportunity to be involved without
being overly taxed.
The combined university staff of course contributed the largest share of work to
accreditation efforts. The ALO committed the greatest amount of time of any single
individual, averaging 1,408.7 hours per accreditation review as reported by participants in
this study. On average, 18.3 other administrative staff members spent a total of 1,647.1
hours facilitating the process for and working with 11.2 senior administrators who
committed a total of 934.9 hours.
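The group-level means above can be translated into a rough per-person workload by dividing mean total hours by mean headcount for each group. The short sketch below does exactly that using the figures reported in this study; because it divides one mean by another, the results are illustrative approximations rather than true per-person averages.

# Rough per-person workload implied by the group means reported above.
# Dividing a mean of total hours by a mean headcount only approximates the
# true per-person average, so treat these figures as orders of magnitude.
group_means = {
    # group: (mean number of people involved, mean total hours contributed)
    "faculty": (38.3, 1842.0),
    "students": (30.6, 271.0),
    "administrative staff": (18.3, 1647.1),
    "senior administration": (11.2, 934.9),
    "ALO": (1.0, 1408.7),
}

for group, (people, hours) in group_means.items():
    print(f"{group:>22}: about {hours / people:,.0f} hours per participant")

# Yields roughly 48 hours per faculty member, 9 per student, 90 per staff
# member, 83 per senior administrator, and about 1,409 hours for the ALO.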
An examination of differences in means of indirect costs by Carnegie classification revealed several noteworthy patterns. The same kind of pattern demonstrated for direct costs (highest costs among doctoral/research institutions, then master’s institutions, then baccalaureate institutions, and finally special focus institutions) was evident in the total number of people involved, although the means of the total number of people involved in each category for master’s institutions were very close to those for doctoral/research institutions. The distribution of hours among those people varied greatly. The only
category of total hours spent that followed this pattern strictly was the time committed by
ALO (2,233.7 hours for doctoral/research institutions, 1,866.8 hours for master’s
institutions, 1,443.2 hours for baccalaureate institutions, and 974.5 hours for special focus
institutions).
The total number of hours committed by senior administration followed almost
the opposite pattern, with senior administration from doctoral/research institutions
contributing the lowest number of hours of any of the four categories. Excluding special
focus institutions (760.9 hours from senior administration on average), the means rose in
the opposite direction: 700.5 hours on average from the senior administration at
doctoral/research institutions, 1,089.4 hours from the senior administration at master’s
institutions, and 3,356.7 hours from the senior administration at baccalaureate
institutions. This indicates that on average, fewer senior administrators spent more hours
on the institutional accreditation review as institutional complexity decreased, and is
likely a function of more limited staff resources. Presidents of more complex universities
(i.e., doctoral/research institutions) probably had more flexibility to create varying levels
of infrastructures to handle accreditation responsibilities as well as more administrative
staff overall to execute the process. Indeed the total number of hours contributed by
administrative staff reflected this likelihood, as staff from doctoral/research institutions
averaged 3,530.5 total hours, staff from master’s institutions averaged 1,913.0 hours,
staff from baccalaureate institutions averaged 1,749.4 hours, and staff from special focus
institutions averaged 762.3 hours.
A few noteworthy patterns also emerged from examining differences in means by accreditation region. In every category of hours committed to accreditation except for students, the groups at SACS institutions spent the most time on the accreditation review, followed by the groups at MSCHE institutions, with groups at WASC institutions spending the fewest hours. This is most likely a reflection of the differences in
accreditation process between the regions. The fact that SACS institutions consistently
have the highest means in total hours spent on accreditation could be a reflection of the
amount of time required to execute a meaningful and successful Quality Enhancement
Plan (QEP). The fact that WASC institutions consistently have the lowest means in total
hours spent on accreditation could be a reflection of efforts by the regional accreditor to
make the process less intrusive and onerous than it was historically.
On the other hand, the only category for which SACS had the most people
involved in accreditation was the category of students. This means that SACS institutions
had more students involved on average than institutions in the other regions and at lower
cost to each student participating. It is also noteworthy that WASC institutions average
the highest number of faculty involved but the lowest number of total faculty hours spent
on accreditation. As with the average per-student time cost, this indicates that more faculty are involved in WASC accreditation, and at lower cost to each participating faculty member, than in the other participating regions.
All indirect cost means are summarized in Appendix F. These tables include mean
costs for all institutions as well as mean costs by Carnegie classification and by
accreditation region.
In her study on the costs of professional accreditation for baccalaureate nursing
programs, Freitas (2007) found administrative time to be the highest identified cost of
accreditation. This study reflected that finding at the institutional level and attempted to
explore and further disaggregate those costs. Respondents frequently commented on the
nature of indirect costs as higher and more burdensome than the direct costs. Whalen
(1991) stressed the importance of understanding indirect costs because of the otherwise
erroneous (and at least partially inevitable) assumption that certain goods and services are
free. A more sound perspective on the real, total cost of institutional accreditation can
only be attained by accounting for these costs and eliminating the “illusion of free goods
and services” (p. 50).
Research question #1 conclusion.
Direct costs associated with institutional accreditation include document costs and
site visit costs. Document costs comprise the costs of software, materials, copying, printing, mailing, and fees for professional services such as writers and consultants associated with the preparation of the self-study. Site visit costs comprise the costs of travel, accommodations, food, stipends or honoraria, and other
gifts provided to the visiting team. Indirect costs associated with institutional
accreditation include the time spent on accreditation-related activities by anyone from the
campus community.
This study provided a stark contrast to the study done by Kells and Kirkwood on
MSCHE institutions in 1979. Kells and Kirkwood found that self-study costs did not
involve significant expense; almost half of respondents reported having spent less than
$5,000 on it. As reported in this study though, respondents from MSCHE institutions
averaged $39,071 and respondents from all institutions averaged $50,979 on the self-
study document alone, more than twice the average direct costs for the site visit. The
general lack of empirical data on accreditation costs historically will likely prevent a statistically sound analysis of the effect of inflation on accreditation costs. Certainly inflation would account for at least part of the nearly eightfold increase in self-study costs between the MSCHE institutions represented in the Kells and Kirkwood study and the MSCHE institutions in this study. The open-ended comments provided by respondents in this study, however, revealed an overarching belief that these costs were not insignificant. On the other hand, Kells and Kirkwood identified a practical upper limit of between 100 and 125 people directly involved in the self-study, with a greater proportion of faculty (41-50%) than of staff (21-30%) and relatively few students involved in the process; that practical limit and composition remain evident today notwithstanding the greater facilitation offered by such tools as the internet and other software not available in 1979.
Studies by both Lillis (2006) and Driscoll and De Norriega (2006) described institutions
that embraced the accreditation process and challenge but were nevertheless compelled to
acknowledge these high costs as disadvantageous. The indirect costs are unavoidably
high.
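Returning to the comparison with the 1979 Kells and Kirkwood figures, a rough inflation check helps illustrate why price increases alone cannot account for the growth in self-study costs. The sketch below assumes consumer prices roughly tripled between 1979 and 2012 (a CPI ratio of about 3.2); that ratio is an outside estimate rather than a figure from either study, and the $5,000 baseline is itself only the ceiling reported by about half of the 1979 respondents.

# Rough check: how much of the growth in self-study costs could inflation explain?
# ASSUMPTION: consumer prices roughly tripled between 1979 and 2012 (CPI ratio
# of about 3.2); this factor is an outside estimate, not a figure from the study.
CPI_RATIO_1979_TO_2012 = 3.2

kells_kirkwood_1979_ceiling = 5_000   # "less than $5,000" for about half of 1979 respondents
msche_mean_2012 = 39_071              # mean MSCHE self-study cost reported in this study

adjusted_1979_ceiling = kells_kirkwood_1979_ceiling * CPI_RATIO_1979_TO_2012
print(f"1979 ceiling in 2012 dollars: about ${adjusted_1979_ceiling:,.0f}")
print(f"Reported 2012 MSCHE mean:     ${msche_mean_2012:,}")
print(f"Growth beyond inflation:      about {msche_mean_2012 / adjusted_1979_ceiling:.1f}x")

Even against this generous baseline, the reported mean remains well over double the inflation-adjusted 1979 ceiling, consistent with the conclusion that inflation accounts for only part of the increase.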
Research Question #2: How is financial commitment toward institutional
accreditation manifested, i.e., what are the perceived direct and indirect costs of
institutional accreditation?
Accreditation is an expensive activity requiring a strategic, methodical approach in order for an institution’s practice to be effective and sustainable (Parks, 1982; Willis, 1994). One obvious way of gauging the financial commitment toward institutional accreditation is through the actual amount of money and time spent on the activity. Another important way is by ascertaining how ALOs perceive the time and money being spent. This research question explored institutional commitment by examining both.
Institutional commitment to accreditation through direct costs.
Responding ALOs identified each item mentioned in the survey as a real part of
the direct costs of accreditation: materials costs, copying, printing, mailing, and fees for
professional services such as writers and consultants for the preparation of the self-study
document, and costs for travel, accommodation, food, and honoraria or stipends for the
site visit. Additionally ALOs regularly mentioned software costs associated with the
document, and token gifts to visiting team members for the site visit. A few ALOs also
directly addressed the fees assessed by regional accreditors for membership in the
accrediting association. These fees fell outside the scope of this study, and the responses in question recognized that fact; however, they also indicated that these fees are not insignificant and that respondents felt they should be part of an analysis such as this.
Out of 14 total responses that addressed whether direct costs were too high, only
five indicated that these costs were reasonable. These respondents viewed accreditation
costs in terms of accountability and the need to be answerable to various groups with a
vested interest in the quality of the school outside the institutional structure, and thus they
reflect the accountability role assigned to accreditation seen in the literature (Brittingham,
2008; Eaton, 2003a; Ewell, 2008; Ewell, Wellman, & Paulson, 1997; Lubinescu, Ratcliff,
& Gaffney, 2001; Michael, 2005; Wolff, 2005). The other comments reflected on how
formidable the costs were. For the most part these comments did not specifically address
whether the costs were too high because this section of the survey did not address that
question. These comments demonstrate that, while ALOs understand and at least to a
certain extent agree with the need for these costs, they are also acutely aware of the
magnitude of the commitment the institution is making through the engagement of the
financial resources necessary to meet the direct costs.
Institutional commitment to accreditation through indirect costs.
ALOs who responded to the survey acknowledged that the time spent on
accreditation is the most significant cost of accreditation both with respect to the sheer
volume of that time and the financial value of that time. Out of 16 total responses that
addressed whether indirect costs were too high, only three indicated that these costs were
reasonable for essentially the same reasons ALOs felt direct costs were reasonable. On
the other hand the vast majority of these comments, 13 out of 16, reflected on how
burdensome indirect costs were, although again without explicitly addressing whether
they were too high.
In the literature on accreditation, authors who investigated indirect costs
repeatedly recognized the difficulty of determining an exact value for them. The
American Council of Trustees and Alumni (2007), Doerre (1983), Kennedy, Moore, and
Thibadoux (1985), Leef and Burris (2002), and Reidlinger and Prager (1993) consider
indirect costs in terms of opportunity costs to the institution, recognizing that
accreditation activities are necessarily undertaken at the expense of other worthwhile
initiatives. Schermerhorn, Reisch, and Griffith (1980) found that the personnel time
needed for accreditation activities was recognized as one of the most significant
shortcomings of the accreditation process. Respondents in this study were also cognizant
of such opportunity costs in their own accreditation activities.
Parks (1982) discussed accreditation costs in terms of “displaced dollar costs” (p.
4) because “usually there are no real increases in payroll or associated indirect expenses”
(p. 7) for accreditation activities; “there has simply been an exchange of one job for
another” (p. 7). Parks pointed out that such reallocation of time will actually purchase
“intangibles of value to the institution and to the program” such as the self-study and
external review, but they “do not add to the financial burden” on the institution (p. 7).
Parks recognized that part of the difficulty of ascertaining value for indirect costs is that
setting dollar values for that time is not regular practice despite the widely acknowledged
desirability and even indispensability of the intangibles produced. For these reasons this
study monetized reported indirect costs in order to be able to consider a total financial
cost of institutional accreditation.
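The monetization itself follows the methodology described in chapter four; as a minimal illustration of the general approach, the sketch below simply multiplies reported hours by hourly compensation rates. The rates shown are hypothetical placeholders chosen for illustration only and are not the values used in this study.

# Minimal sketch of monetizing indirect (time) costs: hours multiplied by an
# hourly compensation rate for each group. The rates below are HYPOTHETICAL
# placeholders; the study's actual rates follow the chapter four methodology.
hypothetical_hourly_rates = {
    "ALO": 60.0,
    "senior administration": 90.0,
    "faculty": 50.0,
    "administrative staff": 30.0,
    "students": 0.0,   # student time is often left unmonetized
}

def monetize(hours_by_group):
    """Return a dollar estimate of indirect cost for one institution."""
    return sum(hours * hypothetical_hourly_rates.get(group, 0.0)
               for group, hours in hours_by_group.items())

# Example using the overall mean hours reported earlier in this chapter.
example_hours = {
    "ALO": 1408.7,
    "senior administration": 934.9,
    "faculty": 1842.0,
    "administrative staff": 1647.1,
    "students": 271.0,
}
print(f"Monetized indirect cost (hypothetical rates): ${monetize(example_hours):,.0f}")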
Institutional commitment to accreditation through the combination of direct
costs and monetized indirect costs.
By monetizing indirect costs through the methodology described in chapter four
and adding them to direct costs, it is possible to estimate an average total per-institution
cost of institutional accreditation and to compare that average between institution types.
Tables 5.1 and 5.2 show these total costs by accreditation region and by Carnegie
classification.
According to these calculations direct costs accounted for 22.1% of the average total cost of institutional accreditation and indirect costs accounted for 77.9% of the average total cost when calculated by accreditation region. Indirect costs were relatively lowest for WASC institutions (70.8%) indicating a greater financial commitment than time commitment with respect to overall cost, and highest for MSCHE institutions (82.4%) indicating the inverse. Accordingly it appears that ALOs from WASC were able to manage the accreditation process while spending proportionately less time doing so than ALOs from the other regions, again possibly because of efforts made by WASC to reduce the burden on them.

Table 5.1: Combined Direct and Indirect Costs of Accreditation by Accreditation Region

Category | Total Costs | Total Direct Costs | Total Indirect Costs | Percentage of Total Cost from Direct Costs | Percentage of Total Cost from Indirect Costs
MSCHE | $345,591 | $60,764 | $284,827 | 17.6% | 82.4%
SACS | $405,481 | $78,829 | $326,652 | 19.4% | 80.6%
WASC | $230,690 | $67,464 | $163,226 | 29.2% | 70.8%
Overall Average per Institution | $327,254 | $69,019 | $258,235 | 22.1% | 77.9%

Table 5.2: Combined Direct and Indirect Costs of Accreditation by Carnegie Classification

Category | Total Costs | Total Direct Costs | Total Indirect Costs | Percentage of Total Cost from Direct Costs | Percentage of Total Cost from Indirect Costs
Doctoral/Research | $414,586 | $112,062 | $302,524 | 27.0% | 73.0%
Master's | $431,810 | $78,149 | $353,661 | 18.1% | 81.9%
Baccalaureate | $312,146 | $51,633 | $260,513 | 16.5% | 83.5%
Special Focus | $205,871 | $46,365 | $159,506 | 22.5% | 77.5%
Overall Average per Institution | $341,103 | $72,052 | $269,051 | 21.0% | 79.0%
By Carnegie classification, direct costs accounted for 21.0% of the average total cost of institutional accreditation and indirect costs accounted for 79.0% of the average total cost. Indirect costs were relatively lowest for doctoral/research institutions (73.0%), possibly because these institutions are able to disperse the time costs across more, and less expensive, levels of institutional staff (i.e., more administrative staff and less senior administration involved). Indirect costs were relatively highest for baccalaureate
institutions (83.5%).
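The percentage splits in Tables 5.1 and 5.2 follow directly from the reported direct and indirect means; a minimal sketch of that calculation, using the Table 5.1 values, is shown below. Note that the overall-average percentages in the tables appear to be the mean of the per-category percentages rather than percentages computed from the averaged dollar figures.

# Reproducing the direct/indirect split in Table 5.1 from the reported means.
table_5_1 = {
    # region: (total direct cost, total indirect cost)
    "MSCHE": (60_764, 284_827),
    "SACS":  (78_829, 326_652),
    "WASC":  (67_464, 163_226),
}

direct_shares = []
for region, (direct, indirect) in table_5_1.items():
    total = direct + indirect
    share = 100 * direct / total
    direct_shares.append(share)
    print(f"{region}: total ${total:,}; direct {share:.1f}%, indirect {100 - share:.1f}%")

# The overall-average row in Table 5.1 (22.1% direct / 77.9% indirect) matches
# the mean of the per-region percentages, not the share computed from the
# averaged dollar amounts (which would be roughly 21.1% direct).
print(f"Mean of per-region direct shares: {sum(direct_shares) / len(direct_shares):.1f}%")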
These calculations show that, fiscally speaking, indirect costs are roughly four
times greater than direct costs. Willis (1994) explored many of these same costs and
discovered that indirect costs are “probably many times greater than the direct costs due
mainly to the personnel time required at the institution” (p. 40). Freitas (2007) found that
personnel time generally comprises the majority of an institutional budget, and Shibley
and Volkwein (2002) found that the time commitment required by accreditation
constitutes a greater burden than the direct fiscal costs. This study confirmed these
findings and was able to quantify to what extent that was the case for responding
institutions. Shibley and Volkwein also state that “the true sense of burden arose from the
time contributed to completing the self-study process rather than from finding the
financial resources to support self-study needs” (p. 8). This study supports that finding as
well both quantitatively and qualitatively (as reflected in the open-ended comments).
Through this methodology for the monetization of indirect costs it is possible to
estimate a total cost to the entire higher education community for the accreditation of
institutions conferring baccalaureate degrees. By multiplying the average per-institution
cost by the population for each category, the total cost of accreditation to all institutions
can be calculated. For the three non-participating regions in this study the average per-
institution cost of the three participating regions was used, and for tribal institutions
(which were not included in this study because there are no tribal institutions in the
participating accreditation regions) the average per-institution cost of the other four
Carnegie classifications was used. Tables 5.3 and 5.4 show these totals by accreditation
region and by Carnegie classification.
Table 5.3: Total Cost of Accreditation to All Institutions by Accreditation Region

Accreditation region | Per-institution average | Population | Total per category
MSCHE | $345,591 | 388 | $134,089,308
SACS | $405,481 | 462 | $187,332,222
WASC | $230,690 | 143 | $32,988,670
NCA (average) | $327,254 | 667 | $218,278,418
NEASC (average) | $327,254 | 173 | $56,614,942
NWCCU (average) | $327,254 | 94 | $30,761,876
TOTAL | | 1,927 | $660,065,436

Table 5.4: Total Cost of Accreditation to All Institutions by Carnegie Classification

Carnegie classification | Per-institution average | Population | Total per category
Doctoral/Research | $414,586 | 278 | $115,254,908
Master's | $431,810 | 594 | $256,495,140
Baccalaureate | $312,146 | 633 | $197,588,418
Special Focus | $205,871 | 391 | $80,495,561
Tribal (average) | $341,103 | 31 | $10,574,201
TOTAL | | 1,927 | $660,408,228
Accordingly, the total cost to institutions of a full accreditation review, including both direct and indirect costs, exceeds $660,000,000 whether calculated by accrediting region or by Carnegie classification. It is striking how close these two totals are, with the difference of $342,792 amounting to less than 0.1% of the total cost. Generally speaking,
most of the six regional accrediting agencies conduct a full re-accreditation review for
previously accredited institutions once every 10 years (WASC being the most notable
exception with a shorter review cycle). Therefore this $660,000,000 cost is incurred
approximately every 10 years.
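A minimal sketch of the aggregation behind Table 5.3 is shown below, using the per-institution averages and institution counts reported there; as described above, the non-participating regions are assigned the three-region average.

# Sketch of the Table 5.3 aggregation: per-institution average times population.
# Non-participating regions (NCA, NEASC, NWCCU) use the three-region average.
per_institution_average = {
    "MSCHE": 345_591, "SACS": 405_481, "WASC": 230_690,
    "NCA": 327_254, "NEASC": 327_254, "NWCCU": 327_254,
}
population = {
    "MSCHE": 388, "SACS": 462, "WASC": 143,
    "NCA": 667, "NEASC": 173, "NWCCU": 94,
}

total_cost = sum(per_institution_average[r] * population[r] for r in population)
print(f"Institutions covered: {sum(population.values()):,}")           # 1,927
print(f"Estimated total cost per full review cycle: ${total_cost:,}")  # about $660 million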
A few respondents explicitly acknowledged that accreditation costs to the
individual institution were reasonable considering the fact that the full institutional
review happens only periodically. For example, one ALO said, “If you distribute the
costs out over time, they are actually quite reasonable and cost effective from a variety of
perspectives.” Kennedy, Moore, and Thibadoux (1985) also specifically addressed this
idea, concluding that accreditation costs were not excessive in light of the fact that total
cost could effectively be spread out over the span of time following the review. The
calculations above make it possible to consider an average institutional cost of
accreditation per year. The six regional accrediting agencies conduct comprehensive
institutional reviews every seven to 10 years. Accordingly a per-year average would lie
within the ranges illustrated in Tables 5.5 and 5.6. The total institutional cost to the
higher education community annually would therefore lie between $66,000,000 and
$94,285,715.
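The per-year figures in Tables 5.5 and 5.6, and the community-wide annual range just cited, follow from dividing review-cycle costs by the length of the cycle; a brief sketch using the averages and rounded total computed above is shown below.

# Amortizing review-cycle costs over a seven- to ten-year accreditation cycle.
per_institution_average = 327_254        # overall average by accreditation region
community_total_per_cycle = 660_000_000  # rounded total from Tables 5.3 and 5.4

for cycle_years in (10, 7):
    per_institution_annual = per_institution_average / cycle_years
    community_annual = community_total_per_cycle / cycle_years
    print(f"{cycle_years}-year cycle: about ${per_institution_annual:,.0f} per institution "
          f"per year, about ${community_annual:,.0f} community-wide per year")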
Table 5.5: Average Per-Year Cost of Accreditation by Accreditation Region

Accreditation region | Per-institution average | Average annual cost on 10-year cycle | Average annual cost on 7-year cycle
MSCHE | $345,591 | $34,559 | $49,370
SACS | $405,481 | $40,548 | $57,926
WASC | $230,690 | $23,069 | $32,956
NCA (estimated average) | $327,254 | $32,725 | $46,751
NEASC (estimated average) | $327,254 | $32,725 | $46,751
NWCCU (estimated average) | $327,254 | $32,725 | $46,751
Average | $327,254 | $32,725 | $46,751

Table 5.6: Average Per-Year Cost of Accreditation by Carnegie Classification

Carnegie classification | Per-institution average | Average annual cost on 10-year cycle | Average annual cost on 7-year cycle
Doctoral/Research | $414,586 | $41,459 | $59,227
Master's | $431,810 | $43,181 | $61,687
Baccalaureate | $312,146 | $31,215 | $44,592
Special Focus | $205,871 | $20,587 | $29,410
Tribal (estimated average) | $341,103 | $34,110 | $48,729
Average | $341,103 | $34,110 | $48,729
Research question #2 conclusion.
In commenting on the cost of accreditation, Wolff reflected on the way that many institutions spend a significant amount of money on financial audits but lament the cost of accreditation, which is in essence an academic audit. While this cost is incurred only periodically according to the institutional review cycle, it pales in comparison to the amount of federal financial aid that goes to the institution, funding for which the institution qualifies because of its accredited status and the gatekeeper role currently assigned to institutional accreditation (personal communication, October 10, 2011).
Similarly one ALO who participated in the study commented: “The financial cost of the
reaccreditation exercise relative to the benefits obtained is nothing compared to the cost
to the institution of a public failure and all the negative consequences that would come
with it.” While the question of cumulative cost obviously entails more than a simple cost-
benefit analysis of maintaining institutional accreditation, access to federal funding is a
key motivating force if only because of the sheer magnitude of that funding.
Accreditation is inarguably expensive if only because respondent ALOs consider
it to be so. While this is primarily because of the cost of the time necessary to manage
accreditation effectively, the direct costs in and of themselves are certainly not negligible
and the combined cost is considerable. While it might be reasonable to expect that these
benefits would be worth an average of between $32,000 and $34,000 a year (for
institutions undergoing review every 10 years), or even worth an average of between
$46,000 and $49,000 a year (for institutions undergoing review every seven years), this is
an institutional decision not to be taken lightly. From the number of institutions maintaining accreditation it would appear that these benefits are in fact worth the cost; however, the calculations explored in this study will allow institutional agents to reflect more deliberately upon the question.
Research Question #3: Do primary Accreditation Liaison Officers believe that the
perceived benefits associated with institutional accreditation justify the institutional
costs?
As noted previously, discussion of the actual perceived benefits of accreditation is
a peripheral but important part of the question of accreditation costs. Benefits identified
by respondent ALOs included the act of institutional self-evaluation (not necessarily the
self-study in and of itself), university improvement, the opportunity to use accreditation
as a vehicle for institutional improvement, increased campus unity, an outside review, the
ability to offer financial aid, the mere fact of having accreditation, the reputation
provided by having accreditation, the opportunity to share best practices, the opportunity
to celebrate institutional accomplishment, and the avoidance of the repercussions
associated with not being accredited. In terms of using accreditation as a vehicle for
institutional improvement, this was done sometimes purposefully and deliberately,
sometimes incidentally. Respondents specifically mentioned that their institutions
variously used accreditation as an “opportunity,” an “impetus,” a “catalyst,” a
“requirement,” a “driver,” an “incentive,” a “tool,” a “force,” an “engine,” a “pressure,” a
“push,” a “stepping stone,” a “scare,” a “specter,” a “jump start,” an “inventory,” as
“leverage,” as an “assistance,” and as a “motivator” or “motivation.” This research
question examined whether and why ALOs felt that accreditation costs were justified.
Whether accreditation costs are justified.
The survey instrument asked ALOs directly whether they believed accreditation
costs to be justified. On average, the vast majority (77.7%) of respondents providing
dichotomous yes or no answers believed that costs were justified (i.e., 44.6% believed
that costs were justified, 12.8% believed that costs were not justified, 6.5% did not
answer the question clearly, and 36.0% did not answer the question). In contrast to the
tests on means for significant differences in direct costs, significant differences for this
question existed between accreditation regions and not between Carnegie classifications.
In particular, ALOs from MSCHE institutions approved the most strongly (45.4% believed costs to be justified and 7.2% believed costs not to be justified, meaning more than six times as many ALOs believed costs to be justified as did not), and those from WASC institutions approved the least strongly (44.6% believed costs to be justified and 33.9% believed costs not to be justified, meaning fewer than one and a half times as many ALOs believed costs to be justified as did not). The result for WASC institutions was incongruous with the emphasis the
regional accrediting agency has placed on making the accreditation process more
manageable for institutions; however, the recent changes that WASC has been implementing may in fact be in response to this general sentiment, and approval might
reasonably be expected to increase over the course of the next decade as all institutions
complete a review cycle implementing these changes. By Carnegie classification,
approval is lowest among special focus institutions (48.8% believed costs to be justified
and 20.9% believed costs not to be justified) and may be a reflection of the relatively
higher toll maintaining accreditation takes on smaller, more specialized institutions, a
theme present in the open-ended comments provided by ALOs from that classification of
school. On the other hand it is probable that significant differences do exist between
accreditation regions but not Carnegie classifications because each accreditation region
inevitably has its own unique set of process methods and techniques.
The literature review explored public opinion on accreditation and accreditation
cost throughout history and discovered emotional opposition expressed through quite
colorful language dating back almost a century. As might be expected, this same kind of passion and enthusiasm was evident from respondents who did not believe that costs were justified; however, the same emotion and colorful language were also evident among respondents who indicated that accreditation costs were justified, even as they acknowledged that the costs were considerable.
This pattern can be seen throughout the literature on accreditation costs too.
Studies conducted over the last several decades by accrediting institutions discovered a
general support for accreditation among university executives despite the associated high
costs (Andersen, 1987; Council for Higher Education Accreditation, 2006; Engdahl,
1981; Federation of Regional Accrediting Commissions of Higher Education, 1970;
Pigge, 1979; Puffer, 1970; Reidlinger & Prager, 1993; Romine, 1975; Warner, 1977).
The North Central Association explored the perception of accreditation costs and found
that 53% of respondents considered accreditation benefits to outweigh costs and that an
additional 33% of respondents considered accreditation benefits to equal costs, resulting
in a total of 86% of respondents who believed that benefits either equaled or outweighed
costs (Lee & Crow, 1998). Freitas (2007) discovered remarkably similar results in a
review of the costs of professional accreditation for baccalaureate nursing programs,
specifically that 81.3% of respondents believed benefits either equaled or outweighed
costs. The finding of this study that over three-quarters of respondents who clearly
answered the question believed that accreditation costs were justified closely reflects the
findings from these other studies. University administrators support accreditation efforts
while being simultaneously cognizant of the corresponding costs (Andersen, 1987; Ewell,
2008; Newman, 1996). As with the other studies cited here, participants in this study
generally supported accreditation despite its costs while acknowledging that there were
aspects of the accreditation process that could be improved.
Comments revealing respondent beliefs on whether accreditation costs were
justified.
Responding ALOs commented openly on whether accreditation costs were
justified, and an analysis of these comments provides further insight into why they
generally supported accreditation efforts. These comments fell into four categories. First,
about a third of the comments elaborated on why costs were justified. For these
respondents, their institutions had developed a sense of ownership of the accreditation
process and those directly involved had a personal stake in it. For these schools
accreditation was serving exactly the function it was originally intended to serve, that is,
providing the campus an opportunity for self-evaluation following a mindful, deliberate
institutional assessment intended to lead to university betterment. These representatives
took control of the process and used it for institutional learning, creating a base from
which they could better budget limited time and money resources for future evaluations.
These respondents frequently acknowledged that they were compelled to do the
accreditation review, but worked to make the most out of the opportunity.
Second, another one-third of the comments came from respondents who generally
believed that accreditation costs were justified but focused instead on how high the costs
were. Surprisingly, despite the fact that average document costs were generally more than
double the average site visit costs, the magnitude of the site visit costs was lamented six
times as often as the document costs. This may have been a recurring complaint because
the regional accreditor mandates and drives the visit, often dictating the number of
individuals involved and the timing. Therefore institutions have much less control over
the site visit than they do over the self-study document. Some ALOs had ideas or strong
feelings about specific ways that site visit costs could be reduced. As mentioned above,
the magnitude of indirect costs was an important theme running throughout these
comments as well.
Some respondents charged the regional accrediting agencies with imposing
prescriptive requirements while allowing little flexibility in execution, both for minimizing
accreditation work and for maximizing accreditation value to the institution. This was a
finding of Leef and Burris (2002), and recent policy recommendations published by
NACIQI also recognized this as a danger. Representatives from NACIQI commented:
“Some current requirements… may be seen as unnecessarily intrusive, prescriptive, and
granular in ways that may not advance system goals nor match institutional priorities, and
as costly in resources such as time, funds, and opportunity” (National Advisory
Committee on Institutional Quality and Integrity, 2012, p. 6). The American Council on
Education (2012) also recognized accreditors’ requirements leading to “busy work” as
contributing to cost (p. 27). Consequently, some respondents reported mixed feelings about
accreditation among various administrators and other campus constituencies at the same
institution, in contrast to the value accreditation held for other respondents in terms
of improving campus unity.
Conceivably ALOs and other institutional executives believed that costs were
high because of the way accreditation compliance compels institutions to confront and
resolve difficult problems or issues, some of which may have deep roots and generate
real resistance to change. There is however a real concern over the increasing complexity
of accreditation requirements and the corresponding rise in cost to meet them. Many
respondents opined that there had to be a better (i.e., less expensive) process for
accreditation. There was a frequent acknowledgment among ALOs that, realistically,
accreditation requires a constant institutional focus from someone dedicated specifically
to it to ensure constant compliance and currency even during periods of non-review.
The third group of comments, about 20% of the responses, reflected on how
accreditation costs were not justified. This was similar to the findings of Andersen’s
(1987) study that there was a strong sentiment that accreditation took too much time and
that the monetary cost was too high. The site visit (including required multiple site visits
where applicable) was again a frequent target of ire and was identified as one of the
reasons that accreditation costs were not justified. Shibley and Volkwein (2002) also
found the site visit to be a source of consternation for respondents in their study, and Dill
(1998) referred to problems with both the frequency of visits and the composition of the
visiting team. The personal toll on people at the school also figured prominently in these
comments and was another reason that the costs were not justified.
Some of these comments reflected on the fast pace of changing standards and
expectations as one of the reasons that accreditation costs were too high: Keeping up with
these changes was very difficult. Freitas (2007) also found “changes in expectations from
accrediting and governmental bodies” to be a separately listed concern contributing to
cost (p. 110). The federal advisory group NACIQI (2011) suggested the opposite: “The
accreditation system is one that does not subject institutions of higher education to rapid
changes in how they will be judged for purposes of accreditation (and therefore federal
financial aid)” (p. 3). The fact remains though that this was a real concern for respondents
to this study. The same NACIQI paper also recognized that, “the accreditors themselves
find their standards, criteria, and requirements being shaped by a federal agenda quite
apart from the traditions and interest of the voluntary peer review of an academic
enterprise” (p. 4). Sibolski (2012) recognized that this often results in individual
institutions holding the accreditors responsible for rising accreditation costs. It is possible
however that the evident changes are instead a result of tying eligibility for federal
financial aid to accreditation, and this could be driving changes to regional accreditation
in ways that are frustrating and overwhelming the institutions.
The final group of comments on the justification of accreditation costs, about 14%
in all, reflected a feeling that there is no real alternative to maintaining accreditation as
institutions presently do. This is for a number of reasons mostly related to the students at
the institution: the need to maintain eligibility for financial aid, the need to ensure the
transferability of credits between institutions, the need to enable the matriculation of
graduates into programs at other universities, and the need to establish the recognition of
the credentials or degrees the university confers so the students can acquire employment
after program completion. Although it is not required that an institution be accredited,
because of the federal government’s involvement through federal financial aid on which
institutions are largely dependent, accreditation has truly become a quasi-governmental
function (Finkin, 1979; Finkin, 1994b) and is therefore necessary for the survival of most
institutions. The alternative is to cease operations.
Aside from these concerns, however, institutions continue to maintain accreditation
because there is no truly viable alternative to the current accreditation system, a
sentiment evident from the literature on accreditation as well (Brittingham, 2008; Ewell,
2008; Hawkins, 1992; Orlans, 1972). A few comments pointed out that any other system
would just be more expensive. Wolff (2005) agreed, saying that, “Within the academic
community, there is general agreement that the costs of self-regulation are substantially
lower than if state or federal governments were to assume this function” (p. 87).
Reidlinger and Prager (1993) encouraged institutions to “examine accreditation expenses
in relation to available resources and determine the replacement costs of accreditation
benefits” (p. 44). In other words, the irreplaceability of accreditation notwithstanding, each
institutional representative should recognize that the institution “might still need to assess
its programs and itself as an institution through self-studies and external reviews, for
many of the reasons that gave rise to organized accreditation in the first place” (pp. 44-
45). Even without accreditation, “institutions and programs would still seek natural
affiliations with external professional bodies of like institutions and programs, and these
affiliations have attendant costs” (p. 45). It appears from their responses in this study that
ALOs were aware of the inevitability of these attendant costs even if not explicitly, and
this is likely why so many of the respondents felt that the costs were justified. There
seems to be at least a recognition that the current mechanism for accreditation, no matter
how onerous parts of it are, is preferable to, and less expensive than, anything anyone has
been able to envision as a viable alternative.
Research question #3: Conclusion.
Generally speaking, ALOs do believe that accreditation costs are justified. The
perceived benefits are real and numerous, and institutional executives genuinely
appreciate the opportunities the accreditation process provides. Not only are these
intended benefits good, but according to ALO comments institutions are in fact deriving
the values accreditation was originally conceived to offer. Nevertheless ALOs are acutely
aware of the magnitude of accreditation costs and (perhaps appropriately) do not
disassociate accreditation’s benefits from its frustrations. As a group therefore they view
costs as justified despite the great extent to which they lament them.
Research Question #4: What kinds of patterns emerge in accreditation commitment
between types of institutions?
Because of the wide participation in this study by ALOs from across the spectrum
of Carnegie classifications in three out of the six accreditation regions, patterns can be
identified. This question explores patterns between the ALOs themselves as well as the
institutions and regions they represent.
Patterns between Accreditation Liaison Officers.
Survey respondents were asked whether they were currently serving as the ALO.
On average 90.4% of respondents were serving as ALO, although this varied between
85.4% of respondents from MSCHE institutions at the low end and 96.4% of respondents
from WASC institutions at the high end. There was a similar range by institutional
type, from 85.4% of respondents from master’s institutions at the low end to 95.3% of
respondents from baccalaureate institutions at the high end. More revealing, however, was the set of responses to the
question on whether the respondent was the ALO at the time of the last review. Only
51.4% of respondents indicated that they had been the ALO at that time, and the
variability in percentages between groups was slightly greater: between 50.0%
of respondents from MSCHE institutions and 63.0% of respondents from WASC
institutions by region, and between 49.0% of respondents from baccalaureate institutions
and 59.5% of respondents from special focus institutions by Carnegie classification. The ALOs from WASC institutions had
the highest percentage of respondents who were currently serving as ALO and who had
been ALO at the time of the previous review, perhaps indicating a stronger support for
accreditation in the WASC region. Regardless, the turnover of ALOs between the last
institutional review and the time of this study could be considered high, and may indicate
a general lack of adequate institutional support for accreditation activities. All institutions
obviously make some commitment to accreditation to ensure that the process is
successful; however, in many cases that commitment may not be sufficient to maintain
consistency in the ALO position. Such consistency would reduce costs overall, particularly
time costs, by preserving what can be irreplaceable direct experience with accreditation at
each specific institution.
As previously noted, for all three accreditation regions the professional most
frequently serving as ALO was either a Vice President or a Provost, accounting for
61.7% of positions in all three participating regions. The assignment of accreditation
responsibilities to a Provost is certainly consistent with the academic nature of the
evaluation as originally intended historically; however each institution will make the
assignment according to what makes the most sense for that setting and culture. There
were very few occurrences of a faculty member serving as ALO in any of the three
regions. While, as demonstrated, faculty had significant involvement in the accreditation
process, they were not being asked to coordinate the process. In all three participating
regions a Dean figured much more prominently as ALO for baccalaureate and special
focus institutions than for doctoral/research or master’s institutions, possibly because a
Dean at one of these generally smaller schools could be serving the same role as a
Provost at a more complex doctoral/research or master’s university. A staff member (at
the Director level or another level) was serving as ALO in a small number of institutions
throughout the sample. There were however very few instances of an ALO with
“Accreditation Liaison Officer” in the title (only 16 occurrences, or 1.3% of all titles)
although 35 positions did have the word “accreditation” in the title (2.9% of total
positions). This reveals that the individuals serving as ALO are almost always filling
many other important professional responsibilities for the campus, and accreditation
duties figure as only part of their overall work assignments. This must be especially true
for cases where the President is serving as ALO. The university President served as ALO
only infrequently, however: never in WASC, only three times in SACS, and 20 times in
MSCHE. As might be expected, this happened least often for doctoral/research institutions
and most often for special focus institutions.
Patterns between Carnegie classifications.
A theme that emerged in this study that reflected the literature in general was the
difference in the nature of accreditation commitment between institutions by Carnegie
classification. Some commenters indicated that they were already doing a lot of what was
considered accreditation activity as part of their institutional practice anyway. This
reflects the findings of Lee and Crow (1998) that doctoral/research institutions might
already have processes in place internally to serve the purposes intended by accreditation.
In a few related comments, ALOs reflecting on the immensity of resources required to
maintain accreditation sympathized that they had “no idea how other institutions will survive.”
These comments mostly came from ALOs at large, doctoral/research institutions but were
reflected by several comments from special focus institutions referring to the small size
of the school as the reason they were overwhelmed by accreditation costs. By way of
direct contrast however, one small institution cited its small size as the reason it could
better make the necessary changes as required.
Bloland (2001) suggested that some institutions have more credibility than the
accreditation process itself. The accreditation of these institutions often has greater
benefit for other schools because they become associated with such highly reputed
schools by having the exact same accreditation (this due largely to the present inability of
the accreditation system to distinguish between levels of accreditation: an institution is
either accredited or it is not). Consequently it might be reasonable to assume that larger,
more complex universities (such as doctoral/research universities) spend less on
accreditation than other schools. The data are inconclusive on this point. On one hand, a
review of the direct costs by Carnegie classification reveals that this is far from the case:
doctoral/research institutions spend the most, with special focus institutions spending the
least. On the other hand, a review of the total costs including monetized indirect costs
reveals that master’s institutions spend more than doctoral/research institutions (with
special focus institutions again spending the least). Therefore it appears that institutional
commitment made to accreditation is more likely a function of institutional resources.
Doctoral/research universities, with generally larger and more flexible budgets, spend
more on direct costs; however, they also disperse the time costs more evenly throughout
the ranks of available personnel, resulting in lower monetized indirect costs. Special focus
institutions, generally smaller in size and with more limited resources, have less ability to
do either of these things.
An important limitation on this study was the high variability of reported costs
within each of the categories. While this was expected for the categories of regional
accreditors it was also evident within the categories of Carnegie classifications. As noted
earlier, this variability is representative of the great variation between institutions and
institutional budgets within Carnegie classifications, which makes this a question of
relative cost. Strikingly, the magnitude of accreditation costs (both direct and indirect) is
much more consistent within each Carnegie classification than either the range of
institutions or the size and complexity of institutional budget within these categories. For
example, the average combined direct cost of doctoral/research institutions is
approximately two and a half times that of special focus institutions; however, the average
institutional budget for doctoral/research institutions exceeds that of special focus
institutions by much more than two and a half times. This study therefore shows that
accreditation costs are relatively much greater for smaller institutions (i.e., special focus,
baccalaureate, etc.) than for larger, more complex institutions with immense budgets, and
an awareness of this was evident in responses to the survey. This follows naturally from
observations about the ability of these larger schools to better absorb direct accreditation
costs in larger budgets and to better disperse indirect accreditation costs among more
varied levels of personnel. Accreditation costs that are considered expensive for one
institution (or type of institution) are not necessarily expensive for another. The
consistency of reported costs within these standardized Carnegie classifications, despite
the high variability and generally non-normal distribution, is remarkable given the
institutional variation otherwise.
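The relative-cost observation can be stated compactly. Using illustrative symbols that do not appear in the survey instrument, let C and B denote average accreditation cost and average institutional budget, with subscripts d and s for doctoral/research and special focus institutions. The pattern reported above (roughly C_d ≈ 2.5 C_s while B_d exceeds 2.5 B_s by a wide margin) implies
\[
\frac{C_d}{B_d} \approx \frac{2.5\,C_s}{B_d} < \frac{2.5\,C_s}{2.5\,B_s} = \frac{C_s}{B_s},
\]
so accreditation consumes a smaller share of the budget at doctoral/research institutions than at special focus institutions, which is the sense in which the relative burden falls more heavily on smaller schools.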
In terms of benefits, ALOs from master’s institutions cited most of the identified
benefits more frequently than ALOs from other institutions. Most notably, these
respondents most frequently cited the self-evaluation, the use of accreditation as a
vehicle for improvement, and the simple fact of having accreditation as benefits of the
process. The ALOs from special focus institutions, on the other hand, were the most
frequent to mention the reputation provided by having accreditation as a benefit, possibly
because of the relative importance of reputation for special focus institutions. With fewer
programs to attract students these schools are more reliant on their reputation in a specific
field for survival. Respondents from special focus institutions also least frequently
claimed campus unity as a benefit of accreditation, possibly because of the relatively
small size and consequent close-knit feel of such schools already.
Patterns between accreditation regions.
Because statistical significance was not discovered for means of costs between
accreditation regions it is more difficult to draw conclusions about patterns between these
groups. This consideration is important however. Peterson and Augustine (2000) found
the region of accreditation to be a primary influence on the ways in which institutions
approached student assessment. It is reasonable to expect that each regional accreditor
with different priorities and foci will exert a profound influence on the execution of
the accreditation process at institutions and consequently on the results of accreditation
activities.
Of the three participating regions it appears that costs are greatest for SACS
institutions and lowest for WASC institutions. The higher cost associated with SACS
institutions may be related to the Quality Enhancement Plan (QEP) that is part of SACS
accreditation. Conversely the lower cost associated with WASC institutions may be
misleading as WASC has previously conducted more than one site visit as part of the full
accreditation cycle, and the ALO may have misreported costs for only part of the
institutional review cycle rather than the full cycle including all relevant visits. This
danger is particularly pronounced where turnover in the position of ALO is high between
accreditation reviews as has been demonstrated. Institutions in the SACS region were the
most frequent to cite university improvement as a benefit (again, quite possibly because
of the Quality Enhancement Plan) and the least frequent to cite the ability to offer
financial aid as a benefit. Institutions in the WASC region were the most frequent to cite
the self-evaluation and simply having accreditation as a benefit.
Institutions in the WASC region were also the most frequent to cite the
opportunity to use accreditation as a vehicle for improvement as a benefit, possibly
because of the desire of WASC leadership to treat this question explicitly on the survey
instrument. Institutions in the WASC region manifested the lowest average time since the last
review, just under three years as opposed to five years and one month for MSCHE
institutions and four years and nine months for SACS institutions (although again this
could be the result of a process involving multiple visits over the course of a “full”
institutional review). An average of 67% of accreditation costs were incurred solely to meet
accreditation requirements. This would seem to indicate that WASC is pushing universities
to spend roughly three times what they would otherwise spend on institutional improvement.
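One way to make the “three times” estimate explicit, if one assumes that the remaining 33% of costs approximates what an institution would have spent on such improvement work even without the accreditation requirement, is:
\[
\frac{\text{total accreditation-related spending}}{\text{spending that would occur anyway}} \approx \frac{100\%}{100\% - 67\%} = \frac{100}{33} \approx 3.
\]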
Many comments from WASC respondents recognized and appreciated the changes
WASC was making to accreditation requirements to make the process less intrusive on
institutions. The changes WASC has been making include providing more frequent, low-
stakes opportunities for institutional assessment (Crow, 2009; Smith & Finney, 2008),
promoting positive institutional change rather than reluctant accreditation compliance
(Ewell, 2008; Smith & Finney, 2008), developing institutional benchmarks for student
retention and completion (Kelderman, 2011), “making more of its communications with
colleges public” (Kelderman, 2011, para. 18), and (most practically) amending the
process of multiple reports and site visits. Many of these changes “are meant in part to
delay more federal regulation of the accreditation process” (Kelderman, 2011, para. 19)
and might therefore be more a reflection of the fear of an alternative system of
accreditation or more stringent federal regulation. Some of these changes will potentially
result in lower time and money costs for institutions and ALOs; however some of them
could mean increased costs. The net effect will only be apparent after some years have
passed.
Research question #4: Conclusion.
The findings of this study indicate that a university’s Carnegie classification had
more effect on institutional accreditation commitment than the accrediting region in
which it was located. Differences in means between accrediting regions were not
significant, probably because of the isomorphism (mimetic, normative, and to a certain
extent coercive) of higher education nationally (DiMaggio & Powell, 1983; Kezar, 2009;
Leslie & Rhoades, 1995; Volkwein & Zhou, 2003). On the other hand the size and
complexity of an institution in terms of its administrative structure and budget seemed to
play a defining role in determining the amount of available resources (fiscal, time,
personnel, etc.) an institution could commit to the accreditation process. Institutions
falling in the Carnegie classification of doctoral/research institutions and to a certain
extent master’s institutions generally displayed the greatest commitment to accreditation
in terms of dollars and time spent, while special focus institutions seemed to have fewer
resources that they were able to bestow upon the process.
Implications for Practice
This study will be useful to institutional leaders because it provides a better
understanding of the real fiscal and personal costs of the time being committed to
accreditation. Further, it provides insight into how colleges and universities manage the
costs associated with accreditation.
Budget Implications
Significant differences do not exist between either accreditation regions or
Carnegie classifications for ALO time spent on accreditation outside of the
period preparing for the full institutional review, and (as noted above) individuals serving
as ALO are almost always additionally filling a wide range of other important
professional responsibilities for the campus outside of actual accreditation
responsibilities. Those serving as ALO are indeed required to make time for the
accreditation process from a schedule already filled with other professional
responsibilities, and the eventual completion of the process does seem to free up time that
must be filled subsequently (and usually quite easily). The open-ended comments
provided by respondents illustrated how exacting it could be trying to meet all
professional expectations including accreditation duties, and striving to do so often even
affected the ALO personally in his or her quality of life. As cited previously, one survey
respondent illustrated the importance of providing adequate support:
Accreditation is a constant concern in organizing, hiring, curriculum
development, establishment of branch campuses, and substantial changes.
Presidents and system administrators do not budget for this time spent on
accreditation duties. Therefore, accreditation activities are added on top of an
already heavy workload. It is critical, but not properly accounted for in terms of
hundreds and in some cases thousands of hours required in preparing and
documenting the meeting of standards.
Therefore extra time must be budgeted for the execution of accreditation responsibilities.
Lasher and Greene (2001) recognized the growing demands of accreditation: “An
increasing amount of faculty, staff, and financial resources are necessary to develop and
update costly databases and tracking systems that are requisite to maintain compliance”
(p. 538). They and others stress the need to guard as much as possible against
unanticipated costs (Lasher & Greene, 2001; Willis, 1994) because a schedule that does
not allow adequate flexibility to cope with accreditation demands to begin with will
certainly not withstand the added pressure of unforeseen demands. Additionally,
the long period of time between accreditation reviews, especially when complicated by
turnover in the ALO position, can compromise institutional ability to execute an efficient
accreditation review (Kells, 1976; Wolff, 2005). As institutional memory of the previous
accreditation review wanes or “evaporates,” the next review will invariably be less
effective, compromised by the institution’s inability to build upon the previous review.
As Kells and Kirkwood (1979) observed in their chapter on institutional self-evaluation
processes:
Very few of the institutions were able to call upon the results of an ongoing,
active, broadly based institutional research capacity to respond to basic questions
about program functioning, goal achievement, educational effectiveness, strengths
and weaknesses of processes, and the like. In short, they did not have a choice;
they had to conduct a comprehensive study. (p. 37)
The turnover rate within the ALO position is particularly problematic, and as
shown by the survey results might also be the result of a lack of adequate institutional
support. Where there is turnover in the ALO position the following accreditation review
will inevitably be more expensive. The new ALO may have little or no experience
actually managing the accreditation process or might need to learn the distinctive
institutional context for accreditation, or both. Training will be necessary and a certain
degree of tolerance for the learning curve will have to be allowed. What is not entirely
clear is whether this turnover is more the result of unreasonable or unsustainable
expectations of the ALO or whether the rate of turnover is normal and consistent with
what would be expected of a senior administrative position in higher education anyway.
Given that the majority of individuals serving as the ALO hold a senior administrative title
(i.e., Provost or Vice President), a certain degree of turnover could at the very least be
expected (Kezar, 2009; Volkwein & Zhou, 2003). At any rate, the introduction to this
study acknowledged that it can be difficult to know what constitutes adequate support for
the accreditation process. These data clearly indicate the answer to be: generally more
than is being provided now.
Accreditation Costs, Though Considered High, Are Perceived as Justified
There is widespread recognition in the literature on accreditation, evident also
from the responses contributed to this study, that there must be some kind of oversight of
higher education. It is also generally accepted that a system different from the current
mechanism of accreditation would be more expensive and more restrictive. The data
collected through this study indicate that ALOs believe accreditation costs to be justified.
Because of the link between accreditation and eligibility for federal funding, accreditation
is functionally mandatory rather than optional or voluntary. Additionally, students are
unlikely to attend an institution that cannot assure them with a reasonable degree of
certainty that the education they acquire and the credential they attain there will be valued
by other entities once they move on. Without accreditation institutions would have
neither the funding necessary to continue their operations nor the students for whom they
would operate. Therefore, more than three times as many responding ALOs believed that
accreditation costs were justified as did not, even while many simultaneously decried
those costs. This acknowledgment provides the justification
necessary for increased budgetary allowance of accreditation support.
Other Implications
In this study the site visit was a frequent object of frustration and dissatisfaction.
Based on ALO sentiment expressed through responses to the survey, effort made to
reduce the cost or institutional intrusiveness of the site visit would be welcomed.
Regardless of whether either of these can be altered, a better understanding of the
attendant costs and the amount of control the university is able to exert over them would
benefit institutional involvement.
Finally, with the monetization of indirect costs, a total cost of accreditation can be
assessed that is more realistic than a tally of direct fiscal costs alone. Through a
consideration of that total as well as its per-year expense, a deliberate decision can be
made as to whether the cost is acceptable, and more specific amendments can be
entertained if it is not.
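A simple way to express such a per-year figure, assuming indirect costs are monetized as personnel hours multiplied by corresponding salary rates (the symbols below are illustrative only and are not drawn from the survey instrument), is:
\[
\text{annual accreditation cost} \approx \frac{\text{direct costs} + \sum_i \text{hours}_i \times \text{rate}_i}{\text{years in the review cycle}}.
\]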
Implications: Conclusion
This study revealed a portrait of the institutional ALO as a dedicated, purposeful
professional who does not shy away from the formidable work of managing the
accreditation process. Thanks to the commitment routinely manifested by these
individuals, the self-regulation of higher education is made possible and the hope of
NACIQI is being realized, namely that “institutions [are] sufficiently involved and
invested in understanding the issues, arriving at self-regulatory solutions, and establishing
principles to ensure institutional compliance” (National Advisory Committee on
Institutional Quality and Integrity, 2012, p. 3). On the other hand, the ALO is commonly
overwhelmed by a steady, unvarying load of responsibilities, even to the extent
that his or her personal life is affected. The extraordinary importance of the ALO’s duties
is not likely to change nor is it likely that any ALO expects it to do so; however it is
imperative to account for this while establishing institutional accreditation policy,
particularly as it pertains to budget allocations for time and support.
Future Research
As has been noted, accreditation has not been widely researched and there is a
need for further empirical study particularly as it relates to the costs of accreditation
(Shibley & Volkwein, 2002). There is also a lack of quantitative research on the topic
which might provide greater generalizability to the broader higher education community.
The intention of this study was to draw conclusions that could be applied across the
landscape of higher education; however, while the analyses presented here have broad
implications and applications, the high variability and non-normal distribution of the
submitted data prevented this. Therefore the replication of this survey on a larger scale,
specifically with the involvement of the other three accrediting agencies, would be highly
beneficial to developing a better understanding of accreditation costs. Additionally the
relatively high turnover evident among ALOs suggests that a study such as this might be
repeated without engendering undue ALO fatigue.
The rate of turnover among ALOs is another area that would benefit from future
research. It would be insightful to compare the rate of turnover among ALOs with the rate
of turnover among other senior administrators to determine whether ALO turnover is
extraordinarily high.
As noted above, means of direct costs by institution type varied between Carnegie
classifications; however, these differences could not be attributed to a difference in
priority. A more thorough examination of the various and precise subcategories of direct
costs would help clarify this issue.
It was also observed above that the site visit costs solicited for this study included
only direct costs to each reporting institution. A more accurate reflection of the grand
total site visit cost would include the costs to all individuals including those serving in a
volunteer capacity from other institutions.
This study asked ALOs to report the direct and indirect costs associated with the
institution’s previous accreditation review. Consequently respondents amply and
appropriately qualified their submissions by commenting on the estimated nature of the
data shared. Thus it would be valuable to have several institutions approach a review
cycle with the intention of carefully tracking direct and indirect costs. Subsequently this
study could be replicated with improved reliability. This is consistent with Parks’ (1982)
recommendation concerning “the need for systematic, on-going data collection from a
sample of institutions with which agreements permit common costing and reporting
techniques” (p. 12).
It would also be useful and informative to contextualize how high accreditation
costs are relative to institutional budgets. This could be done by comparing reported
accreditation costs to published institutional budgets by Carnegie classification. It would
be most useful to further disaggregate data within Carnegie classifications by taking full
advantage of the categories in each: level of research activity for doctoral/research
institutions (very high research activity, high research activity, and research universities),
program size for master’s institutions (larger programs, medium programs, and smaller
programs), kinds of bachelor’s degrees awarded for baccalaureate institutions (arts and
sciences, diverse, and baccalaureate/associate’s), and field of concentration for special
focus institutions (theological, medical, other health, engineering, technology, business,
art, law, and other). An analysis of this type would also presumably help address the
limitation resulting from high variability of reported costs and the ensuing non-normal
distribution because the costs for these smaller categories would possibly be more
uniform.
Given that the primary recommendation emerging from this study concerns the need
for greater budgetary allowance for accreditation costs, beneficial future research might
also include a survey of how institutional accreditation budgets change or have changed
over time. Warner (1977) found that about one-third of institutions responding to a
survey on accreditation influences had changed budget allocations based on accreditation
results although the study did not examine how. This topic should be explored in a
contemporary setting.
Finally, consideration of fees assessed by regional accreditors fell outside the
scope of this study; however, they constitute a real part of the direct costs of accreditation
and some participants indicated an interest in investigating these along with other direct
costs. Eaton (2012a) noted that in 2008-2009 “accrediting organizations collected and
spent more than $98 million in fees to fund more than 760 full-time and part-time
professionals and thousands of volunteers who reviewed or took other actions… on
approximately 3,000 institutions and more than 4,600 programs” (p. 9). It would be
useful to consider how accreditor fees vary between regions along with their impact on
the total budget for direct costs.
Conclusion
Accreditation was never intended to police institutions, but wielding the
“constancy” and “epic pace” of a glacier (Banta & Associates, 2002, p. 253) it is a
powerful tool. It has become such an important mechanism for quality control that it is
not likely to be replaced and, as Wolff (2005) noted, “it will undoubtedly remain as a
major force of accountability” (p. 102). While the costs associated with accreditation may
in fact be, as one survey respondent noted, only the tip of the iceberg, these attendant costs will
be more palatable if approached proactively (Wolff, 2005). Consequently university
representatives must carefully weigh both the costs and the benefits of the process
(National Advisory Committee on Institutional Quality and Integrity, 2012; Reidlinger &
Prager, 1993).
With so many different constituencies relying upon accreditation, and with
accountability being required at so many different levels, the process absorbs substantial
time and resources. According to Ikenberry (2009):
While we cherish the right of self-regulation, we tend to shortchange the
necessary commitment of time, attention and resources essential to make the
system credible and sustainable. If self-regulation is to work, the academic
community must take the responsibility seriously—seriously in terms of attention,
seriously in terms of standards and expectations, seriously in terms of the quality
of the evidence and processes used in reaching decisions, and seriously in terms
of the level of resources committed to the enterprise. Failing that, accreditation
will continue to live on the edge, never quite gaining the credibility it seeks and
needs on campus and beyond. (p. 10)
Institutions therefore must determine with increasing intentionality how they will
approach accreditation and its attendant processes (Reidlinger & Prager, 1993).
Costs are incontestably high. Institutions reserve limited resources for them and
ALOs lament them. The ardent frustration with accreditation apparent from the literature
review was also present in the opinions of survey respondents, those managing the
accreditation process personally. Yet despite acute awareness of the costs, ALOs believe
that they are justified: The process benefits the institution in multiple, valuable ways. The
data gathered here indicate that institutional accreditation would benefit from greater
support of the process, particularly with respect to the associated indirect costs. On the
other hand, there is no more committed professional than the Accreditation Liaison
Officer, as characterized by the responses in this study, to translate an increased institutional
commitment into the betterment of the institution.
REFERENCES
Accrediting Commission for Senior Colleges and Universities Western Association of
Schools and Colleges. (2002). A guide to using evidence in the accreditation
process: A resource to support institutions and evaluation teams. Alameda, CA:
Western Association of Schools and Colleges.
Adelman, C., & Silver, H. (1990). Accreditation: The American experience. London,
England: Council for National Academic Awards.
Alderman, G., & Brown, R. (2005). Can quality assurance survive the market?
Accreditation and audit at the crossroads. Higher Education Quarterly, 59(4),
313-328.
Amaral, A. M. S. C. (1998). The US accreditation system and the CRE’s quality audits:
A comparative study. Quality Assurance in Education, 6(4), 184-196.
Amaral, A., Rosa, M. J., & Tavares, D. A. (2009). Supra-national accreditation, trust and
institutional autonomy: Contrasting developments of accreditation in the United
States and Europe. Higher Education Management and Policy, 21(3), 15-32.
American Accounting Association, Committee on Consequences of Accreditation.
(1977). Report of the Committee on Consequences of Accreditation, 52, 165, 167-
177.
American Association for Higher Education. (1997). Assessing impact: Evidence and
action. Washington, DC: American Association for Higher Education.
American Association of University Professors. (2011). Explanation of statistical data.
2010-11 report on the economic status of the profession. Retrieved from
http://www.aaup.org/AAUP/comm/rep/Z/ecstatreport10-11/explan.htm
American Council of Trustees and Alumni. (2007). Why accreditation doesn’t work and
what policymakers can do about it. Washington, DC: American Council of
Trustees and Alumni. Retrieved from
https://www.goacta.org/publications/downloads/Accreditation2007Final.pdf
American Council on Education. (2012). Assuring academic quality in the 21st century:
Self-regulation in a new era: A report of the ACE National Task Force on
institutional accreditation. Washington, DC: American Council on Education.
Retrieved from
http://www.acenet.edu/AM/Template.cfm?Section=Government_Relations_and_
Public_Policy&Template=/CM/ContentDisplay.cfm&ContentID=45275
American Medical Association. (1971). Accreditation of health educational programs.
Part I: Staff working papers. Washington, DC: American Medical Association.
Andersen, C. J. (1987). Survey of accreditation issues. Washington, DC: American
Council on Education.
Arnstein, G. (1979). Two cheers for accreditation. The Phi Delta Kappan, 60(5), 357-
361.
Asgill, A. (1976). The importance of accreditation: Perceptions of Black and White
college presidents. The Journal of Negro Education, 45(3), 284-294.
Astin, A. W. (1968). Undergraduate achievement and institutional “excellence.” Science,
161(3842), 661-668.
Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation
principles and practices. The Journal of Academic Librarianship, 28(1), 3-7.
Banta, T. W., & Associates. (1993). Making a difference: Outcomes of a decade of
assessment in higher education. San Francisco, CA: Jossey-Bass.
Banta, T. W., & Associates. (2002). Building a scholarship of assessment. San Francisco,
CA: Jossey-Bass.
Banta, T. W., & Associates. (2004). Hallmarks of effective outcomes assessment:
Assessment update collections. San Francisco, CA: Jossey-Bass.
Barak, R. J., & Breier, B. E. (1990). Successful program review: A practical guide to
evaluating programs in academic settings. San Francisco, CA: Jossey-Bass.
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the
individual college or university: Five trends and their implications. New
Directions for Higher Education, 145, 47-58.
Barzun, J. (1993). The American university: How it runs, where it is going. Chicago, IL:
University of Chicago Press.
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality
review. New Directions for Community Colleges, 126, 65-72.
Bernhard, A. (2011). Quality assurance in an international higher education area: A
case study approach and comparative analysis. Wiesbaden, Germany: VS Verlag
für Sozialwissenschaften.
Bers, T. H. (2008). The role of institutional assessment in assessing student learning
outcomes. New Directions for Higher Education, 141, 31-39. doi: 10.1002/h3
Bitter, M. E., Stryker, J. P., & Jens, W. G. (1999). A preliminary investigation of the
choice to obtain AACSB accounting accreditation. Accounting Educators’
Journal, XI, 1-15.
Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States
Government Printing Office.
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation
(CHEA). Phoenix, AZ: Oryx Press.
Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education: An
introduction to theories and methods. Boston, MA: Pearson Education, Inc.
Brennan, J. (1997). Authority, legitimacy and change: The rise of quality assessment in
higher education. Higher Education Management, 9(1), 7-24.
Britt, B., & Aaron, L. (2008). Nonprogrammatic accreditation: Programs and attitudes.
Radiologic Technology, 80(2), 123-129.
Brittingham, B. (2008, September/October). An uneasy partnership: Accreditation and
the federal government. Change, 32-38.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we
are? New Directions for Higher Education, 145, 7-27. doi:10.1002/he.331
Brittingham, B. (2012). Higher education, accreditation, and change, change, change:
What’s teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.),
Inquiry, evidence, and excellence: The promise and practice of quality assurance
(59-75). Washington, DC: Teacher Education Accreditation Council. Retrieved
from http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Burke, J. C. & Associates. (2005). Achieving accountability in higher education:
Balancing public, academic, and market demands. San Francisco, CA: Jossey-
Bass.
Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance
indicators for assessing classroom teaching practices and student learning: The
case of engineering. Research in Higher Education, 42(3), 327-352.
Capen, S. P. (1931). The principles which should govern standards and accrediting
practices. Bulletin of the American Association of University Professors, 17(7),
550-552.
Capen, S. P. (1939). Seven devils in exchange for one. In Coordination of Accrediting
Activities, (5-17). Washington, DC: American Council on Education.
Carey, K. (2009, September/October). College for $99 a month. Washington Monthly.
Retrieved from http://www.washingtonmonthly.com
Carey, K. (2010). Death of a university. In K. Carey & M. Schneider (Eds.),
Accountability in American higher education. New York, NY: Palgrave
Macmillan.
Casile, M., & Davis-Blake, A. (2002). When accreditation standards change: Factors
affecting differential responsiveness of public and private organizations. Academy
of Management Journal, 45(1), 180-195.
Chernay, G. (1990). Accreditation and the role of the Council on Postsecondary
Accreditation. Washington, DC: Council on Postsecondary Accreditation.
Christal, M. E., & Jones, D. P. (1995). A common language for postsecondary
accreditation: Categories and definitions for data collection. Boulder, CO:
National Center for Higher Education Management Systems.
Clark, B. R. (1983). The higher education system: Academic organization in cross-
national perspective. Berkeley, CA: University of California Press.
Clitheroe, H. (2010). Academic accreditation and the postmodern condition: A critical
analysis of practices in postsecondary education. Journal of Integrated Studies,
1(1), 1-10.
College and University Professional Association for Human Resources. (2011a).
Administrative compensation survey: For the 2010-11 academic year. Retrieved
from http://www.cupahr.org/surveys/files/salary2011/AdComp11
ExecutiveSummary.pdf
College and University Professional Association for Human Resources. (2011b). Mid-
level administrative & professional salary survey: For the 2010-11 academic year.
Retrieved from http://www.cupahr.org/surveys/files/salary2011/MidLevel11_
Executive_Summary.pdf
Commission of the European Communities. (1993). Quality management and quality
assurance in European higher education: Methods and mechanisms. Brussels,
Belgium: Commission of the European Communities.
Council for Higher Education Accreditation. (2003). Statement of mutual responsibilities
for student learning outcomes: Accreditation, institutions, and programs.
Washington, DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/StmntStudentLearningOutcomes9-03.pdf
Council for Higher Education Accreditation. (2006). Presidential perspectives on
accreditation: A report of the CHEA Presidents Project. Washington, DC:
Council for Higher Education Accreditation.
Council for Higher Education Accreditation. (2010). Quality review 2009: CHEA
almanac of external quality review. Washington, DC: Council for Higher
Education Accreditation.
Council for Higher Education Accreditation. (n.d.) The CHEA initiative: Building the
future of accreditation. Retrieved from http://www.chea.org/About/CI/index.asp
Council of Regional Accrediting Commissions. (2003). Regional accreditation and
student learning: Principles for good practices. Retrieved from
http://www.ncahlc.org/download/0412AssessmentAccredLearningPrinciples.PDF
Council of Regional Accrediting Commissions. (n.d.). A guide for institutions and
evaluators. Retrieved from
http://www.sacscoc.org/pdf/handbooks/GuideForInstitutions.pdf
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. Los Angeles, CA: Sage Publications, Inc.
Crow, S. (2009). Musings on the future of accreditation. New Directions for Higher
Education, 145, 87-97. doi:10.1002/he.338
Daoust, M. P., Wehmeyer, W., & Eubank, E. (2006). Valuing an MBA: Authentic
outcome measurement made easy. Unpublished manuscript. Retrieved from
http://www.momentumbusinessgroup.com/resourcesValuingMBA.pdf
Davenport, C. A. (2000). Recognition chronology. Retrieved from http://www.aspa-
usa.org/documents/Davenport.pdf
Davis, C. O. (1945). A history of the North Central Association of Colleges and
Secondary Schools 1895-1945. Ann Arbor, MI: The North Central Association of
Colleges and Secondary Schools.
Denoya, L. E. (2005, July). Accreditation, curriculum model, and academic audit
strategies for quality improvement in higher education. Paper presented at the
Sixth Annual International Conference on Information Technology Based Higher
Education and Training, Juan Dolio, Dominican Republic. Retrieved from
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01560271
Department of Health, Education, and Welfare. (1973). The second Newman report:
National policy and higher education. Cambridge, MA: The MIT Press.
Dickeson, R. C. (2006). The need for accreditation reform. Issue paper (The Secretary of
Education’s Commission on the Future of Higher Education). Washington, DC.
Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/dickeson.pdf
Dickey, F. G., & Miller, J. W. (1972). A current perspective on accreditation.
Washington, DC: American Association for Higher Education.
Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. M. (1996, September/October).
Accreditation and academic quality assurance: Can we get there from here?
Change 28(5), 16-24.
Dill, W. D. (1998). Specialized accreditation: An idea whose time has come? Or gone?
Change 30(4), 18-25.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode
surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons, Inc.
Dillon, P. (1997). Credentialing of teacher professional development activities. Retrieved
from http://www.aare.edu.au/97pap/dillp353.htm
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional
isomorphism and collective rationality in organizational fields. American
Sociological Review, 48(2), 147-160.
Doerr, A. H. (1983). Accreditation: Academic boon or bane. Contemporary Education,
55(1), 6-8.
Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment
processes that promote institutional improvement and faculty engagement.
Sterling, VA: Stylus Publishing, L.L.C.
Eaton, J. S. (2001, March/April). Regional accreditation reform: Who is served? Change,
33(2), 38-45.
Eaton, J. S. (2003a). Is accreditation accountable? The continuing conversation between
accreditation and the federal government. Washington, DC: Council for Higher
Education Accreditation.
Eaton, J. S. (2003b). The value of accreditation: Four pivotal roles. Washington, DC:
Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/pres_ltr_value_accrd_5-03.pdf
Eaton, J. S. (2007, September/October). Institutions, accreditors, and the federal
government: Redefining their “appropriate relationship.” Change, 16-23.
Eaton, J. S. (2008, July/August). Attending to student learning. Change, 22-27.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher
Education, 145, 79-86. doi:10.1002/he.337
Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe,
96(5), 21-24.
Eaton, J. S. (2011a). An overview of U.S. accreditation. Washington, DC: Council for
Higher Education Accreditation. Retrieved from http://www.chea.org/pdf/
Overview%20of%20US%20Accreditation%2003.2011.pdf
Eaton, J. S. (2011b). U.S. accreditation: Meeting the challenges of accountability and
student achievement. Evaluation in Higher Education, 5(1), 1-20.
Eaton, J. S. (2012a). The future of accreditation. Planning for Higher Education, 40(3),
6-7.
Eaton, J. S. (2012b). What future for accreditation: The challenge and opportunity of the
accreditation – federal government relationship. In M. LaCelle-Peterson & D.
Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of
quality assurance (77-88). Washington, DC: Teacher Education Accreditation
Council. Retrieved from http://www.teac.org/wp-
content/uploads/2012/03/Festschrift-Book.pdf
Edler, F. H. W. (2004). Campus accreditation: Here comes the corporate model. Thought
and Action, 19(2), 91-104.
El-Khawas, E. (1993). Accreditation and evaluation: Reciprocity and exchange. Paper
presented at Conference on frameworks for European quality assessment of
higher education, Copenhagen, Denmark.
El-Khawas, E. (1998). Accreditation’s role in quality assurance in the United States.
Higher Education Management, 10(3), 43-56.
El-Khawas, E. (2000). The impetus for organizational change: An exploration. Tertiary
Education and Management, 6, 37-46.
El-Khawas, E. (2001). Accreditation in the USA: Origins, developments and future
prospects. Paris, France: International Institute for Educational Planning.
Elliott, L. H. (1970, December). Accreditation or accountability: Must we choose? Paper
presented at the Middle States Association of Collegiate Registrars and Officers
of Admission.
Engdahl, L. E. (1981). Objectives, objections, and options: Current perceptions of
regional accreditation. North Central Association Quarterly, 56(1), 3-13.
Ewell, P. T. (1984). The self-regarding institution: Information for excellence. Boulder,
CO: National Center for Higher Education Management Systems.
Ewell, P. T. (1994, November/December). A matter of integrity: Accountability and the
future of self-regulation. Change, 26(6), 24-29.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of
departure. Washington, DC: Council for Higher Education Accreditation.
Retrieved from http://www.chea.org/award/StudentLearningOutcomes2001.pdf
Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance: A tenth
anniversary report from the Council for Higher Education Accreditation.
Washington, DC: Council for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension.
Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved
from http://www.learningoutcomeassessment.org/documents/PeterEwell_006.pdf
Ewell, P. T. (2012). Disciplining peer review: Addressing some deficiencies in U.S.
accreditation practices. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry,
evidence, and excellence: The promise and practice of quality assurance (89-
105). Washington, DC: Teacher Education Accreditation Council. Retrieved from
http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Ewell, P. T., Wellman, J. V., & Paulson, K. (1997). Refashioning accountability: Toward
a coordinated system of quality assurance for higher education. Denver, CO:
Education Commission of the States.
Fallon, D. (2012). Knowing by asking: Frank B. Murray’s life of inquiry. In M. LaCelle-
Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and
practice of quality assurance (1-12). Washington, DC: Teacher Education
Accreditation Council. Retrieved from http://www.teac.org/wp-
content/uploads/2012/03/Festschrift-Book.pdf
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible
statistical power analysis program for the social, behavioral, and biomedical
sciences. Behavior Research Methods, 39, 175-191.
Federation of Regional Accrediting Commissions of Higher Education. (1970). A report
on institutional accreditation in higher education. Chicago, IL: Federation of
Regional Accrediting Commissions of Higher Education.
Finkin, M. W. (1973). Federal reliance on voluntary accreditation: The power to
recognize as the power to regulate. Journal of Law and Education, 2(3), 339-375.
Finkin, M. W. (1978). Federal reliance on educational accreditation: The scope of
administrative discretion. Washington, DC: The Council on Postsecondary
Accreditation.
Finkin, M. W. (1979). Reforming the federal relationship to educational accreditation.
North Carolina Law Review, 57(3), 379-413.
Finkin, M. W. (1994a). Recent developments concerning accrediting agencies in
postsecondary education. Law and Contemporary Problems, 57(4), 121-149.
Finkin, M. W. (1994b). The unfolding tendency in the federal relationship to private
accreditation in higher education. Law and Contemporary Problems, 57(4), 89-
120.
Finn, C. E., Jr. (1975, Winter). Washington in academe we trust: Federalism and the
universities: The balance shifts. Change, 7(10), 24-29, 63.
Flexner, A. (1910). Medical education in the United States and Canada: A report to the
Carnegie Foundation for the Advancement of Teaching. New York, NY: The
Carnegie Foundation for the Advancement of Teaching.
Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and
Policy Analysis, 2(2), 35-46. doi:10.3102/01623737002002035
Florida State Postsecondary Education Planning Commission. (1995). A review of
specialized accreditation. Tallahassee, FL: Florida State Postsecondary Education
Planning Commission.
Freitas, F. A. (2007). Cost-benefit analysis of professional accreditation: A national
study of baccalaureate nursing programs (Doctoral dissertation, Kent State
University).
Geiger, L. G. (1970). Voluntary accreditation: A history of the North Central Association
1945-1970. Menasha, WI: George Banta Company.
Gillen, A., Bennett, D. L., & Vedder, R. (2010). The inmates running the asylum?: An
analysis of higher education accreditation. Washington, DC: Center for College
Affordability and Productivity. Retrieved from
http://www.centerforcollegeaffordability.org/uploads/Accreditation.pdf
Global University Network for Innovation. (2007). Higher education in the world 2007:
Accreditation for quality assurance: What is at stake? New York, NY: Palgrave
Macmillan.
Graffin, S. D., & Ward, A. J. (2010). Certifications and reputation: Determining the
standard of desirability amidst uncertainty. Organization Science, 21(2), 331-346.
doi:10.1287/orsc.1080.0400
Graham, P. A., Lyman, R. W., & Trow, M. (1995). Accountability of colleges and
universities: An essay. New York, NY: Columbia University.
Gruson, E. S., Levine, D. O., & Lustberg, L. S. Issues in accreditation, eligibility and
institutional quality. Cambridge, MA: Sloan Commission on Government and
Higher Education.
Hagerty, B. M. K., & Stark, J. S. (1989). Comparing educational accreditation standards
in selected professional fields. The Journal of Higher Education, 60(1), 1-20.
Harcleroad, F. F. (1976). Educational auditing and accountability. Washington, DC: The
Council on Postsecondary Accreditation.
Harcleroad, F. F. (1980). Accreditation: History, process, and problems. Washington,
DC: American Association for Higher Education.
Harcleroad, F. F. (1990). Are voluntary accrediting associations becoming government
agencies? The current answer: No! But the struggle continues. Retrieved from
ERIC database. (ED421024)
Harcleroad, F. F., & Dickey, F. G. (1975). Educational auditing and voluntary
institutional accrediting. Washington, DC: American Association for Higher
Education.
Hardin, J. R., & Stocks, M. H. (1995). The effect of AACSB accreditation on the
recruitment of entry-level accountants. Issues in Accounting Education, 10(1), 83-
90.
Hartle, T. W. (2012). Accreditation and the public interest: Can accreditors continue to
play a central role in public policy? Planning for Higher Education, 40(3), 6-7.
Harvey, L. (2004). The power of accreditation: Views of academics. Journal of Higher
Education Policy and Management, 26(2), 207-223.
Haviland, D. (2009, February 20). Leading assessment: From faculty reluctance to
faculty engagement. Academic Leadership. Retrieved from
http://www.academicleadership.org/article/leading-assessment-from-faculty-
reluctance-to-faculty-engagement
Hawkins, H. (1992). Banding together: The rise of national associations in American
higher education, 1887-1950. Baltimore, MD: The Johns Hopkins Press.
Hayward, F. M. (2001, June). Finding a common voice for accreditation internationally.
Prepared for the 2001 Council for Higher Education Accreditation conference,
Chicago, IL. Retrieved from http://www.chea.org/international/common-
voice.html
Haywood, C. R. (1974). The mythus of accreditation. The Educational Forum, 38(2),
225-229.
Huffman, J., & Harris, J. (1981). Implications of the “input-outcome” research for the
evaluation and accreditation of educational programs. North Central Association
Quarterly, 56(1), 27-32.
Hunt, G. T. (1990, April). The assessment movement: A challenge and an opportunity.
Association for Communication Administration Bulletin, 72, 5-12.
Ikenberry, S. O. (2009). Where do we take accreditation? Washington, DC: Council for
Higher Education Accreditation.
Jackson, R. S., Davis, J. H., & Jackson, F. R. (2010). Redesigning regional accreditation:
The impact on institutional planning. Planning for Higher Education, 38(4), 9-19.
Johnstone, B. D. (2001). Financing higher education: Who should pay? In J. J. Yeager, G.
M. Nelson, E. A. Porter, J. C. Weidman, & T. G. Zullo (Eds.), ASHE reader on
finance in higher education (2nd ed.), (pp. 3-16). Boston, MA: Pearson Custom
Publishing.
Jung, S. M. (1986). The role of accreditation in directly improving educational quality.
Washington, DC: The Council on Postsecondary Accreditation.
Kelderman, E. (2011, November 13). Accreditors examine their flaws as calls for change
intensify. Chronicle of Higher Education. Retrieved from
http://chronicle.com/article/Accreditors-Examine-Their/129765/
Kells, H. R. (1976) The reform of regional accreditation agencies. Educational Record
57(1), 24-28.
Kells, H. R., & Kirkwood, R. (1979). Institutional self-evaluation processes. The
Educational Record, 60(1), 25-45.
Kells, H. R., & Parrish, R. M. (1979). Multiple accreditation relationships of
postsecondary institutions in the United States. Washington, DC: The Council on
Postsecondary Accreditation.
Kells, H. R., & Parrish, R. M. (1986). Trends in the accreditation relationships of U.S.
postsecondary institutions. 1978-1985. Washington, DC: The Council on
Postsecondary Accreditation.
Kennedy, V. C., Moore, F. I., & Thibadoux, G. M. (1985). Determining the costs of self-
study for accreditation: A method and a rationale. Journal of Allied Health, 14(2),
175-182.
Ketcheson, K. A. (2001). Public accountability and reporting: What should be the public
part of accreditation? New Directions for Higher Education, 113, 83-93.
Kezar, A. (2009). Change in higher education: Not enough, or too much? Change: The
Magazine of Higher Learning, 41(6), 18-23. doi: 10.1080/00091380903270110
Kis, V. (2005). Quality assurance in tertiary education: Current practices in OECD
countries and a literature review on potential effects. Unpublished manuscript,
Institut d’Etudes Politiques de Paris, Paris, France. Retrieved from
http://www.oecd.org/dataoecd/55/30/38006910.pdf
Koerner, J. D. (1971, March/April). Preserving the status quo: Academia’s hidden cartel.
Change, 50-54.
Kren, L., Tatum, K. W., & Phillips, L. C. (1993). Separate accreditation of accounting
programs: An empirical investigation. Issues in Accounting Education, 8(2), 260-
272.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning
outcomes assessment in American higher education. Retrieved from National
Institute for Learning Outcomes Assessment website:
carnegie.org/fileadmin/Media/Publications/PDF/niloafullreportfinal2.pdf
Larsen, K., & Vincent-Lancrin, S. (2002). International trade in educational services:
Good or bad? Higher Education Management and Policy, 14(3), 9-45.
Lasher, W. F., & Greene, D. L. (2001). College and university budgeting: What do we
know and what do we need to know? In M. B. Paulsen and J. C. Smart (Eds.), The
finance of higher education: Theory, research, policy and practice (pp. 501-542).
New York, NY: Agathon Press.
Learned, W. S., & Wood, B. D. (1938). The student and his knowledge: A report to the
Carnegie Foundation on the results of the high school and college examinations
of 1928, 1930, and 1932. New York, NY: The Carnegie Foundation for the
Advancement of Teaching.
Lee, M. B., & Crow, S. D. (1998). Effective collaboration for the twenty-first century:
The Commission and its stakeholders (Report and Recommendations of the
Committee on Organizational Effectiveness and Future Directions). Chicago, IL:
North Central Association of Colleges and Schools.
Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise?
Washington, DC: American Council of Trustees and Alumni. Retrieved from
https://www.goacta.org/publications/downloads/CanAccreditationFulfillPromise.
pdf
Lenn, M. P. (1996). The globalization of accreditation. The College Board Review, 178,
6-11.
Leslie, L. L., & Rhoades, G. (1995). Rising administrative costs: Seeking explanation.
Journal of Higher Education, 66(2), 187-212.
Lillis, D. (2006). Bar raising or navel-gazing?: The effectiveness of self-study
programmes in leading to improvements in institutional performance. Paper
presented at the 2006 conference of the Dublin Institute of Technology. Retrieved
from http://arrow.dit.ie/scschcomcon/41
Longanecker, D. A. (2011, September). Institutional accreditation and quality assurance
in American higher education from a federal and state perspective. Prepared for
the National Task Force on Institutional Accreditation. Retrieved from
http://www.wiche.edu/PPT/090811_Washington_DC.pdf
Lubinescu, E. S., Ratcliff, J. L., & Gaffney, M. A. (2001). Two continuums collide:
Accreditation and assessment. New Directions for Higher Education, 113, 5-21.
McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the
origins and spread of state performance-accountability policies for higher
education. Educational Evaluation and Policy Analysis, 28(1), 1-24.
Michael, S. O. (2005). Chapter 1: A contextual background. In S. O. Michael & M.
Kretovics (Eds.), Financing higher education in a global market (3-31). New
York, NY: Algora.
Middaugh, M. F. (2012). Introduction to themed PHE issue on accreditation in higher
education. Planning for Higher Education, 40(3), 6-7.
Middle States Commission on Higher Education. (2009). Highlights from the
Commission’s first 90 years. Philadelphia, PA: Middle States Commission on
Higher Education. Retrieved from
http://www.msche.org/publications/90thanniversaryhistory.pdf
National Advisory Committee on Institutional Quality and Integrity. (2011). Higher
education accreditation reauthorization policy considerations. Retrieved from
http://www2.ed.gov/about/bdscomm/list/naciqi-dir/hea-recommendations.pdf
National Advisory Committee on Institutional Quality and Integrity. (2012). Higher
education accreditation reauthorization policy recommendations. Retrieved from
http://www2.ed.gov/about/bdscomm/list/naciqi-dir/naciqi_draft_final_report.pdf
National Policy Board on Higher Education Institutional Accreditation. (1994).
Independence, accreditation, and the public interest. Washington, DC: National
Policy Board on Higher Education.
Neal, A. D. (2008). Dis-accreditation. Academic Questions, 21(4), 431-445.
Nevins, J. F. (1959). A study of the organization and operation of voluntary accrediting
agencies. Washington, DC: The Catholic University of America Press.
New England Association of Schools and Colleges. (1986). The first hundred years:
1885-1985. Winchester, MA: New England Association of Schools and Colleges.
Newman, M. (1996). Agency of change: One hundred years of the North Central
Association of Colleges and Schools. Kirksville, MO: Thomas Jefferson
University Press.
Orlans, H. O. (1974). Private accreditation and public eligibility: Volumes 1 and 2.
Retrieved from ERIC database. (ED097858)
Orlans, H. O. (1975). Private accreditation and public eligibility. Lexington, MA: D.C.
Heath and Company.
Parks, R. P. (1982). Costs of programmatic accreditation for allied health education in
the CAHEA [Committee on Allied Health Education and Accreditation] System:
1980. Executive summary. Chicago, IL: American Medical Association,
Department of Allied Health Education and Accreditation.
Patton, M. Q. (2002). Qualitative research & evaluation methods. Thousand Oaks, CA:
Sage Publications.
Peterson, M. W. (1974). Organization and administration in higher education:
Sociological and social-psychological perspectives. Review of Research in
Education, 2, 296-347.
Peterson, M. W., & Augustine, C. H. (2000). External and internal influences on
institutional approaches to student assessment: Accountability or improvement?
Research in Higher Education, 41(4), 443- 479.
Pfnister, A. O. (1971). Regional accrediting agencies at the crossroads. The Journal of
Higher Education, 42(7), 558-573.
Pfnister, A. O. (1977, February). [Review of the book Private accreditation and public
eligibility, by H. O. Orlans]. Higher Education, 6(1), 125-127.
Pigge, F. L. (1979). Opinions about accreditation and interagency cooperation: The
results of a nationwide survey of COPA institutions. Washington, DC: Committee
on Postsecondary Education.
Procopio, C. H. (2010, April 14). Differing administrator, faculty, and staff perceptions
of organizational culture as related to external accreditation. Academic
Leadership. Retrieved from http://www.academicleadership.org/article/
Differing_Administrator_Faculty_and_Staff_Perceptions_of_Organizational_Cult
ure_as_Related_to_External_Accreditation
Provezis, S. J. (2010). Regional accreditation and learning outcomes assessment:
Mapping the territory (Doctoral dissertation, University of Illinois at Urbana-
Champaign).
Puffer, C. E. (1970). A study prepared for the Federation of Regional Accrediting
Commissions of Higher Education. Washington, DC: Federation of Regional
Accrediting Commissions of Higher Education.
Raessler, K. R. (1970). An analysis of state requirements for college or university
accreditation in music education. Journal of Research in Music Education, 18(3),
223-233.
Rainwater, T. (2006). The rise and fall of SPRE: A look at failed efforts to regulate
postsecondary education in the 1990s. American Academic, 2(1), 107-122.
Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the
US. Quality in Higher Education, 2(1), 5-19.
Ratcliff, J. L., Lubinescu, E. S., & Gaffney, M. A. (2001). How accreditation influences
assessment. San Francisco, CA: Jossey-Bass.
Ratteray, O. M. T. (2008). History revisited: Four mirrors on the foundations of
accreditation in the Middle States region. Middle States Commission on Higher
Education, Middle States Association of Colleges and Schools. Retrieved from
http://www.msche.org/documents/History-Revisited.pdf
Reidlinger, C. R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New
Directions for Community Colleges, 83, 39-47.
Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the
degree. Planning for Higher Education, 40(3), 6-7.
Robinson Kurpius, S. E., & Stafford, M. E. (2006). Testing and measurement: A user-
friendly guide. Thousand Oaks, CA: Sage Publications, Inc.
Romine, S. (1975). Objectives, objections, and options: Some perceptions of regional
accreditation. North Central Association Quarterly, 49(4), 365-375.
Ruppert, S. S. (1994). Charting higher education accountability. Denver, CO: Education
Commission of the States.
Rusch, E. A., & Wilber, C. (2007). Shaping institutional environments: The process of
becoming legitimate. The Review of Higher Education, 30(3), 301-318.
doi:10.1353/rhe/2007.0014
Ryan, J. F. (2005). Institutional expenditures and student engagement: A role for
financial resources in enhancing student learning and development? Research in
Higher Education, 46(2), 235-245.
Salkind, N. J. (2011). Statistics for people who (think they) hate statistics. Thousand
Oaks, CA: Sage Publications, Inc.
Schermerhorn, J. W., Reisch, J. S., & Griffith, P. J. (1980). Educator perceptions of
accreditation. Journal of Allied Health 9(3), 176-182.
Schwarz, S., & Westerheijden, D. F. (2004). Accreditation and evaluation in the
European higher education area. Norwell, MA: Kluwer Academic Publishers.
Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T.
Kellaghan (Eds.), Evaluation models (250-278). Boston, MA: Kluwer Academic
Publishers.
Selden, W. K. (1957). The National Commission on Accrediting: Its next mission.
Educational Record 38, 152-156.
Selden, W. K. (1960). Accreditation: A struggle over standards in higher education.
New York, NY: Harper & Brothers.
Shaw, R. (1993). A backward glance: To a time before there was accreditation. North
Central Association Quarterly, 68(2), 323-335.
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of re-
accreditation processes. Paper presented at the annual meeting of the Association
for Institutional Research, Toronto, Ontario, Canada.
Sibolski, E. H. (2012). What’s an accrediting agency supposed to do?: Institutional
quality and improvement vs. regulatory compliance. Planning for Higher
Education, 40(3), 6-7.
Smith, V. B., & Finney, J. E. (2008, May/June). Redesigning regional accreditation: An
interview with Ralph A. Wolff. Change, 18-24.
Southern Association of Colleges and Schools. (2007). The Quality Enhancement Plan.
Retrieved from http://www.sacscoc.org/pdf/081705/QEP%20Handbook.pdf
Spangehl, S. D. (2012). AQIP and accreditation: Improving quality and performance.
Planning for Higher Education, 40(3), 6-7.
Stensaker, B., & Harvey, L. (2006). Old wine in new bottles? A comparison of public and
private accreditation schemes in higher education. Higher Education Policy, 19,
65-85.
Stensaker, B., & Harvey, L. (2011). Accountability in higher education: Global
perspectives on trust and power. New York, NY: Routledge.
Stoodley, R. V., Jr. (1985). An approach to postsecondary accreditation with the efficient
use of human resources and cost containment methods.
Stufflebeam, D. L, & Webster, W. J. (1980). An analysis of alternative approaches to
education. Educational Evaluation and Policy Analysis, 2(3), 5-20.
Thornton, S. (2011). It’s not over yet: The annual report on the economic status of the
profession, 2010-11. Retrieved from
http://www.aaup.org/NR/rdonlyres/17BABE36-BA30-467D-BE2F-
34C37325549A/0/zreport.pdf
Trivett, D. A. (1976). Accreditation and institutional eligibility. Washington, DC:
American Association for Higher Education.
Troutt, W. E. (1978). The quality assurance function of regional accreditation (Master’s
dissertation, University of Louisville).
Troutt, W. E. (1981). Relationships between regional accrediting standards and
educational quality. New Directions for Institutional Research, 29, 45-59.
Trow, M. (1996). Trust, markets, and accountability in higher education: A comparative
perspective. Higher Education Policy, 9(4), 309-324.
Uehling, B. S. (1987a). Accreditation and the institution. North Central Association
Quarterly, 62(2), 350-360.
Uehling, B. S. (1987b). Serving too many masters: Changing the accreditation process.
Educational Record 68(3), 38-41.
Underwood, D. G. (1991). Taking inventory: Identifying assessment activities. Research
in Higher Education, 32(1), 59-69.
U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S.
higher education. Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
U.S. Department of Education. (n.d.). Boards & Commissions: National Advisory
Committee on Institutional Quality and Integrity. Retrieved from:
http://ed.gov/about/bdscomm/list/naciqi.html
U.S. Department of Labor. (n.d.). Wages. Retrieved from
http://www.dol.gov/dol/topic/wages/minimumwage.htm
Van Damme, D. (2000). Internationalization and quality assurance: Towards worldwide
accreditation? European Journal for Education Law and Policy, 4, 1-20.
Van Damme, D. (2002). Trends and models in international quality assurance in higher
education in relation to trade in education. Higher Education Management and
Policy, 14(3), 93-136.
Van Vught, F. S., & Westerheijden, D. F. (1994). Towards a general model of quality
assessment in higher education. Higher Education, 28, 355-371.
Vaughn, J. (2002). Accreditation, commercial rankings, and new approaches to assessing
the quality of university research and education programmes in the United States.
Higher Education in Europe, XXVII(4), 433-441.
Volkwein, J. F., Lattuca, L. R., Caffrey, H. S. & Reindl, T. (2005). What works to ensure
quality in higher education institutions: A summary of accreditation’s role. In
What Works: Policy seminar on student success, accreditation and quality
assurance. Washington, DC: American Association of State Colleges and
Universities & Pennsylvania State University Center for the
Study of Higher Education. Retrieved from
http://www.aascu.org/media/pdf/whatworks_03.pdf
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2006). Measuring the
impact of professional accreditation on student experiences and learning
outcomes. Research in Higher Education, 48(2), 251-282. doi: 10.1007/s11162-
006-9039-y
Volkwein, J. F., Lattuca, L. R., & Terenzini, P. T. (2008). Measuring the impact of
engineering accreditation on student experiences and learning outcomes. In W. E.
Kelly (Ed.), Assessment in engineering programs: Evolving best practices (17-
43). Tallahassee, FL: Association for Institutional Research.
Volkwein, J. F., & Zhou, Y. (2003). Testing a model of administrative job satisfaction.
Research in Higher Education, 44(2), 149-171. doi: 10.1023/A:1022099612036
Walker, J. J. (2010). A contribution to the self-study of the postsecondary accreditation
protocol: A critical reflection to assist the Western Association of Schools and
Colleges. Paper presented at the WASC Postsecondary Summit, Temecula, CA.
Warner, W. K. (1977). Accreditation influences on senior institutions of higher education
in the western accrediting region: An assessment. Oakland, CA: Western
Association of Schools and Colleges.
Warren, D. R. (1974). To enforce education: A history of the founding years of the United
States Office of Education. Detroit, MI: Wayne State University Press.
Wellman, J. V. (2000, September 22). Accreditors have to see past “learning objectives.”
Chronicle of Higher Education. Retrieved from
http://chronicle.com/article/Accreditors-Have-to-See-Pas/23720/
Wergin, J. F. (2005, May/June). Waking up to the importance of accreditation. Change,
35-41.
Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson &
D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of
quality assurance (27-38). Washington, DC: Teacher Education Accreditation
Council. Retrieved from http://www.teac.org/wp-
content/uploads/2012/03/Festschrift-Book.pdf
Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (2007). Quality assurance in higher
education: Trends in regulation, translation and transformation. Dordrecht, The
Netherlands: Springer.
Western Association of Schools and Colleges. (1998). Eight perspectives on how to focus
the accreditation process on educational effectiveness. Oakland, CA: Accrediting
Commission for Senior Colleges and Universities, WASC.
Western Association of Schools and Colleges. (2009). WASC resource guide for ‘good
practices’ in academic program review. Retrieved from
http://www.wascsenior.org/findit/files/forms/WASC_Program_Review_Resource
_Guide_Sept_2009.pdf
Whalen, E. L. (1991). Responsibility center budgeting. Bloomington, IN: Indiana
University Press.
Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions:
The case of university accreditation. Human Organization, 51(4), 398-407.
Wiley, M. G., & Zald, M. N. (1968). The growth and transformation of educational
accrediting agencies: An exploratory study in social control of institutions.
Sociology of Education, 41(1), 36-56.
Willis, C. R. (1994). The cost of accreditation to educational institutions. Journal of
Allied Health, 23, 39-41.
Wilson, S. M. (2012). Doing better: Musings on teacher education, accountability, and
evidence. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and
excellence: The promise and practice of quality assurance (39-57). Washington,
DC: Teacher Education Accreditation Council. Retrieved from
http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Winskowski, C. (2012, March). U.S. accreditation and learning outcomes assessment: A
review of issues. Bulletin of Morioka Junior College Iwate Prefectural University,
14, 21-40.
Wolff, R. A. (1990, June 27-30). Assessment 1990: Accreditation and renewal. Paper
presented at The Fifth AAHE Conference on Assessment in Higher Education,
Washington, DC.
Wolff, R. A. (1993). The accreditation of higher education institutions in the United
States. Higher Education in Europe, 18(3), 91-99.
Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing
demands? In J. C. Burke and Associates (Eds.), Achieving accountability in
higher education: Balancing public, academic, and market demands (78-103).
San Francisco, CA: Jossey-Bass Publishers.
Wood, A. L. (2006). Demystifying accreditation: Action plans for a national or regional
accreditation. Innovative Higher Education, 31(1), 43-62. doi: 10.1007/s10755-
006-9008-6
Wriston, H. M. (1960). The futility of accrediting. The Journal of Higher Education,
31(6), 327-329.
Young, J. L. (2010). A community college’s loss of accreditation: A case study (Doctoral
dissertation, California State University, Long Beach).
Young, K. E. (1983). Understanding accreditation. San Francisco, CA: Jossey-Bass.
Zook, G. F., & Haggerty, M. E. (1936). The evaluation of higher institutions: Principles
of accrediting higher institutions (Vol. 1). Chicago, IL: The University of Chicago
Press.
APPENDIX A: HISTORY OF INSTITUTIONAL ACCREDITATION TIMELINE
1850 – 1920: The concept of accreditation emerges.
Key developments in the accreditation process:
• Institutional accreditation begins.
• Programmatic (or specialized) accreditation begins.
1642 Harvard University initiates an external review of its programs.
1784 The New York Board of Regents is established in the style of a European
ministry. It is the oldest recognized accrediting body.
1867 The US Department of Education is established for the collection of statistics
and data on schools and colleges.
1882 The American Association of University Women (AAUW) begins visiting
institutions and distributing lists of those it approves.
1885 The New England Association of Schools and Colleges (NEASC) is formed.
1887 The Middle States Association of Colleges and Secondary Schools (within which
the Middle States Commission on Higher Education or MSCHE deals with
collegiate education) is formed.
1888 Charles W. Eliot (president of Harvard) argues in a national forum that school
organizations are not effectively addressing certain educational problems of the
day. This results in the formation of the Committee of Ten (10 subcommittees
involving 100 people total) to lead necessary educational reform.
1895 The North Central Association (NCA) is formed.
1895 The Southern Association of Colleges and Schools (SACS) is formed.
1901 Formal accreditation is pioneered by the College Entrance Examination Board
(CEEB) to unify college entrance requirements.
1901 The New York Board of Regents first defines what a college is.
1909 The North Central Association publishes its first standards for colleges.
1910 Medical Education in the United States and Canada: A Report to the Carnegie
Foundation for the Advancement of Teaching (known as the Flexner report after
its author, Abraham Flexner) is published, resulting in a rapid and dramatic
increase in the quality of medical education. That unprecedented success causes
the proliferation of programmatic accrediting agencies.
1910 Kendrick C. Babcock (of the US Department of Education) prepares a list
classifying colleges in groups based largely on the work done by the schools’
graduates. The list is suppressed prior to publication by President William
Howard Taft following public opposition to it.
1916 The North Central Association publishes its first list of accredited colleges.
1917 The Northwest Commission on Colleges and Universities (NWCCU) is
formed.
1920 – 1950: Regional accreditors begin to develop their roles.
Key developments in the accreditation process:
• The definition of “college” is expanded to include many more kinds of institutions
such as vocational colleges and community colleges.
• The increasing diversity of institutions effects the following changes:
o Accreditation language shifts from institutions being “approved” to being
“accredited.”
o Accrediting agencies begin using qualitative standards increasingly in lieu of
quantitative standards.
o Accrediting agencies begin looking for schools to meet optimal requirements
rather than minimal requirements.
o Accreditors begin to re-visit schools if institutions demonstrate instability or
difficulty in fulfilling their mission.
1924 The Western Association of Schools and Colleges (WASC) is formed.
1936 Zook and Haggerty publish The Evaluation of Higher Institutions: Principles of
Accrediting Higher Institutions. Consequently the North Central Association begins
using a self-study-defined, mission-oriented approach to determining eligibility for
accreditation, allowing each institution’s mission to drive the question of quality.
1938 The Joint Committee on Accrediting is formed by the Association of Land-
Grant Colleges and Universities and the National Association of State
Universities.
1938 The North Central Association is sued by William Langer, governor of North Dakota
(the Langer Case) when the North Dakota Agricultural College is removed from the
list of accredited institutions. The final court decision favors the NCA, formally and
legally granting credibility to regional accreditation for the first time.
1944 The GI Bill provides federal funds to veterans of World War II to attend college,
vastly increasing public access to higher education.
1949 The Joint Committee on Accrediting becomes the National Commission on
Accrediting (NCA).
1949 Regional accrediting agencies meet to coordinate their activities and create The
National Committee of Regional Accrediting Agencies (NCRAA).
1950 – 1985: Golden Age for higher education marked by increasing federal regulation.
Key developments in the accreditation process:
• The self-study becomes standard.
• The site visit is executed by colleagues from peer institutions.
• Institutions are visited regularly on a cycle.
1952 The Veteran’s Act of 1952, a renewal of the GI Bill of 1944, provides education
benefits to veterans of the Korean War directly rather than to the educational
institution being attended, increasing the importance of accreditation as a
mechanism for recognition of legitimacy.
1962 WASC forms a second, distinct higher education commission, the Accrediting
Commission for Community and Junior Colleges (WASC-ACCJC).
1963 The Higher Education Facilities Act requires that higher education institutions
receiving federal funds (through enrolled students) be accredited.
1964 The National Committee of Regional Accrediting Agencies (NCRAA) becomes
the Federation of Regional Accrediting Commissions of Higher Education
(FRACHE).
1965 The Higher Education Act is first signed into law to strengthen the resources
available to higher education institutions and to provide financial assistance to
students enrolled at those institutions.
1965 Congress establishes the National Advisory Committee on Accreditation and
Institutional Eligibility (NACAIE) to advise the Commissioner of Education on
policy concerning accrediting agencies and to determine institutional eligibility
for federal funding.
1967 Parsons College loses accreditation. The courts deny the college’s appeal on the
basis that the regional accrediting associations are voluntary bodies.
1970 The New England Association of Schools and Colleges forms a second, distinct
higher education commission, the Commission on Technical and Career
Institutions (NEASC-CTCI).
1975 The National Commission on Accrediting (NCA) and FRACHE merge to form the
Council on Postsecondary Accreditation (COPA).
1985 – Present: Accountability becomes the issue of paramount importance.
Key developments in the accreditation process:
• Higher education experiences rising costs (both relative and absolute) resulting in
high student loan default rates.
• Accreditation endures increasing criticism for a number of perceived shortcomings,
most notably a lack of demonstrable student learning outcomes. At the same time,
accreditation is increasingly and formally defended by various champions of the
practice.
1992 The National Advisory Committee on Institutional Quality and Integrity
(NACIQI) is formed as part of the amendments to the Higher Education Act of
1965, and replaces the NACAIE.
1992 State Postsecondary Review Entities (SPREs) are created in concept (as part of
the renewal of the Higher Education Act) to review institutions with high student
loan default rates.
1993 The inability of COPA to anticipate and prevent the SPREs becomes the breaking
point that leads to the dissolution of COPA.
1993 The Commission on Recognition of Postsecondary Accreditation (CORPA) is
formed by COPA as an interim organization to continue national recognition of
accreditation in its absence.
1994 The SPREs are abandoned largely because of a lack of adequate funding.
1995 National leaders in accreditation form the National Policy Board (NPB) to shape the
creation and legitimation of a national organization overseeing accreditation.
1996 The Council for Higher Education Accreditation (CHEA) is formed by the NPB.
2006 The Spellings Commission accuses accreditation of being both ineffective and a
barrier to innovation.
2012 In anticipation of the next reauthorization of the Higher Education Act (2013),
NACIQI releases important policy recommendations concerning accreditation
particularly with respect to its role in determining eligibility for federal funding.
APPENDIX B: SURVEY INVITATION LETTERS
Survey Invitation
Subject: Information request regarding accreditation costs
Date: Early October, 2011
Attachment: “Accreditation Cost survey.pdf”
Dear ALOFirstName ALOLastName:
My name is P J Woolston and I am a student in the Ed.D. program at the University of Southern
California conducting research on the costs associated with accreditation. To date very few
studies have been published on this topic (see for example Reidlinger & Prager, 1993; Shibley &
Volkwein, 2002). As the designated Accreditation Liaison Officer at your institution you probably
have a better sense for these costs than anyone. This email is an invitation to participate in
research on this subject now by completing a short survey on accreditation costs at your school.
The purpose of this study is to understand better how the direct and indirect costs of institutional
accreditation vary nationally by accreditation region and institution type. The study will make a
significant contribution to understanding these costs, and because of the perspective you have at
your school your help is critical to its success. The study is IRB-approved and the information you
share will be secured locally on a password-protected removable drive. Your confidentiality and
anonymity will be strictly maintained.
The survey itself should take no more than 15 minutes to complete, and it is hoped that the
experience will be interesting to you because of your expertise in the field. A PDF copy of the
survey is attached for your reference and to facilitate task sharing as appropriate. You can access
and submit the survey directly at:
[Hyperlink to survey online]
Your participation in this study is entirely voluntary and you may decline to answer any question
without explanation. Your assistance will be greatly appreciated. To thank you for your
contribution you can receive a complete copy of the findings of the study by contacting me
directly (since the survey is completely anonymous). If you have any questions about the survey
or the study, please do not hesitate to contact me at woolston@usc.edu or [cell phone number].
Best regards,
P J Woolston
Doctor of Education (Ed.D.) Candidate
USC Rossier School of Education
Referenced above:
Reidlinger, C. R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New Directions for
Community Colleges, 83, 39-47.
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of re-
accreditation processes. Paper presented at the annual meeting of the Association for
Institutional Research, Toronto, Canada.
Survey Invitation Follow-Up
Subject: Follow-up on information request regarding accreditation costs
Date: Two weeks after the survey invitation
Attachment: “Accreditation Cost survey.pdf”
Dear ALOFirstName ALOLastName:
Two weeks ago an email was sent to you inviting your participation in a study on the costs
associated with institutional accreditation. Because so few studies have been done on this topic,
this IRB-approved study will help provide a better understanding of how the direct and indirect
costs of institutional accreditation vary nationally by accreditation region and institution type. If
you have completed and submitted the survey, thank you very much for your assistance. The
response so far has been quite positive! If you have not yet had the chance to contribute to the
research however, please take about 15 minutes to do so now. A PDF copy of the survey is
attached to this message to facilitate completion and the survey will be open for approximately
two more weeks. You can access and submit the survey directly at:
[Hyperlink to survey online]
Your participation is critical to the success of this study. As the Accreditation Liaison Officer for
your school, you are the designated expert on the topic and therefore in the best position to
provide information to increase the accuracy of the study’s findings. Your confidentiality and
anonymity will be strictly maintained, and you are welcome to a complete copy of the findings.
Should you have any questions about the survey or the study, or to receive a copy of the findings,
please do not hesitate to contact me directly at woolston@usc.edu or [cell phone number].
Sincerely,
P J Woolston
Doctor of Education (Ed.D.) Candidate
USC Rossier School of Education
Final Survey Invitation Follow-Up
Subject: Final request for information regarding accreditation costs
Date: Ten days after the survey invitation follow-up
Dear ALOFirstName ALOLastName:
Many sincere thanks if you have already completed the survey on the direct and indirect costs
associated with accreditation; the response has been overwhelmingly positive. (Because the
survey is completely anonymous there is no way of identifying who has submitted it.) To be
sensitive to the various and multiple demands on your time, I am writing one final time to invite
you to participate in the study if you have not yet had a chance to do so. The study will conclude
at the end of next week. The survey can be found and submitted at:
[Hyperlink to survey online]
Thank you for your contribution to a greater understanding of how these accreditation costs vary
by accreditation region and institution type. Your input is critical to the study’s success. If you
have any questions about the survey or the study, please do not hesitate to contact me directly at
woolston@usc.edu or [cell phone number].
Best regards,
P J Woolston
Doctor of Education (Ed.D.) Candidate
USC Rossier School of Education
APPENDIX C: SURVEY INSTRUMENT
Accreditation Cost Survey
1. Demographic information
1. Accreditation region (select one):
Middle States Commission on Higher Education (MSCHE)
North Central Association of Colleges and Schools (NCA)
New England Association of Schools and Colleges (NEASC)
Northwest Commission on Colleges and Universities (NWCCU)
Southern Association of Colleges and Schools (SACS)
Western Association of Schools and Colleges (WASC)
2. Carnegie Basic Classification (select one):
Research/Doctoral University (Includes institutions that awarded at least 20
research doctoral degrees during the year excluding doctoral-level degrees that qualify
recipients for entry into professional practice, such as the JD, MD, PharmD, DPT, etc.
Excludes Special Focus Institutions and Tribal Colleges.)
Master's College or University (Generally includes institutions that awarded at least
50 master's degrees and fewer than 20 doctoral degrees during the year. Excludes
Special Focus Institutions and Tribal Colleges.)
Baccalaureate College (Includes institutions where baccalaureate degrees represent
at least 10 percent of all undergraduate degrees and where fewer than 50 master's
degrees or 20 doctoral degrees were awarded during the update year. Excludes Special
Focus Institutions and Tribal Colleges.)
Special Focus Institution (Institutions awarding baccalaureate or higher-level
degrees where a high concentration of degrees, above 75%, is in a single field or set of
related fields. Excludes Tribal Colleges.)
Tribal College (Colleges and universities that are members of the American Indian
Higher Education Consortium, as identified in IPEDS Institutional Characteristics.)
3. Are you the Accreditation Liaison Officer (ALO) at your institution?
Yes
No
4. Your official title:
5. Month and year of decision on last full regional institutional accreditation
review:
Month (MM)
Year (YYYY)
6. Were you the Accreditation Liaison Officer (ALO) at the time of the last
institutional accreditation review?
Yes
No
7. Percentage of your work hours devoted specifically to institutional
accreditation during the following periods:
The period preparing for and including formal accreditation review
The period when the school is not preparing for a formal accreditation review
8. Comments on percentage of time devoted specifically to institutional
accreditation (optional):
2. Survey of costs: Direct costs
Direct cost estimates: Fiscal expenses incurred excluding any salary or
wages paid for time spent, or fees paid to accrediting organizations
1. For the last full institutional accreditation review, please estimate the
cumulative financial cost of the preparation of the self-study document. Please
include the costs of materials, copying, printing, mailing, fees for professional
services (such as writers or consultants), etc.:
2. For the last full institutional accreditation review, please estimate the
cumulative financial cost of the site visit. Please include the cost of travel,
accommodations, food, stipends/honoraria, etc.:
3. Please comment on the composition of the visiting team from the last site visit:
Total number of people:
Total number from outside the accrediting region:
Total number who were faculty:
Total number who were administrators:
4. Comments on direct costs (optional):
3. Survey of costs: Indirect costs
Indirect cost estimates: Time spent in number of hours
For the last full institutional accreditation review, to the best of your
ability please estimate for the following groups:
1) the total number of people from the institution who had any
involvement at some point with the review, and
2) the estimated cumulative number of hours contributed by all people
within that group.
1. The primary Accreditation Liaison Officer (ALO):
Estimated cumulative hours spent:
2. Senior administration (e.g., President, Vice-Presidents, Provost, Vice-Provosts, Deans, etc.):
Estimated number of participants:
Estimated cumulative hours of this group:
3. Faculty:
Estimated number of participants:
Estimated cumulative hours of this group:
4. Administrative staff:
Estimated number of participants:
Estimated cumulative hours of this group:
5. Students:
Estimated number of participants:
Estimated cumulative hours of this group:
6. Other (e.g., trustees, alumni, etc.):
Estimated number of participants:
Estimated cumulative hours of this group:
Explain (optional):
7. Comments on indirect costs (optional):
4. Cost explanation (optional)
1. What were the most important benefits to your institution of going through the
accreditation process?
2. Considering those benefits and in your (the Accreditation Liaison Officer's)
opinion, were the costs of accreditation justified?
[Included for WASC institutions only]
3. What percentage of the reported costs was incurred solely from meeting the
requirements of accreditation, and what percentage was incurred by initiatives the
institution would have undertaken anyway (but which used the accreditation
process as a vehicle for improvement)?
APPENDIX D: TITLE ANALYSIS
Formal Title Held by Institutional ALO by Accreditation Region
Region  President  Vice President  Provost  Dean  Faculty  ALO  Director  Staff
MSCHE 5.2% 40.2% 33.9% 24.0% 5.0% 1.0% 10.2% 5.0%
SACS 0.7% 45.4% 26.1% 15.0% 3.9% 1.3% 18.5% 3.7%
WASC 0.0% 33.8% 35.9% 23.2% 10.6% 4.2% 10.6% 5.6%
Total 2.3% 41.7% 30.6% 19.7% 6.2% 1.6% 14.1% 4.5%
Formal Title Held by Institutional ALO by Carnegie Classification
Carnegie classification  President  Vice President  Provost  Dean  Faculty  ALO  Director  Staff
Doctoral/Research 1.3% 28.4% 54.8% 7.7% 5.2% 0.7% 15.5% 3.2%
Master's 1.8% 49.3% 36.6% 11.1% 7.2% 1.2% 12.6% 0.6%
Baccalaureate 1.9% 42.5% 21.9% 27.0% 6.0% 6.4% 8.6% 2.2%
Special Focus 5.0% 37.9% 13.7% 33.0% 5.5% 1.1% 14.8% 1.7%
Total 2.3% 41.7% 30.6% 19.7% 6.2% 1.6% 14.1% 4.5%
Formal Title Held by Institutional ALO: MSCHE Region
Carnegie Classification  President  Vice President  Provost  Dean  Faculty  ALO  Director  Staff  Total
Doctoral/Research 1 15 25 6 1 0 8 4 60
% within classification 1.7% 25.0% 41.7% 10.0% 1.7% 0.0% 13.3% 6.7%
% within position 5.0% 9.7% 19.2% 6.5% 5.3% 0.0% 20.5% 20.0%
Master's 5 65 60 20 7 0 12 8 177
% within classification 2.8% 36.7% 33.9% 11.3% 4.0% 0.0% 6.8% 4.5%
% within position 25.0% 42.2% 46.2% 21.7% 36.8% 0.0% 30.8% 40.0%
Baccalaureate 6 43 34 43 6 4 12 4 152
% within classification 3.9% 28.3% 22.4% 28.3% 3.9% 2.6% 7.9% 2.6%
% within position 30.0% 27.9% 26.2% 46.7% 31.6% 100.0% 30.8% 20.0%
Special Focus 8 31 11 23 5 0 7 4 89
% within classification 9.0% 34.8% 12.4% 25.8% 5.6% 0.0% 7.9% 4.5%
% within position 40.0% 20.1% 8.5% 25.0% 26.3% 0.0% 17.9% 20.0%
Total 20 154 130 92 19 4 39 20
Formal Title Held by Institutional ALO: SACS Region
Carnegie Classification  President  Vice President  Provost  Dean  Faculty  ALO  Director  Staff  Total
Doctoral/Research 1 23 42 3 4 0 16 0 89
% within classification 1.1% 25.8% 47.2% 3.4% 4.5% 0.0% 18.0% 0.0%
% within position 33.3% 11.0% 35.0% 4.3% 14.8% 0.0% 18.8% 0.0%
Master's 1 75 44 14 12 3 26 4 179
% within classification 0.6% 41.9% 24.6% 7.8% 6.7% 1.7% 14.5% 2.2%
% within position 33.3% 35.9% 36.7% 20.3% 44.4% 50.0% 30.6% 23.5%
Baccalaureate 0 81 29 34 9 3 30 11 197
% within classification 0.0% 41.1% 14.7% 17.3% 4.6% 1.5% 15.2% 5.6%
% within position 0.0% 38.8% 24.2% 49.3% 33.3% 50.0% 35.3% 64.7%
Special Focus 1 30 5 18 2 0 13 2 71
% within classification 1.4% 42.3% 7.0% 25.4% 2.8% 0.0% 18.3% 2.8%
% within position 33.3% 14.4% 4.2% 26.1% 7.4% 0.0% 15.3% 11.8%
Total 3 209 120 69 27 6 85 17
Formal Title Held by Institutional ALO: WASC Region
Carnegie Classification  President  Vice President  Provost  Dean  Faculty  ALO  Director  Staff  Total
Doctoral/Research 0 6 18 3 3 1 0 1 32
% within classification 0.0% 18.8% 56.3% 9.4% 9.4% 3.1% 0.0% 3.1%
% within position n/a 12.5% 35.3% 9.1% 20.0% 16.7% 0.0% 12.5%
Master's 0 24 18 3 5 1 4 1 56
% within classification 0.0% 42.9% 32.1% 5.4% 8.9% 1.8% 7.1% 1.8%
% within position n/a 50.0% 35.3% 9.1% 33.3% 16.7% 26.7% 12.5%
Baccalaureate 0 10 6 8 4 2 4 3 37
% within classification 0.0% 27.0% 16.2% 21.6% 10.8% 5.4% 10.8% 8.1%
% within position n/a 20.8% 11.8% 24.2% 26.7% 33.3% 26.7% 37.5%
Special Focus 0 8 9 19 3 2 7 3 51
% within classification 0.0% 15.7% 17.6% 37.3% 5.9% 3.9% 13.7% 5.9%
% within position n/a 16.7% 17.6% 57.6% 20.0% 33.3% 46.7% 37.5%
Total 0 48 51 33 15 6 15 8
APPENDIX E: DIRECT COSTS
The means reported here exclude one extreme outlier. The direct cost extreme outlier was from an institution that reported a site
visit cost greater than three times the next highest reported site visit cost (a master’s institution from SACS).
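For readers who want to see this exclusion rule stated operationally, a minimal Python sketch is given below. The function name and the flat list of dollar amounts are illustrative assumptions only; the actual screening was performed on the survey data set rather than on a structure like this.

def flag_extreme_site_visit_outlier(costs):
    """Return indices of reported site-visit costs that exceed three times the
    next highest reported value, mirroring the exclusion rule described above."""
    ranked = sorted(costs, reverse=True)
    if len(ranked) < 2:
        return []
    highest, next_highest = ranked[0], ranked[1]
    # Flag the top value only when it is more than three times the runner-up.
    if highest > 3 * next_highest:
        return [i for i, cost in enumerate(costs) if cost == highest]
    return []

# Hypothetical values: the 500,000 response would be flagged for exclusion.
print(flag_extreme_site_visit_outlier([15000, 20000, 120000, 500000]))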
Means of Direct Costs for All Institutions
Means for Document Cost, Site Visit Cost, and Combined Direct Costs: All Institutions
Cost  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
Document Cost 214 $50,979 $99,253 $5,000 $12,000 $0 $600,000 $37,605 $64,353 3.3 12.4
Site Visit Cost 210 $20,591 $19,656 $20,000 $15,000 $3,000 $150,000 $17,917 $23,265 3.1 13.5
Combined Direct Costs 204 $73,591 $109,713 $40,000 $35,000 $4,500 $620,000 $58,445 $88,736 2.9 9.0
Means of Direct Costs by Carnegie Classification
Note that the means displayed in these tables include all outliers except the extreme outlier noted above.
Document Cost by Carnegie Classification (differences in means are significant)
Carnegie classification  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
Doctoral/Research 43 $88,271 $125,232 $5,000 (a) $32,171 $200 $600,000 $49,730 $126,812 2.3 5.9
Master's 71 $53,528 $104,660 $10,000 $12,000 $300 $600,000 $28,755 $78,300 3.3 12.3
Baccalaureate 67 $33,777 $75,353 $1,000 (a) $10,000 $250 $534,900 $15,397 $52,157 5.1 30.7
Special Focus 33 $31,827 $80,214 $1,000 $10,000 $0 $400,000 $3,384 $60,270 3.9 15.5
(a) Multiple modes exist. The smallest value is shown.
Site Visit Cost by Carnegie Classification (differences in means are significant)
Carnegie classification  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
Doctoral/Research 44 $28,401 $26,480 $15,000 $17,250 $4,500 $120,000 $20,350 $36,451 1.7 2.9
Master's 70 $22,895 $20,544 $25,000 $20,000 $4,000 $500,000 $17,997 $27,794 3.8 21.0
Baccalaureate 64 $15,991 $14,198 $20,000 $14,000 $3,000 $100,000 $12,444 $19,538 3.8 19.7
Special Focus 32 $14,013 $9,807 $20,000 $12,000 $3,000 $40,000 $10,477 $17,549 1.1 0.7
Combined Direct Costs by Carnegie Classification (differences in means are significant)
Carnegie classification  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
Doctoral/Research 41 $120,792 $143,370 $60,000 $60,000 $9,500 $620,000 $75,539 $166,045 1.9 3.2
Master's 69 $77,872 $111,107 $40,000 $40,000 $8,500 $618,000 $51,181 $104,562 3.0 9.7
Baccalaureate 62 $51,665 $81,691 $20,000 $26,854 $5,557 $549,900 $30,919 $72,410 4.4 23.3
Special Focus 32 $46,365 $85,878 $35,000 $23,036 $4,500 $436,000 $15,403 $77,328 3.8 15.0
Combined Direct Costs by Carnegie Classification Excluding Outliers (combined direct costs greater than $250,000; differences in means
are significant)
Carnegie classification  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
Doctoral/Research 34 $63,749 $52,704 $60,000 $51,000 $9,500 $215,000 $45,360 $82,138 1.3 1.4
Master's 63 $47,558 $39,690 $40,000 $35,454 $8,500 $185,000 $37,562 $57,554 1.8 3.6
Baccalaureate 60 $39,639 $41,228 $20,000 $25,354 $5,557 $200,000 $28,988 $50,289 2.3 5.6
Special Focus 30 $26,090 $23,982 $35,000 $20,250 $4,500 $120,000 $17,135 $35,045 2.4 7.5
Means of Direct Costs by Accreditation Region
Document Cost by Accreditation Region (differences in means are not significant)
Accreditation Region  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
MSCHE 53 $39,071 $77,531 $5,000 $12,000 $1,000 $400,000 $17,701 $60,442 3.3 10.9
SACS 118 $58,682 $106,783 $10,000 $12,000 $0 $600,000 $39,213 $78,150 2.9 9.5
WASC 43 $44,518 $101,805 $1,000 $10,300 $600 $600,000 $13,187 $75,848 4.4 22.2
Site Visit Cost by Accreditation Region (differences in means are not significant)
Accreditation Region  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
MSCHE 51 $19,738 $11,348 $20,000 $20,000 $3,000 $60,000 $16,547 $22,930 1.0 1.9
SACS 113 $20,466 $19,981 $15,000 $15,000 $3,000 $120,000 $16,742 $24,191 2.8 8.9
WASC 46 $21,843 $25,598 $15,000 $13,850 $3,500 $150,000 $14,241 $29,444 3.2 13.6
Combined Direct Costs by Accreditation Region (differences in means are not significant)
Accreditation Region  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
MSCHE 49 $60,789 $84,997 $35,000 $35,000 $4,600 $436,000 $36,375 $85,202 3.1 9.9
SACS 112 $81,624 $118,889 $40,000 $35,000 $7,200 $620,000 $59,363 $103,884 2.6 6.8
WASC 43 $67,256 $110,280 $6,000 (a) $26,000 $4,500 $618,000 $33,317 $101,196 3.6 15.4
(a) Multiple modes exist. The smallest value is shown.
Combined Direct Costs by Accreditation Region Excluding Outliers (combined direct costs greater than $250,000; differences in means
are not significant)
Accreditation Region  N  Mean  Standard deviation  Mode  Median  Min  Max  95% CI lower bound  95% CI upper bound  Skewness  Kurtosis
MSCHE 46 $42,014 $38,671 $35,000 $35,000 $4,600 $215,000 $30,530 $53,498 3.1 11.6
SACS 101 $46,910 $45,047 $40,000 $32,000 $7,200 $200,000 $38,018 $55,803 1.8 2.8
WASC 40 $41,351 $39,085 $6,000 (a) $23,950 $4,500 $130,000 $28,850 $53,851 1.2 0.1
(a) Multiple modes exist. The smallest value is shown.
APPENDIX F: INDIRECT COSTS
The means reported here exclude nine extreme outliers, one for direct costs and eight for
indirect costs. The direct cost extreme outlier was from an institution that reported a site visit
cost greater than three times the next highest reported site visit cost (a master’s institution from
SACS). The eight indirect cost extreme outliers were from those institutions that reported a top-
five extreme outlier value for more than one of the six categories of cumulative hours
contributed, and included two baccalaureate institutions from MSCHE, two baccalaureate
institutions from SACS, two master’s institutions from SACS, one doctoral/research institution
from SACS, and one baccalaureate institution from WASC.
The tables below report means for all categories, regardless of whether the differences between those means are statistically significant.
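Each table reports, for a given group, the sample size, mean, standard deviation, mode, median, range, a 95% confidence interval for the mean, skewness, and kurtosis, computed after the outlier exclusions described above. The dissertation does not publish its analysis code, so the sketch below is only an illustration of how such group summaries could be produced; the data frame, column names, and example values are invented for that purpose.

```python
# Illustrative sketch only: the dissertation does not publish its analysis code,
# so the data frame, column names, and example values below are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

def summarize(group: pd.Series) -> pd.Series:
    """Summary statistics in the form reported by the appendix tables."""
    n = group.count()
    mean = group.mean()
    sd = group.std(ddof=1)                    # sample standard deviation
    sem = sd / np.sqrt(n)                     # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)     # two-sided 95% critical value
    return pd.Series({
        "N": n,
        "Mean": mean,
        "Standard deviation": sd,
        "Mode": group.mode().iloc[0],         # smallest mode if several exist
        "Median": group.median(),
        "Min": group.min(),
        "Max": group.max(),
        "95% CI lower": mean - t_crit * sem,
        "95% CI upper": mean + t_crit * sem,
        "Skewness": stats.skew(group, bias=False),
        "Kurtosis": stats.kurtosis(group, bias=False),   # excess kurtosis
    })

# Hypothetical survey responses: one row per institution.
df = pd.DataFrame({
    "carnegie": ["Doctoral/Research"] * 5 + ["Master's"] * 5,
    "combined_direct_cost": [60_000, 120_000, 215_000, 51_000, 9_500,
                             35_000, 45_000, 60_000, 8_500, 300_000],
})

# Exclude outliers the way the combined-direct-cost tables describe (> $250,000).
trimmed = df[df["combined_direct_cost"] <= 250_000]
print(trimmed.groupby("carnegie")["combined_direct_cost"].apply(summarize).unstack())
```

With bias=False, the skewness and kurtosis calls return the sample-adjusted estimates that packages such as SPSS report, which appears to match the convention used in these tables.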
Means of All Institutions
Total Number of People Involved with Accreditation: All Institutions
(Columns: Personnel category, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 11.2 11.7 5 8 1 115 9.5 12.9 4.6 34.0
Faculty 38.3 38.1 50 29 0 250 32.8 43.8 2.3 7.7
Staff 18.3 27.9 2 (a) 10 0 250 14.2 22.3 4.5 28.7
Students 30.6 98.5 10 10 0 1,000 16.2 45.1 7.1 59.3
Others 10.4 14.3 10 6 0 100 8.2 12.5 3.9 20.6
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: All Institutions
(Columns: Personnel category, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 1,408.7 1,610.2 2,000 800 0 10,000 1,164.2 1,653.2 2.0 5.4
Senior Admin 934.9 1,443.6 100 400 10 9,750 714.4 1,155.5 3.1 12.3
Faculty 1,842.0 5,602.8 1,000 600 0 63,750 983.4 2,700.6 8.8 92.3
Staff 1,647.1 3,926.2 100 (a) 500 0 35,000 1,047.3 2,247.0 6.1 44.7
Students 271.0 831.1 100 60 0 8,000 141.7 400.4 6.5 51.0
Others 72.1 148.8 50 30 0 1,200 48.1 96.0 4.7 26.7
(a) Multiple modes exist. The smallest value is shown.
Means by Carnegie Classification
Total Number of People Involved with Accreditation: Doctoral/Research Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 42 14.2 9.1 25 14 1 36 11.4 17.1 0.3 -0.9
Faculty 42 46.5 37.7 50 40 6 200 34.8 58.2 1.9 5.5
Staff 41 23.4 30.5 10 14 1 150 13.7 33.0 2.5 7.2
Students 42 21.4 35.7 5 (a) 10 2 200 10.2 32.5 3.8 16.2
Others 40 8.5 8.5 5 (a) 5 0 40 5.8 11.3 1.8 3.7
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: Doctoral/Research Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 35 2,233.7 3,200.9 500 (a) 1,200 50 18,000 1,134.2 3,333.3 3.8 17.6
Senior Admin 36 700.5 986.0 100 350 15 5,000 366.9 1,034.1 2.9 10.2
Faculty 35 1,968.9 4,988.5 200 (a) 650 30 28,600 255.3 3,682.5 4.9 25.5
Staff 34 3,530.5 7,674.6 500 610 20 35,000 852.7 6,208.3 3.4 11.4
Students 36 387.3 829.4 100 100 10 4,000 106.7 667.9 3.5 12.4
Others 32 93.0 152.5 50 43 0 640 38.0 148.0 2.9 8.1
(a) Multiple modes exist. The smallest value is shown.
Total Number of People Involved with Accreditation: Master's Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 63 14.2 16.7 8 (a) 10 2 115 10.0 18.4 4.1 21.6
Faculty 62 41.1 31.4 30 32 0 150 33.1 49.1 1.3 1.6
Staff 64 24.8 37.2 3 15 0 250 15.5 34.1 4.1 21.6
Students 60 47.9 149.8 10 10 1 1,000 9.2 86.6 5.3 30.1
Others 56 14.2 17.9 10 10 0 100 9.4 19.0 3.1 12.0
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: Master's Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 57 1,866.8 2,414.2 2,000 1,000 20 12,000 1,226.2 2,507.3 2.3 6.5
Senior Admin 55 1,089.4 1,638.7 500 (a) 500 20 7,920 646.4 1,532.4 2.6 6.7
Faculty 55 2,505.5 4,373.3 500 750 0 20,800 1,323.2 3,687.7 3.0 9.4
Staff 56 1,913.0 2,525.0 1,000 750 0 10,000 1,236.8 2,589.2 1.6 1.4
Students 52 487.7 1,250.8 100 90 5 8,000 139.4 835.9 4.8 26.5
Others 52 251.4 1,113.5 10 (a) 38 0 8,000 -58.6 561.4 6.9 48.6
(a) Multiple modes exist. The smallest value is shown.
Total Number of People Involved with Accreditation: Baccalaureate Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 58 8.8 6.0 5 8 1 25 7.3 10.4 1.3 1.1
Faculty 58 42.9 43.8 20 25 0 200 31.4 54.4 1.6 2.5
Staff 58 14.2 17.4 4 8 1 100 9.6 18.7 2.7 9.8
Students 56 35.8 88.8 0 10 0 500 12.1 59.6 4.0 16.8
Others 50 17.8 39.5 3 5 0 250 6.6 29.0 4.7 25.2
Cumulative Hours Spent on Accreditation: Baccalaureate Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 55 1,443.2 1,972.2 1,500 650 0 11,250 910.1 1,976.4 3.2 12.3
Senior Admin 55 3,356.7 15,132.9 200 (a) 500 10 112,500 -734.3 7,447.7 7.2 52.8
Faculty 55 5,636.7 15,764.6 1,000 1,000 0 80,000 1,375.0 9,898.5 3.6 12.5
Staff 56 1,749.4 5,614.6 200 350 10 40,000 245.8 3,252.9 6.1 40.8
Students 53 406.1 1,348.6 100 50 0 8,000 34.4 777.8 4.6 22.0
Others 48 144.1 465.8 20 23 0 3,000 8.9 279.4 5.4 31.5
(a) Multiple modes exist. The smallest value is shown.
Total Number of People Involved with Accreditation: Special Focus Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 30 6.2 5.2 2 5 2 25 4.2 8.1 2.2 5.5
Faculty 30 24.5 45.1 3 12 3 250 7.7 41.3 4.6 23.3
Staff 30 8.7 10.3 5 5 1 50 7.8 12.5 2.6 8.1
Students 30 9.1 10.9 0 5 0 50 5.0 13.2 2.2 5.9
Others 30 6.0 6.1 0 5 0 20 3.7 8.2 0.9 0.1
Cumulative Hours Spent on Accreditation: Special Focus Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 29 974.5 1,207.0 2,000 450 5 5,000 515.4 1,433.6 2.0 4.1
Senior Admin 28 760.9 1,139.4 100 200 20 4,000 319.1 1,202.7 1.8 2.2
Faculty 28 584.1 863.3 100 200 30 3,552 249.3 918.8 2.5 6.1
Staff 28 762.3 1,313.0 100 250 20 6,000 253.1 1,271.4 2.9 9.2
Students 27 49.3 59.0 0 20 0 200 26.0 72.7 1.4 1.3
Others 26 50.5 126.0 0 18 0 650 -0.4 101.5 4.6 22.6
Means by Accreditation Region
Total Number of People Involved with Accreditation: MSCHE Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 49 13.6 17.9 12 10 2 115 8.4 18.7 4.3 22.1
Faculty 49 44.1 29.1 50 40 4 100 35.7 52.4 0.6 -0.6
Staff 49 26.8 39.5 15 (a) 15 2 250 15.5 38.2 4.2 21.6
Students 48 15.1 17.5 10 10 0 100 10.0 20.2 3.0 11.3
Others 47 10.7 16.6 2 (a) 6 0 100 5.8 15.6 4.1 19.3
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: MSCHE Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 45 1,451.8 1,839.4 400 500 0 7,500 899.2 2,004.5 1.5 1.7
Senior Admin 43 1,327.0 2,170.5 200 (a) 300 10 10,000 659.0 1,995.0 2.6 6.8
Faculty 42 2,800.6 6,355.7 1,000 1,000 10 40,000 820.1 4,781.2 5.2 30.1
Staff 43 2,134.1 6,184.6 500 500 10 40,000 230.7 4,037.4 5.7 35.4
Students 41 559.3 1,494.7 40 50 0 8,000 87.5 1,031.1 4.0 17.0
Others 41 190.0 511.6 50 (a) 50 0 3,000 28.5 351.5 4.6 23.9
(a) Multiple modes exist. The smallest value is shown.
Total Number of People Involved with Accreditation: SACS Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 103 11.3 8.2 4 (a) 8 1 36 9.7 12.9 1.1 0.3
Faculty 103 34.4 32.4 20 25 0 150 28.1 40.7 1.5 2.0
Staff 104 16.4 24.4 2 8 0 150 11.6 21.1 3.2 12.4
Students 101 45.0 133.4 10 10 0 1,000 18.6 71.3 5.0 29.2
Others 92 11.3 15.5 0 (a) 5 0 100 8.1 14.5 3.3 14.9
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: SACS Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 94 1,986.5 2,806.0 1,500 1,350 6 18,000 1,411.8 2,561.2 3.3 13.5
Senior Admin 95 2,171.8 11,538.1 100 (a) 500 20 112,500 -178.6 4,522.3 9.5 91.7
Faculty 95 3,616.9 11,295.6 1,000 600 0 80,000 1,316.1 5,917.7 5.3 30.3
Staff 95 2,331.8 5,050.2 200 (a) 600 0 35,000 1,303.0 3,360.6 4.7 25.9
Students 92 386.0 1,087.5 100 80 0 8,000 160.8 611.2 5.0 28.6
Others 84 158.8 872.8 0 30 0 8,000 -30.6 348.2 9.0 81.3
(a) Multiple modes exist. The smallest value is shown.
Total Number of People Involved with Accreditation: WASC Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
Senior Admin 41 8.7 8.3 2 5 1 40 6.1 11.4 2.0 4.3
Faculty 40 50.6 59.9 50 31 3 250 31.4 69.7 2.0 3.4
Staff 40 15.3 17.5 5 6 1 75 9.7 20.9 1.7 2.8
Students 39 20.0 22.4 10 (a) 10 0 100 12.8 27.3 2.0 4.0
Others 37 17.9 42.6 0 (a) 9 0 250 3.7 32.1 4.9 26.0
(a) Multiple modes exist. The smallest value is shown.
Cumulative Hours Spent on Accreditation: WASC Institutions
(Columns: Personnel category, N, Mean, Standard deviation, Mode, Median, Min, Max, 95% confidence interval lower bound, 95% confidence interval upper bound, Skewness, Kurtosis)
ALO 37 1,085.4 1,183.8 1,000 570 40 5,000 690.7 1,480.1 1.6 2.3
Senior Admin 36 768.7 1,098.6 100 215 15 4,000 397.0 1,140.4 1.9 2.8
Faculty 36 1,995.9 7,917.5 100 425 15 48,000 -683.0 4,674.8 5.9 35.4
Staff 36 921.8 2,029.2 100 250 20 10,000 235.2 1,608.4 3.5 12.5
Students 35 106.1 186.6 100 50 0 1,000 42.0 170.2 3.7 16.0
Others 33 95.5 222.0 10 25 0 1,000 16.8 174.2 3.2 9.9
Abstract
This mixed-methods study investigated the direct and indirect costs of institutional accreditation and the differences in those costs between types of institutions. Accreditation Liaison Officers (ALOs) at four-year institutions from three of the six regional accrediting agencies were surveyed. Statistically significant differences were found between the means of direct costs by Carnegie classification but not by accrediting region, whereas statistically significant differences were found between various means of the different categories of indirect costs by both Carnegie classification and accrediting region. Indirect costs were monetized and combined with direct costs to determine a cumulative cost. On this basis, a per-institution average of $327,254 (when calculated by accreditation region) and $341,103 (when calculated by Carnegie Basic classification) was identified, for a total expense to the higher education community in excess of $660 million per seven- to ten-year review cycle. Additionally, the indirect costs were found to amount to roughly four times the direct costs. Despite the high costs of accreditation, more than three times as many ALOs believed the costs of accreditation to be justified as did not. It was also possible to develop a general profile of the ALO as a highly committed and capable individual who is often overwhelmed professionally, and sometimes even personally, by the demands of accreditation. This has important implications for institutional administrators in determining what constitutes adequate support for this person, in terms of both the money and the time made available for the execution of his or her responsibilities.
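The headline figures in the abstract can be sanity-checked with simple arithmetic. The sketch below is not part of the dissertation; it only relates the per-institution averages to the community-wide total, and the implied institution counts are back-of-envelope inferences rather than the study's own population figures.

```python
# Back-of-envelope consistency check of the abstract's cost figures.
# The constants come from the abstract; the implied institution counts are
# an inference for illustration, not a figure reported by the study.
TOTAL_FLOOR = 660_000_000          # "in excess of $660 million" per review cycle

per_institution_average = {
    "by accreditation region": 327_254,
    "by Carnegie Basic classification": 341_103,
}

for label, average in per_institution_average.items():
    implied_count = TOTAL_FLOOR / average
    print(f"{label}: roughly {implied_count:,.0f} institutions implied")
    # prints roughly 2,017 and 1,935 institutions, respectively

# If indirect costs run roughly four times direct costs, direct costs are
# about one-fifth of the cumulative per-institution figure.
print(f"implied direct-cost share: ${327_254 / 5:,.0f} per institution")
```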