THE GOALS OF SPECIALIZED ACCREDITATION:
A STUDY OF PERCEIVED BENEFITS AND COSTS OF MEMBERSHIP WITH THE
NATIONAL ASSOCIATION OF SCHOOLS OF MUSIC
By
Jennifer L. Barczykowski
________________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2018
Copyright 2018 Jennifer L. Barczykowski
ACKNOWLEDGEMENTS
I would first like to thank both dissertation chairs with whom I had the pleasure of working
during this project, Dr. Robert Keim and Dr. Patricia Tobey. I would also like to thank my other
committee members, Dr. P.J. Woolston, Dr. Lawrence Picus, and Dr. Rebecca Lundeen. I am
incredibly grateful for their continuous support, motivation, and knowledge while guiding me
through this process.
I could not have done this without the support of my cohort, colleagues, friends, and
family. Many thanks to my work family for allowing me the time and flexibility to pursue this
goal. To my cohort, thank you for the laughs, tears, and wonderful memories. Many of you
became lifelong friends and I am so grateful to have gone on this journey with you. I have so
many friends I need to thank for their unwavering support, but a special thank you to Monique,
Sara, Scott, and Brandon for being there from start to finish.
Thank you to Mom, Dad, and Hannah for believing in me and not allowing me to give
up. Through all of the craziness, the three of you have been my rock and I cannot thank you
enough for your love and support.
ABSTRACT
This quantitative study examined institutional perceptions of the benefits and costs associated
with National Association of Schools of Music (NASM) membership/accreditation, while also
examining variance in perception based on institution type and regional accreditation status. This
study defined various aspects of the accreditation process of post-secondary music schools across
the United States. Despite the benefits of accreditation, it remains unclear whether the
established standards in the accreditation process are fair and consistent for all parties
concerned. Accrediting bodies must ensure that accreditation standards are relevant and meet the
needs of member institutions. The original purpose behind accreditation has been obscured, as
the emphasis has shifted to easily quantifiable outcomes. The evidence indicates that
standardization constrains rather than fosters creativity, as focus remains on what is easily
measurable rather than on what is not. Thus, the purpose of this study was to examine perceived benefits
and costs associated with NASM accreditation. A quantitative approach was employed to
explore the monetary and nonmonetary benefits of NASM membership, and open-ended
responses were used to support the quantitative results. Findings demonstrated that accreditation
was perceived to influence the credibility of the institution. Participants also viewed peer
evaluation and self-study as important strengths in ensuring quality of education, and they
perceived the accreditation process as appropriate for improving institutional effectiveness.
Further studies were recommended to understand the different
variables that influenced institutions to apply for accreditation.
TABLE OF CONTENTS
List of Tables ................................................................................................................................ vii
Chapter One: Overview of the Study .............................................................................................. 1
Introduction ......................................................................................................................... 1
Definition and Process of Accreditation ............................................................................. 3
Goals of Accreditation ........................................................................................................ 3
Statement of the Problem .................................................................................................... 5
Purpose of the Study ........................................................................................................... 5
Significance of the Study .................................................................................................... 6
Definitions........................................................................................................................... 8
Chapter Two: Literature Review .................................................................................................. 12
History of Accreditation ................................................................................................... 13
Early Institutional Accreditation ........................................................................... 13
Regional Accreditation, 1885 to 1920 .................................................................. 14
Regional Accreditation, 1920 to 1950 .................................................................. 15
Regional Accreditation, 1950 to 1985 .................................................................. 16
Regional Accreditation, 1985 to Present .............................................................. 18
Effect of Outcomes Assessment and Accreditation .......................................................... 20
Trend Toward Learning Assessment .................................................................... 20
Framework for Learning Assessment ................................................................... 21
Benefits of Accreditation on Learning .................................................................. 22
Organizational Effects of Accreditation ............................................................... 23
Future Assessment Recommendations ................................................................. 24
Challenges of Assessing Student Learning Outcomes.......................................... 25
Accountability and Accreditation ......................................................................... 30
Costs of Accreditation....................................................................................................... 32
International Accreditation in Higher Education .............................................................. 39
Internationalization of Accreditation ................................................................................ 41
Critical Assessment of Accreditation................................................................................ 42
Specialized Accreditation ................................................................................................. 47
National Association of Schools of Music (NASM) ........................................................ 48
Alternatives to Accreditation ............................................................................................ 50
Current State and Future of Accreditation ........................................................................ 53
Conclusion ........................................................................................................................ 57
Chapter Three: Methodology ........................................................................................................ 58
Rationale for the Study ..................................................................................................... 58
Research Design................................................................................................................ 59
Population and Sample ..................................................................................................... 61
Instrumentation ................................................................................................................. 61
Data Analysis .................................................................................................................... 62
Reliability .......................................................................................................................... 65
Validity ............................................................................................................................. 65
Limitations and Delimitations ........................................................................................... 66
Chapter Four: Results ................................................................................................................... 68
Characteristics of NASM Member Institutions ................................................................ 70
Response About NASM Accreditation Standards ............................................................ 72
Associations Between Perceived Costs and Benefits Associated With NASM
Membership/Accreditation With Institution Type and Regional Accreditation
Status ..................................................................................................................... 75
Summary of the Results .................................................................................................... 79
Chapter Five: Discussion .............................................................................................................. 81
Introduction ....................................................................................................................... 81
Discussion ......................................................................................................................... 83
Perceived Cost and Benefits of Accreditation ...................................................... 83
Perceived Effectiveness of Accreditation Standards ............................................ 87
Implications....................................................................................................................... 89
Limitations of the Study.................................................................................................... 92
Recommendations for Future Research ............................................................................ 93
Conclusion ........................................................................................................................ 95
References ..................................................................................................................................... 96
Appendices
Appendix A: National Association of Schools of Music Member Institutions .......................... 115
Appendix B: Survey Invitation Letter ......................................................................................... 126
Appendix C: Survey Invitation Follow Up Letter ...................................................................... 128
Appendix D: Survey Instrument ................................................................................................. 130
Appendix E: Survey on NASM Accreditation Standards & Effectiveness Codebook ............... 136
Appendix F: NASM Accreditation Standards Analysis Coding Scheme ................................... 141
LIST OF TABLES
Table 1: Survey Items in the NASM Accreditation Evaluation Standards Questionnaire ........... 62
Table 2: Frequency and Percentage Summaries of Characteristics of NASM Member
Institutions......................................................................................................................... 71
Table 3: Descriptive Statistics Summaries of Characteristics of NASM Member Institutions .... 72
Table 4: Frequency and Percentage Summaries of Responses About NASM Accreditation
Standards ........................................................................................................................... 74
Table 5: ANOVA Results of Associations Between Perceived Costs and Benefits Associated
With NASM Membership/Accreditation With Accreditation Affiliations ....................... 76
Table 6: ANOVA Results of Associations Between Perceived Costs and Benefits Associated
With NASM Membership/Accreditation With Dual Accreditation Institutions .............. 77
Table 7: ANOVA Results of Associations between Perceived Costs and Benefits Associated
With NASM Membership/Accreditation with Basic Carnegie Classification ................. 78
Table A1: List of National Association of Schools of Music Member Institutions ......................... 115
Table F1: NASM Accreditation Standards: Qualtrics Survey .................................................... 141
CHAPTER ONE: OVERVIEW OF THE STUDY
Introduction
Accreditation is a vital process that institutions undergo to ensure and maintain the
quality being provided by member institutions (Blauch, 1959; Newman, 1996; Pfnister, 1971;
Winskowski, 2012). Accreditation developed when institution leaders perceived the need for
self-study and peer regulation in a U.S. educational system that prospered despite the lack of
supervision from the federal government. The effects of accreditation, as a necessary process in
the educational system, entail changes in the quality assurance and accountability of the
educational institutions (Bresciani, 2006; Ewell, 2009). One major influence is the shift of focus
on greater accountability and learning assessment (Beno, 2004; Wergin, 2005, 2012), thereby
imposing higher standards of literacy, problem solving ability, and collaborative skills on the
students (Ewell, 2001). The effectiveness of the accreditation process thus hinges on the
learning outcomes of students.
Researchers have found several indicators to assess the influence of accreditation on the
internal and external processes of educational institutions (Procopio, 2010; Ruppert, 1994;
Volkwein, Lattuca, Harper, & Domingo, 2007). These include student-related factors, such as
essential learning outcomes and performance assessment, and organization-related factors, such
as perceptions of the staff and school leaders on the importance of accreditation (Procopio, 2010;
Ruppert, 1994; Volkwein et al., 2007). In addition, accreditation benefits institutions on an
organizational level by providing access to financial aid for students and faculty. Being a
member of an accrediting body also means increased exposure to a diverse set of institutions,
allowing students to have more opportunities for transfer and obtaining higher degrees
(Brittingham, 2009). Thus, institution leaders must continually develop and refine plans and
strategies to improve education quality (Beno, 2004).
Despite the benefits and positive influence of accreditation, several challenges remain
from processes imposed on the institution leaders seeking accreditation. The wide adoption of
the practice of assessment of learning outcomes as a major tenet of accreditation imposes
burdens on the institution to change its priorities and recognize greater accountability (Maki,
2010). However, the rigorous methods required of leaders are often left incomplete because of
labor constraints (Ewell, 2005). Institution leaders face few incentives to address the
deficiencies identified during the accreditation process, which breeds cynicism about whether the
current accreditation standards are sufficient for the purposes of quality assurance and
accountability (Ewell, 2005; Wolff, 2005). Another major challenge in accreditation is the
monetary and nonmonetary costs that institution leaders incur when seeking accreditation. In
essence, the improvement efforts these challenges demand are not followed through and are
usually abandoned once leaders have obtained accreditation.
The focus on process, rather than outcomes, is problematic because once requirements have
been satisfied, little is done to implement strategies for long-term improvement of the
educational system (Ewell, 2005). This aspect is also affected by factors, such as faculty buy-in
(Kuh & Ewell, 2010), lack of institutional investment (Beno, 2004), and the integration of the
accreditation standards into the local practice of institutions (Banta, 1993; Kuh & Ewell, 2010;
Lind & McDonald, 2003; Maki, 2010). Assessment is practiced for the sake of accreditation rather
than integrated into the policies and practices of member institutions. This discourages
institutions from adopting practices that are beneficial in the long term and calls into question
the relevance of accreditation in the context of educational institutions. In the next
subsections, the processes, goals, and
challenges of accreditation are discussed.
Definition and Process of Accreditation
Eaton (2012) stated, “Accreditation in the United States is about quality assurance and
quality improvement.” (p. 9). The accreditation process provides a process in which higher
education institution and program leaders can face evaluation to attain minimum criteria. With
six regional accreditation bodies—and more than 70 additional accreditation organizations of a
programmatic or professional nature—with a role in the accreditation of institutions or programs
at more than 7,000 institutions that contain close to 20 thousand programs, standards vary widely
between accreditation organizations (Council for Higher Education Accreditation [CHEA],
2011). Some have claimed that accreditation cannot keep pace with the complex, changing
nature of the higher education environment (Eaton, 2012). In addition, the diversity of missions
across institutions may allow for increased leniency and indecisiveness among
accrediting organizations.
Accreditation began as a voluntary method of quality assurance; however, because many
essential functions are dependent on accreditation, such as federal financial aid, higher education
institution leaders must maintain their regional accreditation status. High school students also use
accreditation to evaluate the effectiveness and legitimacy of the programs offered at each
institution. The quality of curriculum delivery, the management of areas requiring instructional
improvement, and student access to federal financial aid are all components that specific
accreditations can signal before a student selects and applies to a university, college, or other
institution of higher learning.
Goals of Accreditation
The accreditation process currently has four primary purposes: (a) It is a quality
assurance mechanism; (b) it is an instrument for institutional quality improvement; (c) it enables
and facilitates student mobility; and (d) it provides access to federal funding. Because of these
uses, acquiring and maintaining institutional accreditation is an absolute necessity for most
institutions of higher education.
The first purpose, quality assurance, was the concept behind the original ideas that
evolved into the current system of accreditation (Blauch, 1959; Newman, 1996; Pfnister, 1971;
Winskowski, 2012). The second purpose, quality enhancement, also derives from accreditation's
original explicit purposes (Zook & Haggerty, 1936). The demonstration of accredited status by an
institution indicates the institution maintains a minimum level of quality (Eaton, 2009), and
when implemented as intended, accreditation is a vehicle for institutional improvement (Ewell,
2008).
Accreditation serves the purpose of enabling student mobility by facilitating the transfer
of credit between institutions (American Council on Education, 2012; Blauch, 1959; Eaton,
2011a, 2011b; National Advisory Committee on Institutional Quality and Integrity, 2012;
Sibolski, 2012; Winskowski, 2012). Accredited status indicates the quality of instruction from
one institution is similar in rigor to that of another institution, and therefore can be relied on as
being adequate to fulfill similar curricular requirements, allowing students not to repeat
previously completed courses. This particular issue has become increasingly important in the last
decade, as college students have become increasingly mobile, earning credit from multiple institutions before
completing a degree (Ewell, 2008). The public understands accreditation primarily as the means
by which the federal government determines institutional eligibility for federal fund distribution
via students.
Title IV, part of the Higher Education Act of 1965, authorizes the distribution of federal
financial aid by higher education institutions. Title IV allows for the distribution of benefits to
students enrolled in postsecondary education. This distribution can occur in any of five ways,
including federal Pell grants, supplemental educational opportunity grants (SEOG), payments to
states “to assist them in making financial aid available,” “providing assistance to institutions of
higher education,” and the funding of “trio” programs (Gaston, 2013, pp. 6-7).
Statement of the Problem
The Stateside University (SU) Emerson School of Music has recently withdrawn from its
affiliation with the National Association of Schools of Music (NASM); according to its dean,
Steven Allen, this withdrawal might be the start of a trend. SU was the first founding member of
NASM to withdraw from the organization; Provincial University was the first member to withdraw,
in 2013, but it was not a founding member of NASM. In 1924, 16
members founded NASM. Their mission involved ensuring the quality of the curriculum for
music schools across the United States. NASM advertises, through its website, that it has 644
active members. With the decision of some members to withdraw recently, other members may
begin to evaluate their memberships.
Purpose of the Study
With the recent withdrawal of SU and Provincial University from NASM, this researcher
examined perceived benefits and costs associated with NASM accreditation. These benefits
included further examination of both monetary and nonmonetary benefits and costs. The
researcher utilized all members of NASM as participants in the study in the comparison of
institutional type, accreditation status, size, location, and additional demographic characteristics.
The utilization of a large-scale survey with open-ended questions provided the researcher an
opportunity to compare beliefs among institutions with varying demographics. The study also
provided the opportunity for member institutions to evaluate the NASM standards and report on
their ability to measure what music education professionals believed necessary to succeed in the
music industry. Thus, the purpose of the present study was to explore the perceived benefits and
costs associated with NASM accreditation.
Significance of the Study
With the changes in the music industry over the past few decades, music school leaders
must update their curriculum to meet the changing demands of the industry. NASM (2014)
claimed that membership in NASM signified a comprehensive understanding of the relationship
between the work of individual institutions and the entire community of institutions that prepared
musicians at the collegiate level. However, it was unclear if the established standards evaluated
programs in a fair and consistent way, with an understanding that programs varied in complexity,
and not all music schools were the same.
The standards, as presented by NASM, were seen as a framework, and each institution
was responsible for ensuring that its programs met the requirements outlined in the framework.
It was up to each institution to develop the best way to meet these standards, thus removing the
idea of complete standardization. This aspect put the responsibility solely on each institution in
addressing the complexities of changing student demographics, emerging trends in the music
industry, and developing and implementing technology. Moreover, no significant body of
literature addressed the value of current NASM accreditation standards. Therefore, this study
addressed this gap in the literature to develop an understanding of the current NASM standards,
whether the lack of standardization was seen as a benefit, and whether it was believed the NASM
model should be changed to meet current industry needs.
Research Questions
The purpose of this study was to investigate the perceived benefits of being accredited by
the National Association of Schools of Music (NASM). This study examined the following
research questions and hypotheses:
RQ1: What are the perceived costs and benefits associated with NASM
membership/accreditation?
H10a: There is no significant difference in perceived costs associated with NASM
membership/accreditation by institution type.
H11a: There is a significant difference in perceived costs associated with NASM
membership/accreditation by institution type.
H10b: There is no significant difference in perceived benefits associated with NASM
membership/accreditation by institution type.
H11b: There is a significant difference in perceived benefits associated with NASM
membership/accreditation by institution type.
RQ2: Do perceived costs and benefits associated with NASM membership/accreditation
vary by institution type and regional accreditation status?
H20a: There is no significant difference in perceived costs associated with NASM
membership/accreditation by regional accreditation status.
H21a: There is a significant difference in perceived costs associated with NASM
membership/accreditation by regional accreditation status.
H20b: There is no significant difference in perceived benefits associated with NASM
membership/accreditation by regional accreditation status.
H21b: There is a significant difference in perceived benefits associated with NASM
membership/accreditation by regional accreditation status.
Definitions
Accreditation: Accreditation refers to a quality review process conducted by professional
peers, whereby an institution or program is evaluated to determine whether it has a minimum
level of adequate quality.
Accrediting Body, Agency, or Association: An accrediting body, agency, or association is a
nongovernmental entity that sets standards for accreditation, administers the process of
accreditation, and provides assistance to institutions, programs, students, parents, and the public.
Accreditation Liaison Officer (ALO): ALO refers to a designated institutional
representative who is responsible for coordinating the accreditation effort with the accrediting
agency.
Benefits of accreditation: Benefits of accreditation refer to the advantages an institution
gains by having accreditation.
Cost of accreditation: Cost of accreditation entails the institutional commitment
regarding budgetary spending (direct costs) and time contributed (indirect costs) by the various
campus constituencies to the accreditation effort.
Council for Higher Education Accreditation (CHEA): CHEA is the national body that
coordinates advocacy efforts for accreditation and performs the function of recognizing
accrediting entities; CHEA reviews the effectiveness of accrediting bodies and primarily assures
the academic quality and improvement within institutions.
Gatekeeper: Gatekeeper refers to the role of accreditation regarding federal funding; in
order for an institution or program to qualify for the receipt of federal funds, it must be
accredited by a recognized accrediting body; thus, accreditation serves as a gatekeeper for those funds.
Institutional accreditation: Institutional accreditation involves the recognition of a
minimum level of adequate quality at the institutional level and without respect to individual
programs of study.
Middle States Commission on Higher Education (MSCHE): MSCHE refers to the
regional accreditor responsible for the institutional accreditation of schools in Delaware,
Maryland, New Jersey, New York, Pennsylvania, Puerto Rico, the U.S. Virgin Islands,
Washington DC, and select locations overseas.
Music unit: The music unit is a college, school, division, department, or program within a
postsecondary institution or a freestanding, independent, school of music.
National accreditation: National accreditation refers to the quality review, at either the
institutional level or the programmatic level, conducted on a national scope, rather than on a
regional or state scope.
National Advisory Committee on Institutional Quality and Integrity (NACIQI): NACIQI
refers to a congressionally established committee providing advisement to the secretary of
education on matters relating to accreditation and institutional eligibility for federal financial aid.
New England Association of Schools and Colleges (NEASC): NEASC is the
regional accreditor responsible for the institutional accreditation of schools in Connecticut,
Maine, Massachusetts, New Hampshire, Rhode Island, Vermont, and select locations overseas.
North Central Association of Colleges and Schools (NCA): NCA refers to the regional
accreditor responsible for the institutional accreditation of schools in Arizona, Arkansas,
Colorado, Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, New
Mexico, North Dakota, Ohio, Oklahoma, South Dakota, West Virginia, Wisconsin, and
Wyoming.
Northwest Commission on Colleges and Universities (NWCCU): NWCCU is the
regional accreditor responsible for the institutional accreditation of schools in Alaska, Idaho,
Montana, Nevada, Oregon, Utah, Washington, and select locations overseas.
Peer review: Peer review refers to the concept governing accreditation, whereby the
actual review of the self-study is conducted by knowledgeable professionals from like
institutions to ground the decision in legitimacy and credibility.
Programmatic accreditation (or specialized accreditation): Programmatic (or specialized)
accreditation is the recognition of a minimum level of adequate quality at the level of the
individual program of study, without regard to the rest of the institution as a whole.
Regional accreditation: Regional accreditation is quality review at the institutional level
that is conducted on a regional scope, rather than on a national or state scope.
Self-regulation: Self-regulation entails a concept where entities agree to govern
themselves and establish mechanisms and processes to do so; accreditation exemplifies the
concept of self-regulation.
Self-study: Self-study refers to a comprehensive review, usually lasting approximately a
year and a half to two years, resulting in a culminating document in which an institution or
program considers every aspect of its operation to determine whether it has adequate resources at
all levels to fulfill its clearly defined mission.
Site visit: A site visit is generally a 2- to 3-day period in which knowledgeable professionals
from like institutions visit an institution after reviewing its self-study to ascertain the accuracy of
the self-study and identify any concerns; subsequent to the site visit, the visiting team makes an
accreditation recommendation to the accrediting body; after which, the accrediting body
announces a formal decision.
Southern Association of Colleges and Schools (SACS): SACS refers to a regional
accreditor responsible for the institutional accreditation of schools in Alabama, Florida, Georgia,
Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, Virginia,
and select locations overseas.
U.S. Department of Education: The U.S. Department of Education is the arm of
the federal government concerned with education quality and access nationally.
Voluntary association: A voluntary association is an organization in which membership
is optional; accrediting bodies began as voluntary associations and continue to be so classified;
however, because eligibility for federal funding is tied to accreditation, many professionals
question whether accreditation is truly voluntary.
Western Association of Schools and Colleges (WASC): WASC is a regional accreditor
responsible for the institutional accreditation of schools in California, Guam, Hawaii, and the
Pacific Basin.
CHAPTER TWO: LITERATURE REVIEW
This chapter surveys the body of literature written on three broad areas applicable to the
scope of study:
1. The development of accreditation in the United States and its future direction;
2. The effects of learning outcome assessments in the United States, including the
development of outcomes assessment movement in higher education, its framework,
benefits, and challenges; and
3. Specialized/programmatic accreditation agencies in the United States, covering a
brief history of the various agencies.
The author also examined the history of NASM, its mission, and the self-study process.
The Development of Accreditation and the Effect of Outcomes Assessment and
Accreditation sections of this chapter were authored by students in the thematic group on
accreditation and outcomes assessment under the direction of Dr. Robert G. Keim, Chair of the
Thematic Group. The authors for each segment of these two sections included Nathan Barlow
and Rufus Cayetano for History of Accreditation, Deborah Kinley for Critical Assessment of
Accreditation, Jennifer Barczykowski for Costs of Accreditation, Benedict Dimapindan and
Winyuan Shih for Effect of Outcomes Assessment and Accreditation, Dinesh Payroda for
Specialized Accreditation, Richard May for Internationalization of Accreditation, Kris Tesoro for
Alternatives to Accreditation, and Jill Richardson for Future Direction of Accreditation. The
purpose of this literature review is to examine the current state of accreditation and assessment
practices by focusing on the accreditation process for music schools in the United States.
History of Accreditation
Early Institutional Accreditation
Accreditation has a long history among the universities and colleges of the United States,
dating back to the self-initiated external review of Harvard in 1642. This external review, done
only six years after Harvard’s founding, was intended to ascertain rigor in its courses by peers
from universities in Great Britain and Europe (Davenport, 2000; Brittingham, 2009). This type of
self-study is not only the first example in America of peer-review, but it also highlights the need
for self- and peer-regulation in the U.S. educational system due to the lack of federal
governmental regulation. This lack of federal government intervention in the evaluation process
of educational institutions is a main reason for the way accreditation in the United States
developed (Brittingham, 2009).
While the federal government does not directly accredit educational institutions, the first
example of an accrediting body was through a state government. In 1784, the New York Board
of Regents was established as the first regionally organized accrediting organization. The board
was set up like a corporate office with the educational institutions being franchisees. The board
created mandated standards that had to be met by each college or university if that institution was
to receive state financial aid (Blauch, 1959).
Not only did Harvard pioneer accreditation in the United States with its early external
review of its own courses, but also the president of Harvard University initiated a national
movement in 1892 when he organized and chaired the Committee of Ten. The Committee of Ten
was an alliance formed among educators (mostly college and university presidents). The goal
was to seek standardization regarding educational philosophies and practices in the United States
through a system of peer approval (Davis, 1945; Shaw, 1993).
Around this same time, different association and foundation leaders undertook an
accreditation review of educational institutions in the United States based on their own standards.
Associations, such as the American Association of University Women, the Carnegie Foundation,
and the Association of American Universities, would, for a variety of reasons and clienteles
(e.g., gender equality and professorial benefits), evaluate various institutions and
generate lists of approved or accredited schools. These association leaders responded to the
desire of their constituents to have accurate information regarding the validity and efficacy of the
different colleges and universities (Orlans, 1975; Shaw, 1993).
Regional Accreditation, 1885 to 1920
When these association leaders declined to broaden or continue their accrediting
practices, institution leaders began to unite together to form regional accrediting bodies to assess
secondary schools’ adequacy in preparing students for college (Brittingham, 2009). Colleges
were then measured by the quality of students admitted based on standards at the secondary
school level that were measured by the accrediting agency. The regional accrediting agencies
began to focus also on creating a list of colleges that were good destinations for incoming
freshmen. If an institution was a member of the regional accreditation agency, it was considered
an accredited college; more precisely, the institutions that belonged to an accrediting agency
were considered colleges, while those that did not belong were not (Blauch, 1959; Davis, 1932;
Ewell, 2008; Orlans, 1974; Shaw, 1993).
Regional accrediting bodies were formed in the following years: New England
Association of Schools and Colleges (NEASC) in 1885, the Middle States Association of
Colleges and Secondary Schools (MSCSS and Middle States Commission on Higher Education
[MSCHE]) in 1887, the North Central Association of Colleges and Schools (NCA) and the
Southern Association of Colleges and Schools (SACS) in 1895, the Northwest Commission on
Colleges and Universities (NWCCU) in 1917, and the Western Association of Schools and
Colleges (WASC) in 1924 (Brittingham, 2009). Regional accrediting associations created
instruments for establishing unity and standardization regarding entrance requirements and
college standards (Blauch, 1959). For example, in 1901, MSCHE and MSCSS created the
College Entrance Examination Board to standardize college entrance requirements. The NCA
also published its first set of standards for its higher education members in 1909 (Brittingham,
2009).
Although there were functioning regional accreditation bodies in most of the states, in
1910, the Department of Education created its own national list of recognized (accredited)
colleges. Because of the public’s pressure to keep the federal government from controlling
higher education directly, President Taft blocked the publishing of the list of colleges, and the
Department of Education discontinued the active pursuit of accrediting schools. Instead, it
reestablished itself as a resource for the regional accrediting bodies regarding data collection and
comparison (Blauch, 1959; Ewell, 2008; Orlans, 1975).
Regional Accreditation, 1920 to 1950
With the regional accrediting bodies in place, the idea of what constituted an accredited college
became more diverse (e.g., vocational colleges and community colleges). Out of the greater
differences among schools regarding school types and institutional purposes, a need arose to
apply more qualitative measures and to focus on high rather than minimum outcomes (Brittingham,
2009). As qualitative standards became the norm, school visits by regional accreditors became
necessary once a school demonstrated struggles. The regional organization leaders
began to measure success (and, therefore, grant accredited status) on whether an institution met
its own standards outlined in its own mission, rather than a predetermined set of criteria
(Brittingham, 2009). In other words, if a school did what it said it would do, it could be
accredited. The accreditation process later became a requirement for all member institutions.
Self- and peer-reviews, which became a standard part of the accreditation process, were
undertaken by volunteers from the member institutions (Ewell, 2008).
Accrediting bodies began to be challenged about their legitimacy in classifying colleges
as accredited or not. The Langer Case in 1938 is a landmark case that established the standing of
accrediting bodies in the United States. Governor William Langer of North Dakota lost in his
legal challenge of the NCA’s denial of accreditation to North Dakota Agricultural College. This
ruling carried over to other legal cases that upheld the decision that accreditation was a
legitimate, as well as a voluntary, process (Fuller & Lugg, 2012; Orlans, 1974).
In addition to the regional accrediting bodies, other associations were formed to regulate
the accrediting agencies themselves. The Joint Commission on Accrediting was formed in 1938
to validate legitimate accrediting agencies and discredit questionable or redundant ones. After
some changes to the mission and the membership of the Joint Commission on Accrediting, the
name was changed to the National Commission on Accrediting (Blauch, 1959).
Regional Accreditation, 1950 to 1985
The period 1950 to 1985 was coined the golden age of higher education and was marked
by increasing federal regulations. During this period, key developments in the accreditation
process included the standardization of the self-study; the execution of the site visit by
colleagues from peer institutions; and the regular, cyclical, visitation of institutions (Woolston,
2012). With the passage of the Veterans’ Readjustment Assistance Act of 1952, the U.S.
commissioner of education was required to publish a list of recognized accreditation associations
(Bloland, 2001). This act provided education benefits to veterans of the Korean War directly,
rather than to the educational institution that they attended, thereby increasing the importance of
accreditation as a mechanism for recognition of legitimacy (Woolston, 2012).
A more “pivotal event” occurred in 1958 with the National Defense Education Act’s
(NDEA) allocation of funding for NDEA fellowships and college loans (Weissburg, 2008).
NDEA limited participating institutions to those that were accredited (Gaston, 2014). In 1963,
the U.S. Congress passed the Higher Education Facilities Act, requiring that higher education
institutions receiving federal funds through enrolled students be accredited.
Arguably, the most striking expansion in accreditation's mission coincided with the
passage of the Higher Education Act (HEA) in 1965 (Gaston, 2014). In this legislation, Title IV
of HEA expressed the intent of Congress to use federal funding to broaden access to higher
education. According to Gaston (2014), having committed to this much larger role in
encouraging college attendance, the federal government found it necessary to affirm that
institutions benefitting from such funds were worthy of these funds. Around the same time, the
National Committee of Regional Accrediting Agencies (NCRAA) became the Federation of
Regional Accrediting Commissions of Higher Education (FRACHE).
The Higher Education Act was first signed into law in 1965. That law strengthened the
resources available to higher education institutions and provided financial assistance to students
enrolled at those institutions. The law was especially important to accreditation because it forced
the U.S. Department of Education (USDE) to determine and list a much larger number of
institutions eligible for federal programs (Trivett, 1976). In 1967, the National Committee on
Accrediting (NCA) revoked Parsons College's accreditation, citing "administrative weakness" and
a $14 million debt (Woolston, 2012, p. 20). The college appealed but the courts denied it on the
basis that the regional accrediting associations were voluntary bodies (Woolston, 2012).
The need to deal with a much larger number of potentially eligible institutions led the U.S.
commissioner of education to create, within the Bureau of Higher Education, the Accreditation
and Institutional Eligibility Staff (AIES) with an advisory committee. The purpose of the AIES,
which was created in 1968, was to administer the federal recognition and review process
involving the accrediting agencies (Dickey & Miller, 1972). In 1975, NCA and FRACHE
merged to form a new organization called the Council on Postsecondary Accreditation (COPA).
The newly created national accreditation association encompassed an array of types of
postsecondary education, including community colleges, liberal arts colleges, proprietary
schools, graduate research programs, Bible colleges, trade and technical schools, and home-study
programs (Chambers, 1983).
Regional Accreditation, 1985 to Present
Since 1985, accountability has been of central importance in the field of education. According to
Woolston (2012), key developments in the accreditation process during this period included
higher education's rising costs, resulting in high student loan default rates, as well
as increasing criticism for a number of apparent shortcomings, most notably a lack of
demonstrable student learning outcomes. At the same time, accreditation has been defended by
various champions of the practice. Congressional hostility reached a crisis stage in 1992
when Congress, in the midst of debates on the reauthorization of the Higher Education Act,
threatened to bring the role of the accrediting agencies as gatekeepers for financial aid to a close.
During the early 1990s, the federal government grew increasingly intrusive in matters
directly affecting accrediting agencies (Bloland, 2001). As a direct consequence, Subpart 1 of
Part H of the Higher Education Act amendments involved an increased role for the states in
determining the eligibility of institutions to participate in the student financial aid programs of
the aforementioned Title IV. For every state, this aspect meant the creation of a State
Postsecondary Review Entity (SPRE) that would review institutions that the USDE secretary
identified as having triggered such review criteria as high default rates on student loans (Bloland,
2001). The SPREs were short lived and, in 1994, were abandoned largely because of a lack of
adequate funding. The 1992 reauthorization also created the National Advisory Committee on
Institutional Quality and Integrity (NACIQI) to replace the AIES.
For several years, the regional accrediting agencies entertained the idea of pulling out of
COPA and forming their own national association. Based on dissatisfaction with the
organization, the regional accrediting agencies proposed a resolution to terminate COPA by the
end of 1993; following a successful vote on the resolution, COPA was effectively terminated (Bloland,
2001). A special committee, generated by the COPA plan of dissolution of April 1993, created
the Commission on Recognition of Postsecondary Accreditation (CORPA) to continue the work
of recognizing accrediting agencies (Bloland, 2001). However, CORPA was formed primarily as
an interim organization to continue national recognition of accreditation. In 1995, national
leaders in accreditation formed the National Policy Board (NPB) to shape the creation and
legitimation of a national organization overseeing accreditation. National leaders in accreditation
were adamant that the new organization should reflect higher education’s needs, rather than
those of postsecondary education. Following numerous intensive meetings, a new organization
named the Council for Higher Education Accreditation (CHEA) was formed in 1996 as the
official successor to CORPA (Bloland, 2001).
In 2006, the Spellings Commission argued that accreditation poses shortcomings
concerning the future of higher education (USDE, 2006, p. 7) and accused accreditation of being
both ineffective and a barrier to innovation. Since the release of the Spellings Commission
report, the next significant event on the subject of accreditation came during President Barack
Obama’s State of the Union Address on February 12, 2013. In conjunction with the president's
address, the White House released a nine-page document titled The President's Plan for a Strong
Middle Class and a Strong America. The document stated that the president was going to call on
Congress to consider value, affordability, and student outcomes in making determinations about
which colleges and universities receive access to federal student aid, either by incorporating
measures of value and affordability into the existing accreditation system or by establishing a
new, alternative system of accreditation that would provide pathways for higher education
models to receive federal student aid based on performance and results (White House, 2013b).
Effect of Outcomes Assessment and Accreditation
This section of the literature review will examine the effects of accreditation, focusing
primarily on the assessment of student learning outcomes. Specifically, outcome assessment
serves two main purposes: quality improvement and external accountability (Bresciani, 2006;
Ewell, 2009). Over the years, institutions of higher education have made considerable strides
regarding learning assessment practices and implementation. Despite such progress, key
challenges remain.
Trend Toward Learning Assessment
The shift within higher education accreditation toward greater accountability and student
learning assessment began in the mid-1980s (Beno, 2004; Ewell, 2001; Wergin, 2005, 2012).
During that time, higher education was portrayed in the media as “costly, inefficient, and
insufficiently responsive to its public” (Bloland, 2001, p. 34). The impetus behind the public’s
concern stemmed from two sources: first, the perception that students were underperforming
academically, and second, the demands of the business sector (Ewell, 2001). Employers and
business leaders expressed their need for college graduates who could demonstrate high levels of
literacy, problem solving ability, and collaborative skills to support the emerging knowledge
economy of the 21st century. In response to these concerns, institution leaders of higher
education started emphasizing student learning outcomes as the main process of evaluating
effectiveness (Beno, 2004).
Framework for Learning Assessment
Accreditation is widely considered a significant force behind advances in both student
learning and outcomes assessment. According to Rhodes (2012), in recent years, accreditation
has contributed to the proliferation of assessment practices, lexicon, and even products, such as
e-portfolios, which are used to show evidence of student learning. For example, Kuh and
Ikenberry (2009) surveyed provosts or chief academic officers at all regionally accredited
institutions granting undergraduate degrees; they found student assessment was driven more by
accreditation than by external pressures, such as government or employers. Another major
finding was that most institutions planned to continue their assessment of student learning
outcomes despite budgetary constraints. They also found that gaining faculty support and
involvement remained a major challenge—an issue that is examined in more depth later in this
section.
Additionally, college and university faculty and student affairs practitioners stressed how
students must now acquire proficiency in a wide scope of learning outcomes to address the
unique and complex challenges of today’s ever-changing, economically competitive, and
increasingly globalizing society adequately. In 2007, the Association of American Colleges and
Universities (AAC&U) published a report focusing on the aims and outcomes of a 21st century
collegiate education, with data gathered through surveys, focus groups, and discussions with
postsecondary faculty. Emerging from the report were four critical learning outcomes, which
include (a) knowledge of human cultures and the physical and natural world, through study in
science and mathematics, social sciences, humanities, history, languages, and the arts; (b)
intellectual and practical skills, including inquiry and analysis, critical and creative thinking,
written and oral communication, quantitative skills, information literacy, and teamwork and
problem-solving abilities; (c) personal and social responsibility, including civic knowledge and
engagement, multicultural competence, ethics, and foundations and skills for lifelong learning;
and (d) integrative learning, including synthesis and advanced understanding across general and
specialized studies (AAC&U, 2007, p. 12). With the adoption of such frameworks or similar
tools at institutions, accreditors can be well positioned to connect teaching and learning and, as a
result, better engage faculty to improve student learning outcomes (Rhodes, 2012).
Benefits of Accreditation on Learning
Researchers have focused on accreditation and student performance assessment, with
several pointing to benefits of the accreditation process. Ruppert (1994) conducted case studies
in 10 states (Colorado, Florida, Illinois, Kentucky, New York, South Carolina, Tennessee,
Texas, Virginia, and Wisconsin) to evaluate different accountability programs based on student
performance indicators. Ruppert concluded that planning processes integrated with institutional
efforts were a significant quality indicator for achieving state priorities.
Furthermore, researchers have also demonstrated how accreditation is helping shape
outcomes inside college classrooms. Specifically, Cabrera, Colbeck, and Terenzini (2001)
investigated the relationship between classroom practices and learning gains in professional
competencies among undergraduate engineering students. The study involved 1,250 students
from seven universities. The researchers found that the expectations of accrediting agencies
might be encouraging more widespread use of effective instructional practices by faculty.
Volkwein et al. (2007) measured changes in student outcomes in engineering programs,
following the implementation of new accreditation standards by the Accreditation Board for
Engineering and Technology (ABET). Based on the data collected from a national sample of
engineering programs, the authors noted the new accreditation standards were indeed a catalyst
for change, finding evidence that linked the accreditation changes to improvements in
undergraduate education. Students experienced significant gains in applying knowledge of
mathematics, science, and engineering; using modern engineering tools; using experimental
skills to analyze and interpret data; designing solutions to engineering problems; working in
teams and groups; communicating effectively; understanding professional and ethical
obligations; understanding the societal and global context of engineering solutions; and
recognizing the need for life-long learning. The authors also found that accreditation prompted
faculty to engage in professional development-related activity. Thus, the study showed the
effectiveness of accreditation as a mechanism for quality assurance (Volkwein et al., 2007).
Organizational Effects of Accreditation
Beyond student learning outcomes, accreditation also has considerable effects on an
organizational level. Procopio (2010) noted the process of acquiring accreditation influenced
perceptions of organizational culture. According to the study, administrators are more satisfied
compared to staff, and especially more so than faculty, when rating organizational climate,
information flow, involvement in decisions, and utility of meetings. Results of the study
indicated that institutional role was a key factor in considering changes in organizational culture
through accreditation (Procopio, 2010). Similarly, Wiedman (1992) described how the 2-year
process of reaffirming accreditation at a public university inspired the change of institutional
culture.
Meanwhile, Brittingham (2009) explained that accreditation offered organizational-level
benefits for colleges and universities. The commonly acknowledged benefits include students’
access to federal financial aid funding, legitimacy in the public, consideration for foundation
grants and employer tuition credits, positive reflection among peers, and government
accountability. However, Brittingham (2009) pointed out that several benefits are usually
overlooked. First, accreditation is cost-effective, particularly when contrasting the number of
personnel needed to carry out quality assurance procedures in the United States versus
internationally, where quality assurance is far more regulated. Second, Brittingham (2009)
argued that accreditation was related to good professional development because those who led a
self-study came to learn about their institution with more breadth and depth (p. 19). Third,
self-regulation by institutions, if done properly, is a better system compared to government
regulation. Lastly, Brittingham posited that accreditation enabled conditions that provided
students with mobility by gathering a diverse group of institutions under a single tent for
student transfer and opportunities to pursue higher degrees.
Future Assessment Recommendations
Many higher education institution leaders have developed plans and strategies to measure
student-learning outcomes; such assessments are already in use to improve institutional quality
(Beno, 2004). For future actions, the Council for Higher Education Accreditation, in its 2012
Final Report, recommended enhancing commitment to public accountability further:
Working with the academic and accreditation communities, explore the adoption and
implementation of a small set of voluntary institutional performance indicators based on
mission that can be used to signal acceptable academic effectiveness and to inform
students and the public of the value and effectiveness of accreditation and higher
education. Such indicators would be determined by individual colleges and universities,
not government. (p. 7)
In addition, Brittingham (2012) outlined three developments that had the capacity to
influence accreditation and increase its ability to improve educational effectiveness. First,
accreditation is growing more focused on data and evidence, which strengthen its value as a
means of quality assurance and quality improvement. Second, Brittingham posited that
technology and open-access education had the capability to change how one understood higher
education. These innovations, such as massive open online courses, hold enormous potential
to open up access to higher education. As a result, this trend will heighten the focus on student
learning outcomes. Third, accreditation's shift of focus toward accountability comes with
challenges to keeping and improving its emphasis on institutional and programmatic improvement
(Brittingham, 2012). This aspect becomes particularly important amid the current period of rapid
change.
Challenges of Assessing Student Learning Outcomes
Assessment is critical to the future of higher education. As noted earlier, outcome
assessment serves two main purposes: quality improvement and external accountability
(Bresciani, 2006; Ewell, 2009). The practice of assessing learning outcomes has been widely
adopted by colleges and universities since its introduction in the mid-1980s. Assessment is also a
requirement of the accreditation process. However, outcomes assessment in higher education
remains a work in progress, and there remain many challenges (Kuh & Ewell, 2010).
Organizational learning challenges. First, there is the issue of organizational culture and
learning. Assessment, as clearly stated by the American Association for Higher Education (AAHE,
1992), is a means to improve educational standards. The process of assessment is not an end in
itself. Instead, it provides an opportunity for continuous organizational learning and
improvement (Maki, 2010). Too often, institution leaders assemble and report mountains of
data just to comply with federal or state accountability policy or an accreditation agency's
requirements. However, after the report is submitted, the evaluation team has left, and the
accreditation is confirmed, there is little incentive to act on the findings for further improvement.
The root causes of the deficiencies identified are rarely followed up, and real solutions are never
sought (Ewell, 2005; Wolff, 2005).
Ewell (2005) pointed out another concern: once the assessment infrastructure was
established, accreditation agencies tended to emphasize the process rather than the outcomes.
Accreditors are satisfied with formal statements of learning outcomes and goals, but they do not
query further about how, how appropriately, and to what degree these learning goals are applied
in the teaching and learning process. As a result, the process tends to be single-loop learning,
in which changes reside at a surface level, instead of double-loop learning, in which changes are
incorporated into practices, beliefs, and norms (Bensimon, 2005).
Lack of faculty buy-in. A lack of faculty buy-in and participation is another hurdle in
the adoption of assessment practice (Kuh & Ewell, 2010). In a 2009 survey by the National
Institute for Learning Outcomes Assessment, two-thirds of all 2,809 surveyed schools noted that
more faculty involvement in learning assessment would be helpful (Kuh & Ikenberry, 2009).
According to Ewell (1993, 2002, 2005), several reasons explain why faculty are disinclined to be
directly involved in the assessment process. First, faculty view teaching and curriculum
development as their domain. The assessment of their teaching performance and student learning
outcomes by external groups can be viewed as an intrusion on their professional authority and
academic freedom. Second, faculty are deterred by the extra effort and time required for
engaging in outcome assessment and by the unconvincing perceived benefit. Furthermore,
external bodies impose the compliance-oriented assessment requirements, and most faculty
members participate in the process only indirectly. Faculty may have a different view on the
definitions and measures of "quality" compared to that of institutions or accreditors (Perrault,
Gregory, & Carey, 2002, p. 273). Finally, the assessment process incurs heightened work and
resources. To cut costs, administration at the institution completes the majority of the work.
Consequently, faculty perceive assessment as an exercise performed by administration for
external audiences, and they do not embrace the process.
Hutchings (2010) further echoed Ewell's (1993, 2002, 2005) observations by enumerating
four formidable obstacles to faculty engagement in assessment. First, assessment is an area with
which most faculty are not familiar. The language and activities of assessment, including
accounting, testing, evaluation, measurement, total quality management, and benchmarking, are
foreign to many faculty members. In addition, assessment is neither an area that faculty are
trained in during their education, nor is it an area in which they will invest professional
development effort. The third obstacle is that there is no incentive for faculty to be involved
in assessment; it is not tied to institutional reward systems, including promotion and tenure
deliberations. Finally, faculty have not seen sufficient evidence to show that assessment makes a
difference. When most faculty are stressed and pressed by increasing demands in teaching and
research, assessment takes a lower priority.
Lack of institutional investment. A shortage of resources and institutional support is
another challenge in the implementation of assessment practice. As commented by Beno (2004),
“[D]eciding on the most effective strategies for teaching and for assessing learning will require
experimentation, careful research, analyses, and time” (p. 67). With continuously dwindling
federal and state funding over the last two decades, higher education, particularly at public
institutions, has been stripped of the resources to support such an endeavor. A case in point is the
recession of the early 1990s.
Budget cuts forced many states to abandon the state assessment mandates that originated in the
mid-1980s and to switch to process-based performance indicators as a way to gain efficiency in
large public institutions (Ewell, 2005). The 2009 National Institute for Learning Outcomes
Assessment survey showed that the majority of surveyed institutions undercapitalized resources,
tools, and expertise for assessment work: 20% of respondents indicated they had no
assessment staff, and 65% had two or fewer (Kuh & Ewell, 2010; Kuh & Ikenberry, 2009). Beno
(2004) also described the resource issue as a challenge to improve how learning assessment
results were understood, so that institution leaders could identify ways to enhance student
learning and to change institutional policies through appropriate planning, proper allocation of
resources, and implementation of evidence-based strategies.
Difficulty with integration into local practice. Integrating the value of assessment and
institutionalizing its practice in daily operations can be another issue at many
institutions. In addition to redirected resources, leadership’s involvement and commitment,
faculty participation, and adequately prepared assessment personnel contribute to the success of
cultivating a sustainable assessment culture and framework on campus (Banta, 1993; Kuh &
Ewell, 2010; Lind & McDonald, 2003; Maki, 2010). Furthermore, assessment activities, imposed
by external authorities, tend to be implemented as an addition to, rather than an integral part of,
institutional practice (Ewell, 2002). Assessment, such as accreditation, is viewed as a special
process with its own funding and committee, instead of being part of regular business operations.
Finally, the work of assessment, program reviews, self-study, and external accreditation at
institutional and academic program levels tends to be handled by various offices on campus,
and coordinating that work can be another challenge (Perrault, Gregory, & Carey, 2002).
College leaders also tend to adopt an institutional isomorphic approach, modeling their
institutions after peers seen as more legitimate or successful in dealing with similar situations
and adopting widely used practices in order to gain acceptance (DiMaggio & Powell, 1983). As reported by Ewell
(1993), institutions are prone to “second-guessing” and adopting the type of assessment practice
acceptable to external agencies as a safe approach, instead of adopting or customizing one
appropriate to local needs and circumstances. Moreover, institutional isomorphism
offers a safer and more predictable route for institutions to deal with uncertainty and
competition, to conform to government mandates or accreditation requirements, or to abide by
professional practices (Bloland, 2001). However, the strategy of following the crowd may hinder
in-depth inquiry into a unique local situation, as well as the opportunity for innovation and
creativity. Furthermore, decision makers may be unintentionally trapped in a culture of doing
what everyone else is doing, without carefully examining their unique local situation and the logic,
appropriateness, and limitations behind the common practice (Miles, 2012).
Lack of assessment standards and clear terminology presents another challenge in
assessment and accreditation practice (Ewell, 2001). With no consensus on vocabulary, methods,
and instruments, assessment practices and outcomes can have limited value. As reported by Ewell
(2005), the absence of common outcome metrics makes it difficult for state authorities to aggregate
performance across multiple institutions and to communicate the outcomes to the public;
benchmarking is likewise impossible. Bresciani (2006) stressed the importance of
developing a conceptual definition, framework, and common language at the institutional level.
Outcome equity. Outcome assessment that focuses on students’ academic performance
while overlooking the equity and disparity of a diverse student population, as well as student
engagement and campus climate issues, is another area of concern. In discussing local financing
of community colleges, Dowd and Grant (2006) stressed the importance of including “outcome
equity” in addition to performance-based budget allocation (p. 20). Outcome equity pays
special attention to equal outcomes of educational attainment among populations of different
social, economic, and racial groups (Dowd, 2003).
Accountability and Accreditation
With an increase in the number of higher education institutions, rising costs, a push for
increased access, and the examination of the cost versus benefit ratio, there has been increased
scrutiny of the value of higher education (USDE, 2006). While there is a demand that institutions
demonstrate the value of the programs they offer, there is also a fear that this may lead to
standardization, much like what has happened in the K-12 system under the No Child Left
Behind Act (Eaton, 2003). Such standardization would affect institutional diversity and the ability of
individual institution leaders to maintain distinctive program offerings and mission statements
unique to each institution.
Ewell (2008) discussed the narrow role of accreditation within the context of
accountability. Accreditation began as self-regulation (quality assurance) among
professionals within the academic community. The original purpose behind accreditation has
been forgotten, as the emphasis currently rests on easily quantifiable outcomes. Currently,
researchers have argued that the emphasis on standardization causes constriction rather than
creativity, as the focus falls on items that are easily measurable rather than on those that are not
(Wilson, 2012, p. 41).
Tension between improvement and accountability. The tension among the equally
important goals of outcomes assessment, quality improvement, and external accountability can
be another factor affecting outcomes assessment practice. According to Ewell (2008a, 2009),
assessment practice evolved over the years into two contrasting paradigms. The first paradigm,
assessment for improvement, emphasizes constantly evaluating and enhancing processes and
outcomes, while the other paradigm, assessment for accountability, demands conformity to a set
of established standards mandated by the state or accrediting agencies. The strategies, the
instrumentation, the methods of gathering evidence, the reference points, and the ways results are
used in these two paradigms tend to sit at opposite ends of the spectrum (Ewell, 2008a,
2009).
In the improvement paradigm, assessment is used mainly internally to address
deficiencies and enhance teaching and learning; it requires periodic evaluation and formative
assessment to track progress over time. Conversely, in the accountability paradigm, assessment is
designed to demonstrate institutional effectiveness and performance to external constituencies
and to comply with predefined standards or expectations, and the process tends to be performed on
set schedules as a summative assessment. The contrasting nature of these two demands can create
tension and conflict within an institution, and an institution’s assessment program is unlikely
to achieve both objectives. Ewell (2009) further pointed out that institution leaders tended to be
more accountable when presented with a program that aimed to achieve both
accountability and improvement.
Transparency challenges. Finally, for outcome assessment to be meaningful and
accountable, the process and its information must be shared and open to the public (Ewell, 2005).
Accreditation has long been criticized as mysterious or secretive, with little information shared
with stakeholders (Ewell, 2010). In a 2006 survey, the Council for Higher Education
Accreditation reported that only 18% of the 66 accreditors surveyed publicly provided information
about the results of individual reviews; fewer than 17% provided a summary of
student academic achievement or program performance; and just over 33% offered
a descriptive summary of the characteristics of accredited institutions or programs (CHEA,
2006).
The progress of disclosing assessment outcomes has been slow. In a 2014 Inside Higher
Ed survey, only 9% of the 846 college presidents surveyed indicated that it was very easy to find
student outcomes data on their institution’s website, and only half of the respondents agreed that it
was appropriate for the federal government to collect and publish data on the outcomes of college
graduates (Jaschik & Lederman, 2014). With the public disclosure requirements of the No
Child Left Behind Act, there was an impetus for higher education and accreditation agencies to
be more open to the public and policy makers. It was expected that further openness would
contribute to more effective and accountable business practices, as well as the improvement of
educational quality.
Costs of Accreditation
Gaston (2014) discussed the various costs associated with accreditation. Institutions are
required to pay annual fees to the accrediting body. If an institution is applying for initial
accreditation, leaders must pay an application fee, as well as additional fees as they progress
through the process. The institution seeking accreditation also pays for any on-site
reviews. In addition to these “external” costs, internal costs must be calculated as well. These
internal costs can include faculty and administrative time invested in the assessment and self-
study, volunteer service in accreditation activities, preparation of annual or periodic filings, and
attendance at mandatory accreditation meetings (Gaston, 2014).
Costs of initial accreditation can vary greatly from region to region; regardless of the
region, however, the costs are substantial. It can cost an institution $23,500 to pursue initial
accreditation through the Higher Learning Commission (HLC), regardless of whether the pursuit is
successful. This figure does not include the costs associated with the three required on-site visits,
nor the dues that must be paid during the application and candidacy period. By comparison,
the applicant and candidacy fees for the Southern Association of Colleges and Schools
(SACS) are $12,500 (HLC Initial, 2012).
Shibley and Volkwein (2002) claimed there was limited research on the costs of
accreditation within the literature. Calculating the cost can be complex, as institution leaders
must evaluate both the monetary and nonmonetary costs of going through the accreditation process;
one of the most complex and difficult items to evaluate is time. Reidlinger and Prager (1993)
offered two reasons why thorough cost-based analyses of accreditation have not been
pursued. First, voluntary accreditation may be preferable to governmental control, making
accreditation worth the cost, whatever the price. Second, it is difficult to relate the perceived benefits
of accreditation to an actual dollar amount (Reidlinger & Prager, 1993, p. 39).
The Council for Higher Education Accreditation (CHEA) began publishing an almanac in
1997 and continues to release a revised version every two years. The almanac examines
accreditation practices across the United States at the macro level, reporting data such as the
number of volunteers, the number of employees, and the unit
operating budgets of the regional accrediting organizations. Little, if any, information is provided
on the costs incurred by individual institution leaders as they go through the accreditation process.
In 1998, the North Central Association of Colleges and Schools (NCACS) completed a
self-study, in which they examined the perception of accreditation costs among the institutions
within that region (Lee & Crow, 1998). The study revealed significant findings, including
variance in responses across institutional types. Research and doctoral institutions
were less apt to claim that benefits outweighed costs, and they responded less positively than
other types of institutions regarding the effectiveness and benefits of accreditation.
The study indicated that well-established research and doctoral institutions might already have
internal processes in place that served the traditional function of the accreditation process; in
which case, a traditional audit system could serve as an appropriate alternative to the formal
process by the regional accreditation organization. In looking at the results of all institutional
types, the self-study found that 53% of respondents considered the benefits of accreditation
outweighed the costs. Approximately 33% of respondents considered benefits of accreditation to
be equal to the costs. The remaining 13% of the respondents believed the costs of accreditation
outweighed the benefits.
There have been similar case studies done by Warner (1977) on the Western Association
of Schools and Colleges (WASC) and by Pigge (1979) on the Committee on Postsecondary
Accreditation. In both studies, cost was labeled as a significant concern of the accreditation
process. Accreditation results have also influenced budget allocations: Warner (1977)
found that approximately one-third of responding institutions had changed allocations
based on accreditation results, although this finding was not explored further. The majority of
respondents in the Warner (1977) and Pigge (1979) studies believed that despite the costs of
accreditation, the benefits outweighed the costs.
Wood (2006) developed a model of the three stages of preparation institutions go through
for accreditation, encompassing the release time required for
the various coordinators of the accreditation review; the monetary costs of training, staff support,
and materials; and the site visit of the accreditation team. Each of these stages triggers costs to the
institution. Willis (1994) also examined these costs but differentiated between direct
and indirect costs. Direct costs include accreditation fees, operating expenses specific to the
accreditation process, direct payments to individuals who participate in the process, self-study
costs, travel costs, and site visit costs. Indirect costs capture items such as time. Willis
described indirect costs as many times greater than direct costs because the former
consist largely of personnel time, and he cautioned that these costs should not be underestimated.
He noted that the normal tasks that could not be completed by individuals with accreditation
responsibilities were often redistributed to other individuals who were not identified as
participants in the accreditation process.
Kennedy, Moore, and Thibadoux (1985) attempted to establish a methodology for how
costs were determined, with particular interest given to monetizing time spent on the
accreditation process. They examined a period of approximately 15 months, from when the
initial planning of the self-study began through the presentation of the study. They used
time logs to gather data on time spent by faculty and administrative staff, and the return
rate for the logs was high (79% were fully completed, with a 93% return rate overall). After
reviewing the time logs, the researchers discovered the time spent by faculty and administrative
staff accounted for 94% of the total cost of the accreditation review, over two-thirds of which
was attributed to administrative staff. These figures demonstrated that time required by both
faculty and administrative staff was the most significant cost involved in the accreditation
process. The researchers concluded that this cost was not excessive, as there was a 7-year span
between each self-study review process.
Kells and Kirkwood (1979) conducted a study in the Middle States region that examined
the direct costs of participating in a self-study. Almost 50% of the respondents reported
spending under $5,000 on the self-study, an amount not considered excessive. The researchers
also determined that 100 to 125 people were typically directly involved in the self-study. The majority
of participants were faculty (41-50%), followed by staff (21-30%), with very few students. The size
of the institution was believed to have the greatest influence on the composition of the self-study
committee (the number of faculty versus staff), as well as on the cost of the self-study itself.
Doerr (1983) used a case study to explore the direct costs of accreditation and to examine
the benefits received when university executives wished to pursue additional
programmatic accreditations. He examined both the financial and opportunity costs of the
institutional accreditation granted by SACS and of four programmatic accreditations pursued by
the University of West Florida in 1981 to 1982. He assigned an average hourly wage to faculty
and administrative staff and added in the cost of material supplies, estimating the total direct
costs of accreditation for these reviews at $50,030.71. He also projected additional costs in the
following years, particularly membership fees for the accrediting organizations and the costs
associated with preparing for additional programmatic reviews. He concluded by examining the
opportunity costs, considering alternative ways this money might have been spent.
Shibley and Volkwein (2002) evaluated the benefits of a joint accreditation by
conducting a case study of a public institution in the Middle States region with multiple
accrediting relationships, including both institutional and programmatic reviews. They
confirmed what Willis (1994) had suggested: that time, rather than the effort to secure the
financial resources needed to sustain the self-study, was the more critical and burdensome
element of the self-study process (Shibley & Volkwein, 2002). They also found that the separate
accreditation processes had more benefits for individuals than the joint effort; however, the joint
process was less costly, and the sense of burden for participants was reduced.
Several studies have addressed the expense of accreditation and its value to institutions.
Britt and Aaron (2008) surveyed radiology programs without specialized accreditation. These
programs reported that the expense of accreditation was the primary factor in not pursuing it,
with the time required to go through the process a secondary consideration. Many respondents
indicated that a decrease in the expense would allow them to consider pursuing accreditation in
the future.
Bitter, Stryker, and Jens (1999) and Kren, Tatum, and Phillips (1993) considered
specialized accreditation for accounting. Both studies found that non-accredited programs
believed accreditation costs outweighed the benefits. Many program leaders claimed to follow
accreditation standards; however, because these programs did not go through the accreditation
process, there was no empirical way to verify whether they actually met the established
standards.
Cost is frequently cited as a reason why institution leaders have not pursued
accreditation. In addition to the direct costs of accreditation, resources, time, and energy spent are
also included. The Florida State Postsecondary Planning Commission (1995) defined costs in a
variety of ways, only sometimes including indirect costs as part of the definition. Benefits
may accrue to three groups: students, departments, and the institution. The commission
recommended that institution leaders seeking accreditation balance the direct and
indirect costs of accreditation against the potential benefits to each group before deciding
to pursue accreditation.
Because of the concerns of the higher education community and the research on the costs
associated with accreditation, both the National Advisory Committee on Institutional Quality and
Integrity (NACIQI) and the American Council on Education (ACE) published reports in 2012.
These reports called for a cost-benefit analysis of the accreditation process in an attempt to
reduce excessive and unnecessary costs. NACIQI recommended that data gathering be
responsive to standardized expectations and seek only information that was useful and could not
be found elsewhere (NACIQI Final, 2012, p. 7). The ACE (2012) task
force called for an evaluation of required protocols, such as the self-study, the extent and
frequency of on-site visits, expanded opportunities for the use of technology, greater reliance on
existing data, and the evaluation of potential duplication of requirements imposed by different
agencies and the federal government (pp. 26-27). The cost of accreditation can be
defined in many ways, encompassing both time and money; Schermerhorn, Reisch, and Griffith
(1980) indicated that the time commitment required of institutions preparing for accreditation
was one of the most significant barriers in the entire process.
Due to the limited amount of research on the cost-benefit analysis of the accreditation
process, Woolston (2012) conducted a study that distributed a survey to all regionally accredited
institutions of higher education in the United States that granted
baccalaureate degrees. The survey was sent via email to the primary regional Accreditation
Liaison Officer (ALO) at each institution and targeted four areas: demographic
information, direct costs, indirect costs, and an open-ended section allowing respondents to
explain the costs. Results showed that one of the most complicated tasks was determining the
monetary value of the time associated with going through the accreditation process. In the
open-ended responses, accreditation liaison officers indicated that two of the biggest benefits of
going through accreditation were self-evaluation and university improvement. Other themes
emerged as well, such as campus unity, outside review, the ability to offer federal financial aid,
reputation, sharing best practices, celebration, and the fear associated with not being accredited.
While participants agreed that accreditation costs were significant and excessive, many
accreditation liaison officers believed the costs were justified and that the benefits of accreditation
outweighed both the direct and indirect costs (Woolston, 2012).
International Accreditation in Higher Education
The United States has developed a unique accreditation process (Brittingham, 2009). The
most obvious difference between the United States and other countries is in the way education is
governed: in the United States, education is governed at the state level, whereas in other nations it is
often governed by a ministry of education (Ewell, 2008; Middaugh, 2012). Dill (2007) outlined
three traditional models of accreditation: (a) the European model of
central control of quality assurance by state educational ministries; (b) the U.S. model of
decentralized quality assurance, combining limited state control with market competition; and (c)
the British model, in which the state essentially ceded responsibility for quality assurance to
self-accrediting universities (Ewell, 2008; Middaugh, 2012). These models have been used in some
form by other nations in South America, Africa, and Asia. Historically, direct government
regulation (the European model) of higher education has been the most prevalent form of
institutional oversight outside of the United States (Dickeson, 2009).
The low level of autonomy historically granted to post-secondary institutions has
limited their ability to compete effectively against institutions in the United States and other
countries (Dewatripont et al., 2010; Jacobs & Van der Ploeg, 2006; Sursock & Smidt, 2010).
Overall, European institutions are found to suffer from poor governance and a lack of autonomy
and incentives for research (Dewatripont et al., 2010). Many European countries, such as France,
Germany, Italy, and Spain, have highly centralized systems of higher education (Van der
Ploeg & Veugelers, 2008). In addition, the level of governmental intervention inhibits European
universities from innovating and reacting quickly to changing demands (Van der Ploeg &
Veugelers, 2008). Moreover, European institutions with low levels of autonomy have
historically had little to no control over areas including hiring faculty, managing budgets, and
setting wages (Aghion et al., 2008). Thus, it is difficult for universities with low autonomy to
attract and retain the faculty needed to compete for top spots in global ranking indices (Aghion et
al., 2008; Dewatripont et al., 2010; Jacobs & Van der Ploeg, 2006).
However, some European nations, including Denmark, the Netherlands, Sweden, and the
United Kingdom, have undertaken serious reforms of their higher education systems. Not
surprisingly, universities with high autonomy in these countries show higher levels of research
performance than those in European countries with low levels of institutional autonomy
(Dewatripont et al., 2010). This sentiment is echoed by Aghion et al. (2008), who argued that
research performance (which drives academic prestige and rankings) suffers where institutional
autonomy is low.
While research on accreditation’s direct influence on student learning outcomes was
sparse, Jacobs and Van der Ploeg (2006) argued that the European system of greater regulation had
some benefits. They concluded that institutions in continental Europe provided better access for
students of lower socioeconomic status, better student completion outcomes, and even
lower spending per student.
Internationalization of Accreditation
Due to globalization, there is an increased focus on how to assure quality standards in
higher education across nations. Assessment frameworks are being initiated and modified to
meet these increased demands for accountability (World Bank, 2002). Researchers have
tried to compare these assessment trends across multiple countries. For example, Bernhard
(2011) conducted a comparative analysis of such reforms in six countries (Austria, Germany,
Finland, the United Kingdom, the United States, and Canada). Stensaker and Harvey (2011)
identified a growing trend that nations relied on forms of accreditation distinctly different from
the U.S. accreditation processes. Specifically, they identified the academic audit as an
increasingly used alternative in countries such as Australia and Hong Kong. Yung-chi Hou
(2014) examined challenges the Asia-Pacific region faced in implementing quality standards that
crossed national boundaries.
Another outcome of globalization is the internationalization of the quality-assurance
process itself. Rather than each nation setting its own assessment frameworks, international
accords are attempting to bridge academic quality issues between nations. Student mobility
across national borders has driven this need for “international mutual accreditation networks”
(Van Damme, 2000, p. 17). Many loosely connected, or entirely unconnected, initiatives have
formed over the last decade.
The United Nations Educational, Scientific and Cultural Organization (UNESCO, 2005)
has begun the discussion on guidelines for international best practices in higher education. The
International Network for Quality Assurance Agencies in Higher Education (INQAAHE) is a
network of quality assurance agencies aimed to help ensure cross-border quality assurance
measures. Public-policy led initiatives in Europe include the establishment of the “European
Standards and Guidelines for quality assurance in higher education (ESG) in the framework of
the Bologna Process” (Cremonini et al, 2012, p. 17). The CHEA International Quality Group
(CIQG) provides a forum to discuss quality assurance issues in an international context.
In conclusion, the U.S. system of accreditation has served as a model for higher
education assessment worldwide. Nonetheless, there is considerable difference in how other
nations govern quality assurance. While internationalization of the higher education accreditation
process will continue to increase, the precise frameworks used to achieve cross-national quality
standards remain undetermined. For the immediate future, nation leaders will continue to use
their own frameworks for accreditation. International accreditation processes may eventually
supersede these existing frameworks, but not anytime soon.
Critical Assessment of Accreditation
Accreditation appears to have evolved from the simpler days of semi-informal peer assessment
into a burgeoning industry of detailed analysis, student learning outcomes assessment, quality
and performance review, financial analysis, public attention, and all-around institutional scrutiny
(Bloland, 2001; Burke & Minassians, 2002; McLendon, Hearn, & Deaton, 2006; Zis, Boeke, &
Ewell, 2010). Public scrutiny of institutions to establish their worth and contribution to student
learning, along with progressively regulated demands for institutional proof of success through
evidence and assessment, has changed accreditation and created a vacuum of knowledge about how
accreditation actually works in practice (Commission on the Future of Higher Ed, 2006;
Dougherty, Hare, & Natow, 2009; Leef & Burris, 2002). WASC-ACCJC’s recent history has
demonstrated profound changes in practices (e.g., updated standards for accreditation and a
rising rate of institutional sanctions) and the need for more information concerning the
relationship of accreditation to institutional data (Baker, 2002). Initial data collection found that
55% of all California community colleges have been sanctioned once since 2002 (Moltz, 2010).
Measures of inputs and outputs, local control versus governmental review, performance
funding versus institutional choice, rising demands, and institutional costs make it difficult to
understand the trends and movement of regional accreditation in the United States;
nevertheless, these factors greatly influence how accreditation standards are actually implemented
at real-world institutions (Leef & Burris, 2002). Researchers have called for increased public
transparency of accreditation findings and actions, including full publication of reports by the
commission and by the institutions in question. For example, some institutions are sanctioned for
deficiencies and may be given a detailed list of reporting deadlines to show compliance and
ongoing quality review for those areas noted as lacking. Some correspondence between
accreditation commissions and institutions is public, whereas other correspondence is private;
this semipublic nature of accreditation has been a point of contention in the literature on
accountability and assessment (Eaton, 2010; Ikenberry, 2009; Kuh, 2010). Moreover,
WASC-ACCJC (2011) has been at the center of controversy during the past 10 years, due mainly to its
increased emphasis on student learning outcomes compliance (WASC, 2002). There is much
debate on whether student learning outcomes are an appropriate and adequate measure of
education; whether they encroach on the purview of faculty members; and whether they are truly
in the best interest of students, best practices, and learning (Eaton, 2010).
Accreditation has evoked emotional opposition since its inception, and much has been
expressed in colorful language. Accreditation has been accused of “[benefiting] the small, weak,
and uncertain” (Barzun, 1993, p. 60). It is a “pseudo-evaluative process” (Scriven, 2000, p. 272),
designed to give a façade of self-regulation without actually suffering the inconvenience of
such a system. It is perceived as “grossly unprofessional evaluation” (Scriven, 2000, p. 271), and,
according to Scriven (2000), it is not surprising that many states do not strongly enforce
accreditation. In addition, accreditors are accused of making the accreditation process a “high-wire”
effort for institutions (American Council of Trustees and Alumni, 2007, p. 12). The system
of accreditation is also seen as subverting the welfare of institutions and undermining the public
by focusing on the interests of a few (American Medical Association, 1971, p. F-3). Critics further
charge that accreditation standards have become too low (American Council of Trustees and
Alumni, 2007) and that accreditation is responsible for the “homogenization of education” and the
“perseverance in the status quo” (Finkin, 1973, p. 369).
Consequently, such a system ignores the welfare of the institutions’ constituents,
especially the students (Learned & Wood, 1938). It does not promote positive institutional
outcomes; instead, it consists of activities and processes that are more archaic than progressive,
and it does not address the actual challenges that institutions encounter in obtaining accreditation
(Dickeson, 2006). Wriston (1960) even described the accreditation standards as a comparison
between apples and grapes, which failed to ensure the quality of education that students received
(Gruson, Levin, & Lustberg, 1979). Despite the millions of taxpayer dollars spent,
accreditation has failed to ensure educational diversity and authentic reform due to its incapacity
to function as a regulatory standard for many institutions (Finn, 1975).
The renewal of the Higher Education Act in 1992 came during a time of heightened
government concern over increasing defaults in student loans. Again, concerned about the lack
of accountability demonstrated by accreditation, this legislation established a new institution: the
State Postsecondary Review Entity (SPRE; Ewell, 2008). The creation of these agencies was
intended to shift the review of institutions for federal aid eligibility purposes from regional
accreditors to state governments. This direct threat to accreditation led to the dissolution of the
Council on Postsecondary Accreditation (COPA) and the proactive involvement of the higher
education community, resulting in the creation of the Council for Higher Education Accreditation
(CHEA). The issue of cost ultimately led to the abandonment of the SPREs when legislation
failed to provide funding for the initiative (Ewell, 2008). The governmental concern did not
dissipate, however, and in 2006, the U.S. Department of Education released a report by what
became known as the Spellings Commission, which criticized accreditation for being both
ineffective and a barrier to innovation (Eaton, 2008, 2012b).
Other concerns are evident. It is problematic when accreditation is considered a chore to
be accomplished as quickly and painlessly as possible, rather than an opportunity for genuine
self-reflection for improvement. Moreover, institutional self-assessment is ineffectual when there
is faculty resistance and a lack of administrative incentive (Bardo, 2009; Commission on
Regional Accrediting Commissions, n.d.; Driscoll & De Noriega, 2006; Rhodes, 2012; Smith &
Finney, 2008; Wergin, 2012). One of the greatest stresses on accreditation is the tension between
assessment for the purpose of improvement and assessment for the purpose of accountability,
two concepts often seen as operating in irresolvable conflict with each other (American Association for
Higher Education, 1997; Burke & Associates, 2005; Chernay, 1990; Ewell, 1984, 2008; Harvey,
2004; National Advisory Committee on Institutional Quality and Integrity, 2012; Provezis, 2010;
Uehling, 1987b). However, some have argued that the two can be effectively coordinated for
significant positive results (Brittingham, 2012; El-Khawas, 2001; Jackson, Davis, & Jackson,
2010; Walker, 2010; Westerheijden, Stensaker, & Rosa, 2007; Wolff, 1990). Another concern
involves the way that being held to external standards undermines institutional autonomy, which
is a primary source of strength in the American higher education system (Ewell, 1984).
The Spellings Commission report detailed a new interest from the U.S. Department of
Education in analyzing the status quo of regional accreditation commissions (Commission on the
Future of Higher Education, 2006). Ewell (2008) described the report as a scathing rebuke of
regional accreditors’ inability to innovate and of their hindrance to quality improvement. Others have
called for an outright “end… to the accreditation monopoly” (Neal, 2008, p. 20).
There have been increasing calls within the last several years, even since the Spellings
report of 2006, to reform or altogether replace accreditation as it is currently known (American
Council of Trustees and Alumni, 2007; Gillen, Bennett, & Vedder, 2010; Neal, 2008). The
American Council on Education (2012) recently convened a task force composed of national
leaders in accreditation to explore the adequacy of the current practice of institutional
accreditation. They recognized the difficulty of reaching a consensus on many issues but
recommended strengthening and reinforcing the role of self-regulation in improving academic
excellence. The Spellings Commission report signaled federal interest in setting the stage for
new accountability measures in higher education, raising the worst fears of some defenders of a
more autonomous, peer-regulated industry (Eaton, 2003). Accreditation, which had emphasized
the value and enhancement of individual institutions against regional standards, was now being
pressed to take on accountability roles for the entire sector of U.S. higher education (Brittingham, 2008).
Specialized Accreditation
Specialized accreditation focuses on the specialized training and knowledge needed for
professional degrees and careers. Examples include the Accreditation Council for Pharmacy
Education (ACPE), the Accrediting Council on Education in Journalism and Mass Communications
(ACEJMC), the Council on Accreditation of Nurse Anesthesia Educational Programs (CoA-NA),
the Council on Social Work Education Office of Social Work Accreditation (CSWE), and the
Teacher Education Accreditation Council, Inc. (TEAC), a few of the 60 institutional and
programmatic accrediting organizations recognized by the Council for Higher Education
Accreditation (CHEA), which is associated with 3,000 degree-granting colleges and universities
(2013-2014 Directory of CHEA-Recognized Organizations). Programmatic accreditation is granted
and monitored by national organizations, unlike regional accrediting organizations (e.g., Western
Association of Schools and Colleges [WASC], Southern Association of Colleges and Schools
[SACS], and North Central Association of Colleges and Schools), which are organized by
geographic region (Adelman & Silver, 1990; Eaton, 2009; Hagerty & Stark, 1989).
As noted in a Global University Network for Innovation (2007) publication,
institutional accreditation must attend to academic programs, rather than ignore them, to be
effective, while programmatic accreditation must support the overall institutional accreditation
goals; the two thus work hand in hand for overall institutional success. Coordinating
accreditation efforts where possible can be cost effective because the processes of regional and
programmatic accreditation overlap. However, the review process and resource allocation can
become complicated because of their overwhelming nature (e.g., see good practices suggested
by the Western Association of Schools and Colleges, 2009; Shibley & Volkwein, 2002).
Programmatic accrediting organizations recognized by CHEA affirm that the standards
and processes of the accrediting organization remain consistent with the academic quality,
improvement, and accountability expectations that CHEA has established (http://www.chea.org).
Institutions acknowledge the pressure of meeting not only institutional accreditation requirements
but also specialized accreditation requirements for individual programs, thereby upholding the
efficacy of students and professions (Bloland, 2001). Specialized program accreditation is
important to institutional quality assurance because differences among individual programs
within a single institution can be greater than differences between entire institutions. The
credibility of program accreditation review is strengthened because it focuses on particular areas
of study and is carried out by colleagues from peer institutions who are specialists in the
discipline (Ratcliff, 1996).
Research on program accreditation suffers from the same lack of volume and rigor as
research on institutional accreditation. Strong faculty involvement and instruction have been
linked to individual program accreditation (Cabrera et al., 2001; Daoust, Wehmeyer, & Eubank,
2006). Other researchers have focused on student outcomes in measuring competencies and
found that program accreditation does not provide enough support for student success (Hagerty
& Stark, 1989). Program accreditation outlines the parameters of professional education (Ewell,
Wellman, & Paulson, 1997; Hagerty & Stark, 1989) and upholds national professional
standards (e.g., American Accounting Association, 1977; Bardo, 2009; Floden, 1980; Raessler,
1970); these roles call for further empirical research on specialized accreditation, given its
importance to students’ educational and professional achievement.
National Association of Schools of Music (NASM)
Established in 1924 with 24 founding members, NASM currently has 644 accredited
members. The mission statement of the NASM includes three basic purposes:
• To advance the cause of music in American life, and especially in higher education;
• To establish and maintain threshold standards for the education of musicians, while
encouraging diversity and excellence; and
• To provide a national forum for the discussion of issues related to these purposes.
At the core of NASM’s activities is accreditation. While it is not the sole purpose of
NASM, one of its primary functions involves evaluating and ensuring the quality of the
programs offered by its member institutions. NASM’s accreditation procedures involve
orientation and training sessions, institutional self-study, on-site evaluation, and review by an
accrediting commission.
NASM (2014) outlined the five components of the accreditation process, which included
standards, self-study, on-site review, commission action, and obligations. Standards represent
threshold conditions for offering various types of music degrees and other credentials. These
provide a framework for institutions to approach content in an individualized way that will meet
the needs of students. There are not specific methods outlined for implementation; however,
established basic competencies must be met by those who obtain a degree. For example, the self-
study process is designed so that institutions can compare their institutional practices against
NASM standards and their own personal goals. It allows individual institution leaders to identify
strengths, areas for improvement, and aspirations for future development.
Trained NASM evaluators conduct the on-site review to verify the self-study. A “visitors’
report” of their findings is prepared and given to the accreditation commission and the
institution. The institution is given an opportunity to respond to any perceived errors or to note
any changes made since the visit occurred.
Following the review of the self-study, the NASM visitors’ report, and any optional response,
the commission will decide on the accreditation status of an institution. The status of an
accreditation review is then published in the NASM directory. Accredited institutions are
required to submit an annual Higher Education Arts Data Services (HEADS) report. This
includes enrollment, faculty members and salaries, administrative processes, budget, presentation
information, and ratios.
Alternatives to Accreditation
As the role of accreditation is highlighted within the United States, one must review the
alternatives to the current system that have been proposed in previous years. Generally speaking,
the alternatives that scholars and administrators have proposed share the common theme of
increased government involvement (at either the state or federal level). To illustrate this notion,
Orlans (1975) described the development, at the national level, of a Committee for Identifying
Useful Postsecondary Schools that would allow accrediting agencies to focus on a wider range
of schools. This committee was part of Orlans’s broader idea that increased competition among
accrediting agencies would advance education further. Trivett (1976) described a triangular relationship
between accrediting agencies, state governments, and federal governments. Trivett (1976) said
the following:
In its ideal form, the states establish minimum legal and fiscal standards, compliance
with which signifies that the institutions can enable a student to accomplish his objectives
because the institution has the means to accomplish what it claims it will do. Federal
regulations are primarily administrative in nature. Accrediting agencies provide depth to
the evaluation process in a manner not present in either the state or federal government’s
evaluation of an institution by certifying academic standards. (p. 7)
Trivett’s (1976) statement speaks to the ever-present relationship among accrediting
agencies, state governments, and the federal government. Each participating body, from the
accrediting agency to the institution, has a role in improving the accreditation process. Thus,
accreditation entails not only surface-level quality assurance but also the development of a
process that addresses regulatory standards for member institutions.
Harcleroad (1976, 1980) identified six different methods for accreditation in his writings:
Three advocated an expansion of responsibility for state agencies; one called for an expansion
of federal government responsibility; and the remaining two asked for a modification of the
present system (by increasing staff members or auditors) or keeping the present system in place.
Harcleroad (1976, 1980) suggested that both regional and national association leaders should
seek to develop and refine their processes to increase the objectivity of the methods. These
methods, proposed by Harcleroad (1976, 1980), clearly demonstrate a preference for increased
state government involvement within the accreditation process. Harcleroad (1976) also spoke
about using educational auditing and accountability as an internal review to increase both
external accountability and internal quality. This concept was modeled after the auditing system
developed by the Securities and Exchange Commission (SEC) that was used to accredit
financial organizations (Harcleroad, 1976).
Another example of internal and external audits was demonstrated by the proposals in the
essay produced by three scholars (Graham, Lyman, & Trow, 1995). The essay (known as the Mellon
report) was the result of a grant funding the study of accountability of higher education
institutions to their three major constituencies (students, government, and the public; Bloland,
2001). This essay emphasized the notion that accountability had both an internal and external
aspect, and the authors suggested that institutions conduct internal reviews (primarily within
their teaching and research units) every 5 to 10 years (Bloland, 2001). Once this internal review
was completed, an external review would then be conducted in the form of an audit on the
procedures of the internal review (Bloland, 2001). Specifically, this external audit would be
conducted by regional accrediting agencies, while institutional accrediting agencies were
encouraged to pay close attention to the internal processes to determine if the institution had the
ability to learn and address its weaknesses (Bloland, 2001). These concepts surrounding auditing
were later explored by other authors and, most recently, were linked to discussions regarding the
future of higher education accreditation (Bernhard, 2011; Burke & Associates, 2005; Dill,
Massy, Williams, & Cook, 1996; Ewell, 2012; Ikenberry, 2009; Western Association of Schools
and Colleges, 1998; Wolff, 2005).
In examining alternatives to accreditation, one must note that alternative programs have
been established by regional accreditors as enhancements to current accreditation processes. For
example, in 2001, the Higher Learning Commission (an independent commission within the
North Central Association of Colleges and Schools) established an alternative assessment for
institutions that have already been accredited: the Academic Quality Improvement Program
(AQIP). According to Spangehl (2012), this process instilled the notion of continuous quality
improvement through processes that would provide evidence for accreditation. An example
of AQIP offering continuous improvement for higher education institutions is its encouragement
of institutions to use various categories (e.g., the Helping Students Learn category, which allows
institution leaders to monitor their ongoing program and curricular design continuously) to
stimulate organizational improvement (Spangehl, 2012).
Another example of an alternative program is the Quality Enhancement Plan (QEP) used
by the Southern Association of Colleges and Schools (Jackson et al., 2010; Southern Association
of Colleges and Schools, 2007). The QEP was adopted in 2001 and is defined as an additional
accreditation requirement intended to help guide institutions to produce measurable improvement
in the areas of student learning (Jackson et al., 2010). A few common themes of student learning
that have been utilized by institutions (through the use of QEP) include inspiring student
engagement, encouraging critical thinking, and promoting international tolerance (Jackson et al.,
2010).
This section has offered a glimpse into the alternatives to accreditation that have been
proposed and implemented in the past. Although many have criticized accreditation, the
prevailing view is that accreditation is a critical piece of academia and is vital to accomplishing
the goals of institutional quality assurance and accountability (Bloland, 2001).
Current State and Future of Accreditation
Accreditation in higher education is at a crossroads. Since the Spellings Report was
released in 2006, which called for more government oversight of accreditation to ensure public
accountability, the government and critics have begun scrutinizing a system that had been
nongovernmental and autonomous for several decades (Eaton, 2012). The U.S. Congress is
currently in the process of reauthorizing the Higher Education Act (HEA), and it is expected that
leaders will address accreditation head-on. All the while, CHEA and other accreditation
supporters have been attempting to convince Congress, the academy, and the public of
accreditation’s current and future relevance in quality higher education.
In anticipation of the HEA’s reauthorization, the National Advisory Committee on
Institutional Quality and Integrity (NACIQI, 2012) was charged with providing
the U.S. secretary of education with recommendations on recognition, accreditation, and student
aid eligibility. The committee advised that accrediting bodies should continue their gatekeeping
role for student aid eligibility but also recommended some changes to the accreditation process.
These changes included more communication and collaboration between accreditors, states, and
the federal government to avoid overlapping responsibilities; moving away from regional
accreditation and toward sector or mission-focused accreditation; creating an expedited review
process and developing more gradations in accreditation decisions; developing more cost-
effective data collection and consistent definitions and metrics; and making accreditation reports
publicly available (NACIQI, 2012).
However, two members of the committee did not agree with the recommendations and
submitted a motion to include the Alternative to the NACIQI Draft Final Report, which
suggested eliminating accreditors’ gatekeeping role; creating a simple, cost-effective system of
quality assurance that would revoke financial aid from campuses that are not financially secure;
eliminating the current accreditation process altogether as a means of reducing institutional expenditures;
breaking the regional accreditation monopoly; and developing a user-friendly, expedited
alternative for the re-accreditation process (NACIQI, 2012). The motion failed to pass, and the
alternative view was not included in NACIQI’s (2012) final report. As a result, Hank Brown
(2013), the former U.S. Senator from Colorado and founding member of the American Council
of Trustees and Alumni, drafted a report seeking accreditation reform and reiterating the
alternatives suggested above, because accreditation had “failed to protect consumers and
taxpayers” (p. 1).
The same year the final NACIQI (2012) report was released, the American Council on
Education’s (ACE, 2012) Task Force on Accreditation released its own report that identified
challenges and potential solutions for accreditation. The task force made six recommendations:
(a) increase transparency and communication, (b) increase the focus on student success and
institutional quality, (c) take immediate and noticeable action against failing institutions, (d)
adopt a more expedited process for institutions with a history of good performance, (e) create
common definitions and a more collaborative process between accreditors, and (f) increase cost-
effectiveness (ACE, 2012). They also suggested that higher education should address the
deficiencies of the processes (ACE, 2012).
President Obama has also recently spoken out regarding accountability and accreditation
in higher education. In his 2013 State of the Union address, Obama (2013a) asked Congress to
“change the Higher Education Act, so that affordability and value are included in determining
which colleges receive certain types of federal aid” (para. 39). The address was followed by The
President’s Plan for a Strong Middle Class and a Strong America, which suggested achieving
the above change to the HEA
either by incorporating measures of value and affordability into the existing accreditation
system; or by establishing a new, alternative system of accreditation that would provide
pathways for higher education models and colleges to receive federal student aid based
on performance and results. (Obama, 2013b, p. 5)
Furthermore, in August 2013, President Obama (2013c) called for a performance-based rating
system that would connect institutional performance with financial aid distributions. Because
accreditation was not specifically mentioned in his plan, it is not clear whether the intention is to
replace accreditation with this new rating system or to use both systems simultaneously (Eaton,
2013b).
The president’s actions over the last year have CHEA and other supporters of
nongovernmental accreditation concerned. Calling it the “most fundamental challenge that
accreditation has confronted to date,” Eaton (2012, p. 20) has expressed concern over the
standardized and increasingly regulatory nature of the federal government’s influence on
accreditation. Astin (2014) also stated that if the U.S. government created its own process for
quality control, the U.S. higher education system would be “in for big trouble” (para. 9), akin to
the government-controlled Chinese higher education system.
Though many agree there will be an inevitable increase in federal oversight after the
reauthorization of the HEA, supporters of the accreditation process have offered
recommendations for minimizing the effect. Gaston (2014) provided six categories of
suggestions, which included stages for implementation: consensus and alignment, credibility,
efficiency, agility and creativity, decisiveness and transparency, and a shared vision. The
categories maintain the aspects of accreditation that have worked well and are sought around
the world (nongovernmental peer review) while addressing the areas receiving the most
criticism. Eaton (2013a) added that accreditors and institutions must push for streamlining
of the federal review of accreditors to reduce federal oversight, better communicate the
accomplishments of accreditation and how quality peer-review benefits students, and anticipate
any further actions the federal government may take.
While the HEA undergoes the process of reauthorization, the future of accreditation
remains uncertain. There have been many reports and opinion pieces on how accreditation
should change and/or remain the same, many of them containing overlapping themes. Only time
will tell if the accreditors, states, and the federal government reach an acceptable and functional
common ground that ensures the quality of U.S. higher education into the future.
Conclusion
It has been three decades since the birth of the assessment movement in U.S. higher
education, and a reasonable amount of progress has been made (Ewell, 2005). Systematic
assessment of student learning outcomes is now a common practice at most institutions, as
reported by three nationwide surveys. The 2008 survey performed by the Association of American
Colleges and Universities (AAC&U) reported that 78%
of the 433 surveyed institutions have a common set of learning outcomes for all their
undergraduate students, and 68% of the institutions assess learning outcomes at the departmental
level (Hart Research Associates, 2009a). Similarly, the 2009 National Institute for Learning
Outcomes Assessment (NILOA) survey found that 74% of the 1,518 surveyed institutions adopted
common learning outcomes for all undergraduate students, and most institutions conducted
assessments at both the instructional and program level (Kuh & Ikenberry, 2009). In a
subsequent NILOA survey in 2014, the number of institutions with common outcomes
assessment rose to 84% (Kuh et al., 2014).
As the public concern about the performance and quality of American colleges and
universities continues to grow, it is more imperative than ever to embed assessment in the
everyday work of teaching and to use assessment outcomes to further improve practice, to inform
decision makers, to communicate effectively with the public, and to be accountable for preparing
the nation’s learners for the knowledge economy. With effort, transparency, continuous
improvement, and responsiveness to society’s demands, higher education institution leaders can
regain trust from the public.
CHAPTER THREE: METHODOLOGY
Rationale for the Study
Accreditation and assessment arose to ensure quality, improvement, performance,
student outcomes, and the transparency of data within higher education.
Accreditation has become increasingly important to institution leaders, who have been
required to demonstrate the accountability and validity of the programs they offer.
Accreditation status not only influences an institution’s reputation but also has significant
financial implications. These financial implications include both costs associated with going
through the accreditation process and the ability to receive federal funding, such as federal
student financial aid.
The purpose of this study was to investigate the perceived benefits of being accredited by
the National Association of Schools of Music (NASM). This study examined the following
research questions and hypotheses:
RQ1: What are the perceived costs and benefits associated with NASM
membership/accreditation?
H10a: There is no significant difference in perceived costs associated with NASM
membership/accreditation by institution type.
H11a: There is a significant difference in perceived costs associated with NASM
membership/accreditation by institution type.
H10b: There is no significant difference in perceived benefits associated with NASM
membership/accreditation by institution type.
H11b: There is a significant difference in perceived benefits associated with NASM
membership/accreditation by institution type.
RQ2: Do perceived costs and benefits associated with NASM membership/accreditation
vary by institution type and regional accreditation status?
H20a: There is no significant difference in perceived costs associated with NASM
membership/accreditation by regional accreditation status.
H21a: There is a significant difference in perceived costs associated with NASM
membership/accreditation by regional accreditation status.
H20b: There is no significant difference in perceived benefits associated with NASM
membership/accreditation by regional accreditation status.
H21b: There is a significant difference in perceived benefits associated with NASM
membership/accreditation by regional accreditation status.
Overall, this study was concerned with identifying trends among various types of
institutions regarding the research questions outlined above. These institutions varied by size,
mission, location, and degrees/programs offered, as well as by the type of accreditation
maintained by each institution.
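The null and alternative hypotheses above imply a standard test of differences in group means. As a minimal illustration only (the data, group names, and two-group setup below are invented for this sketch and are not study results), Welch's t statistic can be computed for hypothetical Likert-scale perceived-cost scores from two institution types:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical perceived-cost scores (1-5 Likert scale) by institution type
public_scores = [3, 4, 4, 5, 3, 4]
private_scores = [2, 3, 3, 2, 4, 3]

t = welch_t(public_scores, private_scores)
# Compare |t| against the critical value for the chosen alpha to decide
# whether to reject the null hypothesis of no difference by institution type.
print(round(t, 2))
```

With more than two institution types, the analogous procedure would be a one-way ANOVA rather than a pairwise t test.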
Research Design
This study used a quantitative research approach to examine institutional perceptions of
the benefits and costs associated with being regionally accredited, while also being affiliated
with the NASM. This research approach involved using primarily quantitative data with
qualitative data from open-ended items as additional support to explain or explore a
phenomenon. A quantitative research approach was appropriate for this study because
quantitative data from survey responses provided answers to the research questions, whereas
open-ended responses provided additional qualitative insights to support the quantitative results.
The multiple sources of data were integrated to triangulate the findings, thereby supporting the
validity of each data source with another data source (Yin, 2009). The results of the qualitative
analysis also provided further observations, elucidating the results of the quantitative analysis.
Quantitative studies involve numbers and any factor that is measurable (Moghaddam &
Moballeghi, 2008). Quantitative methodologies are used when the objective is to investigate
relationships between two or more variables or if an independent variable affects a dependent
variable (Babbie, 2012). The variables for this study were measured quantitatively to be
statistically tested.
Conversely, qualitative studies involve the collection of non-numerical, non-statistical
data (Denzin, 2012). A qualitative study is conducted in an attempt to understand the attitudes,
behaviors, motivations, and concerns of a targeted research group (Babbie & Benaquisto,
2009). A qualitative component was appropriate for generating findings based on participants’
experiences because a purely quantitative method would not allow the researcher to analyze
responses to open-ended questions.
This study consisted of a survey that was emailed to NASM member institutions (n =
647). This researcher-developed survey collected data on the perceived costs and benefits of
maintaining accreditation with NASM. The survey also asked participants to give their feedback
on the NASM accreditation evaluation standards. The survey included both Likert-scale
questions and open-ended questions. This survey provided a large data set, allowing for strong
measures of validity. The survey allowed for a comparison of responses between those
individuals who were both regionally accredited and members of NASM with the responses of
those who were accredited solely by NASM. The open-ended questions allowed the researcher to
consider common themes among institutions, allowing the opportunity to make
recommendations on ways in which to improve the current NASM evaluation/accreditation
process.
Population and Sample
The population for the study comprised all members of NASM: schools accredited solely by NASM, as well as institutions accredited by one of the six major regional accrediting bodies recognized by the Council for Higher Education Accreditation (CHEA) that maintained affiliation with NASM. NASM, established in 1924, had 647 accredited institutional members at the time of the study (n = 647). The study focused on this population because it was the focus of the research questions regarding the NASM accreditation process and the perceived value of affiliation/membership.
Instrumentation
Researchers have identified surveys as “a remarkably useful and efficient tool for
learning about people’s opinions and behaviors” (Dillman, Smyth, & Christian, 2009, p. 1). A
survey was the most efficient way to collect data from this large population, with intentions of
revealing trends on the perceived value of NASM accreditation/affiliation. A survey about
NASM accreditation evaluation standards was used (see Table 1). Items 1, 2, 3, 4, 7, 8, 9, 10, 11, and 12 were closed-ended questions with Likert-type or categorical response scales. Conversely, Questions 5 and 6, which asked about the perceived costs and benefits associated with NASM membership/accreditation, were open-ended. Questions 1, 2, 3, 4, 7, 8, 9, 10, and 11 also included an open-ended component, as participants could provide comments on each of these items.
Table 1
Survey Items in the NASM Accreditation Evaluation Standards Questionnaire
Construct Question
Prestige/Credibility 1
Membership Status 2
Quality of Education/Curriculum 3
Cost vs. Benefit 4, 5, 6
Effectiveness of Accreditation Process 7, 8, 9, 10
Standards 11
Qualitative Participation 12
To maximize response rates, participants were purposefully selected, and the survey was delivered to the specific individual serving as each school of music's accreditation liaison officer (ALO). To establish trust, the researcher identified herself as a doctoral student conducting research at Stateside University, and she communicated the importance of the survey by illustrating the value of the results of this research for the time and resources of music schools across the United States. The researcher ensured the confidentiality
and security of collected data by describing the steps taken to maintain confidentiality. To
establish the benefits of participation, the researcher provided information about the survey by
describing how the results would contribute to increased knowledge on the topic. The researcher
offered to share survey results as an incentive for survey participation. To increase participation,
the researcher provided a survey link directly in the invitation email, avoided confusing
language, made the survey relatively short and easy to finish, identified the time expected in
order to finish the survey, and minimized questions seeking personal or sensitive information, as
suggested by researchers (Dillman et al., 2009).
Data Analysis
Data from survey responses were analyzed using qualitative and quantitative methods.
The analysis was organized into three steps, which were (a) statistical analysis of closed-choice
survey items, (b) coding and content analysis of open-ended survey items, and (c) utilization of
themes from the open-ended items to support the quantitative findings. Quantitative analysis
included descriptive and inferential statistical analyses. Descriptive statistics were calculated to summarize the responses to the question items in the survey questionnaire, which was administered in Qualtrics, on NASM accreditation evaluation standards. The responses to these survey questions were summarized using frequency and percentage summaries.
Questions 5 and 6, which asked about the perceived costs and benefits associated with NASM
membership/accreditation, were open-ended questions. The summaries of the responses in these
questions addressed Research Question 1.
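As a concrete illustration of the frequency and percentage summaries described above, the following minimal sketch tallies responses to a single closed-ended item. It is written in Python purely for illustration (it is not the study's analysis script, and the survey responses shown are hypothetical):

```python
# Hypothetical sketch: frequency (n) and percentage summaries for one
# closed-ended, Likert-type survey item. Responses are invented examples.
from collections import Counter

responses = ["Agree", "Strongly agree", "Agree", "Neutral/No Opinion",
             "Strongly agree", "Disagree", "Agree"]

counts = Counter(responses)
total = len(responses)
# Map each response choice to (frequency, percentage of all responses).
summary = {choice: (n, round(n / total * 100, 1)) for choice, n in counts.items()}
for choice, (n, pct) in summary.items():
    print(f"{choice}: n = {n}, {pct}%")
```

The same kind of tally underlies the frequency and percentage tables reported in Chapter Four.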
To analyze open-ended survey responses, an inductive content analysis approach was
used. Content analysis refers to the process of categorizing qualitative textual data into clusters
of similar entities or conceptual categories to identify consistent patterns and relationships
between variables or themes (Given, 2008). The method is a way of reducing data and making
sense or meaning of these data (Given, 2008). An inductive approach, as opposed to a deductive one, moves from general familiarization with the data through multiple reviews, to identifying categories of meaning within participants' responses, and finally to themes related to accreditation.
Open coding was conducted to comb through the responses to the open-ended questions to obtain the thematic categories and the constituents of each category, yielding themes that summarized the open-ended data. Open coding refers to the part of analysis that deals with the labeling and categorizing of phenomena, as indicated by the data. Researchers stated, “[Coding] represents
the operations by which data are broken down, conceptualized, and put back together in new
ways. It is the central process by which theories are built from data” (Strauss & Corbin, 1990, p.
57).
Open coding was accomplished by segregating the open-ended survey data into words,
phrases, sentences, or paragraphs to emphasize the functional relation between parts and the
whole of the responses from the open-ended questionnaire. Codes were labels for assigning
meaning to the descriptive information compiled from the open-ended questions. A narrative of the themes that emerged from the open-ended survey data was presented to support the quantitative results from closed-ended survey responses in addressing Research Question 1. The frequencies of responses for each thematic category of perceived costs and benefits were tallied and presented.
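The tallying of thematic categories described above can be sketched as follows. This is a hypothetical illustration in Python; the code labels are invented for the example and do not come from the study data:

```python
# Hypothetical sketch: counting how often each thematic code appears
# across open-coded responses. Code labels below are invented examples.
from collections import Counter

# One list of assigned codes per respondent's open-ended answer.
coded_responses = [
    ["prestige", "cost_of_fees"],
    ["prestige", "curriculum_guidance"],
    ["cost_of_fees"],
    ["curriculum_guidance", "prestige"],
]

# Flatten the per-respondent code lists and tally each theme.
theme_counts = Counter(code for codes in coded_responses for code in codes)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
```

In practice, such tallies are produced after the codebook has stabilized, so that each frequency reflects a consistently applied category.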
To address Research Question 2, this researcher determined if perceived costs and
benefits associated with NASM membership/accreditation varied by institution type and regional
accreditation status. An Analysis of Variance (ANOVA) was conducted to determine
associations between perceived costs and benefits associated with NASM
membership/accreditation with institution type and regional accreditation status. An ANOVA
was conducted to determine whether there were statistically significant differences between the means of two or more independent (unrelated) groups, using a significance level of 0.05. A difference in perceived costs and benefits associated with NASM membership/accreditation by institution type or regional accreditation status was considered significant if the p-value of the F statistic generated in the ANOVA was less than or equal to the significance level. Where significant associations were observed, mean comparisons of the ratings of the perceived costs and benefits across the different groupings of institution type and regional accreditation status were conducted to determine the degree of the associations.
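For readers unfamiliar with the procedure, the F statistic underlying a one-way ANOVA can be sketched as follows. This is a minimal pure-Python illustration; the ratings and group labels are fabricated, and the actual analyses would be run in statistical software that also reports the p-value:

```python
# Minimal sketch of the one-way ANOVA F statistic: the ratio of
# between-group to within-group mean squares. Data are fabricated.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for lists of numeric ratings."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: squared deviations of group means
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

group_a = [4, 5, 4, 3, 5]   # e.g., hypothetical NASM + regional ratings
group_b = [3, 4, 3, 4, 2]   # e.g., hypothetical NASM-only ratings
f_stat, df_b, df_w = one_way_anova_f([group_a, group_b])
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")
```

A larger F indicates that group means differ by more than within-group variability would suggest; the p-value for that F against the F(df_between, df_within) distribution is then compared to the 0.05 significance level.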
Reliability
Quantitative reliability tests whether the measures of a study are consistent and accurate; the smaller the error in a test, the more reliable it is (Robinson Kurpius & Stafford, 2006;
Salkind, 2011). The instrument was designed to reduce four kinds of survey error, including
sampling error, coverage error, non-response error, and measurement error, as identified by
Dillman et al. (2009). Qualitative reliability is a reflection of consistency throughout a study
(Creswell, 2009). Reliability in qualitative research also refers to dependability of study findings
(Given, 2008). To ensure dependability of open-ended survey responses, the same collection and
analysis procedures were conducted for responses of each participant. To maximize qualitative
reliability, the same invitation, survey, and follow-up communications were sent to all
participants.
Validity
Quantitative validity is a measure of how well a study measures what it was designed to
measure (Robinson Kurpius & Stafford, 2006; Salkind, 2011). Validity was maximized by
constructing the survey instrument to ensure that the only questions asked were directly related
to the research questions. Each question was carefully and simply worded to avoid confusion and
to maximize respondent participation. The survey was given to faculty from the researcher’s
program and the ALO of the researcher’s home institution to evaluate the validity of the
questions used. Qualitative validity is a reflection of measures that are taken by the researcher to
ensure accuracy of the findings (Creswell, 2009). To ensure validity of open-ended survey
responses (Creswell, 2009), the researcher was cognizant of the coding process to ensure there
was no shift in the meaning of codes used.
Limitations and Delimitations
The first assumption and limitation in this study was that all participants answered the survey questions honestly. The researcher had no control over whether the responses of the representatives of NASM member institutions were accurate. Because no sensitive information was requested of participants, they had little reason to be evasive in their answers.
The second assumption and limitation in the study was that the quantitative design, supplemented by open-ended responses, would allow for sufficiently in-depth data collection to yield a comprehensive understanding of institutional perceptions of the benefits and costs associated with being regionally accredited while also being affiliated with NASM. One of the main characteristics of quantitative case studies with
supplemental qualitative data was use of multiple data sources, and this flexibility should yield
comprehensive information about the topic being investigated (Yin, 2009). However, one
limitation of a quantitative study with supplemental open-ended responses was that combining qualitative and quantitative methods can yield unrelated results. The qualitative data were therefore used to expand on the findings of the quantitative instrument. Results obtained through qualitative methods would supplement the findings from
the quantitative data by identifying structural constraints of which the participants might not
have been aware.
Researcher bias could especially influence the qualitative aspect of this research. Steps
taken to avert researcher bias included a process of critical reflection and the double-checking of
open-ended survey data to reduce the potential for misinterpretation. Using multiple data source
methods controlled bias, as consistency across data collection methods was prioritized in the
analysis and presentation of results. Personal biases in the interpretation of the open-ended responses were mitigated through open coding, which captured every common theme or category in the responses.
CHAPTER FOUR: RESULTS
The purpose of this quantitative study was to investigate the perceived benefits of being
accredited by the National Association of Schools of Music (NASM). This study examined the
following research questions:
RQ1: What are the perceived costs and benefits associated with NASM
membership/accreditation?
H10a: Perceived costs associated with NASM membership/accreditation do not vary significantly by institution type.
H11a: Perceived costs associated with NASM membership/accreditation vary significantly by institution type.
H10b: Perceived benefits associated with NASM membership/accreditation do not vary significantly by institution type.
H11b: Perceived benefits associated with NASM membership/accreditation vary significantly by institution type.
RQ2: Do perceived costs and benefits associated with NASM membership/accreditation
vary by institution type and regional accreditation status?
H20a: Perceived costs associated with NASM membership/accreditation do not vary significantly by regional accreditation status.
H21a: Perceived costs associated with NASM membership/accreditation vary significantly by regional accreditation status.
H20b: Perceived benefits associated with NASM membership/accreditation do not vary significantly by regional accreditation status.
H21b: Perceived benefits associated with NASM membership/accreditation vary significantly by regional accreditation status.
Overall, this study was concerned with identifying trends among various types of
institutions in relation to the research questions outlined above. These institutions varied by size,
mission, locations, and degrees/programs offered, as well as differences among the type of
accreditation maintained by each institution. The survey instrument is provided in Appendix D.
The survey was created to gather demographic information, as well as inferential statistical data
that could be used to analyze the current self-review and peer-evaluation process that was
required of all NASM member institutions.
This chapter presents the findings from the researcher-developed survey that was emailed to NASM member institutions (n = 647). Of these, 128 member institutions started the survey and 86 completed it. The data collected provided a detailed description of the perceived costs and benefits of maintaining accreditation with NASM. The survey also asked participants for feedback on the NASM accreditation evaluation standards. The survey included both Likert-scale and open-ended questions, and the analysis included both descriptive and inferential statistics. The analysis was organized into three
steps, which were (a) statistical analysis of closed-choice survey items, (b) coding and content
analysis of open-ended survey items, and (c) utilization of themes from the open-ended items to
support the quantitative findings.
This survey provided a large data set, thereby allowing for strong measures of validity.
The survey allowed for a comparison of responses between those individuals who were both
regionally accredited and members of NASM with the responses of those who were accredited
solely by NASM. The open-ended questions allowed the researcher to consider common themes
among institutions, thereby allowing the opportunity to make recommendations on ways in
which to improve the current NASM evaluation/accreditation process.
Characteristics of NASM Member Institutions
The following tables display the characteristics of the 128 schools that were NASM member institutions. There were 84 (64.6%) schools with both NASM and regional accreditation and 32 (24.6%) schools accredited by NASM only; one school reported no affiliation. Regarding dual accreditation, many of the schools held NASM accreditation together with NCA (30; 23.1%) or SACS (25; 19.2%). By basic Carnegie classification, 43 (33.1%) schools were Baccalaureate Colleges, 39 (30%) were Research/Doctoral Universities, and 27 (20.8%) were Master's Colleges or Universities. Regarding accreditation experience, the majority of the 128 schools had NASM experience (92; 71.9%), and almost half (52; 40.6%) had regional accreditation experience. More than half (80; 62.5%) of the representatives of the 128 schools who answered the survey were the individuals responsible for accreditation at their institution. For self-study format, more than half of the schools used Format A (75; 58.6%). Only 19 of the 128 schools (14.8%) had considered withdrawing from NASM. More than half (69; 53.9%) of the 128 schools reported that NASM access/opportunities were provided to all institutions.
The mean number of individuals who participated in the NASM self-study process at their institution was 44.33, with a range from 1 to 86 among the 128 schools. Regarding the composition of the self-study teams, the mean number of faculty members was 32.50, with a range from 0 to 67; the mean number of staff members was 22.80, with a range from 0 to 41; and the mean number of administrators was 14.78, with a reported range from 0 to 9.
Table 2
Frequency and Percentage Summaries of Characteristics of NASM Member Institutions
n %
Accreditation Affiliation
NASM only 32 24.6%
NASM & Regional Accreditation 84 64.6%
Regional Accreditation only 0 0.0%
No affiliation 1 0.8%
No response 13 10.2%
Dual Accreditation Institutions
(NASM &___)
MSCHE 6 4.6%
NCA 30 23.1%
NEASC 4 3.1%
NWCCU 8 6.2%
SACS 25 19.2%
WASC 8 6.2%
No response 49 37.7%
Basic Carnegie Classification
Research/Doctoral University 39 30.0%
Master's College or University 27 20.8%
Baccalaureate College 43 33.1%
Special Focus Institution 2 1.5%
Tribal College 1 0.8%
Associates 0 0.0%
Associates Dominant 0 0.0%
No response 18 14.1%
Accreditation Experience
NASM Experience 92 71.9%
Regional Accreditation Experience 52 40.6%
Not participated in either NASM or Regional 2 1.6%
Individual Responsible for Accreditation
Yes 80 62.5%
No 14 10.9%
No response 34 26.6%
Self-Study Format
Format A 75 58.6%
Format B 7 5.5%
Format C 8 6.3%
No response 38 29.7%
Institution Considered Withdrawing
Yes 19 14.8%
No 69 53.9%
No Response 40 31.3%
NASM Access/Opportunities
Provided to All Institutions
Yes 69 53.9%
No 19 14.8%
No Response 40 31.3%
Table 3
Descriptive Statistics Summaries of Characteristics of NASM Member Institutions
Question Statistics
How many individuals participate in the NASM self-study process at your institution?
Sum 1330
Mean 44.33
Min 1
Max 86
Composition of the self-study team at your institution
Faculty
Sum 845
Mean 32.50
Min 0
Max 67
Staff
Sum 228
Mean 22.80
Min 0
Max 41
Administrators
Sum 133
Mean 14.78
Min 0
Max 9
Response About NASM Accreditation Standards
More than half (75; 57.7%) of the 128 schools either strongly agreed or agreed that NASM membership brought value to the credibility of their institution. Only 19 of the 128 schools (14.6%) had considered withdrawing from NASM. More than half (70; 53.8%) of the 128 schools strongly agreed or agreed that the standards outlined by NASM influenced the quality of the education given at individual institutions. There were 104 (80%) of the 128 schools that acknowledged NASM's claim of providing access to a national forum for the discussion and consideration of concerns relevant to the preservation and advancement of standards in the field of music, particularly in higher education. More than half (69; 53.1%) of the 128 schools agreed that this stated opportunity existed for member institutions. Half or almost half of the 128 schools strongly agreed or agreed with the following
statements: (a) Self-study is designed to produce comprehensive effort on the part of the
institution to evaluate its own program while considering its objectives, publicly or otherwise
stated. I feel the self-study process meets this objective (69; 53.1%); (b) Peer evaluation provides
professional, objective judgment from outside the institution and is accomplished through on-site
visitation, a formal Visitors’ Report, and Commission review. I feel the peer evaluation process
meets this objective (68; 52.3%); (c) I believe the autonomy given to NASM member institutions
is an effective way for ensuring that each institution meets specified NASM standards (60;
46.2%); and (d) I believe that the NASM standards evaluate institutional effectiveness
appropriately (55; 42.4%). Lastly, 53 (40.8%) of the 128 schools had no recommendations for the NASM accreditation self-study review process and responded that the process was fine in its current state.
Table 4
Frequency and Percentage Summaries of Responses About NASM Accreditation Standards
n %
I believe that NASM membership brings value to the
credibility of your institution?
Strongly disagree 4 3.1%
Disagree 2 1.5%
Neutral/No Opinion 8 6.2%
Agree 34 26.2%
Strongly agree 41 31.5%
To your knowledge, has your institution considered
withdrawing from NASM?
Yes 19 14.6%
No 69 53.1%
The standards outlined by NASM influence the quality of
the education given at individual institutions.
Strongly disagree 3 2.3%
Disagree 8 6.2%
Neutral/No Opinion 9 6.9%
Agree 41 31.5%
Strongly agree 29 22.3%
NASM claims that they provide access to the following:
A national forum for the
discussion and consideration of
concerns relevant to the
preservation and advancement
of standards in the field of
music, particularly in higher
education 104 80%
Do you agree that all of these opportunities exist for
member institutions?
Yes 69 53.1%
No 19 14.6%
Self-Study is designed to produce comprehensive effort on
the part of the institution to evaluate its own program while
considering its objectives, publicly or otherwise stated. I
feel the self-study process meets this objective.
Strongly disagree 2 1.5%
Disagree 7 5.4%
Neutral/No Opinion 9 6.9%
Agree 36 27.7%
Strongly agree 33 25.4%
Peer evaluation provides professional, objective judgment
from outside the institution and is accomplished through
on-site visitation, a formal Visitors’ Report, and
Commission review. I feel the peer evaluation process
meets this objective.
Strongly disagree 2 1.5%
Disagree 8 6.2%
Neutral/No Opinion 10 7.7%
Agree 37 28.5%
Strongly agree 31 23.8%
I believe the autonomy given to NASM member institutions
is an effective way for ensuring that each institution meets
specified NASM standards?
Strongly disagree 2 1.5%
Disagree 9 6.9%
Neutral/No Opinion 17 13.1%
Agree 39 30.0%
Strongly Agree 21 16.2%
What recommendations do you have for the NASM
accreditation self-study review process? (Please select one,
and leave comments if necessary)
No recommendations, the
process is fine in its current
state 53 40.8%
I have some recommendations 32 24.6%
I believe that the NASM standards evaluate institutional
effectiveness appropriately.
Strongly disagree 4 3.1%
Disagree 11 8.6%
Neutral/No Opinion 19 14.6%
Agree 34 26.2%
Strongly Agree 21 16.2%
Associations Between Perceived Costs and Benefits Associated With NASM
Membership/Accreditation With Institution Type and Regional Accreditation Status
ANOVA was conducted to determine associations between perceived costs and benefits
associated with NASM membership/accreditation with institution type and regional accreditation
status. The results are presented in Tables 5 to 7. As shown in Table 5, ANOVA results indicated that only the belief that NASM membership brings value to the credibility of the respondent's institution, F(1, 87) = 3.95, p = 0.05, was significantly associated with the accreditation affiliation of the 128 schools that were NASM member institutions; the association was significant because the p-value was less than or equal to the significance level of 0.05. Schools with both NASM and regional accreditation (M = 4.30, SD = 0.92) showed significantly higher agreement that NASM membership brought value to the credibility of their institution than schools accredited by NASM only (M = 3.79, SD = 1.23).
As shown in Table 6, perceived costs and benefits associated with NASM membership/accreditation were not significantly associated with dual accreditation status. As shown in Table 7, perceived costs and benefits associated with NASM membership/accreditation were not significantly associated with basic Carnegie classification. In both cases, no association was found because the p-values of the ANOVA results were greater than the significance level of 0.05.
Table 5
ANOVA Results of Associations Between Perceived Costs and Benefits Associated With NASM
Membership/Accreditation With Accreditation Affiliations
Sum of Squares df Mean Square F Sig.
I believe that NASM membership
brings value to the credibility of your
institution?
Between Groups 3.90 1 3.90 3.95 0.05*
Within Groups 85.86 87 0.99
Total 89.75 88
To your knowledge, has your
institution considered withdrawing
from NASM?
Between Groups 0.32 1 0.32 1.60 0.21
Within Groups 17.47 88 0.20
Total 17.79 89
The standards outlined by NASM
influence the quality of the education
given at individual institutions.
Between Groups 0.54 1 0.54 0.49 0.49
Within Groups 96.19 88 1.09
Total 96.72 89
NASM claims that they provide
access to the following: A national
forum for the discussion and
consideration of concerns relevant to
the preservation and advancement of
standards in the field of music,
particularly in higher education
Between Groups 0.00 2 0.00 . .
Within Groups 0.00 91 0.00
Total 0.00 93
Do you agree that all of these
opportunities exist for member
institutions?
Between Groups 0.09 1 0.09 0.50 0.48
Within Groups 14.81 86 0.17
Total 14.90 87
Self-Study is designed to produce
comprehensive effort on the part of
the institution to evaluate...-I feel the
self-study process meets this objective.
Between Groups 0.00 1 0.00 0.00 0.96
Within Groups 87.81 85 1.03
Total 87.82 86
Peer evaluation provides professional,
objective judgment from outside the
institution and is acc...-I feel the peer
evaluation process meets this
objective.
Between Groups 0.04 1 0.04 0.04 0.84
Within Groups 90.94 86 1.06
Total 90.99 87
NASM & Institutional Autonomy-I
believe the autonomy given to NASM
member institutions is an effective way
for ensuring that each institution meets
specified NASM standards?
Between Groups 0.26 1 0.26 0.25 0.62
Within Groups 87.20 86 1.01
Total 87.46 87
What recommendations do you have
for the NASM accreditation self-study
review process?
Between Groups 0.04 1 0.04 0.18 0.68
Within Groups 19.91 83 0.24
Total 19.95 84
NASM Standards & Institutional
Effectiveness-I believe that the NASM
standards evaluate institutional
effectiveness appropriately.
Between Groups 0.87 1 0.87 0.70 0.41
Within Groups 107.63 87 1.24
Total 108.49 88
* Significant at p ≤ 0.05
Table 6
ANOVA Results of Associations Between Perceived Costs and Benefits Associated With NASM
Membership/Accreditation With Dual Accreditation Institutions
Sum of Squares df Mean Square F Sig.
I believe that NASM membership brings
value to the credibility of your institution?
Between Groups 1.77 5 0.35 0.40 0.85
Within Groups 56.43 63 0.90
Total 58.20 68
To your knowledge, has your institution
considered withdrawing from NASM?
Between Groups 0.90 5 0.18 0.92 0.48
Within Groups 12.35 63 0.20
Total 13.25 68
The standards outlined by NASM
influence the quality of the education given
at individual institutions.
Between Groups 1.60 5 0.32 0.29 0.92
Within Groups 70.34 63 1.12
Total 71.94 68
NASM claims that they provide access to
the following: A national forum for the
discussion and consideration of concerns
relevant to the preservation and
advancement of standards in the field of
music, particularly in higher education
Between Groups 0.00 5 0.00 . .
Within Groups 0.00 67 0.00
Total 0.00 72
Do you agree that all of these
opportunities exist for member
institutions?
Between Groups 0.39 5 0.08 0.45 0.81
Within Groups 10.77 63 0.17
Total 11.16 68
Self-Study is designed to produce
comprehensive effort on the part of the
institution to evaluate...-I feel the self-
study process meets this objective.
Between Groups 1.92 5 0.38 0.34 0.89
Within Groups 70.02 62 1.13
Total 71.94 67
Peer evaluation provides professional,
objective judgment from outside the
institution and is acc...-I feel the peer
evaluation process meets this objective.
Between Groups 3.64 5 0.73 0.63 0.68
Within Groups 73.35 63 1.16
Total 76.99 68
NASM & Institutional Autonomy-I believe
the autonomy given to NASM member
institutions is an effective way for ensuring
that each institution meets specified
NASM standards?
Between Groups 0.99 5 0.20 0.19 0.97
Within Groups 66.75 63 1.06
Total 67.74 68
What recommendations do you have for
the NASM accreditation self-study
review process?
Between Groups 0.58 5 0.12 0.46 0.81
Within Groups 15.18 60 0.25
Total 15.76 65
NASM Standards & Institutional
Effectiveness-I believe that the NASM
standards evaluate institutional
effectiveness appropriately.
Between Groups 5.20 5 1.04 0.87 0.51
Within Groups 76.25 64 1.19
Total 81.44 69
Table 7
ANOVA Results of Associations between Perceived Costs and Benefits Associated With NASM
Membership/Accreditation with Basic Carnegie Classification
Sum of Squares df Mean Square F Sig.
I believe that NASM membership
brings value to the credibility of your
institution?
Between Groups 3.13 3 1.04 1.02 0.39
Within Groups 86.63 85 1.02
Total 89.75 88
To your knowledge, has your
institution considered withdrawing
from NASM?
Between Groups 0.46 3 0.15 0.76 0.52
Within Groups 17.33 86 0.20
Total 17.79 89
The standards outlined by NASM
influence the quality of the education
given at individual institutions.
Between Groups 2.70 3 0.90 0.82 0.49
Within Groups 94.03 86 1.09
Total 96.72 89
NASM claims that they provide
access to the following: A national
forum for the discussion and
consideration of concerns relevant to
the preservation and advancement of
standards in the field of music,
particularly in higher education
Between Groups 0.00 4 0.00 . .
Within Groups 0.00 89 0.00
Total 0.00 93
Do you agree that all of these
opportunities exist for member
institutions?
Between Groups 0.09 3 0.03 0.17 0.92
Within Groups 14.81 84 0.18
Total 14.90 87
Self-Study is designed to produce
comprehensive effort on the part of
the institution to evaluate...-I feel the
self-study process meets this
objective.
Between Groups 4.35 3 1.45 1.44 0.24
Within Groups 83.46 83 1.01
Total 87.82 86
Peer evaluation provides professional,
objective judgment from outside the
institution and is acc...-I feel the peer
evaluation process meets this
objective.
Between Groups 1.31 3 0.44 0.41 0.75
Within Groups 89.68 84 1.07
Total 90.99 87
NASM & Institutional Autonomy-I
believe the autonomy given to NASM
member institutions is an effective
way for ensuring that each institution
meets specified NASM standards?
Between Groups 1.14 3 0.38 0.37 0.78
Within Groups 86.32 84 1.03
Total 87.46 87
What recommendations do you have
for the NASM accreditation
self-study review process?
Between Groups 0.54 3 0.18 0.75 0.53
Within Groups 19.41 81 0.24
Total 19.95 84
NASM Standards & Institutional
Effectiveness-I believe that the
NASM standards evaluate
institutional effectiveness
appropriately.
Between Groups 2.82 3 0.94 0.76 0.52
Within Groups 105.68 85 1.24
Total 108.49 88
Summary of the Results
The purpose of this quantitative study was to investigate the perceived benefits of being
accredited by the National Association of Schools of Music (NASM). Results of the descriptive
statistics analysis showed the following key results about NASM accreditation standards:
• Schools agreed that NASM membership brought value to the credibility of their institution.
• Schools agreed that the standards outlined by NASM influenced the quality of the education given at individual institutions.
• Schools agreed that all of the opportunities stated in the survey item existed for member institutions.
• Schools agreed that self-study was designed to produce comprehensive effort on the
part of the institution to evaluate its own program, while considering its objectives,
publicly or otherwise stated; the school sample felt the self-study process met this objective.
• Schools agreed that peer evaluation provided professional, objective judgment from
outside the institution and was accomplished through on-site visitation, a formal
visitors’ report, and commission review; the school sample felt the peer evaluation process met this objective.
• Schools believed the autonomy given to NASM member institutions was an effective
way of ensuring that each institution met specified NASM standards.
• Schools believed the NASM standards evaluated institutional effectiveness appropriately (n = 55; 43%).
ANOVA results showed that only the respondents’ belief that NASM membership brought value to the credibility of their institution was significantly associated with the accreditation affiliation of the 128 schools that were NASM member institutions. Conversely, perceived costs and benefits associated with NASM membership/accreditation did not differ significantly across dual-accreditation status or basic Carnegie classification.
Chapter 5 includes further discussion of the results presented in this chapter. Each of the
results of the different statistical analyses will be reviewed. Moreover, the potential implications
for each of the results of the analysis will be discussed in the succeeding chapter.
CHAPTER FIVE: DISCUSSION
Introduction
Due to the recent withdrawal of several music schools from the National Association of Schools of Music (NASM), other members may be prompted to re-evaluate their membership. This trend calls into question the important role of accreditation in schools and institutions. Thus,
the purpose of the present study was to explore the perceived benefits and costs associated with
NASM accreditation. This study also included an examination of the monetary and nonmonetary
benefits and costs. Of the 128 member institutions that started the survey, 86 completed it. The following research questions were used to
guide the study:
RQ1: What are the perceived costs and benefits associated with NASM
membership/accreditation?
H10a: There is no significant difference in perceived costs associated with NASM membership/accreditation by institution type.
H11a: There is a significant difference in perceived costs associated with NASM membership/accreditation by institution type.
H10b: There is no significant difference in perceived benefits associated with NASM membership/accreditation by institution type.
H11b: There is a significant difference in perceived benefits associated with NASM membership/accreditation by institution type.
RQ2: Do perceived costs and benefits associated with NASM membership/accreditation vary by institution type and regional accreditation status?
H20a: There is no significant difference in perceived costs associated with NASM membership/accreditation by regional accreditation status.
H21a: There is a significant difference in perceived costs associated with NASM membership/accreditation by regional accreditation status.
H20b: There is no significant difference in perceived benefits associated with NASM membership/accreditation by regional accreditation status.
H21b: There is a significant difference in perceived benefits associated with NASM membership/accreditation by regional accreditation status.
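Each hypothesis pair above was resolved by comparing the ANOVA p value to a significance threshold; as a minimal sketch of that decision rule (alpha = .05 is the conventional threshold assumed here, not a value stated in the study's instruments):

```python
# Decision rule for the null hypotheses: reject H0 when the ANOVA p value
# falls below alpha; otherwise fail to reject. Alpha = .05 is the assumed
# conventional threshold.

ALPHA = 0.05

def evaluate_null(p_value, alpha=ALPHA):
    """Return the decision for a null hypothesis given its p value."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

# p values like those reported in the ANOVA table (e.g., 0.52, 0.92) all
# lead to retaining the null hypothesis.
print(evaluate_null(0.52))  # fail to reject H0
```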
Results of ANOVA revealed key findings that demonstrated the member institutions’
perceptions of NASM accreditation standards. The participants perceived NASM membership as valuable to the credibility of their institutions. Most of the respondents also
agreed that NASM standards had an influence on the quality of the education given at individual
institutions. Peer evaluation was also perceived as providing professional, objective judgment
from outside the institution, which may be accomplished through on-site visitation, formal
visitors’ report, and commission review. The participants perceived the autonomy given to NASM member institutions as an effective way to ensure that each institution met the NASM standards. Finally, the standards outlined by NASM were seen as effective in appropriately evaluating
institutional effectiveness.
The present study offered insights on ways in which member institutions perceived the
benefits and costs of NASM accreditation and membership. This aspect was especially critical
for understanding the behaviors of member institutions regarding membership, accreditation, and
individual evaluation and implementation of NASM standards in the respective institutions.
These findings might help in pointing out areas for improvement to meet the demands of the
music industry. Such knowledge was pivotal for accrediting institutions to understand the need for
constant reevaluation and revisions of existing standards to meet the current and changing needs
of the music industry.
In this chapter, the results are interpreted using the current literature on the perceived cost
and benefits of accreditation. The implications will also be discussed, as well as the limitations
of the current study. Recommendations for future study are also enumerated to serve as a guide for prospective researchers. Finally, the chapter concludes with a summary of the discussion.
Discussion
In this subsection, the results are discussed within the broader context of the current
literature on the role of accreditation and its perceived costs and benefits among member
institutions. The discussion is based on the research questions that guided the present study.
Research Question 1 examined the perceived costs and benefits associated with NASM
membership/accreditation; Research Question 2 questioned whether the perceived costs and
benefits varied by institution type and regional accreditation status.
Perceived Cost and Benefits of Accreditation
Research Question 1 determined how member institutions perceived the costs and
benefits associated with NASM accreditation. Reviewing this question also included studying
perceptions on the monetary and nonmonetary costs and benefits that entail NASM accreditation.
Results of ANOVA showed how beneficial (or otherwise) the participants perceived the NASM standards to be regarding the credibility of their institutions, each institution's comprehensive effort toward standardization and evaluation processes, and institutional effectiveness.
Findings demonstrated that the NASM membership/accreditation was perceived as
valuable to the credibility of the participants’ institutions. This result was congruent with Gaston’s
(2014) notion that one of the effects of accreditation was credibility. Such perceptions were
contrary to the claims of some researchers regarding the challenges that accreditation could
encounter regarding learning and organization outcomes for member institutions. For instance,
contrary to the findings of Maki (2010), the participants felt that the standards outlined by
NASM influenced the quality of the education given at individual institutions. Further, Ewell
(2005) and Wolff (2005) found that the root causes of identified deficiencies were rarely followed up on and that real solutions were never sought. However, the participants in the present study
overwhelmingly agreed that NASM provided a national forum for the discussion and
consideration of concerns relevant to the preservation and advancement of standards in the field
of music, particularly in higher education.
In addition, the findings of this study contradicted Ewell’s (2005) study, which
demonstrated that accreditation tended to be focused on the process of the standards, instead of
the outcomes. The majority of the participants agreed that the NASM standards evaluated
institutional effectiveness appropriately, contrary to the assertion of Ewell who found that
accreditation processes tended not to go beyond the formal statements and reports that member
institutions submit. Further, the majority of the participants in this study did not have
recommendations for the NASM accreditation self-study review process and felt the process was
fine in its current state. This finding also contradicted the implication of Ewell’s research that the
process was not comprehensive and extensive enough in assessing the effectiveness of
education systems. Positive sentiments were expressed by the participants and reflected in the
overall agreement among most participants that NASM membership brought value to the
credibility of the institution.
In addition to the agreement among the majority of the participants that accreditation
increased the credibility of an institution, they also perceived it as critical in ensuring the quality
of education. This finding further confirmed the notion that one of the main effects of
accreditation was quality improvement (Bresciani, 2006; Ewell, 2009). Accreditation was widely
considered a significant driving force behind advances in both student learning and outcomes
assessment. Moreover, this finding was congruent with Kuh and Ikenberry (2009), who stated that student
assessment and learning outcomes were driven more by accreditation than by external pressures,
such as government or employers. This finding meant that internal improvements in the quality of education and systems within an institution were greatly influenced by the accreditation
processes and standards imposed by agencies.
Likewise, accreditation entailed specific opportunities available to the member institutions. Results showed that participants perceived that these opportunities existed for the member
institutions, which demonstrated that NASM accreditation standards were clear and well
disseminated among members. This finding was especially important because constituents, such as teachers, faculty members, and students, must be aware of the large scope of learning
outcomes that were brought about by the complex challenges of imposing standards on education
systems (Rhodes, 2012). With the adoption of comprehensive frameworks or similar tools at
institutions, accreditors could be well positioned to connect teaching and learning and, as a
result, better engage faculty to improve student learning outcomes (Rhodes, 2012). These
opportunities would be visible for the members and constituents of a member institution.
Awareness of the effectiveness of internal systems was also critical for member
institutions. Findings showed that participants agreed that self-study was designed to create a comprehensive effort by an institution to evaluate its own programs, while also considering the standards and objectives of NASM. Self-study was therefore vital for member institutions, and the process met this objective. One of the most effective ways to increase efficiency regarding self-study involved
identifying quality indicators integrated into the planning processes to facilitate institutional
efforts (Ruppert, 1994). One example was to look into how these standards were implemented at
the classroom level (Cabrera et al., 2001). Expectations of accrediting agencies might be
encouraging more widespread use of effective instructional practices by faculty (Cabrera et al.,
2001), which also reflected the importance of internal changes to meet the standards and
expectations of the accreditation process.
Regarding self-study of institutions, accreditation was also perceived as having an
influence on peer evaluation, which also provided professional, objective judgment from external
sources. Accreditation standards were considered catalysts of change linked to improvements in the education systems of universities (Volkwein et al., 2007). It was found that
through accreditation and external feedback, students experienced significant gains in the
application of knowledge of mathematics, science, and engineering; usage of modern
engineering tools; use of experimental skills to analyze and interpret data; designing solutions to
engineering problems; teamwork and group work; effective communication; understanding of
professional and ethical obligations; understanding of the societal and global context of
engineering solutions; and recognition of the need for life-long learning. Professional
development also increased, which showed the effectiveness of the accreditation standards
(Volkwein et al., 2007).
External feedback also offered organizational benefits for colleges and universities, such
as positive reflection from peers (Brittingham, 2009). Having teaching performance and student learning outcomes assessed by external groups could be viewed as an intrusion on professional authority and academic freedom. However, continuous improvements in internal systems based on external professional and objective judgment allowed university and college institution
leaders to learn about the breadth and depth of their institutions (Brittingham, 2009). Evidence-based accreditation processes strengthened an institution’s value regarding the quality of
education given at individual institutions. Hence, it was highly likely that peer evaluation
processes in the accreditation met the objective.
In summary, the participating member institution leaders considered that their NASM
membership added value to their credibility as an institution. Additionally, they agreed that the
standards by NASM were impactful to the quality of education given at individual institutions. In
addition, there were unique opportunities present for the member institutions. These included the
opportunity for self-study for internal improvements and access to peer evaluation and feedback.
Perceived Effectiveness of Accreditation Standards
One must understand ways in which member institutions perceived the effectiveness of
NASM membership and accreditation processes. Results of the present study showed that
the majority of the participants agreed that the autonomy given to NASM member institutions was
effective in certifying that each member institution met specific standards by NASM. In addition,
the participating schools perceived that the institutional effectiveness was evaluated
appropriately based on the standards specified by NASM.
These perceptions on the effectiveness of accreditation standards reflected the main
objective of accreditation: to provide a method of quality assurance and improved curriculum
delivery (Eaton, 2012). Due to this objective, acquiring and maintaining institutional
accreditation was an absolute necessity for most institutions of higher education. Therefore, the
specified standards from the NASM must mirror the actual needs of each member institution.
Because the majority of the participating schools shared this perception, the quality of
education could be relied on as adequate to fulfill the curricular requirements (Ewell, 2008).
Evidence indicated that institutional effectiveness evaluation was influenced by the
standards set by NASM. This finding paralleled Procopio (2010), who stated the accreditation
process influenced perceptions on organizational culture. The satisfaction of administrators and
the faculty was found as intimately associated with organizational climate, information flow, and
involvement in decisions (Procopio, 2010). This finding reaffirmed accreditation as a vital means
to drive institutional changes. Therefore, appropriate measures in the accreditation process
should include standardized evaluation to ensure that the specific needs and requirements were
addressed.
The perceived effectiveness of NASM standards also involved looking for ways to
develop systems to ensure educational effectiveness. This finding affirmed Maki’s (2010)
argument that accreditation should not be an end unto itself. A
continuous search for new ways to improve and develop educational curricula must be part of the
evaluation criteria (Wolff, 2005). Coordinating institutional accreditation efforts where possible
can be cost effective because overlap exists between the process of both regional and
programmatic accreditation. However, the review process and resource allocation can become complicated and overwhelming (Shibley & Volkwein, 2002). The extra effort and time required for engaging in outcome assessment, along with benefits that faculty perceive as unconvincing, can
be another deterrent. Furthermore, the compliance-oriented assessment requirements are imposed
by external bodies, and most faculty members participate in the process indirectly. Hurdles, such
as faculty buy-in, institutional investment, and integration into local practices, should be a part of
this effort.
These findings also showed that while areas remained for improvement, the standards set
by NASM were perceived as beneficial to the evaluation processes of the member institutions;
the majority of the participants did not have recommendations for the NASM accreditation self-
study review process; and they felt the process was fine in its current state. This finding reflected
how NASM standards represented threshold conditions for offering various types of music
degrees and other credentials, which established basic competencies that must be met.
Additionally, perceived costs and benefits associated with NASM membership/accreditation did not differ significantly across dual-accreditation status or basic Carnegie classification, and further research must be done to verify this finding and its
causes.
To summarize, the participating school leaders perceived the NASM standards as
effective, especially in bringing credibility to the institution and evaluating institutional
effectiveness of the member institutions. Additionally, the findings indicated there was no association between the perceived costs and benefits of NASM membership/accreditation and dual-accreditation status or basic Carnegie classification.
NASM might utilize these findings to gain a more comprehensive understanding of how the perceived costs and benefits associated with NASM membership/accreditation vary by institution type and regional accreditation status. The next
subsection includes a discussion on the implications of the results of the present study.
Implications
Based on the results, this study offered theoretical and practical implications that might
be beneficial to agencies such as NASM and to the current literature on the importance of
accreditation. The present study provided insights on the perceptions of institutions on the
effectiveness and importance of accreditation regarding internal changes and requirements of
participating schools. This knowledge could benefit both the accrediting agencies and member
institutions in integrating policies and systems that could help ensure the quality of education
given at an individual level.
For NASM and its member institutions, these findings might help the agency in pointing
out its strengths and the importance of maintaining its status. This aspect was particularly
important because the majority of the participants did not have recommendations for the NASM
accreditation self-study review process and felt the process was fine in its current state. Findings
on the perception of participating institutions already demonstrated that the standards set by
NASM were effective in addressing institutional effectiveness. Thus, NASM must maintain its strengths as an institution in the future, with assurance of its effectiveness.
In the case of participating institutions and prospective members, this knowledge was
beneficial in spreading awareness on the benefits of acquiring accreditation from NASM. Due to
globalization, there was an increased focus on how to assure quality of standards in higher
education across nations. To cut costs, the majority of the work was done by administration at
the institution. Faculty consequently perceived that assessment was an exercise performed by
administration for external audiences, instead of embracing the process. Assessment activities,
imposed by external authorities, tended to be implemented as an addition to, rather than an
integral part of, an institutional practice (Ewell, 2002).
The finding of this study regarding the lack of association between perceived costs and
benefits associated with NASM membership/accreditation with dual accreditation institutions
and basic Carnegie classification further highlighted the value of NASM in the context of
globalization. Thus, educational institution leaders must keep up with the changing paradigm
brought about by the internationalization of accreditation. The lack of federal government
intervention in the evaluation process of educational institutions was a main reason for the way
accreditation in the United States developed (Brittingham, 2009). It seemed accreditation
evolved from simpler days of semi-informal peer assessment into a burgeoning industry of
detailed analysis, student learning outcomes assessment, quality and performance review,
financial analysis, public attention, and all-around institutional scrutiny (Bloland, 2001; Burke &
Minassians, 2002; McLendon et al., 2006; Zis et al., 2010). Insights on how individual institution
leaders have perceived their accrediting bodies could help in the decision-making processes of
other institutions.
For researchers, these findings might serve as evidence on building a framework on how
institution leaders have perceived the whole process of accreditation. Another concern pointed
out by Ewell (2005) was that accreditation agencies tended to emphasize the process, rather than the outcomes, once the assessment infrastructure was established. The accreditors were satisfied with formal statements and goals of learning outcomes, but they did not query further about how, how appropriately, and to what degree these learning goals were applied in the teaching and learning process.
College leaders have also tended to adopt an institutional isomorphic approach, modeling their institutions after peers who are more legitimate or successful in dealing with similar situations, a practice widely used to gain acceptance (DiMaggio & Powell, 1983). Decision makers may be unintentionally trapped in a culture of doing what everyone else is doing without carefully examining the unique local situation, the logic, the appropriateness, and the limitations behind the common practice (Miles, 2012). However, the findings of this study indicated that
despite these observations by previous researchers, such concerns did not affect the effectiveness of the NASM accreditation self-study review process itself, which was perceived as effective in increasing the credibility of the institutions and was viewed as fine in its current state. Thus, there was a need
for more research to understand the discrepancy regarding the negative findings of the previous
researchers on the process and the positive findings of this study on perceptions.
One must consider internal and external forces that cause institutions to acquire these
accreditations. External incentives, such as peer evaluation and professional feedback, and
internal benefits, such as improved systems, are important points to start with when
understanding the behaviors of institutions regarding acquiring accreditation. In addition to these
external costs, there are internal costs that must be calculated, as well. These internal costs can
include faculty and administrative time invested in the assessment and self-study, volunteer
service in accreditation activities, preparation of annual or periodic filings, and attendance at
mandatory accreditation meetings. Such a model is critical in determining the social
psychological processes that occur in this phenomenon.
In summary, the present study provided insightful data on the theoretical implications of
the perceptions of participating institutions on the cost and benefit of accreditation. Based on the
results of the study, accreditation processes and criteria are standardized so that these can be
perceived as appropriate in evaluating institutional effectiveness. In addition, accreditation
processes must also meet the specific needs of member institutions. The next subsection includes
a discussion on the theoretical and methodological limitations of the current study.
Limitations of the Study
Despite the benefits of the study, the results must still be interpreted based on the
limitations of the current study. For instance, one main limitation was the lack of prior research focused on accreditation in the music industry. It was difficult to contextualize the results of
the study because there was a scarcity of research highlighting the perceived cost and benefits of
acquiring accreditation in the field of music. Accreditation has long been criticized as mysterious
or secretive, with little information to share with stakeholders (Ewell, 2010). There was an impetus for higher education and accreditation agencies to be more open to the public and policy
makers. It was expected that further openness would contribute to more effective and
accountable business practices, as well as the improvement of educational quality. To address
this aspect, it would be beneficial for future researchers to look into the pros and cons of getting
accreditation for music schools regarding the changing needs and requirements of the music
industry.
In addition, one limitation also involved the generalizability of the results of the study.
Because the participants came from music schools, the results could not be generalized to other types of schools. However, these findings could be useful in understanding ways
in which music institution leaders have perceived the pros and cons of accreditation from
NASM. Future researchers ought to look into other industries to broaden the knowledge on this
topic.
To summarize, the limitations of the present study were related to the scarcity of prior research and the generalizability of the results. These limitations could be addressed by conducting more
research on the pros and cons of getting accreditation for music schools regarding the changing
needs and requirements of the music industry, as well as ways in which music institution leaders
have perceived the pros and cons of accreditation from NASM. In the next subsection, the
recommendations for future research are enumerated based on the discussion, implications, and
limitations of the current study.
Recommendations for Future Research
Based on the results of the study, the following are recommended for future research:
1. Future researchers should look into the perceptions on accreditation through
qualitative approaches, such as discourse psychology, to understand the underlying
social psychological processes in this phenomenon.
2. Further research must be conducted on the advantages and disadvantages of
accreditation to provide additional knowledge for prospective institutions.
3. Future researchers should learn how decisions are made regarding the standardization
of the accreditation process. In this case, members of the accrediting bodies may be
used as the sample. Such decisions influence institutional diversity and the ability of individual institutions to maintain individuality in offering a variety of programs and mission statements unique to each institution.
4. Further studies on the perspectives on accreditation may involve looking into the
different variables that influence an institution to acquire accreditation. As mentioned,
these may include discussion on the intrinsic and external motivations that cause
individual institutions to apply for accreditation.
5. One may also study the perceptions of people at institutions that offer areas of study other than music. In this way, researchers can understand the priorities of institutions based on differing industries.
6. Researchers may also look into alternatives to accreditation by understanding how
institution leaders perceive these alternatives regarding the benefits and costs of
acquiring accreditation.
7. One may also study ways in which constituents, such as the students and faculty,
perceive the accreditation status of the institution to which they belong.
8. Further research on the explicit relationships of the variables that influence
accreditation behaviors of institutions must also be considered, so that one can
understand the decision-making processes that are involved in this phenomenon.
Conclusion
Despite the benefits of accreditation, it remains unclear if the established standards in the
accreditation process are fair and consistent for all parties concerned. One must ensure that
standards in accreditation are relevant and meet the needs of the member institutions. The
original purpose behind accreditation has been forgotten, as the emphasis currently rests on
easily quantifiable outcomes. Currently, the evidence indicates that standardization causes
constriction rather than creativity, as a focus remains on items that are easily measurable, as
opposed to those that are not. Thus, the purpose of this study was to examine perceived benefits
and costs associated with NASM accreditation. A quantitative approach with supplemental open-
ended responses was employed to explore the monetary and nonmonetary benefits of NASM
membership. Findings demonstrated that accreditation was perceived to influence the credibility
of the institution. Additionally, the findings signified the strength of peer evaluation and self-study as important in ensuring quality of education. The participants also perceived
the accreditation process was appropriate in improving institutional effectiveness. Further studies
were recommended to understand the different variables that influenced institutions to apply for
accreditation.
REFERENCES
Adelman, C., & Silver, H. (1990). Accreditation: The American experience. London, England:
Council for National Academic Awards.
Aghion, P., Dewatripont, M., Hoxby, C. M., Mas-Colell, A., & Sapir, A. (2008). Higher
aspirations: An agenda for reforming European universities. Brussels, Belgium: Bruegel.
American Accounting Association, Committee on Consequences of Accreditation. (1977).
Committee reports on consequences of accreditation. The Accounting Review, 52, 167-
177. Retrieved from http://aaahq.org/
American Association for Higher Education. (1992). 9 principles of good practice for assessing
student learning. North Kansas City, MO: Author.
American Association for Higher Education. (1997). Assessing influence: Evidence and action.
Washington, DC: Author.
American Council of Trustees and Alumni. (2007). Why accreditation doesn’t work and what
policymakers can do about it. Washington, DC: Author.
American Council on Education. (2012). Assuring academic quality in the 21st century: Self-
regulation in a new era: A report of the ACE National Task Force on institutional
accreditation. Washington, DC: Author.
American Medical Association. (1971). Accreditation of health educational programs. Part I:
Staff working papers. Washington, DC: Author.
Association of American Colleges and Universities. (2007). College learning for the new global century. Washington, DC: Author.
Astin, A.W. (2014, February 18). Accreditation and autonomy. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/views/2014/02/18/accreditation-helps-limit-government-
intrusion-us-higher-education-essay
Babbie, E. R. (2012). The practice of social research. Belmont, CA: Wadsworth.
Babbie, E. R., & Benaquisto, L. (2009). Fundamentals of social research (2nd ed.). Toronto,
ON: Nelson.
Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation principles and
practices. The Journal of Academic Librarianship, 28(1), 3-7.
doi:10.1016/S0099-1333(01)00279-8
Banta, T. W. (1993). Summary and conclusion: Are we making a difference? In T. W. Banta
(Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp.
357-376). San Francisco, CA: Jossey-Bass.
Bardo, J. W. (2009). The influence of the changing climate for accreditation on the individual
college or university: Five trends and their implications. New Directions for Higher
Education, 145, 47-58. doi:10.1002/he.334
Barzun, J. (1993). The American university: How it runs, where it is going. Chicago, IL:
University of Chicago Press.
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New
Directions for Community Colleges, 236, 65-72. doi:10.1002/cc.155
Bensimon, E. M. (2005). Closing the achievement gap in higher education: An organizational
learning perspective. New Directions for Higher Education, 131, 99-111.
doi:10.1002/he.190
Bernhard, A. (2011). Quality assurance in an international higher education area: A case study
approach and comparative analysis. Wiesbaden, Germany: VS Verlag für
Sozialwissenschaften.
Bitter, M. E., Stryker, J. P., & Jens, W. G. (1999). A preliminary investigation of the choice to
obtain AACSB accounting accreditation. Accounting Educators' Journal, XI, 1-15.
Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States
Government Printing Office.
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA).
Phoenix, AZ: Oryx Press.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A
compilation of institutional good practice. Sterling, VA: Stylus.
Britt, B., & Aaron, L. (2008). Nonprogrammatic accreditation: Programs and attitudes.
Radiologic Technology, 80(2), 123-129.
Brittingham, B. (2008). An uneasy partnership: Accreditation and the federal government.
Change, 1, 32-38.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are?
New Directions for Higher Education, 145, 7-27.
Brittingham, B. (2012). Higher education, accreditation, and change, change, change: What’s
teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence,
and excellence: The promise and practice of quality assurance (pp. 59-75). Washington,
DC: Teacher Education Accreditation Council.
Brown, H. (2013, September). Protecting students and taxpayers: The federal government’s
failed regulatory approach and steps for reform. American Enterprise Institute, Center on
Higher Education Reform. Retrieved from http://www.aei.org/files/2013/09/27/-
protecting-students-and-taxpayers_164758132385.pdf
Burke, J. C., & Associates. (2005). Achieving accountability in higher education: Balancing
public, academic, and market demands. San Francisco, CA: Jossey-Bass.
Burke, J. C., & Minassians, H. P. (2002). The new accountability: From regulation to results.
New Directions for Institutional Research, 2002(116), 5-19.
Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for
assessing classroom teaching practices and student learning: The case of engineering.
Research in Higher Education, 42(3), 327-352.
Carey, K. (2009, September/October). College for $99 a month. Washington Monthly. Retrieved
from http://www.washingtonmonthly.com
Carey, K. (2010). Death of a university. In K. Carey & M. Schneider (Eds.), Accountability in
American higher education. New York, NY: Palgrave Macmillan.
Chambers, C. M. (1983). Council on postsecondary education. In K. E. Young, C. M. Chambers,
& H. R. Kells (Eds.), Understanding accreditation (pp. 289-314). San Francisco, CA:
Jossey-Bass.
Chernay, G. (1990). Accreditation and the role of the Council on Postsecondary Accreditation.
Washington, DC: Council on Postsecondary Accreditation.
Commission on the Future of Higher Education. (2006). Spellings Commission report.
Washington, DC: U.S. Department of Education. Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports.html
Council for Higher Education Accreditation. (2006). CHEA survey of recognized accrediting
organizations: Providing information to the public. Washington, DC: Author.
Council for Higher Education Accreditation. (2010). Quality review 2009: CHEA almanac of
external quality review. Washington, DC: Author.
Council for Higher Education Accreditation. (2012). The CHEA initiative final report.
Washington, DC: Author.
Council for Higher Education Accreditation. (2014). 2013-2014 directory of CHEA-recognized
organizations. Retrieved from http://www.chea.org/pdf/2013-
2014_Directory_of_CHEA_Recognized_Organizations.pdf
Council of Regional Accrediting Commissions. (n.d.). A guide for institutions and evaluators.
Retrieved from http://www.sacscoc.org/pdf/handbooks/GuideForInstitutions.pdf
Cremonini, L., Epping, E., Westerheijden, D., & Vogelsang, K. (2012). Impact of quality
assurance on cross-border higher education. Enschede, Netherlands: Center for Higher
Education Policy Studies.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. Los Angeles, CA: Sage.
Daoust, M. P., Wehmeyer, W., & Eubank, E. (2006). Valuing an MBA: Authentic outcome
measurement made easy (Unpublished manuscript). Retrieved from
http://www.momentumbusinessgroup.com/resourcesValuingMBA.pdf
Davenport, C. A. (2000). Recognition chronology. Retrieved from http://www.aspa-
usa.org/documents/Davenport.pdf
Davis, C. O. (1932). The North Central Association of Colleges and Secondary Schools: Aims,
organization, activities. Chicago, IL: The Association.
Davis, C. O. (1945). A history of the North Central Association of Colleges and Secondary
Schools 1895-1945. Ann Arbor, MI: The North Central Association of Colleges and
Secondary Schools.
Degree Mills. (2014). Retrieved from Council for Higher Education Accreditation website
http://www.chea.org/degreemills/default.htm
Denzin, N. K. (2012). Triangulation 2.0. Journal of Mixed Methods Research, 6(2), 80-88.
Dewatripont, M., Sapir, A., Van Pottelsberghe, B., & Veugelers, R. (2010). Boosting innovation
in Europe (No. 2010/06). Bruegel: Bruegel Policy Contribution.
Dickeson, R. C. (2006). The need for accreditation reform. Issue paper (the secretary of
education’s commission on the future of higher education). Washington, DC:
Government Printing Office.
Dickeson, R. C. (2009). Recalibrating the accreditation-federal relationship. Washington, DC:
University of Northern Colorado.
Dickey, F., & Miller, J. (1972). Federal involvement in nongovernmental accreditation.
Educational Record, 53(2), 139.
Dill, D. D. (2007). Quality assurance in higher education: Practices and issues. Chapel Hill:
University of North Carolina.
Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. M. (1996). Accreditation and academic
quality assurance: Can we get there from here? Change, 28(5), 16-24.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys:
The tailored design method. Hoboken, NJ: John Wiley & Sons.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review, 48(2),
147-160.
Directories. (2014). Retrieved from Council for Higher Education Accreditation
http://www.chea.org/Directories/special.asp
Doerr, A. H. (1983). Accreditation: Academic boon or bane. Contemporary Education, 55(1),
6-8.
Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems for
community colleges: Lessons for the voluntary framework of accountability for
community colleges. New York, NY: Community College Research Center, Teachers
College, Columbia University.
Dowd, A. C. (2003). From access to outcome equity: Revitalizing the democratic mission of the
community college. Annals of the American Academy of Political and Social Science,
586, 92-119.
Dowd, A. C., & Grant, J. L. (2006). Equity and efficiency of community college appropriations:
The role of local financing. The Review of Higher Education, 29(2), 167-194.
Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment
processes that promote institutional improvement and faculty engagement. Sterling, VA:
Stylus.
Eaton, J. S. (2003). Is accreditation accountable? The continuing conversation between
accreditation and the federal government. Washington, DC: Council for Higher
Education Accreditation.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education,
145, 79-86. doi:10.1002/he.337
Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe, 96(5),
21-24.
Eaton, J. S. (2012). The future of accreditation: Can the collegial model flourish in the context of
the government's assertiveness and the influence of nationalization and technology?
How? Society for College and University Planning, 1, 8-15.
Eaton, J. S. (2012b). What future for accreditation: The challenge and opportunity of the
accreditation-federal government relationship. In M. LaCelle-Peterson & D. Rigden
(Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance
(pp. 77-88). Washington, DC: Teacher Education Accreditation Council.
Eaton, J. S. (2013a, June 13). Accreditation and the next reauthorization of the Higher Education
Act. Inside Accreditation with the President of CHEA, 9(3). Retrieved from
http://www.chea.org/
Eaton, J. S. (2013b, November-December). The changing role of accreditation: Should it matter
to governing boards? Trusteeship. Retrieved from http://agb.org/trusteeship/2013/11/changing-
role-accreditation-should-it-matter-governing-boards
El-Khawas, E. (2001). Accreditation in the USA: Origins, developments and future prospects.
Paris, France: International Institute for Educational Planning.
Ewell, P. T. (1984). The self-regarding institution: Information for excellence. Boulder, CO:
National Center for Higher Education Management Systems.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure.
Washington, DC: Council for Higher Education Accreditation.
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta
(Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C.
Burke (Ed.), Achieving accountability in higher education: Balancing public, academic,
and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance: A tenth anniversary
report from the Council for Higher Education Accreditation. Washington, DC: Council
for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension.
Champaign, IL: National Institute for Learning Outcomes Assessment.
Ewell, P. T. (2010). Twenty years of quality assurance in higher education: What's happened
and what's different? Quality in Higher Education, 16(2), 173-175.
Ewell, P. T. (2012). Disciplining peer review: Addressing some deficiencies in U.S. accreditation
practices. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence:
The promise and practice of quality assurance (pp. 89-105). Washington, DC: Teacher
Education Accreditation Council.
Ewell, P. T., Wellman, J. V., & Paulson, K. (1997). Refashioning accountability: Toward a
coordinated system of quality assurance for higher education. Denver, CO: Education
Commission of the States.
Finkin, M. W. (1973). Federal reliance on voluntary accreditation: The power to recognize as the
power to regulate. Journal of Law and Education, 2(3), 339-375.
Finn, C. E., Jr. (1975). Washington in academe we trust: Federalism and the universities: The
balance shifts. Change, 7(10), 24-29, 63.
Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and Policy
Analysis, 2(2), 35-46. doi:10.3102/01623737002002035
Florida State Postsecondary Education Planning Commission. (1995). A review of specialized
accreditation. Tallahassee, FL: Florida State Postsecondary Education Planning
Commission.
Fuller, M. B., & Lugg, E. T. (2012). Legal precedents for higher education accreditation.
Journal of Higher Education Management, 27(1). Retrieved from
http://www.aaua.org/images/JHEM_-_Vol_27_Web_Edition_.pdf#page=53
Gaston, P. L. (2014). Higher education accreditation: How it’s changing, why it must. Sterling,
VA: Stylus.
Gillen, A., Bennett, D. L., & Vedder, R. (2010). The inmates running the asylum?: An analysis of
higher education accreditation. Washington, DC: Center for College Affordability and
Productivity.
Global University Network for Innovation. (2007). Higher education in the world 2007:
Accreditation for quality assurance: What is at stake? New York, NY: Palgrave
Macmillan.
Graham, P. A., Lyman, R. W., & Trow, M. (1995). Accountability of colleges and universities:
An essay. New York, NY: Columbia University.
Gruson, E. S., Levine, D. O., & Lustberg, L. S. (n.d.). Issues in accreditation, eligibility and
institutional quality. Cambridge, MA: Sloan Commission on Government and Higher
Education.
Hagerty, B. M. K., & Stark, J. S. (1989). Comparing educational accreditation standards in
selected professional fields. The Journal of Higher Education, 60(1), 1-20.
Harcleroad, F. F. (1976). Educational auditing and accountability. Washington, DC: The
Council on Postsecondary Accreditation.
Harcleroad, F. F. (1980). Accreditation: History, process, and problems. Washington, DC:
American Association for Higher Education.
Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education
(A survey among members of the Association of American College and Universities).
Washington, DC: Author.
Harvey, L. (2004). The power of accreditation: Views of academics. Journal of Higher
Education Policy and Management, 26(2), 207-223.
HLC Initial. (2012). Seeking accreditation. Chicago, IL: Higher Learning Commission of the
North Central Association.
Ikenberry, S. O. (2009). Where do we take accreditation? Washington, DC: Council for Higher
Education Accreditation.
Jackson, R. S., Davis, J. H., & Jackson, F. R. (2010). Redesigning regional accreditation: The
influence on institutional planning. Planning for Higher Education, 38(4), 9-19. doi:
Jacobs, B., & Van der Ploeg, F. (2006, July). How to reform higher education in Europe.
Economic Policy, 1, 535-592. doi:
Jaschik, S., & Lederman, D. (2014). The 2014 Inside Higher Ed survey of college & university
presidents. Washington, DC: Inside Higher Education.
Kells, H. R. (1976). The reform of regional accreditation agencies. Educational Record, 57(1),
24-28.
Kells, H. R., & Kirkwood, R. (1979). Institutional self-evaluation processes. The Educational
Record, 60(1), 25-45.
Kennedy, V. C., Moore, F. I., & Thibadoux, G. M. (1985). Determining the costs of self-study
for accreditation: A method and a rationale. Journal of Allied Health, 14(2), 175-182.
Kren, L., Tatum, K. W., & Phillips, L. C. (1993). Separate accreditation of accounting programs:
An empirical investigation. Issues in Accounting Education, 8(2), 260-272.
Kuh, G. D. (2010). Risky business: Promises and pitfalls of institutional transparency. Change:
The Magazine of Higher Learning, 39(5), 30-35.
Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United
States. Higher Education Management and Policy, 22(1), 1-20.
Kuh, G. D., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes
assessment in American higher education. Champaign, IL: National Institute for Learning
Outcomes Assessment.
Learned, W. S., & Wood, B. D. (1938). The student and his knowledge: A report to the Carnegie
Foundation on the results of the high school and college examinations of 1928, 1930, and
1932. New York, NY: The Carnegie Foundation for the Advancement of Teaching.
Lee, M. B., & Crow, S. D. (1998). Effective collaboration for the twenty-first century: The
Commission and its stakeholders (Report and recommendations of the committee on
organizational effectiveness and future directions). Chicago, IL: North Central
Association of Colleges and Schools.
Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise?
Washington, DC: American Council of Trustees and Alumni.
Lind, C. J., & McDonald, M. (2003). Creating an assessment culture: A case study of success
and struggles. In S. E. Van Kollenburg (Ed.), A collection of papers on self-study and
institutional improvement: Promoting student learning and effective teaching (pp. 21-23).
Chicago, IL: Higher Learning Commission.
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the
institution (2nd ed.). Sterling, VA: Stylus.
McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins
and spread of state performance-accountability policies for higher education. Educational
Evaluation and Policy Analysis, 28(1), 1-24.
Middaugh, M. F. (2012). Introduction to themed PHE issue on accreditation in higher education.
Planning for Higher Education, 40(3), 6-7.
Miles, J. A. (2012). Jossey-Bass business and management reader: Management and
organization theory. Hoboken, NJ: Wiley.
Moghaddam, G. G., & Moballeghi, M. (2008). How do we measure use of scientific journals? A
note on research methodologies. Scientometrics, 76(1), 125-133.
Moltz, D. (2010). Redefining community college success. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/news/2011/06/06/u_s_panel_drafts_and_debates_measur
es_to_gauge_community_college_success
National Advisory Committee on Institutional Quality and Integrity. (2012a). Higher education
accreditation reauthorization policy recommendations. Retrieved from
http://www2.ed.gov/about/bdscomm/list/naciqi-dir/naciqi_draft_final_report.pdf
National Advisory Committee on Institutional Quality and Integrity. (2012b). Report to the U.S.
secretary of education, higher education act reauthorization, accreditation policy
recommendations. Retrieved from http://www2.ed.gov/about/bdscomm/list/naciqi-
dir/2012-spring/teleconference-2012/naciqi-final-report.pdf
Neal, A. D. (2008). Dis-accreditation. Academic Questions, 21(4), 431-445.
Obama, B. (2013a, February 12). State of the union address. The White House. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/02/12/president-barack-obamas-state-
union-address
Obama, B. (2013b, February 12). The president’s plan for a strong middle class and a strong
America. The White House. Retrieved from
http://www.whitehouse.gov/sites/default/files/uploads/sotu_2013_blueprint_embargo.pdf
Obama, B. (2013c, August 22). Fact sheet on the president’s plan to make college more
affordable: A better bargain for the middle class. The White House. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan-
make-college-more-affordable-better-bargain-
Orlans, H. O. (1975). Private accreditation and public eligibility. Lexington, MA: D.C. Heath
and Company.
Perrault, A. H., Gregory, V. L., & Carey, J. O. (2002). The integration of assessment of student
learning outcomes with teaching effectiveness. Journal of Education for Library and
Information Science, 43(4), 270-282.
Pigge, F. L. (1979). Opinions about accreditation and interagency cooperation: The results of a
nationwide survey of COPA institutions. Washington, DC: Committee on Postsecondary
Education.
Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational
culture as related to external accreditation. Academic Leadership Journal, 8(2), 1-15.
Provezis, S. J. (2010). Regional accreditation and learning outcomes assessment: Mapping the
territory (Doctoral dissertation). University of Illinois at Urbana-Champaign.
Raessler, K. R. (1970). An analysis of state requirements for college or university accreditation
in music education. Journal of Research in Music Education, 18(3), 223-233.
Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the US.
Quality in Higher Education, 2(1), 5-19.
Reidlinger, C. R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New Directions
for Community Colleges, 83, 39-47.
Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree.
Planning for Higher Education, 40(3), 6-7.
Salkind, N. J. (2011). Statistics for people who (think they) hate statistics. Los Angeles, CA:
Sage.
Schermerhorn, J. W., Reisch, J. S., & Griffith, P. J. (1980). Educator perceptions of
accreditation. Journal of Allied Health, 9(3), 176-182.
Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan
(Eds.), Evaluation models (pp. 250-278). Boston, MA: Kluwer Academic.
Shaw, R. (1993). A backward glance: To a time before there was accreditation. North Central
Association Quarterly, 68(2), 323-335.
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of
re-accreditation processes. Paper presented at the annual meeting of the Association for
Institutional Research, Toronto, Ontario, Canada.
Smith, V. B., & Finney, J. E. (2008). Redesigning regional accreditation: An interview with
Ralph A. Wolff. Change, 1, 18-24.
Southern Association of Colleges and Schools. (2007). The quality enhancement plan. Retrieved
from http://www.sacscoc.org/pdf/081705/QEP%20Handbook.pdf
Spangehl, S. D. (2012). AQIP and accreditation: Improving quality and performance. Planning
for Higher Education, 40(3), 6-7.
Stensaker, B., & Harvey, L. (2006). Old wine in new bottles? A comparison of public and private
accreditation schemes in higher education. Higher Education Policy, 19, 65-85.
Sursock, A., & Smidt, H. (2010). Trends 2010: A decade of change in European higher
education. Brussels: European University Association.
Trivett, D. A. (1976). Accreditation and institutional eligibility. Washington, DC: American
Association for Higher Education.
Uehling, B. S. (1987a). Accreditation and the institution. North Central Association Quarterly,
62(2), 350-360.
U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher
education: A report of the commission appointed by Secretary of Education Margaret
Spellings. Washington, DC: Author.
Van Damme, D. (2000). Internationalization and quality assurance: Towards worldwide
accreditation? European Journal for Education Law and Policy, 4, 1-20.
Van der Ploeg, F., & Veugelers, R. (2008). Towards evidence-based reform of European
universities. CESifo Economic Studies, 54(2), 99-120.
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the influence
of professional accreditation on student experiences and learning outcomes. Research in
Higher Education, 48(2), 251-282.
Walker, J. J. (2010). A contribution to the self-study of the postsecondary accreditation protocol:
A critical reflection to assist the Western Association of Schools and Colleges. Paper
presented at the WASC Postsecondary Summit, Temecula, CA.
Warner, W. K. (1977). Accreditation influences on senior institutions of higher education in the
western accrediting region: An assessment. Oakland, CA: Western Association of
Schools and Colleges.
Weissburg, P. (2008). Shifting alliances in the accreditation of higher education: Self-regulatory
organizations (Dissertation). Retrieved from Dissertation Abstracts International, DAI-A
70/02, ProQuest. (250811630)
Wergin, J. F. (2005). Waking up to the importance of accreditation. Change, 37(3), 35-41.
Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson & D.
Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality
assurance (pp. 27-38). Washington, DC: Teacher Education Accreditation Council.
Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (2007). Quality assurance in higher
education: Trends in regulation, translation and transformation. New York, NY:
Springer.
Western Association of Schools and Colleges. (1998). Eight perspectives on how to focus the
accreditation process on educational effectiveness. Oakland, CA: Accrediting
Commission for Senior Colleges and Universities, WASC.
Western Association of Schools and Colleges. (2002). Guide to using evidence in the
accreditation process: A resource to support institutions and evaluation teams. Retrieved
from www.wascweb.org/senior/Evidence_Guide.pdf
Western Association of Schools and Colleges’ Accrediting Commission for Community and
Junior Colleges. (2011). Retrieved from http://www.accjc.org
White House. (2013, February 12). The president's plan for a strong middle class and a strong
America. Retrieved from http://www.whitehouse.gov/the-press-
office/2013/02/12/president-s-plan-strong-middle-class-and-strong-america
Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions: The
case of university accreditation. Human Organization, 51(4), 398-407.
Willis, C. R. (1994). The cost of accreditation to educational institutions. Journal of Allied
Health, 23, 39-41.
Wolff, R. A. (1990, June 27-30). Assessment 1990: Accreditation and renewal. Paper presented
at The Fifth AAHE Conference on Assessment in Higher Education, Washington, DC.
Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing demands?
In J. C. Burke and Associates (Eds.), Achieving accountability in higher education:
Balancing public, academic, and market demands (pp. 78-103). San Francisco, CA:
Jossey-Bass.
Wood, A. L. (2006). Demystifying accreditation: Action plans for a national or regional
accreditation. Innovative Higher Education, 31(1), 43-62.
doi:10.1007/s10755- 006-9008-6
Woolston, P. J. (2012). The costs of institutional accreditation: A study of direct and indirect
costs (Dissertation Order No. 3542492). Retrieved from Stateside University, ProQuest
Dissertations and Theses. (1152182950)
United Nations Educational, Scientific and Cultural Organization. (2005). Guidelines for quality
provision in cross-border higher education. Paris, France: Author.
World Bank. (2002). Constructing knowledge societies: New challenges for tertiary education.
Washington, DC: World Bank.
Wriston, H. M. (1960). The futility of accrediting. The Journal of Higher Education, 31(6),
327-329.
Yin, R. K. (2009). Case study research: Design and methods. Thousand Oaks, CA: Sage.
Yung-chi Hou, A. (2014). Quality in cross-border higher education and challenges for the
internationalization of national quality assurance agencies in the Asia-Pacific region: The
Taiwanese experience. Studies in Higher Education, 39(1), 135-152.
Zis, S., Boeke, M., & Ewell, P. (2010). State policies on the assessment of student learning
outcomes: Results of a fifty-state inventory. Boulder, CO: National Center for Higher
Education Management Systems (NCHEMS).
APPENDIX A
NATIONAL ASSOCIATION OF SCHOOLS OF MUSIC MEMBER INSTITUTIONS
Table A1
List of National Association of Schools of Music Member Institutions
Institution, Street Address, City, State, Zip Code
Puerto Rico Conservatory of Music, 951 Ponce de Leon Avenue, San Juan, PR 00907-3373
University of Massachusetts, Amherst, 151 Presidents Drive, Amherst, MA 01003
Holyoke Community College, 303 Homestead Avenue, Holyoke, MA 01040
Westfield State University, 577 Western Avenue, Westfield, MA 01086-1630
Anna Maria College, 50 Sunset Lane, Paxton, MA 01612-1198
University of Massachusetts, Lowell, 35 Wilder Street, Suite 3, Lowell, MA 01854
Salem State University, 352 Lafayette Street, Salem, MA 01970
Gordon College, 255 Grapevine Road, Wenham, MA 01984
Community Music Center of Boston, 34 Warren Avenue, Boston, MA 02116
Boston Conservatory, 8 The Fenway, Boston, MA 02215
Boston University, 855 Commonwealth Avenue #244, Boston, MA 02215
Bridgewater State University, Bridgewater, MA 02325
University of Rhode Island, Kingston, RI 02881-0820
Community College of Rhode Island, 400 East Avenue, Warwick, RI 02886
Providence College, Cunningham Square, Providence, RI 02918
Keene State College, 229 Main Street, Keene, NH 03435-2402
University of New Hampshire, Durham, NH 03824
University of Southern Maine, 37 College Avenue, Gorham, ME 04038
University of Maine, Class of 1944 Hall, Room 208, Orono, ME 04469-5788
Central Connecticut State University, 1615 Stanley Street, New Britain, CT 06050
University of Hartford, 200 Bloomfield Avenue, West Hartford, CT 06117-1545
University of Connecticut, 1295 Storrs Road, Unit 1012, Storrs, CT 06269-1012
Western Connecticut State University, 181 White Street, Danbury, CT 06810
Montclair State University, Montclair, NJ 07043
Kean University, 1000 Morris Avenue, Union, NJ 07083
New Jersey City University, 2039 Kennedy Boulevard, Jersey City, NJ 07305-1597
William Paterson University, 300 Pompton Road, Wayne, NJ 07470
Rowan University, 201 Mullica Hill Road, Glassboro, NJ 08028
Westminster College of the Arts of Rider University, 101 Walnut Lane, Princeton, NJ 08540-3899
The College of New Jersey, Ewing, NJ 08628-0718
Rutgers University, the State University of New Jersey, Douglass Campus, 81 George Street, New Brunswick, NJ 08901-1411
The Collective-New York University, 541 Avenue of the Americas, New York, NY 10011
The Diller-Quaile School of Music, 24 East 95th Street, New York, NY 10128
Music Conservatory of Westchester, 216 Central Avenue, White Plains, NY 10606
Nyack College, 1 South Boulevard, Nyack, NY 10960
Molloy College, 1000 Hempstead Avenue, Rockville Centre, NY 11571-5002
The College of Saint Rose, 432 Western Avenue, Albany, NY 12203
Schenectady County Community College, 78 Washington Avenue, Schenectady, NY 12305
State University of New York, New Paltz, 1 Hawk Drive, New Paltz, NY 12561
Oklahoma Baptist University, 500 West University Drive, Shawnee, OK 74804-2590
East Central University, 1100 East 14th Street, Ada, OK 74820
Dallas Baptist University, 3000 Mountain Creek Parkway, Dallas, TX 75211
Southern Methodist University, P.O. Box 750356, Dallas, TX 75275
Texas A&M University-Commerce, P.O. Box 3011, Commerce, TX 75428
117
Institution Street Address City State Zip Code
East Texas Baptist
University
One Tiger Drive Marshall TX 75670
(continued)
University of Texas at
Tyler
3900 University
Boulevard
Tyler TX 75799
Stephen F. Austin State
University
P.O. Box 13043 Nacogdoches TX 75962-
3043
University of Texas at
Arlington
P.O. Box 19105 Arlington TX 76019
Texas Wesleyan University 1201 Wesleyan Street Fort Worth TX 76105
Southwestern Baptist
Theological Seminary
P.O. Box 22390 Fort Worth TX 76122-
0390
Texas Christian University 2800 South University Fort Worth TX 76129
University of North Texas 1155 Union Circle
#311367
Denton TX 76203-
5017
Texas Woman's University P.O. Box 425768 Denton TX 76204-
5768
Midwestern State
University
3410 Taft Boulevard Wichita Falls TX 76308
Tarleton State University P.O. Box T-0320 Stephenville TX 76402
University of Mary
Hardin-Baylor
900 College Street Belton TX 76513
Baylor University One Bear Place #97408 Waco TX 76798-
7408
Howard Payne University 1000 Fisk Street Brownwood TX 76801
Angelo State University ASU Station #10906 San Angelo TX 76909
Sam Houston State
University
Huntsville TX 77341
Lone Star College-
Montgomery
3200 College Park
Drive
Conroe TX 77384
Prairie View A&M
University
P.O. Box 519, Mail
Stop 2205
Prairie View TX 77446
Lamar University P.O. Box 10044, 77710 Beaumont TX 77713
Texas Lutheran University 1000 West Court Street Seguin TX 78155
Saint Mary's University of
San Antonio
One Camino Santa
Maria
San Antonio TX 78228-
8562
University of Texas at San
Antonio
One UTSA Circle San Antonio TX 78248
Texas A&M University-
Kingsville
700 University
Boulevard, MSC 174
Kingsville TX 78363
Del Mar College 101 Baldwin
Boulevard
Corpus Christi TX 78404-
3897
Texas A&M University-
Corpus Christi
6300 Ocean Drive,
Unit 5720
Corpus Christi TX 78412-
5756
University of Texas at
Brownsville
One West University
Boulevard
Brownsville TX 78520
University of Texas- Pan
American
1201 West University
Drive
Edinburg TX 78539
Southwestern University P.O. Box 770 Georgetown TX 78627-
0770
Texas State University 601 University Drive San Marcos TX 78666-
4616
University of Texas at
Austin
One University Station,
D3900
Austin TX 78712
West Texas A&M
University
P.O. Box 60879 Canyon TX 79016-
0001
Wayland Baptist
University
1900 West 7th Street,
#1286
Plainview TX 79072
Amarillo College P.O. Box 447 Amarillo TX 79178
Texas Tech University 4019 75th Street Lubbock TX 79409-
2033
Hardin-Simmons
University
2200 Hickory, Box
16230
Abilene TX 79698
Abilene Christian
University
ACU Box 28274 Abilene TX 79699
University of Texas of the
Permian Basin
4901 University
Boulevard
Odessa TX 79762
Odessa College 201 West University
Boulevard
Odessa TX 79764-
7127
University of Texas at El
Paso
300 Fox Fine Arts El Paso TX 79968-
0552
University of Colorado
Denver
Denver CO 80204
University of Denver 2344 East Iliff Avenue Denver CO 80208
Metropolitan State
University of Denver
P.O. Box 173362 Denver CO 80217-
3362
Colorado Christian
University
8787 West Alameda
Avenue
Lakewood CO 80226
University of Colorado
Boulder
Boulder CO 80309-
0301
Colorado State University Fort Collins CO 80523
University of Northern
Colorado
501 20th Street,
Campus Box 28
Greeley CO 80639
Colorado State University-
Pueblo
2200 Bonforte
Boulevard
Pueblo CO 81001-
4901
Adams State University 208 Edgemont
Boulevard
Alamosa CO 81101
Western State Colorado
University
600 North Adams,
Quigley Hall 101A/201
Gunnison CO 81231
Fort Lewis College 1000 Rim Drive Durango CO 81301
Colorado Mesa University 1100 North Avenue Grand Junction CO 81501
University of Wyoming 1000 East University
Avenue
Laramie WY 82071-
3037
Northwest College 231 West Sixth Street Powell WY 82435
Casper College 125 College Drive Casper WY 82601
Idaho State University 921 South 8th Avenue Pocatello ID 83209
Brigham Young
University- Idaho
525 South Center Rexburg ID 83460
Northwest Nazarene
University
623 South University
Boulevard
Nampa ID 83686
Boise State University 1910 University Drive Boise ID 83725-
1560
University of Idaho 875 Perimeter Drive Moscow ID 83844-
4015
Utah Valley University 800 West University
Parkway
Orem UT 84058
University of Utah 1375 East Presidents
Circle
Salt Lake City UT 84112
Utah State University 4015 Old Main Hill Logan UT 84322-
4015
Weber State University 1905 University Circle Ogden UT 84408-
1905
Brigham Young University C-550 HFAC Provo UT 84602
Snow College 150 East College
Avenue
Ephraim UT 84627
State University of New
York, Oswego
Oswego NY 13126
Syracuse University 215 Crouse College Syracuse NY 13244
State University of New
York, Potsdam
44 Pierrepont Avenue Potsdam NY 13676
Hartwick College One Hartwick Drive Oneonta NY 13820
State University of New
York, College at Oneonta
108 Ravine Parkway Oneonta NY 13820
Binghamton University,
State University of New
York
P.O. Box 6000, Vestal
Parkway East
Binghamton NY 13902-
6000
State University of New
York, Fredonia
Fredonia NY 14048
Buffalo State, State
University of New York
1300 Elmwood
Avenue
Buffalo NY 14222
Villa Maria College of
Buffalo
240 Pine Ridge Road Buffalo NY 14225
University of Rochester,
Eastman School of Music
26 Gibbs Street Rochester NY 14612
David Hochstein Memorial
Music School
50 Plymouth Avenue Rochester NY 14614
Nazareth College 4245 East Avenue Rochester NY 14618-
3790
Roberts Wesleyan College 2301 Westside Drive Rochester NY 14624
Houghton College One Willard Avenue Houghton NY 14744
Ithaca College 953 Danby Road Ithaca NY 14850
Carnegie Mellon
University
5000 Forbes Avenue Pittsburgh PA 15213
Duquesne University 600 Forbes Avenue Pittsburgh PA 15282
Duquesne University 600 Forbes Avenue Pittsburgh PA 15282-
1800
Seton Hill University One Seton Hill Drive Greensburg PA 15601
Indiana University of
Pennsylvania
422 South Eleventh
Street
Indiana PA 15705-
1049
Slippery Rock University 1 Morrow Way, 224
Swope Music Building
Slippery Rock PA 16057
Westminster College 319 South Market
Street
New
Wilmington
PA 16172-
0001
Edinboro University of
Pennsylvania, 127
Alexander Music Center
Mercyhurst University 501 East 38th Street Erie PA 16546-
0002
Pennsylvania State
University
University
Park
PA 16802-
1901
Mansfield University 18 Campus View Drive Mansfield PA 16933
Lebanon Valley College 101 North College
Avenue
Annville PA 17003
Elizabethtown College One Alpha Drive Elizabethtown PA 17022
Southern Utah University 351 West University
Boulevard
Cedar City UT 84720
Arizona State University Tempe AZ 85287
University of Arizona Tucson AZ 85721
Northern Arizona
University
Flagstaff AZ 86011
University of New Mexico One University of New
Mexico
Albuquerque NM 87131-
0001
New Mexico State
University
Las Cruces NM 88003
Eastern New Mexico
University
1500 South Avenue K Portales NM 88130
University of Nevada, Las
Vegas
4505 South Maryland
Parkway
Las Vegas NV 89154-
5025
University of Nevada, Reno Reno NV 89557
The Colburn School 200 South Grand
Avenue
Los Angeles CA 90012
Musicians Institute 6752 Hollywood
Boulevard
Los Angeles CA 90028
California State University,
Los Angeles
5151 State University
Drive
Los Angeles CA 90032
Loyola Marymount
University
One LMU Drive Los Angeles CA 90045-
2659
Pepperdine University 24255 Pacific Coast
Highway
Malibu CA 90263-
4462
Biola University 13800 Biola Avenue La Mirada CA 90639
California State University,
Dominguez Hills
1000 East Victoria
Street
Carson CA 90747
California State University,
Long Beach
1250 Bellflower Blvd. Long Beach CA 90840-
7101
Los Angeles College of
Music
370 South Fair Oaks
Avenue
Pasadena CA 91105-
2540
Pasadena Conservatory of
Music
100 North Hill Avenue Pasadena CA 91106
The Master's College 21726 Placerita
Canyon Road
Santa Clarita CA 91321
California State University,
Northridge
18111 Nordhoff Street Northridge CA 91330-
8314
California Institute of the
Arts
24700 McBean
Parkway, Room F300
Valencia CA 91355
Azusa Pacific University 901 East Alosta
Avenue
Azusa CA 91702
California State
Polytechnic University,
Pomona
3801 West Temple
Avenue
Pomona CA 91768
Point Loma Nazarene
University
3900 Lomaland Drive San Diego CA 92106
University of Redlands 1200 East Colton
Avenue
Redlands CA 92373
California State University,
San Bernardino
5500 University
Parkway
San
Bernardino
CA 92407
California Baptist
University
8432 Magnolia Avenue Riverside CA 92504
La Sierra University 4500 Riverwalk
Parkway
Riverside CA 92505
California State University,
Fullerton
800 North State
College Boulevard
Fullerton CA 92831
Chapman University One University Drive Orange CA 92866
Westmont College 955 La Paz Road Santa Barbara CA 93108-
1099
California Polytechnic
State University, San Luis
Obispo
One Grand Avenue San Luis
Obispo
CA 93407-
0326
California State University,
Fresno
2380 East Keats
Avenue
Fresno CA 93740-
8024
San Francisco
Conservatory of Music
50 Oak Street San Francisco CA 94102-
6011
San Francisco State
University
1600 Holloway
Avenue
San Francisco CA 94132
Pacific Union College One Angwin Avenue Angwin CA 94508
California State University,
East Bay
Hayward CA 94542
California Jazz
Conservatory
2087 Addison Street Berkeley CA 94704
Sonoma State University 1801 East Cotati
Avenue
Rohnert Park CA 94928
San Jose State University One Washington
Square
San Jose CA 95192
University of the Pacific 3601 Pacific Avenue Stockton CA 95211
California State University,
Stanislaus
One University Circle Turlock CA 95382
Humboldt State University One Harpst Street Arcata CA 95521
California State University,
Sacramento
6000 J Street Sacramento CA 95819-
6015
California State University,
Chico
400 West First Street Chico CA 95929-
0805
University of Hawaii at
Manoa
2411 Dole Street Honolulu HI 96822
Marylhurst University 17600 Pacific Highway
(Highway 43)
Marylhurst OR 97036-
0261
Pacific University Oregon Forest Grove OR 97116
Linfield College 900 Southeast Baker
Street
McMinnville OR 97128-
6894
George Fox University 414 North Meridian
Street
Newberg OR 97132
University of Portland 5000 North Willamette
Boulevard
Portland OR 97203
Portland State University Portland OR 97207-
0751
Willamette University 900 State Street Salem OR 97301
Western Oregon
University
345 North Monmouth
Avenue
Monmouth OR 97361
University of Oregon 1225 University of
Oregon
Eugene OR 97403
Southern Oregon
University
1250 Siskiyou
Boulevard
Ashland OR 97520
Seattle Pacific University 3307 Third Avenue,
West Suite 310
Seattle WA 98119
Western Washington
University
Bellingham WA 98225-
9107
University of Puget Sound 1500 North Warner
#1076
Tacoma WA 98416
Pacific Lutheran
University
12180 Park Avenue Tacoma WA 98447
Central Washington
University
400 East University
Way
Ellensburg WA 98926
Eastern Washington
University
Cheney WA 99004
Washington State
University
Pullman WA 99164-
5300
Whitworth University 300 West Hawthorne
Road
Spokane WA 99251-
1701
Gonzaga University 502 East Boone
Avenue
Spokane WA 99288
Walla Walla University 204 South College
Avenue
College Place WA 99324-
1198
University of Alaska,
Anchorage
3211 Providence Drive Anchorage AK 99508
University of Alaska
Fairbanks
Fairbanks AK 99775-
5660
National Office for Arts
Accreditation
11250 Roger Bacon
Drive, Suite 21
Reston VA 20190
Messiah College One College Avenue,
Suite 3004
Mechanicsburg PA 17055
Gettysburg College 300 North Washington
Street
Gettysburg PA 17325
York College of
Pennsylvania
441 Country Club
Road
York PA 17403
Millersville University of
Pennsylvania
1 South George Street Millersville PA 17551
Bloomsburg University of
Pennsylvania
400 East Second Street Bloomsburg PA 17815
Bucknell University Lewisburg PA 17837
Susquehanna University 514 University Avenue Selinsgrove PA 17870
Moravian College 1200 Main Street Bethlehem PA 18018
Marywood University 2300 Adams Avenue Scranton PA 18509-
1598
Bucks County Community
College
275 Swamp Road Newtown PA 18940
Cairn University 200 Manor Avenue Langhorne PA 19047-
2990
University of the Arts 320 South Broad Street Philadelphia PA 19102
Curtis Institute of Music 1726 Locust Street Philadelphia PA 19103
Academy of Vocal Arts 1920 Spruce Street Philadelphia PA 19103
Temple University 1715 North Broad
Street
Philadelphia PA 19122
Settlement Music School 416 Queen Street Philadelphia PA 19147-
3966
Immaculata University 1145 King Road Immaculata PA 19345-
0703
West Chester University of
Pennsylvania
817 South High Street West Chester PA 19383
Kutztown University of
Pennsylvania
1520 Kutztown Road Kutztown PA 19530
University of Delaware Newark DE 19716
The Music School of
Delaware
4101 Washington
Street
Wilmington DE 19802
Levine Music 2801 Upton Street,
Northwest
Washington DC 20008
American University 4400 Massachusetts
Avenue, Northwest
Washington DC 20016-
8053
George Washington
University
Washington DC 20052
Howard University 2400 Sixth Street,
Northwest
Washington DC 20059
The Catholic University of
America
620 Michigan Avenue,
Northeast
Washington DC 20064
University of Maryland,
College Park
College Park MD 20742
The Peabody Institute of
the Johns Hopkins
University
One East Mount
Vernon Place
Baltimore MD 21202-
2397
Chowan University One University Place Murfreesboro NC 27855
Spelman College 350 Spelman Lane,
Southwest
Atlanta GA 30314
Florida Memorial
University
15800 Northwest 42
Avenue
Miami
Gardens
FL 33054
New World Symphony 500 17th Street Miami Beach FL 33139-
1862
Palm Beach Atlantic
University
West Palm
Beach
FL 33416-
4708
Florida College 119 North Glen Arven
Avenue
Temple
Terrace
FL 33617
The Players School of
Music
923 McMullen Booth
Road
Clearwater FL 33759
Friends University 2100 West University
Avenue
Wichita KS 67213
University of Southern
Mississippi
118 College Drive Box
5081
Hattiesburg MS 39406
Alverno College 3401 South 39th Street Milwaukee WI 53234-
3922
University of Wisconsin-
La Crosse
La Crosse WI 54601
University of Wisconsin
Superior
Belknap and Catlin Superior WI 54880-
4500
University of Saint Thomas 2115 Summit Avenue Saint Paul MN 55105
Music Institute of Chicago 1702 Sherman Avenue Evanston IL 60201
William Rainey Harper
College
1200 West Algonquin
Road
Palatine IL 60618
APPENDIX B
SURVEY INVITATION LETTER
Subject: Information request regarding accreditation costs
Date: November 2014
Attachment: “NASM Membership Perceptions.pdf”
Dear ALOFirstName ALOLastName:
My name is Jennifer Barczykowski and I am a student in the Ed.D. program at the University of
Southern California conducting research on the perceived benefits and costs associated with
membership/accreditation with the National Association of Schools of Music (NASM). As the
designated Accreditation Liaison Officer at your institution, you likely have particular insight
into the accreditation process and the potential benefits and costs related to your institution's
participation. This email is an invitation to participate in research on this subject now by
completing a short survey on the NASM accreditation process at your school.
The purpose of this study is to evaluate the NASM standards and their applicability to all
member institutions while also looking at the perceived costs and benefits to participating in the
NASM accreditation process. In order to better understand how the NASM standards apply to
various institution types, participation by all NASM members is critically important. The study
will make a significant contribution to understanding perceived costs and benefits to NASM
membership, and because of the perspective you have at your school your help is critical to its
success. The study is IRB-approved and the information you share will be secured locally on a
password-protected removable drive. Your confidentiality and anonymity will be strictly
maintained.
The survey itself should take no more than 15 minutes to complete, and it is hoped that the
experience will be interesting to you because of your expertise in the field. A PDF copy of the
study is attached for your reference and to facilitate task sharing as appropriate. You can access
and submit the survey directly at:
[Hyperlink to survey online]
Your participation in this study is entirely voluntary and you may decline to answer any
questions you wish without explanation. Your assistance will be greatly appreciated. To thank
you for your contribution you can receive a complete copy of the findings of the study by
contacting me directly (since the survey is completely anonymous). If you have any questions
about the survey or the study, please do not hesitate to contact me at barczyko@usc.edu or
(310)844-3154.
Best regards,
Jennifer Barczykowski, M.Ed., Doctor of Education (Ed.D.) Candidate
USC Rossier School of Education
APPENDIX C
SURVEY INVITATION FOLLOW UP LETTER
Subject: Invitation Follow-Up: Information request regarding perceived accreditation benefits
and costs
Date: April 2015
Dear Institutional Accreditation Liaison:
On March 17, 2015, an email was sent to you inviting your participation in a study on the
perceived benefits and costs associated with National Association of Schools of Music (NASM)
membership. Because so few studies have examined this issue, this IRB-approved study has an
opportunity to provide important insight into the perceived costs and benefits of NASM
membership. As an individual who previously participated in the NASM accreditation/self-study
review process at your institution, you likely have particular insight into the accreditation
process and the potential benefits and costs related to your institution's participation. This email
is a reminder to participate in the pilot study on this subject now by completing a short survey on
the NASM accreditation process.
If you have completed and submitted the survey, thank you very much for your assistance. If you
have not yet had the chance to contribute to the research however, please take about 15 minutes
to do so now.
Follow this link to the Survey:
https://usceducation.az1.qualtrics.com/SE/?SID=SV_ewVCqDrTLWEg0yp
Or copy and paste the URL below into your internet browser:
https://usceducation.az1.qualtrics.com/SE/?SID=SV_ewVCqDrTLWEg0yp
Your participation in this pilot study is entirely voluntary and you may decline to answer any
questions you wish without explanation. Your assistance will be greatly appreciated. If you are
able to participate, I would appreciate your response by Friday, April 17, 2015. To thank you
for your contribution you can receive a complete copy of the findings of the study by contacting
me directly. If you have any questions about the survey or the study, please do not hesitate to
contact me at barczyko@usc.edu or (310)844-3154.
Sincerely,
Jennifer Barczykowski
Doctor of Education (Ed.D.) Candidate
USC Rossier School of Education
APPENDIX D
SURVEY INSTRUMENT
Default Question Block
Survey on Perceived Benefits and Costs of Membership with the National Association of
Schools of Music
The purpose of this study is to investigate the perceived benefits of being accredited by and/or
having membership with the National Association of Schools of Music (NASM). In looking at
the direct and indirect costs of participating in the NASM accreditation/membership process,
this study seeks to evaluate institutional perceptions of both the perceived costs and benefits.
The study will make a significant contribution to understanding perceived costs and benefits to
NASM membership, and because of the perspective you have at your school your help is critical
to its success. The survey should take no more than 15 minutes to complete.
To thank you for your contribution you can receive a copy of the findings by contacting Jennifer
Barczykowski, Ed. D. student at the University of Southern California, at barczyko@usc.edu.
Part I: Demographic Information
Accreditation affiliation (please select one):
NASM only
Both NASM & Regional Accreditation (select region)
Regional Accreditation only (select region)
No affiliation with accrediting body
In addition to NASM, please select the regional accreditation body that your institution is
affiliated with:
Middle States Commission on Higher Education (MSCHE)
North Central Association of Colleges and Schools (NCA)
New England Association of Schools and Colleges (NEASC)
Northwest Commission on Colleges and Universities (NWCCU)
Southern Association of Colleges and Schools (SACS)
Western Association of Schools and Colleges (WASC)
Please select the regional accreditation body that your institution is affiliated with:
Middle States Commission on Higher Education (MSCHE)
North Central Association of Colleges and Schools (NCA)
New England Association of Schools and Colleges (NEASC)
Northwest Commission on Colleges and Universities (NWCCU)
Southern Association of Colleges and Schools (SACS)
Western Association of Schools and Colleges (WASC)
Basic Carnegie Classification (please select one):
Research/Doctoral University (Includes institutions that awarded at least 20 research
doctoral degrees during the year excluding doctoral-level degrees that qualify recipients
for entry into professional practice, such as the JD, MD, PharmD, DPT, etc. Excludes
Special Focus Institutions and Tribal Colleges.)
Master's College or University (Generally includes institutions that awarded at least 50
master's degrees and fewer than 20 doctoral degrees during the year. Excludes Special
Focus Institutions and Tribal Colleges.)
Baccalaureate College (Includes institutions where baccalaureate degrees represent at
least 10 percent of all undergraduate degrees and where fewer than 50 master's degrees or
20 doctoral degrees were awarded during the update year. Excludes Special Focus
Institutions and Tribal Colleges.)
Special Focus Institution (Institutions awarding baccalaureate or higher-level degrees
where a high concentration of degrees, above 75%, is in a single field or set of related
fields. Excludes Tribal Colleges.)
Tribal College (Colleges and universities that are members of the American Indian
Higher Education Consortium, as identified in IPEDS Institutional Characteristics.)
What experience(s) have you had in accreditation at your current institution? (Select all that
apply)
Participated in NASM review cycle
Participated in regional accreditation review cycle
Have not participated in either NASM or regional accreditation review cycle
What is your current job title?
Are you the individual responsible for accreditation at your institution? (Please select one)
Yes
No
How many individuals participate in the NASM self-study process at your institution? (Please
specify number)
Composition of the self-study team at your institution (Please check all that apply and include
number of participants in each category):
Faculty
Staff
Administrators
How often does this NASM self-study committee meet?
Duration of meetings (minutes):
Date of last NASM review (MM/YYYY):
Date (MM/YYYY)
My institution participated in the following accreditation self-study (please select one):
Format A (All institutions applying for NASM membership or renewal of membership)
Format B (Applying for NASM membership after Associate membership)
Format C (Institutions holding NASM membership for five (5) years or more, with the
advice and consent of the Executive Director and/or Institutions for which NASM is not
the designated institutional accreditor)
Part II: NASM Accreditation Standards
NASM Membership & Credibility
Comments regarding the relationship between NASM membership and institutional credibility:
To your knowledge, has your institution considered withdrawing from NASM?
Yes
No
Unsure
Strongly Disagree    Disagree    Neither Agree nor Disagree    Agree    Strongly Agree
I believe that NASM membership brings value to the credibility of your institution.
NASM Standards & Quality of Education
Comments regarding the relationship between NASM standards and the quality of education
given:
NASM claims to provide access to the following:
• A national forum for the discussion and consideration of concerns relevant to
the preservation and advancement of standards in the field of music, particularly in
higher education.
• Develop a national unity and strength for the purpose of maintaining the position
of music study in the family of fine arts and humanities in our universities, colleges,
and schools of music.
• Maintain professional leadership in music training and to develop a national context
for the professional growth of individual musicians as artists, scholars, teachers, and
participants in music and music-related enterprises.
• Establish minimum standards of achievement without restricting an administration or
school in its freedom to develop new ideas, to experiment, or to expand its program.
• Recognize that inspired teaching may rightly reject a “status quo” philosophy.
• Establish that the prime objective of all educational programs in music is to provide
the opportunity for every music student to develop individual potentialities to the
utmost.
Do you agree that all of these opportunities exist for member institutions?
Yes
No
Strongly Disagree    Disagree    Neutral/No Opinion    Agree    Strongly Agree
The standards outlined by NASM impact the quality of the education given at individual institutions.
Comments regarding NASM goals, outlined at top of the page:
What do you believe are the most important benefit(s) associated with NASM membership?
What do you believe are the greatest cost(s) associated with NASM membership?
Self-Study is designed to produce a comprehensive effort on the part of the institution to evaluate
its own program while considering its objectives, publicly or otherwise stated.
Comments on the self-study process:
Peer evaluation provides professional, objective judgment from outside the institution and is
accomplished through on-site visitation, a formal Visitors’ Report, and Commission review.
Comments on the peer evaluation process:
Strongly Disagree    Disagree    Neither Agree nor Disagree    Agree    Strongly Agree
I feel the self-study process meets this objective.
Strongly Disagree    Disagree    Neither Agree nor Disagree    Agree    Strongly Agree
I feel the peer evaluation process meets this objective.
NASM & Institutional Autonomy
Comments on the effectiveness of institutional autonomy in ensuring institutions meet NASM
standards:
What recommendations do you have for the NASM accreditation self-study review process?
(Please select one, and leave comments if necessary)
No recommendations, the process is fine in its current state.
I have some recommendations, including:
NASM Standards & Institutional Effectiveness
Comments on the ability for NASM standards to evaluate institutional effectiveness (include any
recommendations you may have):
Thank you again for your participation in this survey.
You can receive a copy of the findings by contacting Jennifer Barczykowski, Ed. D. student at
the University of Southern California, at barczyko@usc.edu.
Strongly Disagree    Disagree    Neither Agree nor Disagree    Agree    Strongly Agree
I believe the autonomy given to NASM member institutions is an effective way of ensuring that
each institution meets specified NASM standards.
Strongly Disagree    Disagree    Neither Agree nor Disagree    Agree    Strongly Agree
I believe that the NASM standards evaluate institutional effectiveness appropriately.
APPENDIX E
SURVEY ON NASM ACCREDITATION STANDARDS &
EFFECTIVENESS CODEBOOK
Demographic Information
[Q2 accredaff] Accreditation affiliation (please select one): [Nominal]
• NASM only [1]
• Both NASM & Regional Accreditation (select region)[2]
i. Middle States Commission on Higher Education (MSCHE) [2.1]
ii. North Central Association of Colleges and Schools (NCA) [2.2]
iii. New England Association of Schools and Colleges (NEASC) [2.3]
iv. Northwest Commission on Colleges and Universities (NWCCU) [2.4]
v. Southern Association of Colleges and Schools (SACS) [2.5]
vi. Western Association of Schools and Colleges (WASC) [2.6]
• Regional Accreditation only (select region) [3]
i. Middle States Commission on Higher Education (MSCHE) [3.1]
ii. North Central Association of Colleges and Schools (NCA) [3.2]
iii. New England Association of Schools and Colleges (NEASC) [3.3]
iv. Northwest Commission on Colleges and Universities (NWCCU) [3.4]
v. Southern Association of Colleges and Schools (SACS) [3.5]
vi. Western Association of Schools and Colleges (WASC) [3.6]
• No affiliation with accrediting body [4]
[Q4 carnclass] Basic Carnegie Classification (please select one): [Nominal]
• Research/Doctoral University (Includes institutions that awarded at least 20
research doctoral degrees during the year excluding doctoral-level degrees that
qualify recipients for entry into professional practice, such as the JD, MD,
PharmD, DPT, etc. Excludes Special Focus Institutions and Tribal Colleges.)
[4.1]
• Master's College or University (Generally includes institutions that awarded at
least 50 master's degrees and fewer than 20 doctoral degrees during the year.
Excludes Special Focus Institutions and Tribal Colleges.) [4.2]
• Baccalaureate College (Includes institutions where baccalaureate degrees
represent at least 10 percent of all undergraduate degrees and where fewer than 50
master's degrees or 20 doctoral degrees were awarded during the update year.
Excludes Special Focus Institutions and Tribal Colleges.) [4.3]
• Special Focus Institution (Institutions awarding baccalaureate or higher-level
degrees where a high concentration of degrees, above 75%, is in a single field or
set of related fields. Excludes Tribal Colleges. [4.4]
• Tribal College (Colleges and universities that are members of the American
Indian Higher Education Consortium, as identified in IPEDS Institutional
Characteristics.) [4.5]
• Associates: These institutions award associate’s degrees but no bachelor’s
degrees. [4.6]
• Associates Dominant: These institutions award both associate’s and bachelor’s
degrees, but the majority of the degrees awarded are at the associate’s level. [4.7]
[Q5 accredexp] What experience(s) have you had in accreditation at your current
institution? (Select all that apply) [Nominal]
• Participated in NASM review cycle [nasmexp]
• Participated in regional accreditation review cycle [regexp]
• Have not participated in either NASM or regional accreditation review cycle
[noexp]
[Q6 jobtitle] What is your current job title?________________ [Nominal]
[Q7 respaccred] Are you the individual responsible for accreditation at your institution?
(Please select one) [Nominal]
• Yes [7.1]
• No [7.2]
[Q8 indivpart] How many individuals participate in the NASM self-study process at your
institution? (Please specify number) [Nominal]
_________________
[Q9] Composition of the self-study team at your institution (Please check all that apply
and include number of participants in each category): [Nominal]
• Faculty________ [faccomp]
• Staff__________ [staffcomp]
• Administrators________ [admincomp]
[Q10 meetfreq] How often does this NASM self-study committee
meet?_____________________ [Nominal]
[Q11 meetdur] Duration of meetings (minutes):________________ [Nominal]
[Q12 lastreview] Date of last NASM review (MM/YYYY): _______________
[Nominal]
[Q13 studyformat] My institution participated in the following accreditation self-study
(please select one): [Nominal]
• Format A [13.1]
• Format B [13.2]
• Format C [13.3]
[Q14 standard] NASM Accreditation Standards
[Q19 membcred] I believe that NASM membership brings value to the credibility of your
institution. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[19.1] [19.2] [19.3] [19.4] [19.5]
[Q27 memcredcmnt] Comments regarding the relationship between NASM membership and
institutional credibility:
___________________________________________________________________________
_______________________________________________________________
[Q16 wdrwcons] To your knowledge, has your institution considered withdrawing from NASM?
[Nominal]
• Yes [16.1]
• No [16.2]
[Q22 stndimpctqlty] The standards outlined by NASM influence the quality of the education
given at individual institutions. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[22.1] [22.2] [22.3] [22.4] [22.5]
[Q30 cmntqltyeduc] Comments regarding the relationship between NASM standards and
the quality of education given: [Nominal]
________________________________________________________________________
____________________________________________________________
[Q19 accessclaim] NASM claims to provide access to the following:
• A national forum for the discussion and consideration of concerns relevant to the
preservation and advancement of standards in the field of music, particularly in
higher education.
• Develop a national unity and strength for the purpose of maintaining the position of
music study in the family of fine arts and humanities in our universities, colleges,
and schools of music.
• Maintain professional leadership in music training and to develop a national context
for the professional growth of individual musicians as artists, scholars, teachers, and
participants in music and music-related enterprises.
• Establish minimum standards of achievement without restricting an administration
or school in its freedom to develop new ideas, to experiment, or to expand its
program.
• Recognize that inspired teaching may rightly reject a “status quo” philosophy.
• Establish that the prime objective of all educational programs in music is to provide
the opportunity for every music student to develop individual potentialities to the
utmost.
[Q20 opporexist] Do you agree that all of these opportunities exist for member institutions?
[Nominal]
• Yes [20.1]
• No [20.2]
[Q31 cmntgoals] Comments regarding the NASM goals outlined above: [Nominal]
___________________________________________________________________________
[Q21 imptben] What do you believe are the most important benefit(s) associated with NASM
membership? [Nominal]
______________________________________________________________________________
[Q22 grtstcost] What do you believe are the greatest cost(s) associated with NASM membership?
[Nominal]
______________________________________________________________________________
[Q29 selfevalpgrm] Self-study is designed to produce a comprehensive effort on the part of the
institution to evaluate its own program in light of its objectives, publicly stated or otherwise.
I feel the self-study process meets this objective. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[29.1] [29.2] [29.3] [29.4] [29.5]
[Q32 cmntselfstdy] Comments: [Nominal]
______________________________________________________________________________
[Q31 peereval] Peer evaluation provides professional, objective judgment from outside the
institution and is accomplished through on-site visitation, a formal Visitors’ Report, and
Commission review. I feel the peer evaluation process meets this objective. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[31.1] [31.2] [31.3] [31.4] [31.5]
[Q31_1 cmntpeereval] Comments: [Nominal]
______________________________________________________________________________
[Q33 instlautnmy] I believe the autonomy given to NASM member institutions is an effective
way of ensuring that each institution meets specified NASM standards. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[33.1] [33.2] [33.3] [33.4] [33.5]
[Q34 cmntinstlautnmy] Comments: [Nominal]
______________________________________________________________________________
[Q26 recmndtn] What recommendations do you have for the NASM accreditation self-study
review process? (Please select one, and leave comments if necessary) [Nominal]
• No recommendations; the process is fine in its current state. [26.1]
• I have some recommendations, including: [26.2]
________________________________________________________________________ [26.2 TEXT recmndtntxt]
[Q37 evaleffect] I believe that the NASM standards evaluate institutional effectiveness
appropriately. [Ordinal]
1 2 3 4 5
Strongly Disagree Disagree Neutral/No Opinion Agree Strongly Agree
[37.1] [37.2] [37.3] [37.4] [37.5]
[Q35 cmnteffect] Comments on the ability of NASM standards to evaluate institutional
effectiveness (include any recommendations you may have): [Nominal]
______________________________________________________________________________
APPENDIX F
NASM ACCREDITATION STANDARDS ANALYSIS CODING SCHEME
Table F1
NASM Accreditation Standards: Qualtrics Survey

Construct                                 Question(s)
Prestige/Credibility                      1
Membership Status                         2
Quality of Education/Curriculum           3
Cost vs. Benefit                          4, 5, 6
Effectiveness of Accreditation Process    7, 8, 9, 10
Standards                                 11
Qualitative Participation                 12
Abstract
This quantitative study examined institutional perceptions of the benefits and costs associated with National Association of Schools of Music (NASM) membership and accreditation, and examined whether those perceptions varied by institution type and regional accreditation status. The study described various aspects of the accreditation process for post-secondary music schools across the United States. Despite the benefits of accreditation, it remains unclear whether the standards established in the accreditation process are fair and consistent for all parties concerned, and whether those standards remain relevant to the needs of member institutions. The original purpose behind accreditation has been obscured as the emphasis has shifted toward easily quantifiable outcomes; the evidence indicates that standardization constrains rather than fosters creativity, because attention remains on what is easily measurable rather than on what is not. The purpose of this study, therefore, was to examine the perceived benefits and costs associated with NASM accreditation. A quantitative approach was employed to explore the monetary and nonmonetary benefits of NASM membership, and open-ended responses were used to support the quantitative results. Findings demonstrated that accreditation was perceived to enhance institutional credibility, and participants viewed peer evaluation and self-study as important mechanisms for ensuring quality of education. Participants also perceived the accreditation process as appropriate for improving institutional effectiveness. Further study was recommended to understand the variables that influence institutions to apply for accreditation.
Asset Metadata
Creator: Barczykowski, Jennifer Lynn (author)
Core Title: The goals of specialized accreditation: A study of perceived benefits and costs of membership with the National Association of Schools of Music
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 07/09/2018
Defense Date: 03/31/2018
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: accreditation; specialized accreditation; National Association of Schools of Music; OAI-PMH Harvest
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Tobey, Patricia (committee chair); Lundeen, Rebecca (committee member); Picus, Lawrence (committee member)
Creator Email: barczyko@usc.edu; jenniferbarczykowski@yahoo.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-509697
Unique Identifier: UC11265819
Identifier: etd-Barczykows-6378.pdf (filename); usctheses-c40-509697 (legacy record id)
Legacy Identifier: etd-Barczykows-6378.pdf
Dmrecord: 509697
Document Type: Dissertation
Rights: Barczykowski, Jennifer Lynn
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA