AN EXPLORATORY, QUANTITATIVE STUDY OF ACCREDITATION ACTIONS
TAKEN BY THE WESTERN ASSOCIATION OF SCHOOLS AND COLLEGES’
ACCREDITING COMMISSION FOR COMMUNITY AND JUNIOR COLLEGES
(WASC-ACCJC) SINCE 2002
by
Ryan William Theule
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2012
Copyright 2012 Ryan William Theule
DEDICATION
To my dear wife, Beth, who has supported me tirelessly, and to Allie, my beautiful daughter, who was born in the midst of my graduate studies—you are the joys of my life!
I dedicate this dissertation to you, to our future as a family, and to my work for students
in higher education. I cannot wait to see what the future holds for us all.
ACKNOWLEDGEMENTS
With appreciation, I thank Dr. Robert Keim for his patient support and assistance,
and my committee for their invaluable insights and counsel. I am also grateful for the help and sounding-board assistance of Dr. Gokce Gokalp, who encouraged my journey through statistics. Likewise, I am thankful for the advice and encouragement of Dr. Daylene Meuschke, whose enthusiasm for the USC Rossier School of Education
following her own doctoral studies convinced me to apply and attend. I am also very
grateful for the encouragement and flexibility of my community college colleagues who
made every effort to support my successful studies. I owe a debt of gratitude to the
diligent faculty and supportive classmates I encountered at the University of Southern
California, and I am extremely appreciative of the scholarship support of the
USC Town and Gown and the USC Rossier School of Education, which made my studies
possible. Finally, and with deepest humility, I am grateful to my family, who believed in,
facilitated, and encouraged my own educational journeys through college and beyond.
With deep thanks, I will endeavor to serve others and honor God with the blessings of
these opportunities.
TABLE OF CONTENTS

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
CHAPTER 1: OVERVIEW OF THE STUDY
  Introduction
  Accreditation and Higher Education
  The Purposes of Accreditation
  Historical Trends in Regional Accreditation
  Accreditation, Accountability, and Community Colleges
  Context and Rationale for this Study
  Statement of the Problem
  Purpose of the Study
  Significance of the Study
CHAPTER 2: LITERATURE REVIEW
  Introduction
  History and Themes of Regional Accreditation
  Accountability and Accreditation
  Assessment and Accreditation
  Accreditation in WASC-ACCJC
  Conclusion
CHAPTER 3: METHODOLOGY AND RESEARCH DESIGN
  Rationale for the Study
  Research Design
  Data Collection and Analysis
  Limitations and Delimitations
CHAPTER 4: RESEARCH FINDINGS
  Introduction
  Descriptive Statistics
  Comparative Statistics
  Additional Statistical Analysis
  Findings in Relationship to Research Questions
  Summary of Analysis and Findings
CHAPTER 5: SUMMARY, CONCLUSIONS, AND IMPLICATIONS
  Purpose of the Study
  Summary of Findings
  Limitations
  Recommendations
  Implications of the Findings
  Conclusion
GLOSSARY
REFERENCES
APPENDICES:
  Appendix A: Accrediting Organizations
  Appendix B: WASC-ACCJC Accreditation Standards (2002)
  Appendix C: WASC-ACCJC California Community Colleges
  Appendix D: Accreditation Status, By California Community College
  Appendix E: Ongoing Accreditation Status
  Appendix F: Exploration of Peak-Sanction Year (2008)
LIST OF TABLES

Table 1.1: Western Association of Schools and Colleges—Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) Organizational Detail
Table 1.2: Public and Private, Two-Year and Four-Year Institutions Accredited by a Regional Accreditation Commission, by Region
Table 1.3: Timeline for the Establishment of the Regional-Accreditation Commissions
Table 3.1: Detail for WASC-ACCJC
Table 3.2: Detail of Variables
Table 3.3: Analytical Strategy for Data Analysis
Table 4.1: Ongoing Accreditation Status
Table 4.2: Accreditation Actions by Year
Table 4.3: Overall Percentage of Sanctioned CCCs
Table 4.4: Categories and Subsets of Student and Institutional Variables Used
Table 4.5: Descriptive Statistics: Graduation, Transfer, and Retention
Table 4.6: Explanation of Student Variables Used
Table 4.7: Descriptive Statistics: Credit, Non-Credit, and Total Full-Time Equivalent Students (FTES)
Table 4.8: Descriptive Statistics, Carnegie Size Classification
Table 4.9: IPEDS Budget Information in Dollars, By Year
Table 4.10: IPEDS Budget Spending By Area in Dollars, By Year
Table 4.11: IPEDS Staffing Information in FTE, By Year
Table 4.12: Explanation of Institutional Variables Used
Table 4.13: Chi-Square Association of Variables and Categorical Accreditation Status, by Year
Table 4.14: Categories of Variables Identified as Potentially Associated by Chi-Square
Table 4.15: First Step: Multinomial Logistic Regression Analysis of Individual Chi-Square Identified Variables (IV) and Accreditation Status (DV), By Significance
Table 4.16: Second Step: Multinomial Logistic Regression Analysis of Chi-Square Identified Variables (IV) and Accreditation Status (DV), By Year
Table 4.17: Third Step: Multinomial Logistic Regression Analysis of Chi-Square Identified Variables (IV) and Accreditation Status (DV), Using Top Four Chi-Square Categories, By Year
Table 4.18: Tabular Summary of Findings by Research Question One
Table 4.19: Tabular Summary of Findings by Research Question Two
Table 4.20: Tabular Summary of Findings by Research Question Three
Table 4.21: Summary of Variables and Statistical Analyses Performed
Table A.1: Accrediting Organizations Recognized by the Council for Higher Education Accreditation (CHEA) and/or the US Department of Education (USDE)
Table B.1: WASC-ACCJC Accreditation Standards (2002)
Table C.1: WASC-ACCJC California Community Colleges, by Title
Table D.1: Accreditation Status, by California Community College
Table E.1: Ongoing Accreditation Status and Percentage of Institutions on Sanction
Table F.1: Exploration of Peak-Sanction Year (2008) in Demonstration of Sample Methods for Future Study
LIST OF FIGURES

Figure 1.1: Coordination of Accreditation
Figure 1.2: Complex Relationships
Figure 2.1: Competing Forces
Figure 2.2: Redux of Accountability Triangle for Accreditation
Figure 4.1: Percent of All CCCs with Sanctioned Status
Figure 4.2: Overall Count of All WASC-ACCJC Actions, Annual
Figure 4.3: Overall Count of WASC-ACCJC Sanctions, Annual
Figure 4.4: Sanctions by WASC-ACCJC as a Percentage of Actions Taken that Year
Figure 4.5: Sample Distribution of Student Variable – Transfer Rate, 2009
Figure 4.6: Logic Model of Statistical Tests
ABSTRACT
The purpose of this study was to conduct a quantitative exploration of
accreditation actions issued by the Western Association of Schools and Colleges’
Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) since
2002, in order to examine their relationship to a dataset of common student and
institutional variables. This exploration was based on the observation that WASC-ACCJC sanctions have greatly increased since 2002 within a rising culture of accountability, prompting speculation about the association between student outcomes, institutional resources, and ongoing accreditation status. Initial data collection found
that fifty-five percent of all California community colleges have been sanctioned at least
once since 2002. In exploring the variables and accreditation statuses of these colleges,
this study sought to provide more information concerning the presence or absence of
significant patterns of association between public institutional data and accreditation
status, providing additional insight to this underexplored period of community college
accreditation and suggesting avenues for future study.
The research questions that guided this study are as follows: In California
community colleges reviewed by WASC-ACCJC since 2002, what is the relationship
between the specific accreditation action taken by the commission and the most common
student outcomes variables cited by the literature—namely, graduation rate, transfer, and
retention? In California community colleges reviewed by WASC-ACCJC since 2002,
what is the relationship between accreditation status and several common institutional
variables—namely, full-time equivalent students (FTES), budget, staffing, and size?
Finally, in those California community colleges that were sanctioned by WASC-ACCJC
since 2002, what patterns emerge that may inform institutional knowledge about the
relationship between accreditation action and institutional measures?
This study examined institutional accreditation status according to categories of “clear” and “sanctioned” accreditation, and compared these statuses to common student and institutional variables for the years matching the accreditation status. Using chi-square tests of association and a subsequent set of logistic regression analyses, this study noted several statistically significant associations between college data and accreditation status: student graduation rates, full- and part-time retention rates, and transfer rates were each found significant in multiple instances of the post-2002 data. Significant associations were found in greater number among student outcomes variables than among institutional variables. These findings offer some initial insights into recent WASC-ACCJC accreditation actions, suggesting that some quantitative measures of student outcomes, such as graduation rate, are associated with accreditation status. However, year-to-year comparisons did not demonstrate an ongoing pattern of significant associations. These findings suggest that accreditation status has not been co-opted by outcomes measures alone, while likewise suggesting the need for further exploration of the rising rate of sanctions by WASC-ACCJC in order to understand this new trend in community college accreditation.
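As a rough, hypothetical illustration of the two-step analytic sequence described above (a sketch on synthetic data, not the study's actual code, variables, or results), the following Python fragment pairs a chi-square test of association with a multinomial logistic regression:

```python
# Hypothetical sketch of the two-step analysis described above, run on
# synthetic data. Variable names, codings, and values are illustrative
# assumptions; the study's actual design appears in chapters three and four.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 112  # California community colleges

df = pd.DataFrame({
    # 0 = clear, 1 = sanctioned/conditional, 2 = jeopardized/rescinded
    "status": rng.integers(0, 3, size=n),
    "grad_rate": rng.uniform(0.10, 0.60, size=n),
})
# Bin the continuous outcome variable into terciles for the chi-square step.
df["grad_rate_bin"] = pd.qcut(df["grad_rate"], q=3, labels=False)

# Step 1: chi-square test of association between categorical accreditation
# status and the binned student-outcome variable.
table = pd.crosstab(df["status"], df["grad_rate_bin"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Step 2: multinomial logistic regression of accreditation status (DV) on
# the variables the chi-square step flagged as potentially associated (IVs).
mnl = sm.MNLogit(df["status"], sm.add_constant(df[["grad_rate"]])).fit(disp=0)
print(mnl.summary())
```

The design choice this sketch reflects is that continuous measures must be binned before a chi-square test, whereas the regression step can use them directly; only variables flagged in step one would advance to step two.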
CHAPTER ONE: OVERVIEW OF THE STUDY
Introduction
Mention the term accreditation, and a wide variety of meanings is conjured. To some it is a “seal of approval” indicating that an institution has met a minimum threshold for quality and standards (Rogers, 2000) and a mark that it has attained a “stamp” of acceptance by peer institutions (Atwell, 1994; Brittingham, 2008; Nettles, Cole, & Sharp, 1997). To others, it may usher in thoughts of measures, assessments, student learning outcomes, and the collection and review of data (Burke & Minassians, 2002; Crow, 2009; Ewell, 1990; WASC-ACCJC, 2002; Wolff, 1992). Even the very nature of accreditation seems dichotomous. On the one hand, it has been lauded as the height of innovation by private, voluntary associations; on the other, some have called for accreditation to become an extended arm of state or national oversight of educational institutions (Brittingham, 2008). The history of accreditation attests to a process that has incrementally grown into
a time-consuming, cost-laden effort by many institutions to collect data and package
institutional self-study materials in order to present themselves in the best possible light
for their regional accreditation review (Shibley & Volkwein, 2002). Accreditation, it
seems, has evolved from simpler days of semi-informal peer assessment into a
burgeoning industry of detailed data analysis, student learning outcomes assessment,
quality and performance review, financial analysis, public attention, and all-around
institutional scrutiny (Bloland, 2001; Burke & Minassians, 2002; McLendon, Hearn, &
Deaton, 2006; Zis, Boeke, & Ewell, 2010).
Public scrutiny of institutions to demonstrate their worth and their contribution to student learning, along with an increasingly regulated demand for institutional proof of success grounded in evidence and assessment, has dramatically transformed accreditation in the past decade and has created a vacuum of knowledge about how accreditation is truly working in practice (Commission on the Future of Higher Education, 2006; Dougherty, Hare, & Natow, 2009; Leef & Burris, 2002). This vacuum can be filled with an informed,
exploratory study. In the particular case of the Western Association of Schools and
Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-
ACCJC), recent history has demonstrated profound changes in practices (e.g., updated
standards for accreditation and a rising rate of institutional sanctions) and the need for
more information concerning the relationship of accreditation action to institutional data
(Baker, 2002). Multi-directional movements in higher education—measures of inputs
versus measures of outputs, local control versus governmental review, performance
funding versus institutional choice—make difficult the task of understanding trends and
trajectories of regional accreditation in the United States, but nonetheless have a profound influence upon the actual implementation of accreditation standards at real-world institutions. It is within this context of rising demands, institutional costs in terms of time
and effort, and increasing concern by both institutional and public agents as to the role of
higher education that this study is offered (Leef & Burris, 2002).
California Community Colleges
Institutional agents with responsibility to prepare for an accreditation review of
their colleges and universities need more information about the nature of accreditation’s
embrace or rejection of accountability and assessment pressures upon what was
previously an individuated, private review process of local institutions by peers. In
particular, leaders at California community colleges, who work within the single largest
educational system in the United States, would be well-served by descriptive and
exploratory data concerning the relationship of accreditation review actions by WASC-
ACCJC and several of the most commonly collected quantitative measures of their
institutional and student performance. Though the regional accreditation process remains,
in theory, a system of voluntary review by private institutions in discrete geographic
areas, it has been significantly influenced by national voices trumpeting the need for
consolidated standards, greater measures of outcomes, and an overarching drive for
institutional accountability for the value added to students through the college experience
(Leveille, 2005).
The rise of a culture of evidence and accountability has dramatically changed the
landscape of assessment and accreditation in higher education, even while raising new
debate about the proper role of assessing and influencing quality improvements in
colleges and universities (Biswas, 2006; Morest, 2009). This scrutiny is, arguably,
nowhere more distinct than within the realm of recent WASC-ACCJC accreditation
where institutions have been placed under sanction with increasing regularity (Moltz,
2010). Understanding how this dynamic is affecting the actions of accreditors in the
sanction or approval of institutions is both timely and necessary, and this study seeks to
explore how accreditation standards may be working in practice in association with
institutional data (see chapter three). For additional context, including geographic area
and number of institutions assessed, please note the summary table provided below,
which highlights pertinent facts on the Accrediting Commission for Community and
Junior Colleges (see Table 1.1).
Table 1.1: Western Association of Schools and Colleges—Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) Organizational Detail

Founded: 1962
Geographic Region: California, Hawaii, Pacific basin region
Type of Institution Covered: Associate degree-granting institutions
Institutions: 135 (including 112 CA community colleges)
Commission Staff: 19
Standards and Subsections: 4 standards, 127 subsections
Length of Review: 6 years, with midterm report
Source: CCCCO, 2011.
California community colleges clearly constitute a majority of the WASC-ACCJC institutions, making up eighty-three percent of all the community colleges reviewed by this commission. While the region covered by the commission is geographically large when the Pacific basin institutions are included, it is clear that this regional accreditor is primarily concerned with California community colleges. As
detailed in the methodology section of chapter three, these colleges compose the sample
for investigation by this research project.
The remainder of this chapter, in conjunction with the subsequent review of the
literature, will examine accreditation through the following framework: It will discuss the
process of accreditation, define the common purposes of accreditation, discuss its
evolving definition, and orient the reader to significant issues and problems in the
historical development of this topic. Additionally, it will distinguish between the various
accreditation agencies in the United States, according to their disparate mandates,
purposes, and histories. This section will also discuss the criteria used by accreditation
bodies to assess institutional quality and the overall process by which institutions are
granted, extended, or removed from accreditation. The statement of the problem section
will narrow the focus to accreditation of community colleges and discuss how, in context
of these issues, this institutional sector has factored prominently in the landscape of
higher education in the United States. Finally, with the scope of focus sufficiently
narrowed to the history of regional accreditation of California community colleges—
including the process and criteria by which these institutions are accredited—this chapter
will argue for a quantitative exploration of WASC-ACCJC community college
accreditation and the association of accreditation status and several groups of frequently
cited institutional and student variables.
Accreditation and Higher Education
The Process of Accreditation
Accreditation is a process of reflective review used by institutions to determine
compliance with delineated standards of institutional quality (CHEA, 2011). The Council
for Higher Education Accreditation (CHEA) states that accreditation is about both quality
assurance and quality improvement, the former to guide the establishment of a baseline
for quality in higher education in the United States, and the latter to ensure that
institutions identify and use ongoing processes to improve upon what they do (Eaton,
2008a; Eaton, 2008b; Eaton, 2008c). The Department of Education, though not directly
responsible for the standards and actions of these private accreditation bodies, describes
accreditation as “the recognition that an institution maintains standards requisite for its
graduates to gain admission to other reputable institutions of higher learning or to achieve
credentials for professional practice” and sees it as a mark that an institution or program
“meets acceptable levels of quality” (USDE, 2011). In this definition, multiple interested
parties are identified, including other institutions, students/graduates, governmental
bodies, and the public at large. As such, accreditation serves multiple functions as part of
its review of the ongoing quality and improvement of institutions.
The presence or absence of accreditation plays an important role in the
legitimacy, prestige, and funding of a college or university. As a marker of minimum,
threshold quality for peer institutions by category (either programmatically- or
regionally-defined), accreditation is a “thumbs-up” or “thumbs-down” vote by peers of an
institution’s basic quality and commitment to ongoing quality improvement processes,
and when kept in good standing it signals a significant accomplishment by an institution
(Bloland, 2001; Rusch & Wilbur, 2007). It has been noted that a potential drawback of
this yes/no vote is the absence of fine-grained distinctions between institutions that are
accredited (Orlans, 1975). While sanctions of an accredited institution are semi-public matters, the lack of attention to the distinction between an institution that was accredited with “no findings” and a peer institution that was placed on probation after several warnings of diminishing quality or process signals the need for more detailed analysis of accreditation actions (McLendon, Hearn, & Deaton, 2006).
movements for greater accountability within higher education, such scrutiny is timely and
appropriate to institutional agents. With students, governments, organizations, peer
institutions, and the public all interested in various aspects of accreditation, this study can
contribute to new understanding of what differences between accredited institutions may
mean in terms of institutional and student outcomes.
Standards. The heart of the accreditation process is the devotion to delineated
institutional standards as a yardstick for attainment of minimum criteria for respected
peer institutions. With six regional accreditation bodies—and more than seventy
additional accreditation organizations of a programmatic or professional nature—with a
role in the accreditation of institutions or programs at more than seven thousand
institutions containing close to twenty thousand programs, standards vary widely between
accreditation organizations (CHEA, 2011). Such diversity of standards is not surprising
considering the history of American higher education, in which the vast coterie of
institutional types delineated in the Carnegie designation came to be. Some institutions
arose centuries ago while new institutions are created every year. Some colleges had the
blessing of governmental injunction (e.g., land-grant universities), while others operate as
private entities. Some craft programs for specialized instruction and others offer
vocational and career-technical skills. The list of programmatic and institutional diversity
could continue for some length. Accordingly, standards of regional accreditors are
customized to the unique histories of their regions, and have evolved separately under the
unique conditions of state/local pressures and peer-influences.
The historical confluence of simultaneous development by multiple private,
regional accreditation bodies in the US has led to a diversity of standards and some
differentiation in practice among the groups. Some have cited this diversity of
standards as a deficit of the American accreditation system, noting that pursuit of a
common “language of standards” and the “development of common resources” would
enhance communication between discrete regions and provide opportunity for new dialog
about institutional educational standards (Ewell, 2001). In spite of this call for increased
communication, practice between regional accreditors is remarkably cohesive. There is a shared vocabulary of common accreditation terms, habits, and policies. This co-alignment of practice is, as some scholars have noted, one of the reasons why
accreditation was never subsumed by the Federal government, since private practice was
producing valuable and adaptive measures of institutional standards that were of benefit
to multiple consuming audiences. The aforementioned success of accreditation set the
stage for increasing expectations about how accreditation should work and what it could
be used to accomplish. The encroachment of performance measures, benchmarking,
public measures of student outcomes, and other aspects of the accountability movement
carved new contours into the landscape of accreditation, and raised new debate—and
alarm—about these changes (Edler, 2004). Further discussion of this history and the
literature on accountability/assessment pressures upon accreditation will be detailed in
chapter two.
Self-study. The self-study is an important component of regional accreditation, in
which institutions prepare a self-generated written summary of their compliance with
accreditation standards. Detailed guides and handbooks are typically provided by
regional accreditors, which detail expectations, requests for evidence, and acceptable
responses to itemized or thematic standards (CHEA, 2011). The self-study serves as the
primary document that narrates institutional compliance with accreditation standards and
quality improvement, as part of an ongoing cycle of progress and improvement (Banta, Pike,
& Hansen, 2009). The preparation of this document is often accomplished by a sizeable
team of institutional agents drawn from the ranks of faculty, administration, and support
staff, and has been noted to be a very time-consuming and expensive endeavor
(Bernhardt, 1984; Head & Johnson, 2011). This process of self-evaluation, based on
standards, is a core component of the regional accreditation review process.
Site visit by peer review. The site visit is another crucial element of the
accreditation review process. Every regional accreditation body sends a visiting group of
peer professionals to review the institution that is undergoing accreditation review
(CHEA, 2011). The teams are typically composed of senior leaders from equivalent, peer
schools, and represent multiple areas of the college, such as instruction, student
services/affairs, budget, and other divisions. These peer reviewers are volunteers and
generally not paid for their work on site-visit teams. Depending upon the accreditation
group, site visits are generally conducted every six to ten years, or more frequently if an institution is found to be sufficiently deficient to warrant an off-cycle evaluation.
Reporting and correspondence. Accreditors also set a process for
correspondence and reporting between institutions and the commission, in accordance
with an institution’s status on the last review cycle. For example, following a successful
site visit and review, an institution might not be visited again by an accreditation team
until six to ten years later, but it is generally required to submit a midterm report to the commission halfway between the last and upcoming visits. Moreover, institutions that
are sanctioned for deficiencies may be given an itemized list of reporting deadlines to
demonstrate compliance and ongoing quality review for those areas noted to be lacking.
Some correspondence between accreditation commissions and institutions is public, whereas some is private. This semi-public nature of accreditation has been a point of
contention in the literature on accountability and assessment, with calls for increased
public transparency of accreditation findings and actions, including full publication of
reports by the commissions and by the institutions in question (Eaton, 2010; Ikenberry,
2009; Kuh, 2010). For example, in WASC-ACCJC, some sanctioned institutions make no mention of their status on their websites.
Action by accreditation commissions. Following site visits and review of institutional self-studies, the accreditation teams render a judgment of institutional
compliance with specified standards for quality and improvement (CHEA, 2011). These
actions typically fall within the following categories: clear accreditation (no findings),
conditional accreditation (sanctions/findings noted), probationary accreditation
(sanctions/warnings requiring action according to a timeline), and removal of
accreditation (rescinded accreditation due to noncompliance, closure, or voluntary
withdrawal by the institution). In WASC-ACCJC, the following actions are standard for
the commission:
• Grant candidacy or initial accreditation
• Deny candidacy or initial accreditation
• Defer action
• Continue accreditation during review
• Reaffirm accreditation
• Issue a formal notice of concern
• Issue a warning
• Impose probation
• Issue an order to show cause
• Terminate accreditation
These categories of approval or rejection by the commission can be truncated into the following groups, according to level of severity and the degree of jeopardy to accreditation (a minimal coding sketch follows the list):
• Clear accreditation (no findings), which includes the categories of initial, non-contingent candidacy review and reaffirmed accreditation.
• Sanctioned/Conditional accreditation, which includes notices of concern, warning, and probation.
• Jeopardized or rescinded accreditation, which includes orders to show cause and termination of accreditation.
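The following Python sketch illustrates one hypothetical way to encode this truncation as an ordinal scale; the action strings and numeric codes here are illustrative assumptions, not the study's actual coding scheme (see Table 3.2):

```python
# Minimal sketch of an ordinal coding for the truncated categories above.
# Dictionary keys and numeric codes are illustrative only; the study's
# actual coding scheme is detailed in chapter three (Table 3.2).
ACTION_TO_SEVERITY = {
    "grant candidacy or initial accreditation": 0,  # clear (no findings)
    "reaffirm accreditation": 0,                    # clear (no findings)
    "issue a formal notice of concern": 1,          # sanctioned/conditional
    "issue a warning": 1,                           # sanctioned/conditional
    "impose probation": 1,                          # sanctioned/conditional
    "issue an order to show cause": 2,              # jeopardized/rescinded
    "terminate accreditation": 2,                   # jeopardized/rescinded
}

def severity(action: str) -> int:
    """Map a commission action to its ordinal severity category:
    0 = clear, 1 = sanctioned/conditional, 2 = jeopardized/rescinded."""
    return ACTION_TO_SEVERITY[action.strip().lower()]

print(severity("Impose probation"))  # -> 1
```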
Additional information about these categories, for the purposes of the research design of this study, is available in Table 3.2. In chapter three, these categories are delineated according to an ordinal scale of severity, in order to set up a study to explore and provide more detailed information concerning the relationship between accreditation status and public institutional measures. WASC-ACCJC has been “at the center of controversies”
during the past ten years due to its increased emphasis upon student learning outcomes
compliance (WASC, 2002; WASC-ACCJC, 2011). Debate has raged as to whether this
emphasis upon “measurable” markers of student outcomes is appropriate to education,
whether it is a violation of the purview of faculty members, and whether it is truly in the
best interest of students, best practices, and learning (Eaton, 2010). This controversy is
part of the overarching problem concerning WASC-ACCJC accreditation, which includes
lack of information about the relationship between commission actions, institutional
response to the student learning outcomes initiative, and the proper role of data in
advancing education. This study, as detailed in the subsequent sections, will help explore
associations between college data and accreditation status and help examine the current
history and practice of this commission that reviews a sizeable and important sector of
US higher education, namely the California community college system.
The Purposes of Accreditation
The overarching purpose of accreditation is to signal institutional quality and to
connote the existence of quality improvement processes for an interested public. The
“seal” of accreditation demonstrates to other institutions, to the public, to organizations,
and to governments, that the reviewed college or university has defined and attained clear
and appropriate objectives for higher education and is working in such a manner as to
reasonably signal its ongoing ability to meet them (Ewell, 2009). Accreditation is a mark
of quality, a mark of ongoing quality improvement, a signal to other institutions and the
public that an institution’s degrees and certificates meet minimum quality thresholds for transfer, and
a basic condition for recognition by the Federal government for access to financial aid
and/or grant funding (Brittingham, 2008; Dill, Massy, Williams, & Cook, 1996; Eaton,
2009b; Eaton, 2009c).
Accreditation also involves “core values,” such as “institutional autonomy,
academic freedom, and peer and professional review” (Eaton, 2010, p. 1). The emphasis
upon quality assurance and quality improvement is not to be missed, as this is both a de
facto aspect of accreditation and a critical component of the ongoing controversy in the
evolution of accreditation (Bogue, 1998; El-Khawas, 1998; Ruiz, 2010). In particular,
community colleges accredited by WASC-ACCJC have been especially scrutinized in the
past decade, demonstrating the need for additional exploration of the recent history of this
important accreditation commission (Moltz, 2010; Beno, 2004).
Types of Accreditation
Regional accreditation. The six regional accreditation bodies in the United
States accredit slightly more than three thousand public and private institutions of higher
education, including both two-year and four-year institutions (CHEA, 2007; Eaton,
2009c). Accreditation by a regional commission is the most sought-after type of
accreditation. The six regional accreditors are as follows:
• Middle States Commission on Higher Education (MSCHE)
• New England Association of Schools and Colleges (NEASC), which includes the Commission on Institutions of Higher Education (NEASC-CIHE) as well as the Commission on Technical and Career Institutions (NEASC-CTCI)
• North Central Association of Colleges and Schools’ Higher Learning Commission (NCA-HLC)
• Northwest Commission on Colleges and Universities (NWCCU)
• Southern Association of Colleges and Schools Commission on Colleges (SACS)
• Western Association of Schools and Colleges (WASC), which includes the Accrediting Commission for Senior Colleges and Universities (WASC-ACSC) and the Accrediting Commission for Community and Junior Colleges (WASC-ACCJC), which will be of special concern for this study
Institutional details concerning the number of institutions reviewed by each of the
regional accreditation commissions are listed below in Table 1.2.
Table 1.2: Public and Private, Two-Year and Four-Year Institutions Accredited by a Regional Accreditation Commission, by Region

Accreditation Region    Total Institutions
MSCHE                   526
NCA                     1,005
NEASC                   249
NWCCU                   156
SACS                    797
WASC                    293
Total                   3,026
Source: Eaton, 2008a; Eaton, 2009c.
National faith-based accreditation. In addition to regional accreditation, four
primary accreditation commissions review approximately four hundred and fifty
religiously-affiliated institutions. These commissions are as follows:
• Association for Biblical Higher Education
• Association of Advanced Rabbinical and Talmudic Studies
• Association of Theological Schools
• Transnational Association of Christian Colleges and Schools
National career-related accreditation. Career-related accreditation exists for over three thousand institutions. These are primarily for-profit, career-oriented,
single-purpose institutions, and may include distance-learning programs (USDE, 2011).
The primary institutional accreditors of this category are as follows:
• Accrediting Bureau of Health Education Schools
• Accrediting Commission of Career Schools and Schools of Technology
• Accrediting Council for Continuing Education and Training
• Accrediting Council for Independent Colleges and Schools
• Council on Occupational Education
• Distance Education and Training Council
• National Accrediting Commission of Cosmetology Arts and Sciences
Accreditation by a career-related accreditation commission may or may not be
recognized by the Council for Higher Education Accreditation (CHEA) and the US Department of Education (USDE), as noted in Appendix A.
Programmatic accreditation. Finally, programmatic accreditation is an
additional category of accreditation, in which individual programs of study, not
institutions, are reviewed and noted for compliance with specific standards. These are typically specialized, professionally-related programs in fields such as medicine, law, or other technical professions. Almost twenty thousand specific programs are
accredited by programmatic accreditation commissions, of which fifteen thousand are
degree-granting programs. The interconnection of these varying types of accrediting
organizations is summarized below in Figure 1.1.
Figure 1.1: Coordination of Accreditation
Source: Eaton, 2008a.
As Figure 1.1 explains, there are a great number of accrediting organizations that
are responsible for an even greater number of institutions and programs. However, this
list can quickly be narrowed down by focusing upon regional accreditation commissions
exclusively, since they play the dominant role in the peer review of American
institutions of higher learning. Indeed, there are only six regional accreditors, of which
the Western Association of Schools and Colleges and its subcommission for community
colleges is the most recently established commission, and the subject for this research
study.
Historical Trends in Regional Accreditation
Origins of Regional Accreditation
Accreditation in the United States is autonomous, voluntary, and private, having
originated more than one hundred years ago in the peer-based review of self-organized
institutions (Bloland, 2001; Eaton, 2008a). This homegrown aspect of accreditation is
distinct in international contexts, where many countries have governmentally-derived,
non-voluntary accreditation (Adelman & Silver, 1990; Vaughn, 2002). Eight overarching
commissions, under six regional accreditation organizations, compose the single most
important group of accreditation bodies in the United States. Accreditation by a regional
accreditor is a primary mark of quality and the most desirable form of accreditation for
most institutions.
Accreditation in the United States has followed an evolutionary, non-planned
trajectory, according to the growth and development of independent, voluntary
accreditation bodies (Brittingham, 2009). Traditions of self-governance, institutional
autonomy, and the self-organization of historic institutions in Europe and the early
American Republic contributed the foundation for self-organization, peer review, and
independent delineation of institutional standards (Bloland, 2001; Lucas, 1994). The
absence of direct, governmental oversight of accreditation allowed significant diversity in
the landscape of US higher education, but this lack of coordinated, Federal authority also
contributed—somewhat ironically—to the recent clamor for nationally-aligned,
coordinated standards (King, 2007).
The first regional accreditation associations emerged in the second half of the
nineteenth century, on the heels of predecessors such as the Association of American Universities (AAU) (Harcleroad, 1980). This was a period of profound development in
higher education, and witnessed the emergence of many universities in the wake of the
land-grant movement (Lucas, 1994). Of the six regional accreditation commissions, the
first to be established was the New England Association of Schools and Colleges
(NEASC), in 1885, concluding with the Western Association of Schools and Colleges
(WASC) in 1924 (Bloland, 2001). The sub-commission in question for this study—the
Western Association of Schools and Colleges’ Accrediting Commission for Community
and Junior Colleges (WASC-ACCJC)—was established in 1962 (WASC-ACCJC, 2011).
A detailed timeline on the history of these influential regional accreditation commissions
is detailed in Table 1.3.
Table 1.3: Timeline for the Establishment of the Regional-Accreditation Commissions

New England Association of Schools and Colleges (NEASC): 1885
Middle States Commission on Higher Education (MSCHE): 1887
North Central Association of Colleges and Schools’ Higher Learning Commission (NCA-HLC): 1895
Southern Association of Colleges and Schools Commission on Colleges (SACS): 1895
Northwest Commission on Colleges and Universities (NWCCU): 1917
Western Association of Schools and Colleges (WASC): 1924 [Sub-commission: Accrediting Commission for Community and Junior Colleges (ACCJC), 1962]
Source: Eaton, 2008a; Eaton, 2009c.
Recent Trends in Regional Accreditation
The twentieth century witnessed consolidation, coordination, and development of
standards by the regional commissions. With the exception of a brief stint by the US
Department of Education to classify colleges, accreditation continued to be the purview
of the voluntary, peer-led regional accreditation groups (Orlans, 1975). Coordination among the regional accreditors was facilitated by umbrella bodies such as the National Committee of Regional Accrediting Agencies (NCRAA), formed in 1950, which evolved into the Federation of Regional Accrediting Commissions of Higher Education (FRACHE) in 1964 and subsequently into the Council on
Postsecondary Accreditation (COPA) in 1975 (Bloland, 2001; Dill, Massy, Williams, &
Cook, 1996; Ewell, 2008; Orlans, 1975). Some feared the rise of national voices for
accreditation signaled that institutional autonomy was waning (Benezet, 1981; Benjamin,
1994; Eaton, 2008b; Eaton, 2008c; Ewell, 1994; Leveille, 2005).
Increasing access to higher education, which was fueled by national developments
such as the postwar GI Bill and the establishment of many community colleges,
contributed to rising public interest in the standards of colleges and universities
(Harcleroad, 1990; Lucas, 1994). One key marker of this transformation was the
1963 requirement that institutions seeking public funding from the US government be
listed as “accredited” by one of the regional accreditation commissions (Ewell, 2008).
The period from the 1960s to the present day witnessed the ongoing coordination and co-
alignment of regional accreditation commission standards and processes, such as the
ubiquitous self-study, site visit, actions by peers, and judgment according to standards
(Ibid.).
The most recent transformation of national efforts to coordinate the work of
regional accreditors was demonstrated by the morphing of the Council on Postsecondary
Accreditation (COPA) into the Council for Higher Education Accreditation (CHEA)
following a brief interlude (Bloland, 2001). COPA was dissolved due to controversy
surrounding the attempted insertion of new regulatory entities into the accreditation
equation, namely the State Postsecondary Review Entities (SPREs), which were in vogue
during the early nineteen nineties when the Federal government considered legislation to
supplement and replace the default system of regional accreditation with these SPRE
commissions. The consideration of SPREs and other public debate about institutional
quality signaled the rising public scrutiny of higher education, rising concern about the
quality and costs of education, and the simultaneous issue of costs and administrative
procedures to effectively review and certify institutions of quality (Dill, Massy, Williams,
& Cook, 1996). The SPREs were not established, due to rising awareness of the costs of
establishing these new review boards, but momentum was clearly in the direction of
increased public scrutiny about all aspects of the work of colleges and universities. These
developments signaled that an era of public concern for accountability had begun
(Marchese, 1995).
CHEA filled the void left by the SPREs and worked in conjunction with the US
Department of Education (USDE) to influence the development of standards by the
accreditors (Wellman, 1998). CHEA and the USDE began to offer a list of “recognized”
accreditation commissions, on the basis of their alignment with CHEA and/or USDE
standards (Bloland, 2001; Dill, Massy, Williams, & Cook, 1996). In this way, CHEA
became a powerful national voice in accreditation, and part of a complex web of voices
influencing and transforming the work of accreditation (see Figure 1.2).
Figure 1.2: Complex Relationships
Source: Eaton, 2008a.
Related to the detail provided by Figure 1.1, the chart of relationships highlighted
in Figure 1.2 demonstrates how accrediting commissions are located within a web of
responsibility between state and Federal interests in education, between specific
institutions and programs, and within a loose national organization of groups such as
CHEA.
CHEA’s contemporary recognition of accreditation organizations is now based
upon adherence to six areas of attention: the advancement of academic quality, the
demonstration of accountability, the encouragement of “self-scrutiny” and improvement,
the use of checks and balances to ensure fair decision-making, ongoing review of
accreditation practice, and the maintenance of sufficient resources (Eaton, 2008a). In a
parallel review of accreditation, the USDE maintains the following recognition standards:
student success in relationship to institutional mission, quality curriculum, quality
faculty, quality facilities, sound fiscal and administrative operation, student support
services, appropriate policies and actions concerning admissions and record-keeping/production, match between program length and degree/credential objectives,
record of student complaints, and compliance with Title IV requirements (such as data on
student loan default rates) (Eaton, 2008a; Eaton, 2009c; USDE, 2011). Regional
accreditation groups, as expected, are in hearty alignment with these recognition
standards, in part because their preexisting practices (with exceptions such as Title IV compliance) influenced the creation of these recognition standards in the
first place (Bloland, 2001; USDE, 2011). For an exhaustive list of recognized
accreditation bodies, according to CHEA and the USDE, refer to Appendix A.
Accreditation, Accountability, and Community Colleges
Community Colleges in California
Community colleges were established and multiplied with increasing frequency
during the early twentieth century, on the basis of a need for trained workers, increased
access to higher education, and the innovation of open-access, entry-level education in
the postsecondary sector (Bragg, 2001; CCCCO, 2011; Cohen & Brawer, 2008).
Nationally, more than one thousand community colleges now serve approximately seven
million students (AACU, 2011; CHEA, 2008; Cohen & Brawer, 2008). The single largest
institutional system within this sector is the California community college system, which
is composed of 112 institutions serving almost three million students (Lay, 2011). The
1960 Master Plan for Higher Education in California was especially influential in the
history of this sector of community colleges, as it delineated a three-tiered model of
research universities (the University of California), teaching-oriented universities (the
California State University system), and open-access, vocational, basic-skill, and transfer-ready education (the California Community College system) (Brossman & Roberts, 1973;
CCCCO, 2011).
The CCC system, as the largest single sector of higher education in the United States, is vitally important. Because this system falls under the regional accreditation authority of the Western Association of Schools and Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-ACCJC), understanding how it has been assessed for quality and ongoing improvement will help inform how a sizeable sector of higher
education is being assessed and how it is responding to that assessment—which has not
been without its own challenges, such as bringing new institutional research capacity
online (Skolits & Graybeal, 2007; Morest & Jenkins, 2007). Within the context of
accountability and public scrutiny of institutions, additional information concerning the
nature of WASC-ACCJC’s review, approval, and sanction of CCCs will provide a new level of insight into the current state of accountability in this sample.
WASC-ACCJC. The movement for accountability is perhaps nowhere better
demonstrated than within the sector of WASC-ACCJC regional accreditation of
California community colleges. This study will contextualize that history against the recent
trends demonstrated by the Western Association of Schools and Colleges’ Accrediting
Commission for Community and Junior Colleges (WASC-ACCJC). As mentioned, this
regional accreditation commission regulates the single largest system of higher education
in the United States, namely the California Community College (CCC) system (Lay,
2011). While this fact is interesting, even more notable is the observation that over the
past ten years, public sanctions by the WASC-ACCJC have increased, with more than
forty percent of CCCs having been placed on some form of sanction/probation between
2003 and 2008 (Moltz, 2010). This is a sizeable change and a somewhat perplexing turn
of events in regional accreditation history. Since the CCC system educates close to three
million students per year and is composed of 112 institutions, it is a fitting sample for
analyzing recent accreditation commission actions in association with institutional data
and student performance (Lay, 2011). With the inclusion of some additional Pacific basin
institutions accredited by the WASC-ACCJC, this system is part of a very large gateway
to higher education in the United States. With rising scrutiny of accreditation, and
increasing concern about the process, time, effort, cost, and value associated with
accreditation, this study will provide a timely, exploratory analysis of accreditation
actions by WASC-ACCJC within the context of debates on culture of assessment and
evidence in higher education.
Context and Rationale for this Study
Within the context of accountability and assessment movements in US higher
education, the WASC-ACCJC stands out as an important example of regional accreditation in transition (Head & Johnson, 2011). Between 2002 and 2008, almost forty
percent of all California community colleges were placed on some form of sanction
(Moltz, 2010). During the same period, in other regions across the country, the average
rate of sanction for community colleges by their local, regional commissions was
between zero and six percent (Ibid.). In 2010, close to twenty institutions remained on sanction, demonstrating an exceptional number of penalties by the WASC-ACCJC. This history is striking, and the confluence of national (e.g., CHEA,
Commission on the Future of Higher Education, 2006; Dwyer, Millett, & Payne, 2006)
and local (California performance funding) forces at work on WASC-ACCJC
demonstrates the need for additional scrutiny of this accreditor and the relationship
between the commission’s actions and known, public markers of institutional
performance (Burke, 1998; Burke & Minassians, 2004; Burke, Minassians, & Nelson,
2002).
Well-known within higher education is the publicly available repository of
common measures of institutional traits and student outcomes, namely the Integrated
Postsecondary Education Data System (IPEDS). This database details common and
specific measures of institutions, including their Carnegie classification, size, location,
enrollment, tuition, graduation rate, and retention, among many other measures (IPEDS,
2011). In the context of California community colleges, not only do CCCs contribute
annual data to IPEDS, but they also contribute a sizeable amount of data to the California
Community College Chancellor’s Office (CCCCO) for reports such as the Accountability
Reporting for the Community Colleges (ARCC) and other combined datasets gathered by
the system chancellor’s office. Driven by a California Assembly bill, the ARCC report
compiles CCC institutional data into discrete peer groups on measures such as transfer
rates, attainment of vocational certificates, student participation, demographics, and
success in basic skills (CCCCO, 2011). Repeated attempts to tie college funding in California to results on the ARCC report demonstrate the keen scrutiny in California of publicly accessible, quantifiable measures of institutional and student performance—and this trend underscores the need to better understand the association between institutional data and accreditation status (CC League, 2011; CCCCO, 2011). The CCCs have faced
rising scrutiny and also the challenge of operating betwixt and between K-12 schools that
face their own significant scrutiny of outcomes measures and the better-known UC and
CSU systems that are accredited by WASC-ACSC.
Statement of the Problem
The betwixt-and-between position of the CCCs, within the vectors of public accountability, a rising tide of assessment, and the evolution of the regional accreditation commissions, has subjected these colleges to very high public scrutiny. K-12 institutions already face many reporting requirements, such as those of the No
Child Left Behind (2002) act, but the less formalized assessment and accountability
requirements of four-year institutions have historically been less stringent and less
scrutinized—which is now changing under increasing pressure to design assessments of
“genuine value to students, faculty, and stakeholders” (Banta, Black, Kahn, & Jackson, 2004, p. 5). However, with the exceptionally high rate of sanction by the WASC-ACCJC
in the past decade, and due to the unique status of the CCCs as the leading community
college system in the nation based on size, additional information is needed to inform
practitioners and the public about trends in WASC-ACCJC accreditation. Indeed, even
simply the first part of the several-step research design proposed in chapter three—which
includes many descriptive statistics and explanations of CCC variables and accreditation
statuses during the critical post-2002 period—would provide data that is lacking and
inform many practitioners about this trend.
Within the context of rising clamor for numeric measures of institutional
performance and increasing consumer-based scrutiny of measures of student outcomes,
the association between accreditation actions by the WASC-ACCJC and data available
on the ARCC report, the California Community College Chancellor’s Office Data Mart,
and the Integrated Postsecondary Data System (IPEDS) is unknown. For example, is
sanction by WASC-ACCJC associated with performance by CCCs on the some of the
more common numeric markers of institutional and student performance? Given that
accreditation has developed into a pseudo-mandatory requirement of institutions, given
that accreditation has been influenced by the movements detailed in the literature to
increase the role of evidence and assessment in the consideration of quantitative
performance measures of quality as a heuristic for consumers, and given that WASC-
ACCJC accreditation has witnessed such an increase in sanctions, it is clear that more
needs to be known (Biswas, 2006; Dwyer, Millett, & Payne, 2006; Morest & Jenkins,
2007; Morest, 2009; WASC-ACCJC, 2011). This study will help explore the association
between accreditation actions by WASC-ACCJC—the varying levels of accreditation,
such as whether an institution was accredited with no findings, warned, placed on
sanction, told to show cause, or disaccredited— and some of the most commonly cited
quantitative performance markers (IPEDS “front-page” statistics, etc.) of student
performance such as graduation rate, transfer, retention/success, and institutional
variables such as size, budget, and staffing levels.
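As a hypothetical illustration only, a college-year dataset pairing these markers with accreditation actions might be assembled as sketched below; the file names and column names are invented for the sketch, while the actual sources are IPEDS, the CCCCO Data Mart, and the ARCC report as described above:

```python
# Hypothetical sketch of assembling the kind of college-year dataset the
# statement of the problem describes. File names and column names are
# illustrative assumptions, not the study's actual extracts.
import pandas as pd

# Locally saved extracts, e.g., downloaded from the IPEDS Data Center and
# compiled from published WASC-ACCJC action letters.
ipeds = pd.read_csv("ipeds_extract.csv")    # college, year, grad_rate, transfer_rate, ...
actions = pd.read_csv("accjc_actions.csv")  # college, year, action (e.g., "warning")

# One row per college per year, pairing performance markers with the
# accreditation action taken that year.
panel = ipeds.merge(actions, on=["college", "year"], how="inner")
print(panel[["college", "year", "grad_rate", "transfer_rate", "action"]].head())
```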
Purpose of the Study
The purpose of this study will be to explore the relationship between WASC-
ACCJC accrediting commission actions and publicly-salient indicators of student
outcomes. Given the above statement of the problem, this study will examine the
association between accreditation action and institutional outcomes variables, informing
the practice of institutional agents (researchers, faculty, staff, etc.) as to the evolution of
community college accreditation, and providing information about whether or not
accreditation status is de facto aligned with several common outcomes variables and
institutional measures (Jones, 2002). This study will provide more information about how
accreditation may or may not be aligned with student performance outcome measures and
may signal whether or not regional accreditors such as WASC-ACCJC are increasingly
influenced by outcomes measures in their evaluation of community colleges. Likewise,
this examination of the association between accreditation and institutional measures is
one way of approaching the question of trends in accreditation amidst concerns for costs
and benefits.
Given that WASC-ACCJC has witnessed the most notable increase in public
sanctions of institutions since the culture of assessment movement took hold, given that
there is an exploratory need for new information about the relationship between
institutional performance and accreditation actions, and given that accreditation is creating increased demands and costs for institutions to measure and report outcomes data, the following research questions are proposed:
• In California community colleges reviewed by WASC-ACCJC since
2002, what is the relationship between the specific accreditation actions
taken by the commission (the assessment of “no findings,”
“probation/sanction,” etc.) and some of the most common student
outcomes variables cited by the literature (namely, graduation rate,
transfer, and retention)?
• In California community colleges reviewed by WASC-ACCJC since
2002, what is the relationship between accreditation status and several
common institutional variables—namely, full-time equivalent students
(FTES), budget, staffing, and size?
• Finally, in those California community colleges that were sanctioned by
WASC-ACCJC since 2002, what patterns emerge that may inform
institutional knowledge about the relationship between accreditation
action and institutional measures?
Significance of the Study
The evolution of WASC-ACCJC accreditation has risen to the level of national attention, drawing both alarm and commendation from educators and proponents of increased accountability (Moltz, 2011). Because WASC-ACCJC represents an important sector of accreditation in the United States, the examination of this pivotal decade in its history will help address the lack of research on accreditation in general. Moreover, this study will
contribute to the literature on accreditation and its confluence with the accountability
movement. This study is primarily quantitative, and is limited to a particular slice of
higher education in the US as defined by the unique scrutiny currently applied to WASC-
ACCJC. While external validity may be limited due to the discrete sample size of
WASC-ACCJC institutions (n=135), the contextualized approach of this study will nonetheless contribute knowledge about the confluence of accountability and assessment pressures upon the accreditation of the single largest institutional system in the United States, which may signal important trends in regional accreditation at large. The research
of this study will be of particular interest to institutional agents in California community
colleges, as well as interested parties in accreditation. Likewise, this study may help
inform public scrutiny and student consumption of college data (Baker, 2002; Carey &
Schneider, 2010). Most especially, this study will be of interest to campus agents
responsible for the preparation, coordination, and scrutiny of accreditation.
CHAPTER TWO: LITERATURE REVIEW
Introduction
The transformative effect of demands for assessment and accountability in higher
education, as they have shaped the development of regional accreditation in the United
States, guides this literature review. In addressing the conjunction of the debate about a rising culture of evidence, accountability, and assessment alongside developments in accreditation, this literature review will help to contextualize how the critical last decade
of WASC-ACCJC accreditation sanctions is an important phenomenon in education and
a valuable topic of exploration. Accordingly, this literature review will examine themes
in four sections:
First, it will examine perspectives from scholars on the history of regional accreditation and its notable themes. Second, this literature review will detail the debate concerning accountability in higher education as it relates to accreditation. Third, it will scrutinize the rising emphasis placed upon assessment (such as the well-noted rise of assessment of student learning outcomes) and its intersection with accreditation (Beno, 2004; Hunt, 1990; Suskie, 2009; Walvoord, 2004). Finally, it will
address the conjunction of these themes from the literature with the specific history and
case study of WASC-ACCJC accreditation of California community colleges,
synthesizing the themes of these parallel movements as historical and ongoing context for
the recent patterns of accreditation actions by WASC-ACCJC. These themes will
triangulate the literature to support the need for greater scrutiny of the recent tendency of
WASC-ACCJC to increase sanctions of California community colleges and support the
argument for a quantitative exploration of this critical period in community college
history.
History and Themes of Regional Accreditation
The Literature of Regional Accreditation
With the brief sketch of the history and process of accreditation from chapter one
in mind, the literature on accreditation offers a variety of evaluations of the benefits and
pitfalls of regional accreditation. Originating in practice among peer institutions in New
England, accreditation had humble origins in an informal review of institutional
characteristics by fellow scholars (Bloland, 2001). It was only slightly more than one
century ago when more formalized accreditation of US colleges and universities began,
according to a set of minimum standards of quality and with a process of peer-reviewed
assessment (Ewell, 2008). The formal establishment of what are now known as the six regional accreditation commissions began in the late eighteen hundreds and continued
into the twentieth century, culminating with the establishment of the Western Association
of Schools and Colleges in 1924, followed by its sub-commission, the Accrediting
Commission for Community and Junior Colleges (ACCJC) in 1962. Additional detail is
provided in Table 1.3, which places the WASC-ACCJC last in line in the historical
establishment of the regional accreditation commissions.
Strikingly, there is a relatively limited collection of formal literature on
accreditation in the United States. By far, the majority of references to accreditation are
found in conference presentations, organizational white papers, institutional websites, and speeches by political or educational leaders. However, several important histories of
accreditation help synthesize developments in regional accreditation, and form the basis
of subsequent literature that critiques or praises the work of accreditation in the United
States.
Beginning with Blauch’s (1959) seminal history of accreditation, these histories established a common language and set of resources on accreditation and were frequently cited for many years. While Blauch’s history, in keeping with its intended
audience in the US Department of Education, was straightforward and devoid of
significantly critical language, a subsequent history loudly assailed multiple aspects of
regional accreditation. Selden (1960) argued that accreditation did not further the cause
of educational quality and institutional improvement, but rather was a force for the
perpetuation of existing hierarchies and social structures. The fear, it seems, was that the
organizational power of regional accreditors was susceptible to manipulation by self-
interested parties and governments (Ewell, 1994). This criticism was timely for the
establishment of the ACCJC (1962), which occurred in the midst of rising debate about
access to higher education in the United States. The California Master Plan for Higher
Education (1960) promoted California community colleges as open-access, tuition-free
institutions to promote access to general education and vocational training, but, echoing Selden’s critique of accreditation, these colleges were viewed with suspicion as potential enforcers of status-quo social hierarchies. Subsequent decades witnessed repeated visits to this topic of organizational power and its role in education (Eaton, 2010).
As regional accreditors began to commission their own informed histories of the
accreditation landscape (Geiger, 1970; Newman, 1996), the issue of the preeminence of
regional accreditation commissions and their role as granters of the de facto mark of
quality for institutions came to the forefront. Specialized, programmatic accreditation
agencies (noted in chapter one) were flourishing, and potentially encroaching upon the
authority and reputation of the regional accreditation commissions. The oft-cited “Puffer Report,” written on behalf of the regional accreditors, challenged the criticism of Selden (1960) and other voices against accreditation, and argued for the merits of regional accreditation’s emphasis on quality improvement amid threats to the dominance of these commissions (Bloland, 2001; Federation of Regional Accrediting
Commissions of Higher Education, 1970). Indeed, new momentum was developing in the
literature on accreditation and its role, summarized most distinctly by a multi-authored
work written from the perspective of principal agents of higher education, which argued
for the benefits of accreditation within a framework of institutional, governmental, and
public scrutiny (Young, 1983).
The last decade of the twentieth century was a pivotal time in the history of
regional accreditation in terms of its coordination on a national scale, witnessing both the
dissolution of the Council on Postsecondary Accreditation (COPA) and the establishment
of the Council for Higher Education Accreditation (CHEA). Detailed in Bloland’s (2001)
tome on the process of establishing CHEA, the nineteen-nineties brought new calls for
leaders in higher education to participate in the coordinating work of CHEA (National
Policy Board on Higher Education Institutional Accreditation, 1994). Ewell’s (2008)
review of the work of CHEA—with particular emphasis upon the transformation and
debate in the wake of COPA—summarized the progress made in establishing national
organizations to help coordinate the work and policy of regional accreditors, while also
summarizing the work that remained to be done in order to ensure that accreditation
would remain relevant in the years to come.
Reframed Definitions and Purposes
Accreditation is an expansive term that refers to many aspects of institutional
accountability and practice. As the historical trajectory demonstrates, accreditation—in
the context of US higher education—is a broad concept that has been informed by
politics, driven by consumer/business demands, shaped by many parties, and constantly
developed according to the unique unfolding of higher education in the United States.
Indeed, this cauldron of influences has made US accreditation a unique combination of non-governmental, voluntary review against a set of standards for quality improvement (Brittingham, 2009). Brittingham (2009) delineates a series of exceptional
features about US regional accreditation, including its unique history, its non-
governmental status, its alignment with American notions of education, its contextualized
evolution through social, political, and market forces, and its role in quality improvement,
among other contributions. Against critical voices condemning accreditation as outdated or stultifying, some see new promise in the potential of regional accreditation (Ibid.; Carey & Schneider, 2010), marking it as distinctly American in origin and character owing to the historical eccentricities of its evolution (El-Khawas, 2001; Vaughn, 2002).
As detailed, the literature on accreditation and its seminal works contains numerous opinions as to the nature, role, and future purposes of education (Ikenberry, 2009), but the scholarship remains lean on distinctly empirical research. Some notable
examples exist on topics such as professional/programmatic accreditation (Volkwein,
Lattuca, Harper, & Domingo, 2007; Volkwein, Lattuca, & Terenzini, 2008), international
accreditation (Troutt, 1981), and loss of accreditation (Hoffman & Wallach, 2008), but a
gap remains that can be filled with new scholarship in the burgeoning field of accreditation studies, including exploration of its costs and benefits and of the association between institutional data and accreditation status.
WASC-ACCJC. One unique aspect of regional accreditation in the US is found
in the very existence of a separate commission, under the overall authority of WASC-
Senior, for the accreditation of community colleges in California—namely, the ACCJC.
Unlike the other regional accreditors, who include the accreditation review of both four-
year and community colleges under the authority of their general commissions, the
ACCJC is the only separate sub-commission within a regional commission that
specifically reviews the accreditation compliance of community colleges (CHEA, 2011).
With twenty-four percent of the nation’s community college students enrolled in California community colleges alone, it is no wonder that the ACCJC was established separately to review community colleges exclusively, with the
majority of its institutions residing in California (WASC-ACCJC, 2011). The same
historical forces that shaped the contours of regional accreditation are adapted within the
ACCJC to the specific currents of assessment and accountability facing the ever-
scrutinized community college sector. To borrow the analogy of systemic evolution, for the purposes of the research proposed by this study there is perhaps no better sample for examining rapidly evolving change in accreditation than WASC-ACCJC, as will be argued in the discussion of the accountability and assessment
movements that follow.
Accountability and Accreditation
Rising Scrutiny
The growth of higher education in the United States in recent decades has been
characterized by the rise of new institutions, a push for increased access to education, and
increasing public scrutiny of the cost and value of education in general (USDE, 2006).
Within the context of accreditation, there have been increasing concerns about the growth
of accreditation commissions, including the expanding process of preparation and
reporting required by institutions to comply with more specific accreditation standards, as
well as the expanding scope of regional accreditation on the national stage (Eaton, 2001).
Accountability has emerged as a summative term for the rise of interest in the “return on
investment” given by education (Greater Expectations, 2002; USDE, 2006). Combined
with forces in the commercial sector, public arena, and the academy itself, the push for
greater accountability has continued to grow.
Accountability in accreditation is located at a convergence of competing pressures
and competing interests. As summarized by Burke (2005) in an overview of accountability in education, Clark’s (1983) accountability triangle captures the conjunction of academic concerns, state priorities, and market forces that influence and shape education, with an ideal balance—such as an accreditation system—located somewhere in between the competing pressures (Figure 2.1).
Figure 2.1: Competing Forces. Clark’s accountability triangle places State Priorities (Political), Academic Concerns (Professional), and Market Forces (Market) at its three corners. Source: Clark, 1983, as cited in Burke, 2005.
As Burke (2005) details, in specific periods of the history of higher education in
the US, sometimes one force or another has predominated, leading to the various
accounts of unregulated freedom or increasing regulation and scrutiny between the
cultures of the “civic,” the “collegiate,” and the “commercial” (pp. 9, 24). In this way, the
accountability triangle detailed above can be used as a loose conceptual framework to
analyze and understand the case of community college accreditation under WASC-
ACCJC and the exploratory need to understand this sector.
Tensions between political, market, and professional forces certainly conspire at
once within the subsample of California community colleges. These institutions operate
under both state and national political realities, they are shaped by budgetary and
economic forces both within and outside of academia, and there is deep academic and
professional concern from institutional agents, including the faculty and administration,
who have both embraced and challenged the impulse for greater accountability and
assessment within the context of local institutional realities. Indeed, the accountability
triangle of the tension between political, market, and professional forces can offer some
clarity when studying accreditation, and for the purposes of this study, it loosely tethers
the literature review and data analysis within its frame. Moreover, it mirrors a subset of tensions found between accountability, assessment, and accreditation. This parallel of the accountability triangle, represented in Figure 2.2, highlights a similar intersection of tensions between these three developments, as influenced by varying political, market, and professional perspectives.
Figure 2.2: Redux of Accountability Triangle for Accreditation
Accountability, as a presence in US higher education, appears to be “here to stay”
(Burke, 2005, p. 24). Indeed, the concept of tensions between competing forces—the
simultaneous intersection of the accountability triangle forces (see Figure 2.1) and the
interplay between accountability, assessment, and accreditation in Figure 2.2—can be
used as lenses to understand the deep history and complicated nature of accreditation and
can provide for an intriguing analysis in the case of WASC-ACCJC accreditation. Because WASC-ACCJC is a regional accreditor at the vanguard of increased sanctions of colleges, an exploratory investigation of its recent actions is warranted.
The growth of accreditation from a review of colleges, by colleges, into a heuristic for quality used by governments and the public at large has transformed the enterprise (Bardo, 2009; Eaton, 2001; Eaton, 2009a; Eaton, 2009b; Eaton, 2009c;
Ewell, 1994). As accountability aspects of accreditation became more common, the
formerly voluntary nature of accreditation took on aspects of regulated, required
compliance (Kuh & Ikenberry, 2009). The preeminence of accountability concerns at
work upon accreditation also leads to conditions of ongoing change and evolving
standards (Lubinescu, Ratcliff, & Gaffney, 2001). This pressure for change has raised
worries and new concerns about the integrity of regional accreditation. Eaton (2003)
noted that key differences in accountability reporting between the K-12 educational
sector and higher education may be muddled if accountability mandates lead to the
imposition of common standards upon the varied institutions of US higher education.
Common standards and accreditation practices between the two sectors, it was feared,
could lead to universal results as muddled as those attained from the intersection of
external accountability and student outcomes by performance-weary K-12 schools
(Carnoy & Loeb, 2002; Dougherty & Hong, 2006). Eaton saw dire consequences for
academic freedom and the role of regional accreditors should reporting-heavy
accountability mandates in the vein of the No Child Left Behind (NCLB) legislation be
forced upon colleges and universities, but the quest for a “reliable authority on academic quality” was in full swing (Eaton, 2009a; Eaton, 2009b).
Criticisms of Accreditation
One example of movement in the direction of increased accountability for
accreditation was demonstrated by the “Spellings Commission” report, which detailed
new interest from the US Department of Education in critiquing the status quo of regional
accreditation commissions (Commission on the Future of Higher Education, 2006). Ewell
(2008) describes the report as a scathing rebuke of the inability of regional accreditors to
innovate, and the report indicts accreditation as a hindrance to quality improvement.
Others have called for an outright “end…to the accreditation monopoly” (Neal, 2008).
The Spellings Commission report signaled Federal interest in setting the stage for new
accountability measures of higher education, raising the worst fears of some defenders of
a more autonomous, peer-regulated industry (Eaton, 2003). Accreditation’s emphasis
upon quality and the improvement of individual institutions with regional standards was
now being pressed to accomplish accountability roles for the entire sector of US higher
education—a task that regional accreditation was not accustomed to filling (Brittingham,
2008).
Some argued that the regional accreditation system was, in fact, deficient when it
came to greater accountability (Ibid.) and believed that it should use access to Federal
funding to leverage increased accountability compliance (Edler, 2004; Lederman, 2009;
Trivett, 1976). Others were more pragmatic, and believed that the current system of non-
governmental accreditation could be supplemented by increased staffing levels and the
addition of modest new regulatory abilities of state governments to oversee local
education (Harcleroad, 1980). Additional voices, however, touted the rise of interest in
student outcomes and value-added measures of institutional accomplishment as a marker
of regional accreditation’s ability to change with the times while yet maintaining a non-
governmental, independent system (Jones, 2002; Malandra, 2008; Wolff, 2005).
The new interest in accreditation by the US Department of Education (2006) led
Eaton (2007) to argue that a pull-back from confrontational scrutiny was needed—a
return to the past armistice of regional accreditation as the authority on institutional
quality while yet working in good faith to provide for accountability. In fact, the calls for
greater accountability in higher education led to proposals for a system of even greater
scrutiny. The nineteen-nineties witnessed a proposal for state-based review boards to
replace the general role of regional accreditors. Known as State Postsecondary Review
Entities (SPREs), this proposal was waylaid by the costs of implementation, while yet
signaling interest in alternative models to the historical methods of regional accreditation
(Ewell, 2008).
The accountability debate, and the concerns it raised in higher education about
threats to autonomy, diversity, and academic freedom, also raised alternatives for
accreditation in which governmental entities would be less involved—unlike the SPREs.
Graham, Lyman, and Trow (1995) argued that academic “audits,” as opposed to the
process of regional accreditation review, would be a better model for maintaining and
improving institutional quality in education. These audits would be internal—a somewhat popular notion among institutional agents observing the trends toward rising external accountability in education—but while the idea was in vogue for a brief period, it was never seriously applied (Bloland, 2001; Graham, Lyman, & Trow, 1995). Calls increased for regional accreditation to include faculty-led initiatives, and push-back grew against accountability demanded from afar rather than from within the realm of academia and peer accreditors (Benjamin, 1994).
accountability and assessment, characterized by the culture of evidence proposals of the
Spellings Commission (2006) and the Educational Testing Service (Commission on the
Future of Higher Education, 2006; Dwyer, Millett, & Payne, 2006), other alternative
frameworks—such as a culture of inquiry and a greater voice of institutional
practitioners—were proposed with important implications for the intersection of
accountability and accreditation (Dowd, 2005).
A culture of inquiry, as opposed to a culture of evidence or accountability, would
prioritize the role of local institutional agents, using a model of practitioners-as-
researchers (Bensimon, Polkinghorne, Bauman, & Vallejo, 2004; Dowd, 2005; Dowd,
2007; Dowd & Tong, 2007; Hirsch, 2000). By using a culture of inquiry model, the issues of
quality assurance and improvement that were at the heart of the accountability debate
could be satisfied, in part, by faculty- and institution-led inquiry into those problems in
student learning, access, and institutional performance that were best known at the local level. A kernel of this framework is the belief that institutional agents, working in context, would be better equipped to address institutional learning problems than entities located further away from the actual environments of learning (Ibid.). Indeed, the
proposal for evidence-based inquiry councils (EBICs)—in some ways a smaller-scale
version of peer-led regional accreditation—called for institutional peer groups (typically
defined by state) to self-organize and design their own assessments of institutions
according to self-defined and self-researched problems along a fairly rigorous empirical
research plan (Dowd & Tong, 2007). In a similar vein, a program of self-assessment
under the blessing of the North Central Association, known as the Academic Quality
Improvement Program (AQIP), is an example of a distantly related emphasis upon
ongoing research and the desire for faculty leadership of institutional improvement
(Biswas, 2006; Bresciani, 2006; Edler, 2004; Hutchings, 2010; Maki, 2004). Within the
context of the accountability movement, the EBIC proposal, among others, was a refreshing reversal of the calls for a greater Federal voice and scrutiny (Greater Expectations, 2002; USDE, 2006) in the complex learning environments of local institutional contexts; it hearkened back to best practices seen in the history of regional accreditation—such as its valuing of autonomy, independence, and local contexts—while yet adding new calls for rigorous scholarship and investigation of institutional issues.
WASC-ACCJC. Community colleges, particularly those in California, were not
immune to the calls for increased accountability. In fact, state regulation of community
colleges in California has repeatedly considered new means of regulating institutional
performance and encouraging greater accountability, including notions such as
performance-based funding, which has been routinely critiqued by community college
advocates (CC League, 2011). The Accountability Report for Community Colleges
(ARCC) is a well-known document in California community college circles, and is a
great example of rising accountability pressures in this institutional context (CCCCO,
2011). While not yet explicitly linked to apportionment funding, this document signals
that the accountability movement in community colleges is headed in the direction of
greater scrutiny, more frequent measures of performance, and increased measurement of
student learning outcomes. It is within this context that this study will help to inform understanding of the recent behavior of the accrediting commission responsible for this highly-regulated sector
of US education, as accreditation continues to be the default paradigm for assessing
institutions and encouraging institutional quality improvement (Volkwein, 2010).
Assessment and Accreditation
The Growth of Assessment
Pressures for increased accountability have been accompanied by increased
emphasis upon the assessment of student learning (Bers, 2008). Critics of accreditation
have pointed out that being “accredited” does not necessarily say anything about the
student outcomes associated with that college or university (Huffman & Harris, 1981).
While student outcomes are certainly of genuine interest to faculty, administrators, and students already in an institution of higher education, the accountability debate has been the most forceful driver of increased emphasis upon this form of quality improvement as an outward accountability measure that adds to the overall meaning of accreditation (Eaton, 2009a). In the context of accountability,
assessment of student outcomes serves the purpose of a heuristic marker of institutional
performance, linking the stamp of accreditation to something that implies actual attention to student learning issues (Driscoll & De Norriega, 2006).
Proponents and opponents of status quo regional accreditation have found mixed
results concerning the rise of student learning outcomes assessment. On the one hand,
institutional leaders have signaled that their commitment to student learning outcomes is
motivated by the accreditation process, at large, not the public pressure for increased
accountability (Kuh & Ikenberry, 2009), while others have found mixed results in the
rationale and practice of prioritizing student learning outcomes (Ewell, 2009; Provezis,
2010). Assessment of student outcomes has already been embedded in the standards of
regional accreditation commissions, which has been trumpeted as an argument in favor of
the voluntary, non-governmental model (CHEA, 2011). Student learning outcomes and
accreditation are now mentioned side-by-side in many studies, signaling the formal
embrace of this task within the accreditation cycle (Crow, 2009; Haviland, 2009;
Provezis, 2010). Assessment is becoming integrated into institutional activities (Banta,
2004) and has been linked to increased campus awareness of educational outcomes and
improvement processes constructed to address them (Volkwein, Lattuca, Harper, & Domingo, 2007).
Accreditation in WASC-ACCJC
Evolving Standards
Under the influence of the accountability debates and the rising interest in student
learning outcomes assessment, WASC-ACCJC changed its accreditation standards in
2002 to place increased emphasis upon the design, assessment, and ongoing quality
improvement based on student learning outcomes and administrative unit outcomes
(Beno, 2004; WASC-ACCJC, 2002; WASC-ACCJC, 2007; WASC-ACCJC, 2010;
WASC-ACCJC, 2011). This change has placed increased emphasis upon measurable,
mostly quantitative markers of student performance and has added a sizeable new process
to the already large task of completing regional accreditation as a community college.
Once again, this intersection of accountability, assessment, and accreditation at the locus
of California community colleges provides an opportunity to study the changes in
accreditation standards since 2002 by examining the relationship between increasingly
dire accreditation actions upon CCCs and several of the most common quantitative
measures about those institutions known to the public (Strauss & Volkwein, 2004).
Within the context of this literature, the identification of a relationship or lack of relationship
between these variables may provide additional insight concerning the co-alignment of
common, heuristic measures with the overall stamp of accreditation approval or sanction
given by WASC-ACCJC.
Operating in the legacy of the well-known California Master Plan for Higher
Education (1960), the California Community Colleges (CCCs) are the most sizeable—
and comprehensive—community college system in the nation (CCCCO, 2011; Townsend
& Twombly, 2001). With one hundred and twelve colleges in seventy-two districts,
locally-governed but reliant upon state-level funding, and serving almost three million
students per year, the CCCs are a complex, diverse, and dynamic spoke in the wheel of
US higher education (Bailey & Morest, 2004; Bragg, 2001; Dougherty & Townsend,
2006; Dowd, 2007; CCCCO, 2011). The assessment and accountability debates in this
literature concerning accreditation are triangulated in the example of the WASC-ACCJC.
Following the adjustment of standards in 2002, scrutiny of how student learning outcomes were measured and addressed in the CCCs became a profound emphasis of WASC-ACCJC accreditors, leading to a level of sanction and scrutiny so dramatic that the Chancellor of the California Community College system wrote an open memo to the
US Department of Education, protesting the perception that WASC-ACCJC
commissioners were being chosen in alliance with the executive director of WASC-
ACCJC (WASC-ACCJC, 2002; CCCCO, 2011). The 2010 meeting of the Association of
American Colleges and Universities (AACU) witnessed a pro and con debate from
educators about the nature of assessment and accountability within the work of their institutions, particularly in terms of how that pressure was shaping the view of accreditors and leading to top-down emphasis on measuring outcomes (AACU, 2010). In the community college sector, the CCCs under WASC-ACCJC experienced a dramatic, concomitant rise in scrutiny in the years following the 2002 standards, making them a key sector for further study and exploration.
Conclusion
With the literature on the intersection between accountability, assessment, and
accreditation in mind, and with the lens sufficiently narrowed to the striking case study of
California community colleges, it is clear that much is in transition in the accreditation of
these schools, even as institutional agents grapple with new internal and external
pressures for quality improvement and self-study (Davenport, 2001). Within a culture of
rising public interest in the accountability of higher education—which is occurring alongside a simultaneous rise in staff/faculty interest in and ownership of the assessment mantle, in opposition to an entirely external mandate—the research proposed by this study is timely for exploring the contours of important but poorly understood developments in the scrutiny and sanction of California community colleges. A synthesis of the literature
demonstrates that the combination of historical developments in accreditation, along with
the evolution of accountability and assessment movements, raises questions concerning
the rate of sanctions by WASC-ACCJC and suggests that more studies could be
conducted to explore components of this trend. Especially in terms of the accountability
literature that is so keen on quantitative, accessible, heuristic variables, the research of
this study will help to explore the association between accreditation actions and some
common institutional measures, even while being mindful of the need to provide
contextualized and exploratory knowledge to inform the ongoing analysis, debate, and
evolution of this vital educational sector.
CHAPTER THREE: METHODOLOGY AND RESEARCH DESIGN
Rationale for the Study
The history and themes of accreditation and assessment of higher education in the
United States have borne witness to an increasing trend to emphasize quality
improvement, measures of performance, quantitative measures of student outcomes, and
an overall movement for greater transparency and use of data. Accreditation has become
incrementally more important to colleges and universities, as increasing public and
governmental scrutiny of the higher education sector transformed the standards,
procedures, and actions of regional accreditation commissions (Davenport, 2001;
Kershenstein, 2002). The WASC-ACCJC is a particular example of a regional
accreditation commission that has demonstrated a dramatic increase in sanctions, perhaps
in part because of the co-alignment of national movements in accreditation from CHEA
and the USDE and rising public interest in the accountability of education. Exploration of
the factors related to these changes is needed.
Accordingly, the purpose of this study is to investigate the period of WASC-
ACCJC accreditation since 2002, when its standards changed to explicitly emphasize
measurable aspects of student learning outcomes and a greater number of CCCs were
sanctioned. This study explores the association between WASC-ACCJC accreditation
commission actions and several common indicators of student outcomes, and addresses
the following research questions:
• In California community colleges reviewed by WASC-ACCJC since
2002, what is the relationship (if any) between the specific accreditation
action taken by the commission—the assessment of “no findings,”
“probation/sanction,” etc.—and some of the common student outcomes
variables cited by the literature—namely, graduation rate, transfer, and
retention?
• In California community colleges reviewed by WASC-ACCJC since
2002, what is the relationship (if any) between accreditation status and
several common institutional variables—namely size (FTES and
enrollment), budget, and staffing?
• Finally, in those California community colleges that were sanctioned by
WASC-ACCJC since 2002, what patterns emerge that may inform
institutional knowledge about the relationship between accreditation
action and institutional measures?
Overall, this study is concerned with identifying potentially salient relationships
between the variables listed above, as detailed in Table 3.2 and Table 3.3. In spite of the
rising interest in quantitative measures of education, there has been only limited
empirical research on accreditation (Peterson & Augustine, 2000; Prager, 1993; Shibley
& Volkwein, 2002) and on community colleges in general (Bailey & Alfonso, 2005). The
literature on accreditation and accountability demonstrates that much has changed and
continues to change in the landscape of regional accreditation in the US, perhaps nowhere
more significantly than in the past decade of WASC-ACCJC accreditation. Studies of the
confluence of accountability, assessment, and accreditation trends are naturally limited by
the methodologically challenging nature of examining the stated and unstated
contributions of one trend to another within the context of many additional influences.
However, as specified in the first two chapters, this study is an exploratory, quantitative
examination of the association between variables detailed above and accreditation in
order to provide initial exploration of this recent trend of increasing sanctions by WASC-
ACCJC. This research provides descriptive statistics about the relationship between the
variables, frequency counts on accreditation statuses by action, a scatter plot of status
categories, and delineation of potential significance between accreditation actions and
institutional variables.
Research Design
Population and Sample
The population for this study is all California community colleges (CCC)
accredited by the regional accreditation commission of the Western Association of
Schools and Colleges’ Accrediting Commission on Community and Junior Colleges
(WASC-ACCJC). This population is of particular interest to this study due to the
prominent increase in sanctions by WASC-ACCJC in general. While many of the
regional accreditation commissions cover institutions in a sizeable number of US states,
the WASC-ACCJC is dominated numerically by California community colleges (n=112)
of the 135 institutions in total that it reviews. Accordingly, the sample has been delimited
to California community colleges specifically, as institutions in this category are most
likely to share similar traits for the purposes of the data analysis proposed by this study.
This means that far-flung Pacific basin institutions in Hawaii and various Pacific
republics reviewed by WASC-ACCJC will be excluded from this study. The full list of
California community colleges accredited by WASC-ACCJC was obtained from the
California Community College Chancellor’s Office website (Appendix C). For single-college community college districts, a single entry will be counted even if that district has multiple geographic centers; the rationale for this accounting is that single-college districts with multiple campuses are reviewed as a single entity for the purpose of regional accreditation. For multiple-college community college districts, each college will be counted separately, as each is also accredited separately by WASC-
ACCJC. This sample is summarized as one hundred and twelve discrete California
community colleges accredited by WASC-ACCJC within seventy-two college districts
(CCCCO, 2011). Moreover, the date range for this study is purposefully sampled after
2002, beginning with the change in WASC-ACCJC standards. Detail on the composition
of WASC-ACCJC is provided below in Table 3.1:
Table 3.1: Detail for WASC-ACCJC

WASC-ACCJC Community Colleges                 Count
California Community Colleges
  Districts                                      72
  Accredited Colleges*                          112
Community Colleges Outside of California
  Accredited Colleges                            23
Total Colleges                                  135

Source: WASC-ACCJC, 2011.
*Population of specific interest for this study. Purposeful sampling of the WASC-ACCJC population identifies the California community colleges as a geographically-defined, important sample of the population of community colleges accredited by WASC-ACCJC.
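The counting and exclusion rules above can be made concrete with a short sketch of how the analysis sample might be derived from a college directory. This is a minimal illustration: the DataFrame `directory` and its columns are hypothetical stand-ins, not the actual CCCCO listing in Appendix C.

```python
import pandas as pd

# Hypothetical directory: one row per accredited college, so a multi-college
# district contributes several rows, while a single-college district with
# multiple campuses still appears as one row (it is accredited as one entity).
directory = pd.DataFrame({
    "college":  ["College A", "College B", "College C", "Pacific College"],
    "district": ["District 1", "District 2", "District 2", "Pacific District"],
    "state":    ["CA", "CA", "CA", "GU"],
})

# Delimit the sample to California, excluding Pacific-basin institutions.
sample = directory[directory["state"] == "CA"]
print(f"{len(sample)} colleges in {sample['district'].nunique()} districts")
```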
Data Collection
Data collection derives from four key collections: the Integrated Postsecondary Education Data System (IPEDS), the California Community College Chancellor’s Office website (including the Data Mart, the Accountability Report for Community Colleges (ARCC), the Student Right to Know (STRK) reports, and other materials as cited), reports and newsletters from the Accrediting Commission for Community and Junior Colleges, and California community college websites. In alignment with the research questions, the following variables will be collected as detailed in Table 3.2, below. Please note that some variables can be treated as either independent or dependent variables, depending upon the direction of the statistical test being run.
Table 3.2: Detail of Variables

Variable                                           Usage   Source
Accreditation Variables: WASC-ACCJC Formal Commission Actions [1]
  “Clear” Accreditation
    Affirm Accreditation                           IV/DV   ACCJC, Web
    Affirm Accreditation with Focused Report
      (with or without visit)                      IV/DV   ACCJC, Web
    Affirm Accreditation with Progress Report
      (with or without visit)                      IV/DV   ACCJC, Web
  “Sanctioned” Accreditation
    Warning Issued                                 IV/DV   ACCJC, Web
    Probation Imposed                              IV/DV   ACCJC, Web
    Show Cause (and/or Termination) [2]            IV/DV   ACCJC, Web
Student Variables: [3]
  Graduation Rate [4]                              IV/DV   IPEDS, CCCCO, Web
  Transfer Rate [5]                                IV/DV   IPEDS, CCCCO, Web
  Retention/Success Rates [6]                      IV/DV   IPEDS, CCCCO, Web
Institutional Variables:
  Institutional Size [7]                           IV/DV   IPEDS, CCCCO, Web
  Institutional Budget [8]                         IV/DV   IPEDS, CCCCO, Web
  Institutional Staffing [9]                       IV/DV   IPEDS, CCCCO, Web

Note: Since the research questions ask for overall associations between the variables, they can be analyzed as both independent and/or dependent variables (IV/DV), depending upon the direction of the test in question—such as chi-square tests and/or logistic regression.

[1] These categories do not include initial categories of accreditation for new institutions. Institutions that were granted initial candidacy during the period of analysis may be excluded. These categories are ordinal from clear through terminated accreditation.
[2] Since termination is an exceptionally rare category, it is truncated with the second most serious category, Show Cause.
[3] Additional student variables may be selected during data collection from the sources detailed above.
[4] Traditionally, the graduation rate is reported at 150% of program length, but the range of IPEDS/CCCCO variables may be used in data collection.
[5] Commonly reported in the Student Right to Know (STRK) report and the CCCCO Data Mart.
[6] Commonly reported together—either or both will be used for data collection. For CCCs, retention = (number of enrollments with grade of A, B, C, D, F, CR, NC, I, P, NP) / (number of enrollments with grade of A, B, C, D, F, CR, NC, W, I, P, NP, DR), and success = (number of enrollments with grade of A, B, C, CR, P) / (number of enrollments with grade of A, B, C, D, F, CR, NC, W, I, P, NP, DR) (CCCCO, 2011). A computational sketch of these rates follows this table.
[7] Commonly reported as total students, but may also refer to truncated Full-Time Equivalent Students (FTES). May also include institutional location (urban, rural, etc.).
[8] Most commonly reported as total budget, but may also include subsets of budget, depending upon data salience.
[9] Most commonly reported as overall staffing (includes administrative, support, and faculty), but may also include subset categories if salient.
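To illustrate the retention and success definitions in footnote 6, here is a minimal computational sketch. The DataFrame `enrollments` and its `grade` column are hypothetical stand-ins for enrollment-level records from the CCCCO Data Mart.

```python
import pandas as pd

# Hypothetical enrollment-level records using the CCCCO grade notation.
enrollments = pd.DataFrame({"grade": ["A", "B", "W", "CR", "F", "DR", "P", "NC"]})

# Grade sets taken directly from the CCCCO definitions in footnote 6.
denominator = {"A", "B", "C", "D", "F", "CR", "NC", "W", "I", "P", "NP", "DR"}
retained    = {"A", "B", "C", "D", "F", "CR", "NC", "I", "P", "NP"}  # excludes W, DR
succeeded   = {"A", "B", "C", "CR", "P"}                             # passing grades

total = enrollments["grade"].isin(denominator).sum()
retention_rate = enrollments["grade"].isin(retained).sum() / total
success_rate   = enrollments["grade"].isin(succeeded).sum() / total
print(f"retention = {retention_rate:.2%}, success = {success_rate:.2%}")
```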
With these varying levels of accreditation in mind (see Table 3.2 above), the
specific case of WASC-ACCJC accreditation has demonstrated an increase in sanction-
level actions since 2002, when the commission’s standards were revised. In the roughly one decade since, a sizeable percentage of WASC-ACCJC community colleges have been placed on probation or another form of sanction. Within the sample of
California community colleges, more than forty percent of colleges were placed under
some form of sanction during the period since 2002 (Moltz, 2010). Accordingly, for the
purposes of data analysis of the variables defined above, this study will assess the period
since 2002 to identify a multiyear window that maximizes the number of California
community colleges that were visited by the WASC-ACCJC. This purposeful sampling
and data collection strategy will help address the research questions proposed by
triangulating the analysis of data to the period most salient in the literature as an
important period of increased scrutiny. Moreover, this strategy will provide opportunity
for analysis of the data collection and results to determine if significant associations
between defined variables exist.
Data Collection and Analysis
With the approval of this study by the Institutional Review Board (IRB), this
research project will collect, organize, and analyze data. The analysis of data follows a
multi-step process according to Table 3.3 below. This research strategy was developed in
consultation with research faculty and support staff at the Rossier School of Education at
the University of Southern California. This analytical strategy is designed to address the
questions of association proposed by the research questions for this study, affording the
opportunity to examine bidirectional, multivariate associations between sets of variables
in order to provide better understanding of the relationship between accreditation actions
and common student and institutional measures. A detail of this analytical strategy is
outlined below in Table 3.3.
Table 3.3: Analytical Strategy for Data Analysis

Research Strategy: Descriptive Statistics
Details / Logic Model: Includes frequency counts of accreditation action categories and descriptive statistics on variables for the defined sample of WASC-ACCJC California community colleges. Summary tables of institutional accreditation status will also be produced, detailing accreditation actions. Additional charts will also be produced for salient variables.

Research Strategy: Chi-Square Tests
Details / Logic Model: Uses accreditation action variables as independent variables, testing for significance of association between institutional and student variables and accreditation, seeking to identify whether the variables are truly independent or associated (Greenwood, 1996). The level of significance for the chi-square analysis will be defined as p < 0.05. (A minimal illustration follows this table.)

Research Strategy: Logistic Regression Analysis
Details / Logic Model: Following the chi-square tests, multiple logistic regression will analyze those variables identified as significant. Regression analysis can be run with a single independent variable against a dependent variable, or it can include multiple independent variables as needed when suggested by the preceding chi-square tests. This regression analysis will examine the significance of the relationship between accreditation and institutional variables (Freedman, 2005; Gorard, 2003; Hilbe, 2009; Salkind, 2007; Tabachnick & Fidell, 2001).

Research Strategy: Additional Statistical Analysis
Details / Logic Model: Determined in part by the results of the first round of the preceding tests, additional quantitative tests will be conducted using SPSS in order to answer the research questions concerning the relationship between variables and additional detail on the dataset. For example, categories of variables identified as significant by the chi-square analysis will suggest additional variables to include in logistic regression tests. Ultimately, the cumulative strategy of this analysis is to allow calculated, informed assessment of the strengths of association between defined variables in order to test and answer the proposed research questions.
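As a minimal illustration of the chi-square step in Table 3.3, the sketch below tests whether accreditation status is independent of an institutional category. The contingency counts are hypothetical placeholders, not the study’s data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of colleges: rows are accreditation status
# (clear, sanctioned); columns are an institutional size category
# (small, medium, large).
observed = [
    [20, 35, 25],  # clear
    [10, 12, 10],  # sanctioned
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# The study's threshold: treat the association as significant at p < 0.05.
print("associated" if p_value < 0.05 else "no evidence against independence")
```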
Data from the sources detailed above (IPEDS, CCCCO, etc.) will be collected for
all WASC-ACCJC California community colleges and compiled into a Microsoft Excel
spreadsheet. Then, using the Statistical Package for the Social Sciences (SPSS) version
19, this data can be coded and analyzed according to the proposed research design.
Descriptive statistics for all variables will be produced first. Then, chi-square analysis and multivariate regression analysis will be run, as detailed in Table 3.3,
to help address the question of relationships between variables proposed by the research
questions (Freedman, 2005; Greenwood, 1996; Hilbe, 2009; Tabachnick & Fidell, 2001).
The defined sample for this study is informed by lessons from research design, which suggest that large datasets are ideal for multiple regression analysis (Grimm & Yarnold, 2000; Tabachnick & Fidell, 2001). This regression analysis will take place in multiple stages, including a bi-directional analysis of variables as independent and dependent variables. In the context of the statement of the problem and the literature on accreditation and accountability, this series of regression analyses may help to identify
institutional and student outcomes characteristics that are more likely to be associated
with clear or sanctioned accreditation, or conversely to demonstrate whether non-salient
relationships exist between the proposed variables, both of which will contribute to the
ongoing research on accreditation and the case study of WASC-ACCJC community
colleges in California.
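As a minimal sketch of the regression stage described above, the following fits a logistic regression of a binary clear/sanctioned status on institutional measures. The variable names and simulated values are hypothetical placeholders for the compiled dataset, and statsmodels is used here in place of SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 112  # number of California community colleges in the defined sample

# Hypothetical institutional measures standing in for the compiled dataset.
data = pd.DataFrame({
    "ftes":     rng.normal(10_000, 3_000, n),
    "budget":   rng.normal(80e6, 20e6, n),
    "staffing": rng.normal(500, 150, n),
})
data["sanctioned"] = rng.integers(0, 2, n)  # 1 = sanction-category status

# Standardize predictors so the optimizer is well behaved, then fit.
predictors = data[["ftes", "budget", "staffing"]]
X = sm.add_constant((predictors - predictors.mean()) / predictors.std())
model = sm.Logit(data["sanctioned"], X).fit(disp=False)
print(model.summary())
```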
Limitations and Delimitations
One potential limitation for this study concerns the transparency of data available
from WASC-ACCJC. Full data on the actions by the commission prior to 2000 are not
available on the public website of commission reports. This limits the ability to compare
actions of the commission prior to the substantive change in standards in 2002. However,
since this study proposes to analyze post-2002 commission actions, this limitation is
avoided. A second limitation concerns the public data from IPEDS and the CCCCO data
mart. Before the data is collected, it is possible that during the period since 2002 a
number of institutional variables will be missing for specific WASC-ACCJC California
community colleges. Should this be the case, data may be supplemented from the immediately prior or subsequent year, at the discretion of the researcher, or the institution
may be removed entirely from the analysis. However, since the defined variables are
required submission fields in state and Federal reports from institutions, it is anticipated
that missing data of this nature will not be a concern.
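If adjacent-year substitution proves necessary, the rule might be sketched as follows. The college-year panel and its columns are hypothetical; in practice, the choice between the prior and subsequent year rests with the researcher.

```python
import pandas as pd

# Hypothetical college-year panel with one missing graduation rate.
panel = pd.DataFrame({
    "college":   ["College A"] * 3,
    "year":      [2006, 2007, 2008],
    "grad_rate": [0.28, None, 0.31],
})

# Within each college, fill a gap from the immediately prior year first,
# then (if still missing) from the immediately subsequent year.
panel = panel.sort_values(["college", "year"])
panel["grad_rate"] = panel.groupby("college")["grad_rate"].transform(
    lambda s: s.ffill(limit=1).bfill(limit=1)
)
print(panel)
```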
With an extensive set of data collected to assess via multiple statistical tests (see
Table 3.3), it is nonetheless possible that the exploration of relationships between defined variables and institutional accreditation status may yield no significant associations. However, the exploration of this data and the finding of any or no relationship between WASC-ACCJC accreditation actions and some of the most common markers of student outcomes
and institutional traits will still be a telling contribution to the ongoing literature of
accreditation. This study may also be instructive to supplementary, future questions
related to the relational research questions proposed, including whether or not nuances between accreditation statuses signal anything important about outcomes and help to define the
heuristic roles of accreditation statuses to consumers of that information. Finally, it
should be noted that while multivariate regression helps to explain variation in dependent
variables in terms of the independent variables, regression only suggests that the
independent variables (and their variation) are a possible factor in the variations observed
by such a technique. This is a limitation of this type of analysis, but multiple regression is nonetheless a very important statistical tool in the arsenal of empirical research in education, one that can make important contributions, especially in an initial study of a substantial dataset (Cabrera, 1994; Freedman, 2005; Gorard, 2003;
Hilbe, 2009).
A potential delimitation that narrows the scope of this study is the fact that
WASC-ACCJC accredits more than twenty additional colleges outside of California.
However, due to the unique geographic histories of these institutions from the Pacific
basin area, these colleges are excluded from the data collection and analysis of this study,
which will focus only on the two-year community colleges in California under the
regional accreditation authority of WASC-ACCJC. Depending upon the strength of findings presented from this study, future studies may expand the sample to all WASC-ACCJC colleges and consider cross-comparisons of recent trends by additional
regional accreditation commissions. Overall, it is hoped that the results of this research will have implications for community college administrators, faculty, support staff, and
others with a vested interest in institutional accreditation, quality improvement, and
assessment. Likewise, this work may help to inform the development of future studies to
address the evolutionary nature of contemporary accreditation in higher education.
CHAPTER FOUR: RESEARCH FINDINGS
Introduction
The transformation of WASC-ACCJC community college accreditation within the context of accountability concerns discussed in the prior three chapters suggests the need
for further exploration of this subject. Specifically, a quantitative exploration of the
association between common institutional and student variables may expose patterns of
association between these variables and institutional accreditation status. This
information, and the existence or lack of patterns of relationship, will be of interest to
institutional practitioners responsible for accreditation, as well as policy makers and a
public citizenry that has become increasingly interested in accountability measures and
the use of data (Leef & Burris, 2002; Rogers, 2000). Accordingly, this section will detail
the results of the quantitative exploration of variables described in chapter three and
initial findings from this analysis.
Descriptive Statistics
As an exploratory, quantitative study, there were many variables comprising a
substantial dataset for this research project. With the goal of answering the research
questions concerning the relationship between institutional and student variables and
community college accreditation status, a number of descriptive statistics are needed.
To begin, it is important to provide detail concerning the accreditation status for
California community colleges for the years in question. While the increasing scrutiny
of the WASC-ACCJC upon community colleges has been noted by the literature (Lay,
2011; Moltz, 2010), the ongoing status of institutional accreditation according to
commission actions in any given year is not readily available. Accordingly, one important
descriptive statistic compiled by this research project is a chronological listing of
accreditation actions, by college, during the period in question. Complete tables, included
in Appendix D and Appendix E, provide a succinct snapshot of institutional accreditation
statuses that may be of interest to institutional practitioners and anyone scrutinizing the
status of community colleges in California. While the WASC-ACCJC does accredit a small number of colleges that are not in California, these have been excluded from this representation in keeping with the research methodology defined in chapter three.
Appendix D and Appendix E provide succinct and accessible detail concerning
the accreditation status of California community colleges since WASC-ACCJC standards
were changed following 2002. This information, distilled from WASC-ACCJC
newsletters, reports, and institutional websites, provides an important visual snapshot of
actual accreditation actions.
An additional way to represent data on community college accreditation that
provides another level of detail is a chronological table of ongoing accreditation status of
CCCs. While the table in Appendix D provides detail only for actual instances when
WASC-ACCJC issued a report or finding on a community college—typically as part of
their regular site-visit to an institution—Appendix E provides an ongoing, continuous
status of institutions following and in between their recurring site-visits by an
accreditation team. This data provides a more granular level of detail concerning the
ongoing jeopardy of institutions in the “sanctioned” category of accreditation actions (see
Table 3.2 for definitions). Unlike Appendix D, which only lists actual actions by WASC-
64
ACCJC and noting when no visit was made nor a report issued, Appendix E lists the
statuses for each year until changed by a subsequent report or visit. What is particularly
instructive is the percentage of institutions on sanction and the increase witnessed
beginning in 2004. Table 4.3 details the percentage of all community colleges with a
sanction-category status.
The summarized, tabular representations of community college accreditation
statuses since 2002 provided by Appendix D and Appendix E are the foundation for the
subsequent statistical tests performed by this research project alongside an extensive
collection of data concerning student and institutional measures. Moreover, these tables
provide the foundation for a number of additional descriptive statistics, such as the
number of institutions that held a sanctioned status from WASC-ACCJC in any given
year. The summary table, below, lists the count and percent of all institutions with a
current or ongoing sanction, demonstrating a remarkable increase in the percentage of
institutions in jeopardy with WASC-ACCJC. From no colleges with sanctioned status in 2003 to as many as twenty-four colleges under sanction at the same time in 2009, the overall percent of CCCs with concurrent sanctioned status rose to over twenty percent of all institutions and averaged above fifteen percent for the majority of the period following 2003 (see Table 4.1).
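To illustrate how a summary of this kind can be computed, the following minimal Python sketch derives a per-period count and percent of sanctioned colleges from a hypothetical long-format status table; the column names, example records, and use of pandas are illustrative assumptions rather than the procedure actually used for this study.

import pandas as pd

# Hypothetical ongoing-status records: one row per college per review period.
status = pd.DataFrame({
    "period":  ["JUN-03", "JUN-03", "JAN-09", "JAN-09"],
    "college": ["A", "B", "A", "B"],
    "status":  ["Clear", "Clear", "Probation", "Clear"],
})

SANCTIONS = {"Termination", "Show Cause", "Probation"}  # sanction-category actions

summary = (
    status.assign(sanctioned=status["status"].isin(SANCTIONS))
          .groupby("period")["sanctioned"]
          .agg(count="sum", percent="mean")
)
summary["percent"] *= 100  # sanctioned colleges as percent of all listed CCCs
print(summary)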
Table 4.1: Ongoing Accreditation Status
Month/Year  Count of Sanctioned Colleges  Sanctioned Colleges as Percent of All CCCs
JUN-03 0 0.00%
JAN-04 1 0.90%
JUN-04 0 0.00%
JAN-05 6 5.41%
JUN-05 9 8.11%
JAN-06 8 7.21%
JUN-06 8 7.21%
JAN-07 11 9.91%
JUN-07 9 8.11%
JAN-08 17 15.32%
JUN-08 21 18.92%
JAN-09 22 19.82%
JUN-09 24 21.62%
JAN-10 18 16.22%
JUN-10 17 15.32%
JAN-11 18 16.22%
JUN-11 20 18.02%
Note: Percentages listed were of 111/112 community colleges.
It is clear that there has been an increase in the overall percentage of institutions
with endangered accreditation status, with more than twenty percent of all of California
community colleges sanctioned during the peak in January 2009. This is especially
noteworthy considering that California community colleges comprise the single largest
system of higher education in the nation. This increase is also represented graphically in
Figure 4.1, below, which was produced using the dataset on institutional accreditation
status collected by this study.
Figure 4.1: Percent of All CCCs with Sanctioned Status
It is clear that the percent of colleges on a form of sanctioned status has generally increased to the present day, confirming the observation that WASC-ACCJC demonstrated an increased rate of sanction relative to its own prior years and to other regional accreditors (Moltz, 2010). Figure 4.1 represents this increase, alongside a
linear average that approximates the slope of increase for this period, which went from
zero percent to more than twenty percent of California community colleges with
sanctions.
While the aforementioned figure clearly shows an increased percentage of colleges on sanctions, there is more data to be unfolded – specifically, whether or not the actual number of sanctions in any given year has increased. To further represent and explore this data, the following table collapses biannual visits into single yearly totals,
detailing the number of actions per year, number of sanctions per year, and the
dramatically increasing percentage of actions that fell in the category of sanctions (see
Table 4.2 below).
Table 4.2: Accreditation Actions by Year
2003 2004 2005 2006 2007 2008 2009 2010 2011
Actions by WASC-ACCJC: 8 14 24 28 22 41 50 47 44
Sanction-Category Actions: 0 1 10 6 10 29 30 19 25
Sanctions as Percent of Actions: 0.00 7.14 41.67 21.43 45.45 70.73 60.00 40.43 56.82
Note: Includes January and June visits as an annual statistic.
What Figure 4.2 demonstrates is that not only has the percent of all CCCs with sanctioned statuses increased, but the number of overall actions by WASC-ACCJC, the number of sanctions, and the percent of all actions that are sanctions have also increased. The percent of all actions that were sanctions is a particularly dramatic element of this figure, noting an increase from a minimum of zero percent sanctions to over seventy percent of all
commission actions being sanctions in 2008. This peak in 2008 parallels the similarly
upward trajectory of discrete sanction-category actions (see Figure 4.3).
Figure 4.2: Overall Count of All WASC-ACCJC Actions, Annual
Total actions by WASC-ACCJC have increased, in part due to the increased
frequency of visits required by an increasing number of colleges on a sanctioned status.
As more colleges are sanctioned and follow-up visits and/or reports necessitate further
action on an expedited timeline, the total number of actions by WASC-ACCJC increases.
However, these increases are still notable and demonstrate a change in prior practice.
Moreover, several institutions have been granted initial accreditation this decade, which
also increases the overall rate of action. Most interesting for the purposes of this study, however, is the increase in the count and percentage of those overall accreditation actions that fell into the sanctioned category.
Figure 4.3: Overall Count of WASC-ACCJC Sanctions, Annual
Note: Sanctions include Termination, Show Cause, and Probation.
There was a simultaneous increase in the count of overall actions taken by WASC-ACCJC, which rose from as few as eight in 2003 to as many as fifty in 2009, as
well as an increase in the count of those actions that were considered sanctions, rising
from zero in 2003 to thirty in 2009 (see Table 4.2). This increase is remarkable for an
accreditor as contextualized by the literature on accreditation, which typically describes
these commissions as glacial to change (Bloland, 2001; Burke & Minassians, 2002;
McLendon, Hearn, & Deaton, 2006; Zis, Boeke, & Ewell, 2010).
Indeed, in 2008 and 2009, sanctions as a percentage of all of the actions taken by
WASC-ACCJC in a given year peaked at over seventy percent—a startling statistic.
Overall, for the entire period of this table, sanctions as a percentage of all WASC-ACCJC
actions totaled a dramatic forty-seven percent of actions taken and were as high as
seventy percent in a given year. Graphically represented below in Figure 4.4, it is clear
that changes in rate of sanction have taken place following the change in standards in
2002.
Figure 4.4: Sanctions by WASC-ACCJC as a Percentage of Actions Taken that Year
Note: Sanctions include Termination, Show Cause, and Probation.
Represented graphically and generally mirroring the figure of singular, biannual
actions by WASC-ACCJC, Figure 4.4 is another demonstration of the generally upward
trajectory of sanctions as a percentage of WASC-ACCJC actions. By way of summary,
all of the preceding tables and figures on WASC-ACCJC sanctions combine to highlight
a phenomenon of increasing commission actions, increasing count and percentage of
sanctions, and increasing counts and percentages of institutions with endangered
accreditation statuses.
Compared with the historical assessment that accreditation in higher education
was liable to lax scrutiny or easy reaffirmation of accreditation, the case of WASC-
ACCJC appears to show a transformation in willingness to issue sanctions within a rising
culture of accountability and assessment. This observed change, coupled with the
research questions concerned with the potential associations between accreditation status
and institutional and student variables, provides some clarity to the contours of this
transformation in WASC-ACCJC. By better understanding the patterns of change, it is
easier to interpret any demonstrated patterns of relationship between variables that may
be manifest in the dataset of this research project. As tension between the WASC-ACCJC
and the CCCCO demonstrates, it is increasingly important to understand what is taking
place in the assessment of WASC-ACCJC institutions (Moltz, 2010).
Overall, the descriptive statistics concerning WASC-ACCJC accreditation actions
following the change of standards in 2002 and their new implementation in 2004
(WASC, 2002; WASC, 2011), both portray and help explain the transformation that took
place as the count of sanctions and overall percentage of institutions with a sanctioned
status increased. In compatibility with the literature, it seems that the WASC-ACCJC was
emblematic of the overall trend in accreditation within the rising culture of accountability
and assessment to bring new scrutiny to higher education. Once again, it is striking to
note that over fifty percent of all California community colleges found themselves
sanctioned at some point since 2004 (see Table 4.3).
Table 4.3: Overall Percentage of Sanctioned CCCs
Sanctioned CCCs as Percentage of All CCCs, Since 2004
Count, Institutions Sanctioned 61
Percent, Institutions Sanctioned 54.95%
Note: Using n=111 as an average count of CCCs during this period.
The new detail, summarized for the period of WASC-ACCJC accreditation since
2002, addresses an overall gap in knowledge concerning community college
accreditation. It is dramatic to see in print that as many as sixty-one institutions,
representing more than fifty percent of all CCCs, were sanctioned at least once since
2004. The patterns of change demonstrated by this specific regional accreditor and
detailed in this preceding section may foretell of transformations in other regional
accreditation commissions as the literature suggests that increased scrutiny of higher
education may become the norm (Commission on the Future of Higher Education, 2006;
Eaton, 2008a).
Independent Variables
Student Variables. With descriptive data for CCCs accredited by WASC-ACCJC
now detailed, it is also important to outline the nature of the data of several student and
institutional variables that were used in the goodness-of-fit tests and regression analyses
performed by this research project. First, a tabular representation of the category of
variables used—as defined in chapter three—is detailed below in Table 4.4.
Table 4.4: Categories and Subsets of Student and Institutional Variables Used
Student Variables:
Graduation Rate
Transfer Rate
Retention Rate
Institutional Variables:
Institutional Size
Institutional Budget
Institutional Staffing
There are two major categories of variables, student and institutional, and each contains three subheadings. Within those subheadings, one or more
variables may exist. For example, the dataset includes full- and part-time retention rates
for the category of retention rate. Likewise, institutional budget includes numerous
categories, such as expenditures by area, which are detailed in the data case summaries
that follow. Generally reported for the years 2004-2009, at the height of sanctions and
during a period identified to maximize the inclusion of institutions, below are descriptive
statistics for the first category of variables – the student variables (see Table 4.5).
Definitions and criteria for Table 4.5 are included in Table 4.6.
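As a hedged illustration of how the summaries in the tables that follow can be produced, this short sketch computes the same descriptive measures with pandas; the example values and column names are invented for demonstration and are not drawn from the study's dataset.

import pandas as pd

# Illustrative graduation-rate values: one row per college, one column per year.
grad = pd.DataFrame({"2008": [22, 18, 31, 25], "2009": [24, 20, 35, 19]})

stats = grad.agg(["count", "mean", "median", "min", "max", "std", "var"]).T
stats["range"] = stats["max"] - stats["min"]  # range reported as max minus min
print(stats.round(3))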
Table 4.5: Descriptive Statistics: Graduation, Transfer, and Retention
Graduation Rate By Year
2004 2005 2006 2007 2008 2009
N 108 108 108 109 109 110
Mean 34.42 34.99 33.82 23.14 22.95 24.78
Median 34.00 35.00 33.50 23.00 22.00 23.00
Min 14 15 13 9 6 9
Max 64 59 47 56 46 56
Range 50 44 34 47 40 47
Std. Deviation 7.488 7.842 7.023 7.026 6.510 8.522
Variance 56.077 61.505 49.324 49.361 42.378 72.631
Table 4.5, Continued
Transfer Rate By Year
2004 2005 2006 2007 2008 2009
N 108 108 108 109 109 110
Mean 22.56 31.07 17.53 25.29 17.96 17.68
Median 22.00 31.00 15.00 24.00 17.00 17.00
Min 0 11 2 9 9 8
Max 44 57 45 60 38 37
Range 44 46 43 51 29 29
Std. Deviation 7.569 9.582 8.725 8.760 4.613 5.343
Variance 57.296 91.808 76.121 76.746 21.276 28.549
Full-Time Retention Rate By Year
2004 2005 2006 2007 2008 2009
N 109 109 109 110 110 111
Mean 65.14 64.37 62.53 63.93 65.51 50.56
Median 67.00 65.00 65.00 66.00 67.00 49.00
Min 0 43 36 28 39 17
Max 82 77 77 79 80 83
Range 82 34 41 51 41 66
Std. Deviation 9.414 7.108 8.837 9.048 8.147 13.418
Variance 88.620 50.531 78.085 81.866 66.381 180.031
Part-Time Retention Rate By Year
2004 2005 2006 2007 2008 2009
N 109 109 109 110 110 111
Mean 40.16 39.17 38.84 38.85 35.11 27.64
Median 41.00 40.00 39.00 41.00 36.00 27.00
Min 0 8 8 18 13 6
Max 71 56 55 52 65 54
Range 71 48 47 34 52 48
Std. Deviation 9.308 8.629 8.859 7.376 8.570 9.299
Variance 86.633 74.460 78.485 54.407 73.438 86.469
Note: See Table 4.6 for variable criteria.
Some student variables, such as transfer rate, approximate a normal distribution. This representation helps to demonstrate the spread in values seen in the data collected by this research study and underscores how some of the student outcomes variables are distributed with lean extremes at the high and low data values. An example using the transfer data from a discrete year is provided below in Figure 4.5. This graphical representation highlights the contribution of the data summary tables presented here in the findings by showing that mean values for commonly used variables such as transfer rate can be startling in themselves. Figure 4.5 shows a mean transfer rate of only 17.68 percent for all CCCs for the year presented, which may be surprising to observers unfamiliar with community college student outcome patterns.
Figure 4.5: Sample Distribution of Student Variable – Transfer Rate, 2009
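A distribution plot in the style of Figure 4.5 could be generated as in the minimal sketch below; the simulated values are an assumption, seeded only to match the 2009 transfer-rate mean (17.68) and standard deviation (5.343) reported in Table 4.5, and do not reproduce the actual figure.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated 2009 transfer rates for 110 colleges (illustrative only).
transfer_2009 = np.random.default_rng(0).normal(17.68, 5.343, 110)

plt.hist(transfer_2009, bins=15, density=True, alpha=0.6)
x = np.linspace(transfer_2009.min(), transfer_2009.max(), 200)
plt.plot(x, stats.norm.pdf(x, transfer_2009.mean(), transfer_2009.std()))
plt.xlabel("Transfer Rate, 2009 (%)")
plt.ylabel("Density")
plt.show()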
The detailed descriptive data for this particular variable in Figure 4.5, as well as
all the variables and years collected for this study, are detailed beginning in Table 4.7. By
way of further explanation, Table 4.6 below details the definitions and criteria used for
each of the student data categories, as defined in the various datasets reviewed to obtain
this data.
Table 4.6: Explanation of Student Variables Used
Student Variables and Definitions/Criteria
Graduation Rate
The graduation rate of first-time, full-time degree or
certificate-seeking students. The graduation rate is the total
number of completers within 150% of normal time, divided
by the cohort and minus any exclusions, with normal time
equal to 2 years, ending on the reported cohort year.
Transfer Rate
Transfer rate of first-time, full-time degree or certificate-
seeking students to other reporting institutions within 150%
normal time of completion divided by cohort with allowed
exclusions, ending on the reported cohort year.
Retention Rate (Full- and Part-Time)
The full-time retention rate equals the percent of fall full-
time cohort students from the previous year minus
exclusions from the fall full-time cohort that continued
enrollment as either full- or part-time in the current year.
The part-time retention rate equals the percent of fall part-
time cohort students from the previous year minus
exclusions from the fall part-time cohort that continued
enrollment as either full- or part-time in the current year.
Note: Definitions derived from CCCCO, IPEDS, and institutional sources.
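To make the graduation-rate definition above concrete, the following minimal sketch encodes it directly (completers within 150% of normal time, divided by the cohort minus allowable exclusions); the function name and example counts are hypothetical.

def graduation_rate(completers_150pct: int, cohort: int, exclusions: int) -> float:
    """Percent of the adjusted cohort completing within 150% of normal time."""
    return 100 * completers_150pct / (cohort - exclusions)

# Hypothetical cohort: 230 completers of 1,020 entrants, 20 allowable exclusions.
print(graduation_rate(completers_150pct=230, cohort=1020, exclusions=20))  # 23.0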
Institutional Variables. The second category of variables used in the dataset is
one of institutional variables, consisting of three subcategories each with multiple
variables. Within the three subheadings of size, budget, and staffing, there are additional
categories of detail defined by the horizontal columns on the tables below. Descriptive
statistics for the items are generally reported for the years 2004-2009 at the height of
sanctions and during a period identified to maximize the inclusion of institutions with
accreditation actions and matching institutional and student variable values. The
following table describes institutional size values, based on full-time equivalent student
counts (see Table 4.7).
Table 4.7: Descriptive Statistics: Credit, Non-Credit, and Total Full-Time Equivalent
Students (FTES)
Credit FTES By Year
2004 2005 2006 2007 2008 2009
N 108 108 109 109 110 110
Mean 9289.58 9489.10 9550.98 10337.57 11014.83 11251.44
Median 8096.94 8344.76 8450.33 8959.45 9615.12 10158.69
Min 722 1309 1458 1760 1674 1642
Max 26649 29948 29086 31418 31175 30266
Range 25927 28639 27627 29659 29501 28624
Std. Dev. 5496.533 5744.270 5917.939 6315.803 6751.429 6772.019
Var* 30211876 3299664 35022003 39889361 45581790 45860242
Table 4.7, Continued
Non-Credit FTES By Year
2004 2005 2006 2007 2008 2009
N 108 108 109 109 110 110
Mean 435.23 436.56 461.21 489.82 511.28 493.82
Median 135.43 135.56 133.49 178.22 184.33 176.13
Min 0 0 0 0 0 0
Max 5057 5406 6775 7220 7434 5964
Range 5057 5406 6775 7220 7434 5964
Std. Dev. 754.015 769.219 868.708 896.254 941.090 955.095
Var 568539.04 591697.76 754654.28 803270.96 885650.74 912206.68
Total FTES By Year
2004 2005 2006 2007 2008 2009
N 108 108 109 109 110 110
Mean 9724.81 9925.66 10012.18 10827.40 11526.11 11745.26
Median 8589.70 8496.78 8640.76 9671.36 10485.99 10536.46
Min 853 1443 1503 1823 1711 1727
Max 26929 29948 30083 31418 33270 32971
Range 26076 28505 28580 29595 31559 31244
Std. Dev. 5784.840 6039.516 6255.302 6641.179 7099.068 7168.957
Var* 33464374 36475751 39128805 44105263 50396767 51393939
Note: See Table 4.12 for variable criteria.
*Decimal values truncated
The following table details the Carnegie size classification for institutions for the
years defined as well as the level of urbanization of the location of the institution in
question. Definitions of Carnegie size classifications and urbanization scale are available
in Table 4.12 at the end of this section.
Table 4.8: Descriptive Statistics, Carnegie Size Classification
Carnegie Size By Year
2005* 2006 2007 2008 2009
N 109 110 110 110 111
Mean 3.73 3.67 3.67 3.67 3.61
Median 4.00 4.00 4.00 4.00 4.00
Min 2 -3 -3 -3 -3
Max 5 5 5 5 5
Range 3 8 8 8 8
Std. Deviation 0.909 1.110 1.110 1.110 1.273
Variance 0.83 1.23 1.23 1.23 1.62
Urban Category By Year
2004 2005 2006 2007 2008 2009
N 109 109 110 110 110 111
Mean 2.54 21.61 21.39 21.75 21.75 20.96
Median 3.00 21.00 21.00 21.00 21.00 21.00
Min -3 11 -3 11 11 11
Max 6 43 43 43 43 43
Range 9 32 46 32 32 32
Std. Deviation 1.411 11.057 11.254 11.192 11.192 10.930
Variance 1.99 122.26 126.64 125.27 125.27 119.47
Note: See Table 4.12 for variable criteria.
*Parallel data were not similarly coded for 2004, which was excluded from this data summary (CCCCO,
2011)
Budget and staffing details were also collected for independent variables to assess
the potential association of budget and staffing levels with institutional accreditation
status. These values have been obtained from IPEDS and the CCCCO.
Table 4.9: IPEDS Budget Information in Dollars, By Year
2004 IPEDS Budget By Category
Total operating revenues  Federal appropriation  State appropriations  Local appropriation district taxes  Total operating expenses
N 105 105 105 105 105
Mean 21991023.6 48384.8 22627887.6 17508485.5 63033886.9
Median 18204234.0 0.00 18151384.00 13954293.00 53214465.0
Min 2875988 0 0 0 11285227
Max 77113792 4814602 87059448 74037943 239836175
Range 74237804 4814602 87059448 74037943 228550948
Std. Dev. 14396019.5 470323.4 16966420.5 13055894.0 38350312.6
Var* 2072.454 2.212 2878.594 1704.564 14707.465
2005 IPEDS Budget
N 108 108 108 108 108
Mean 21563257.9 64737.8 24750196.6 17818850.9 65196031.2
Median 17166308.0 0.00 20192468.0 14659550.0 55589142.5
Min 0 0 0 0 9691672
Max 77355427 6734651 99535282 76628604 250797435
Range 77355427 6734651 99535282 76628604 241105763
Std. Dev. 15124921.9 648282.8 20360501.0 13922266.6 40551669.8
Var* 2287.63 4.20 4145.50 1938.30 16444.38
Table 4.9, Continued
2006 IPEDS Budget By Category
Total operating revenues  Federal appropriation  State appropriations  Local appropriation district taxes  Total operating expenses
N 108 108 108 108 108
Mean 23277873.1 2527.5 29278252.2 18914567.4 71512649.9
Median 18457558.5 0.00 23694656.5 15714565.0 64491319.5
Min 0 0 0 0 11958340
Max 77543631 272972 114611294 83603618 274352928
Range 77543631 272972 114611294 83603618 262394588
Std. Dev. 15813252.7 26266.7 23807788.5 15191368.9 44318389.0
Var* 2500.59 0.01 5668.11 2307.78 19641.20
2007 IPEDS Budget
N 108 108 108 108 108
Mean 20936674.5 2542.1 31044643.0 19669422.2 77526069.5
Median 17128786.0 0.00 25564523.5 15300458.0 68925538.5
Min 0 0 0 0 14232234
Max 61027679 274547 120224640 91162812 314239661
Range 61027679 274547 120224640 91162812 300007427
Std. Dev. 13661382.0 26418.3 25534066.6 16671495.9 48539096.8
Var* 1866.33 0.01 6519.89 2779.39 23560.44
2008 IPEDS Budget
N 109 109 109 109 80
Mean 19618779.0 95405.6 31932901.4 20393652.3 83498369.1
Median 16847550.0 0.00 26661452.0 15651276.0 70199330.0
Min 1382947 0 1815816 0 17533481
Max 61829006 10125475 120818912 92439563 316928566
Range 60446059 10125475 119003096 92439563 299395085
Std. Dev. 12770043.2 969956.2 24605352.3 17011801.8 52494818.8
Var* 1630.74 9.41 6054.23 2894.01 27557.06
Note: See Table 4.12 for variable criteria.
*Listed per 100 bn
Additional budget details, including expenditures by college division/area were
also collected for independent variables to provide a more granular level of detail for
budget variables assessed against institutional accreditation status for goodness-of-fit.
Below, in Table 4.10, are descriptive statistics from IPEDS on institutional spending.
Table 4.10: IPEDS Budget Spending By Area in Dollars, By Year
2004 IPEDS Budget By Category
Instruction  Public Service  Academic Support  Student Services  Institutional Support
N 105 105 105 105 105
Mean 24500480.30 738291.12 4847753.49 6951272.14 7764909.89
Median 21664187.00 415307.00 3659788.00 6300682.00 5988963.00
Min 0 0 0 0 0
Max 88994498 4445833 29980289 19559279 48397522
Range 88994498 4445833 29980289 19559279 48397522
Std. Dev. 15245099.57 922795.19 4271420.30 4017031.72 6607796.90
Var* 2324.131 8.516 182.450 161.365 436.630
2005 IPEDS Budget
N 108 108 108 108 108
Mean 26246747.33 816901.91 5154806.62 7027675.78 8078984.81
Median 23769906.50 473374.50 4042761.50 6382080.00 6556541.00
Min 3642248 0 187778 1403815 641187
Max 95115394 4545118 32321815 19457420 44426414
Range 91473146 4545118 32134037 18053605 43785227
Std. Dev. 16172146.48 1039269.55 4634087.11 4050392.39 6413732.71
Var* 2615.38 10.80 214.75 164.06 411.36
Table 4.10, Continued
2006 IPEDS Budget By Category
Instruction  Public Service  Academic Support  Student Services  Institutional Support
N 108 108 108 108 108
Mean 28446729.65 861178.43 5540549.84 7861090.18 9015158.69
Median 24985659.00 463957.00 4570994.00 6982604.00 7405157.50
Min 3675211 0 695527 1240196 799670
Max 103599045 7215729 26933840 28008685 43059749
Range 99923834 7215729 26238313 26768489 42260079
Std. Dev. 17844766.29 1182033.16 4258679.56 4724704.65 6701492.76
Var* 3184.36 13.97 181.36 223.23 449.10
2007 IPEDS Budget
N 108 108 108 108 108
Mean 31265487.97 921232.36 6315330.56 8651223.34 10165460.86
Median 26751461.00 455922.50 4776608.50 7745607.00 8845468.50
Min 4915602 0 0 0 0
Max 112456339 6698745 51564701 30133055 46927807
Range 107540737 6698745 51564701 30133055 46927807
Std. Dev. 19893989.50 1243022.64 6135829.40 5175586.76 7130002.96
Var* 3957.71 15.45 376.48 267.87 508.37
2008 IPEDS Budget
N 109 109 109 109 109
Mean 31522151.50 907928.03 6699243.54 8988698.94 10011935.38
Median 26909104.00 517557.00 4824079.00 8393074.00 7920553.00
Min 1028478 0 146335 666352 804242
Max 119609801 6043833 63608000 30738641 49935774
Range 118581323 6043833 63461665 30072289 49131532
Std. Dev. 20622515.26 1151057.62 7020225.52 5089475.52 7257955.48
Var* 4252.88 13.25 492.84 259.03 526.78
Note: See Table 4.12 for variable criteria.
*Listed per 100 bn
Finally, staffing values were also collected for independent variables in order to
assess potential association between known IPEDS and CCCCO staffing levels and
institutional accreditation status. The staffing levels are reported according to full-time
equivalent staff, defined in Table 4.12 at the end of this section.
Table 4.11: IPEDS Staffing Information in FTE, By Year
2004 IPEDS Staffing By Category
FTE Staff  FTE Instruction, Research, Public Service  FTE Executive, Admin, Managerial Staff  FTE, Other Professional Staff  FTE, Non-Professional Staff
N 109 109 109 109 109
Mean 509.72 287.48 24.11 13.59 184.54
Median 449.00 250.00 20.00 9.00 165.00
Min 103 26 5 0 17
Max 1989 1149 92 81 704
Range 1886 1123 87 81 687
Std. Dev. 303.39 180.09 16.23 15.52 114.04
Variance 92045.96 32433.18 263.36 240.95 13005.51
2005 IPEDS Staffing
N 110 110 110 110 110
Mean 520.73 298.07 24.85 12.82 184.98
Median 465.00 268.00 20.50 8.00 163.50
Min 103 49 0 0 1
Max 1982 1144 94 79 703
Range 1879 1095 94 79 702
Std. Dev. 309.30 181.05 17.27 14.89 117.98
Variance 95666.59 32777.83 298.24 221.58 13918.40
Table 4.11, Continued
2006 IPEDS Staffing By Category
FTE Staff  FTE Instruction, Research, Public Service  FTE Executive, Admin, Managerial Staff  FTE, Other Professional Staff  FTE, Non-Professional Staff
N 110 110 110 110 110
Mean 531.64 302.70 26.00 13.58 189.36
Median 489.00 273.00 22.00 9.00 171.50
Min 115 50 2 0 16
Max 2036 1162 91 86 730
Range 1921 1112 89 86 714
Std. Dev. 310.86 179.95 17.26 15.52 119.03
Variance 96633.32 32381.35 298.07 240.89 14167.30
2007 IPEDS Staffing
N 110 110 110 110 110
Mean 546.15 311.75 26.12 15.13 193.14
Median 486.50 284.50 21.50 10.00 178.50
Min 126 63 6 0 31
Max 2079 1191 98 93 741
Range 1953 1128 92 93 710
Std. Dev. 317.00 184.28 16.81 16.99 120.24
Variance 100489.52 33957.47 282.67 288.81 14456.78
2008 IPEDS Staffing
N 111 111 111 111 111
Mean 525.77 298.95 26.72 14.82 185.29
Median 473.00 267.00 21.00 10.00 163.00
Min 113 49 6 0 33
Max 2000 1134 101 96 719
Range 1887 1085 95 96 686
Std. Dev. 307.19 176.50 18.40 16.53 116.35
Variance 94368.01 31153.60 338.64 273.39 13537.37
Note: See Table 4.12 for variable criteria.
Due to the size and complexity of the dataset collected for this study, it is
important to clarify the definitions and criteria of values assigned to the individual
student and institutional variables. The table below provides explanation according to the
data collected and presented, and as gleaned from IPEDS, CCCCO, and other sources.
Table 4.12: Explanation of Institutional Variables Used
Institutional Variables and Definitions/Criteria
Full-Time Equivalent Students (Credit- and
Non-Credit)
Reported full-time equivalent students
(FTES), defined as 1 FTES per 525
contact hours of enrolled students.
Carnegie Size
Classification of institutional size
according to category: Very small
two-year; Small two-year; Medium
two-year; Large two-year; Very large
two-year; Not classified; Not
applicable, special focus institution.
Urban Category
Classification of institutional size
according to geographic data: City:
Large/Midsize/Small; Suburb:
Large/Midsize/Small; Town:
Fringe/Distant/Remote; Rural:
Fringe/Distant/Remote, according to
population criteria
IPEDS Budget, by Area
Includes subset definitions by
expenditure category, but major
headings are as follows: Total operating revenues is the sum of all operating revenues from the provision of services/delivery of goods; total operating expenses is the sum of all operating expenses from departments/divisions.
IPEDS Staffing, by Area
FTE (full-time equivalent) for staff is
the sum of the number of full-time
staff plus one-third the total of part-
time staff. Detail by subset category
provided in descriptive charts.
Note: Definitions derived from CCCCO, IPEDS, and institutional sources.
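The FTE and FTES formulas in Table 4.12 are simple enough to express directly; this minimal sketch merely encodes the stated definitions, and the function names and inputs are hypothetical.

def fte_staff(full_time: int, part_time: int) -> float:
    """FTE staff: full-time count plus one-third of the part-time count."""
    return full_time + part_time / 3

def ftes(total_contact_hours: float) -> float:
    """Full-time equivalent students: 525 contact hours per 1 FTES."""
    return total_contact_hours / 525

print(fte_staff(300, 90))  # 330.0 FTE staff
print(ftes(5_250_000))     # 10000.0 FTES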
This section provided significant detail concerning the many variables collected
on common institutional and student benchmarks in order to provide a rich dataset that
could be compared to institutional accreditation status to assess potential associations and
patterns of relationship. With these descriptive details, definitions, and criteria now
defined, it is appropriate to move to the next stage of data analysis provided by the chi-
square goodness-of-fit tests.
Summary of Logic Model
As described in chapter three methodology (see Table 3.3), a series of analytical
steps was designed according to the type of data collected and the nature of the research
questions proposed by this project. Visually outlined below according to the specific
statistical tests, Figure 4.6 describes the general data analysis model used in order to
understand how the California community college data may be associated with
institutional accreditation status.
Figure 4.6: Logic Model of Statistical Tests
Note: All significant values for each step are reported in chapter four findings.
*Includes and removes variables from significant categories alongside individual significant variables
**Includes and removes variables from significant categories, in groups and individually
Comparative Statistics
Chi-Square Tests. Following the presentation of descriptive statistics, research
proceeded to analyze the relationship between variables and accreditation status. The first
test of association was a chi-square analysis. This statistical test is commonly used to
gauge the goodness-of-fit between variables by comparing the value of observed
frequencies with a theoretical frequency. The returned probability result suggests whether variables may be associated in a significant manner by estimating the probability that any given result is due to chance. The basic formula is as follows: χ² = Σ (O − E)² / E, with O being the observed frequency and E being the expected frequency. A subsequent calculation to obtain a p-value indicates the probability of observing a difference between observed and expected values at least as large as the one found if the null hypothesis were true. For the purposes of this research project, p-value results less than 0.05 will be considered significant. Once again, this test is essentially testing the null hypothesis that the variables are independent.
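By way of illustration, the sketch below runs this kind of test with SciPy on a hypothetical contingency table of accreditation category against a binned institutional or student variable; the counts are invented, and this is a sketch of the technique rather than the study's actual computation.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: accreditation category (clear, sanctioned);
# columns: quartiles of a hypothetical institutional/student variable.
observed = np.array([[30, 25, 20, 15],
                     [ 3,  5,  6, 12]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square={chi2:.3f}, p={p:.4f}, dof={dof}")
if p < 0.05:  # significance threshold adopted by this study
    print("Reject the null hypothesis that the variables are independent.")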
To explore the potential relationship between common institutional and student
variables and institutional accreditation, as proposed in the chapter three research
questions, this chi-square test is a helpful initial tool to identify variables warranting
further exploration by subsequent statistical analysis. Since this test only addresses the
probability of independence between variables—or goodness of fit—the proposed
multinomial logistic regression that follows the chi-square analysis will provide a further
level of detail concerning the significance of the relationship between variables, by
suggesting how much of the variance in the dependent variable of accreditation status can
be explained by the independent variable(s) included in the regression calculations. Table
4.13 below summarizes the initial chi-square tests of association and the twenty-one
variables found with significant p-values for the dataset.
Table 4.13: Chi-Square Association of Variables and Categorical Accreditation Status,
by Year
Pearson Chi-Square¹  Institutional or Student Variable  Accreditation Variable²  Month Year
0.031 IPEDS Staffing, Public Service Accreditation Category Jan 2004
0.057* Transfer Rate Accreditation Category Jan 2004
0.057* Transfer Rate Accreditation Category Jun 2004
0.004 Transfer Rate Accreditation Category Jun 2005
0.000 Full-Time Retention Rate Accreditation Category Jun 2006
0.000 Urban Accreditation Category Jun 2006
0.002 IPEDS Staffing, Other Professional Accreditation Category Jan 2006
0.003 Carnegie Size Accreditation Category Jun 2006
0.005 Graduation Rate Accreditation Category Jun 2006
0.005 IPEDS Staffing, Other Professional Accreditation Category Jun 2006
0.009 Part-Time Retention Rate Accreditation Category Jun 2006
0.042 IPEDS Staffing, Public Service Accreditation Category Jan 2006
0.000 Full-Time Retention Rate Accreditation Category Jan 2007
0.000 Urban Accreditation Category Jun 2007
0.037 Urban Accreditation Category Jan 2007
0.003 Graduation Rate Accreditation Category Jun 2008
0.005 Full-Time Retention Rate Accreditation Category Jun 2008
0.005 Transfer Rate Accreditation Category Jun 2008
0.038 Part-Time Retention Rate Accreditation Category Jun 2008
Table 4.13, Continued
Pearson Chi-Square¹  Institutional or Student Variable  Accreditation Variable²  Month Year
0.041 IPEDS Staffing, Administrative Accreditation Category Jan 2008
0.054* Carnegie Size Accreditation Category Jun 2008
0.057* Transfer Rate Accreditation Category Jan 2008
0.001 Full-Time Retention Rate Accreditation Category Jun 2009
0.005 Carnegie Size Accreditation Category Jun 2009
0.012 IPEDS Staffing, Other Professional Accreditation Category Jun 2009
*Neither significant nor included in regression, but listed due to proximity of value.
¹ Asymp. Sig (2-sided)
² See definitions in Table 4.12
This data suggests the potential association of a number of institutional and student variables with institutional accreditation status, such as full-time retention rate for June 2009 (p=0.001), among the other items with p<0.05.
association was an appropriate first test to run, according to the methodology defined in
chapter three, because it is a useful heuristic to identify significant associations in a large
dataset with many variables. Accordingly, discrete values of variables in the categories of
institutional and student data were calculated through a chi-square test against the
matching accreditation status of the defined year.
As the preceding table details, this test returned twenty-one items with significant
chi-square values. These items now constitute a targeted, statistically reinforced basis for
another statistical test offered by this research project—namely, a regression analysis to
explore further the relationship of these chi-square identified values. However, prior to
running the regression analysis, it is important to first review and summarize the results
of the chi-square tests. By truncating the nearly two dozen values according to rank of
occurrence, it is possible to identify categories of variables that may be useful to include
during the regression analysis of the data (see Table 4.14). Indeed, the categories of
variables identified, below, aligned with research question one for student variables:
Table 4.14: Categories of Variables Identified as Potentially Associated by Chi-Square
Associated Institutional/Student Variables of Interest
Unique Variables, Alpha
Carnegie Size
Full-Time Retention Rate
Graduation Rate
IPEDS Staffing, Administrative
IPEDS Staffing, Other Professional
IPEDS Staffing, Public Service
Part-Time Retention Rate
Transfer Rate
Urban
Categories, by Rank
*Retention Rate
*Graduation Rate
*Transfer Rate
Urban
IPEDS Staffing
Carnegie Size
*Highest ranked categories identified by chi-square tests used as categories in
the second-step logistic regression analysis.
As the preceding table summarizes, the unique combination of variables that were
returned by the chi-square tests can be collapsed and combined into six singular
categories. The top three of these categories—retention, graduation, and transfer—were
selected for additional use in the regression analysis on the basis of their recurrence and
with the assumption that those categories that the chi-square tests identified may be of
most significance in a subsequent regression analysis. It is also interesting to note that
these variables are also commonly mentioned within the literature that discusses the rise
of accountability and performance measures in accreditation (Volkwein, 2010). Since the
values of these variables are relatively easy to obtain and are publicly accessible, it is
instructive to include these items in the forthcoming regression analysis to see if their
values are related to the rise in sanctions.
Additional Statistical Analysis
Regression Analysis
In order to explore the potential relationship between institutional/student
variables and accreditation status, a chi-square analysis identified a collection of potential
associations (see Table 4.13). As proposed in chapter three, these identified variables
were then used as the basis for conducting a regression analysis to identify the potential
predictive ability of institutional/student variables upon accreditation status (also see
Figure 4.6). With accreditation status organized by category, a multinomial logistic
regression was the preferred test to assess the relationship between independent variables
and the dependent variable of accreditation status. The regression analysis proceeded
according to the following outline. The items identified by the chi-square analysis with
significance values less than 0.05 were used as singular, independent variables against the
categorical, dependent variable of accreditation status. As expected, not all of the items
identified as potentially associated by the chi-square test were similarly identified as
salient by the regression analysis. First-step outputs are detailed below in Table 4.15.
Table 4.15: First-Step - Multinomial Logistic Regression Analysis of Individual Chi-
Square Identified Variables (IV) and Accreditation Status (DV), By Significance
Significant Values: First-Step Multinomial Logistic Regression, with One IV
Sig. (Likelihood Ratio Test)  Pseudo R-Square (Cox and Snell)  Independent Variable  Dependent Variable  Month Year
0.001  0.156  Full-Time Retention Rate  Accreditation Status  Jun 2006
0.003  0.080  Part-Time Retention Rate  Accreditation Status  Jun 2006
0.007  0.119  Carnegie Size  Accreditation Status  Jun 2009
0.017  0.104  Full-Time Retention Rate  Accreditation Status  Jun 2008
0.018  0.103  Graduation Rate  Accreditation Status  Jun 2007
0.030  0.080  IPEDS Staffing, Public Service  Accreditation Status  Jan 2006
Table 4.15 summarizes the significant results returned when the twenty-one variables identified as significantly associated in the initial run of chi-square tests (see Table 4.13) were each entered into the regression. Variables such as retention and graduation were shown to be significantly
related to accreditation for a range of years from the sample. For example, in June 2007,
graduation rate was significant in both the chi-square association test and the logistic
regression, returning a p-value of 0.018. The Cox & Snell value for this variable was
0.103, meaning that this regression model suggests that approximately ten percent of the
variance in the accreditation status values can be accounted for by the graduation values.
Essentially, this analysis demonstrates approximately how much of the variation in the
dependent variable can be accounted for by the independent variable(s). Another
example, from June 2008, is full-time retention rate, which was identified in the initial
chi-square test as associated with accreditation status with a p-value of 0.005. This
variable was found in the logistic regression to have a significance value of p=0.017 and
a pseudo-r² value (Cox and Snell) = 0.104. This suggests an association that is significant
and indicates that retention accounts for ten percent of the variance in accreditation status
for this period in the dataset, according to the regression model. The full list of significant
logistic regression output values is reported in Table 4.15.
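A minimal sketch of this kind of model fit, assuming statsmodels and simulated data, is shown below; the variable names, data, and the explicit Cox and Snell computation are illustrative and are not the study's actual code or values.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "retention": rng.normal(64, 9, 110),    # simulated retention rates
    "accred_cat": rng.integers(0, 3, 110),  # three accreditation categories
})

X = sm.add_constant(df[["retention"]])
fit = sm.MNLogit(df["accred_cat"], X).fit(disp=False)

# Cox and Snell pseudo-R-square: 1 - exp((2/n) * (LL_null - LL_model))
n = len(df)
cox_snell = 1 - np.exp((2 / n) * (fit.llnull - fit.llf))
print(fit.llr_pvalue)  # significance of the likelihood ratio test
print(cox_snell)       # pseudo R-square (Cox and Snell)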
Secondly, following this singular regression analysis of the chi-square variables, additional variables were included in the regression equation. The chief categories of potentially associated variables identified by the chi-square test were student retention, graduation rate, and transfer rate. Following the first-step regression detailed above, these three additional categories of variables were entered and removed from the regression equation alongside the initially identified variables (see Table 4.14 and Table 4.16).
The inclusion of these variables identified an additional number of instances when these
variables were predictive, in a given year, of accreditation status. A detailed output of the
significant results of these second-round regression tests is included below.
Table 4.16: Second-Step - Multinomial Logistic Regression Analysis of Chi-Square
Identified Variables (IV) and Accreditation Status (DV), By Year
Significant Values: Second-Step Multinomial Logistic Regression, with One or More IVs
Sig. (Likelihood Ratio Test)*  Pseudo R-Square (Cox and Snell)*  Independent Variable(s)**  Dependent Variable  Month Year
0.043  0.137  Graduation Rate, Full-Time Retention Rate, Transfer Rate  Accreditation Category  Jun 2005
0.001  0.217  Graduation Rate, Transfer Rate  Accreditation Category  Jun 2006
0.002  0.230  Full-Time Retention Rate, Urban Category, Graduation Rate, Transfer Rate  Accreditation Category  Jun 2006
0.001  0.217  Full-Time Retention Rate, Graduation Rate, Transfer Rate  Accreditation Category  Jun 2006
0.001  0.189  Full-Time Retention Rate, Graduation Rate  Accreditation Category  Jun 2006
0.001  0.217  Full-Time Retention Rate, Graduation Rate, Transfer Rate  Accreditation Category  Jun 2006
0.033  0.155  Transfer Rate, Full-Time Retention Rate, Graduation Rate  Accreditation Category  Jan 2006
0.044  0.086  Transfer Rate  Accreditation Category  Jan 2006
Table 4.16, Continued
Sig. (Likelihood Ratio Test)*  Pseudo R-Square (Cox and Snell)*  Independent Variable(s)**  Dependent Variable  Month Year
0.037  0.231  Urban Category, Graduation Rate, Full-Time Retention Rate, Transfer Rate  Accreditation Category  Jan 2007
0.033  0.155  Transfer Rate, Graduation Rate, Retention Rate  Accreditation Category  Jan 2007
0.016  0.201  Graduation Rate, Transfer Rate, Full-Time Retention Rate  Accreditation Category  Jun 2008
0.005  0.187  Graduation Rate, Transfer Rate, Part-Time Retention Rate  Accreditation Category  Jun 2008
0.017  0.104  Full-Time Retention Rate  Accreditation Category  Jun 2008
0.018  0.103  Graduation Rate  Accreditation Category  Jun 2008
0.026  0.243  Full-Time Retention Rate, Carnegie Size, Graduation Rate, Transfer Rate  Accreditation Category  Jun 2009
*Values listed refer to first IV listed
**For items with more than one variable, the first variable listed (only) was significant
What this table summarizes, according to the logic model of Figure 4.6, is that for those variables first identified by chi-square tests as significant but for which no significant value was returned by the first-step logistic regression using a singular independent variable, additional variables from the significant categories (see Table 4.14) were both included and removed alongside the first independent variable to test potential
significance of association. What was found was that the inclusion of these category-
variables identified a number of significant associations where the singular independent
variable had not, demonstrating a family relationship between some of the student and
institutional variables and accreditation status. For example, in 2006, graduation rate
paired with full-time retention and transfer rate, returned a value of p=0.001 and a
pseudo-r² value (Cox and Snell) = 0.217. This suggests an association that is significant
and indicates that in this instance these paired student category variables account for
twenty-one percent of the variance in accreditation status for this year according to the
regression model. The additional values detailed in Table 4.16 show fourteen other
pairings of similar significance that included multiple student variables.
Following this second-step of the regression tests, a third-step was used to insert
and remove the four primary category variables identified by the first chi-square tests—
graduation rate, full- and part-time retention rate, and transfer rate. These four variables
were entered and removed for each of the years of accreditation data, to provide a final
lens for assessing the potential association of outcomes variables to accreditation status.
As expected because this round of tests was less specifically targeted to significant
associations identified by the first two sets of regression analysis, this final test did not
return many additional variables that were not already identified (see Table 4.17).
Table 4.17: Third-Step - Multinomial Logistic Regression Analysis of Chi-Square
Identified Variables (IV) and Accreditation Status (DV), Using Top Four Chi-Square
Categories, By Year
Significant Values: Third-Step Multinomial Logistic Regression, Retention Rate or Transfer Rate
Sig. (Likelihood Ratio Test)  Pseudo R-Square (Cox and Snell)  Independent Variable  Dependent Variable  Month Year
0.000  0.156  Full-Time Retention Rate*  Accreditation Category  Jun 2006
0.003  0.122  Part-Time Retention Rate*  Accreditation Category  Jun 2006
0.044  0.086  Transfer Rate  Accreditation Category  Jan 2007
0.017  0.104  Full-Time Retention Rate  Accreditation Category  Jun 2008
0.018  0.103  Graduation Rate  Accreditation Category  Jun 2008
Note. Four categories of the chi-square-identified variables (graduation rate, full-time retention, part-time retention, transfer rate) were run as IVs in multinomial logistic regression across six years (2003-2008).
*These items were not previously identified by the first- and second-step regression tests
Table 4.17 notes five singularly significant associations between discrete
independent outcomes variables and the associated accreditation status for institutions for
that year. Only the first two items were not previously identified by prior tests. These
items were full-time retention rate and part-time retention rate in 2006. Full-time
retention returned a value of p=0.000 and a pseudo-r² value (Cox and Snell) = 0.156. Part-time retention in 2006 yielded a significance value of p=0.003 and a pseudo-r² value (Cox and Snell) = 0.122. These results highlight two additional significant associations
identified by this regression test, though the remaining three items in Table 4.17 were
identified by prior tests. This is another example for this dataset of chi-square tests of
association and subsequent regression analysis yielding new insights into interesting
associations in the data collected on WASC-ACCJC accreditation and institutions.
By way of exploring a specific slice of the institutional accreditation data included
for the post-2002 period, this study also performed one additional analysis. Figure 4.4 identified 2008 as the peak year for sanctions in this period, which was then used as a narrowed scope of study for a regression analysis of accreditation status and student variables. In keeping with several of the suggestions for future research in chapter five, which highlight many new avenues for deeper exploration of this interesting high-sanction period, this final analysis was one example of a deeper exploration of the data that may be pursued by subsequent studies building on this initial exploratory analysis. The specifics of this final test are as follows:
First, student variables were represented on a correlation matrix, noting that there
was an association between graduation, retention, and transfer as may be expected by the
similarity of these measures. However, distinctions between the definitions for these
variables do exist (see Table 4.6) and the noted, paired presence of these variables in
most citations of institutional data for community colleges—including local, state, and
Federal reports—nonetheless warranted the inclusion of these variables in this
exploration of WASC-ACCJC accreditation data. Secondly, these student variables were
then transformed into quartiles, according to the distribution of the data (see Appendix
F). Accreditation status was truncated into the binary categories of sanctioned and clear.
Using these transformed variables, a binary logistic regression was run with
clear/sanctioned accreditation status as the dependent variable, and student variables
individually as the independent variables. Regression results reported in Appendix F did
not return significant values for this period. However, it should be noted that only one
student variable had been corroborated as significant by both the chi-square and first-step
regression analyses performed in the preceding section, which highlights the fact that the
period of highest sanction was not necessarily the period of highest association between
the variables. This final test is offered as a proof of concept for additional study,
suggesting but one of many potential future methods for the exploration of this dataset.
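A minimal sketch of this final procedure, assuming simulated 2008 data, appears below; the quartile transformation and binary logit illustrate the described steps, while all names and values are hypothetical rather than the study's actual data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "transfer_rate": rng.normal(18, 4.6, 110),  # simulated student variable
    "sanctioned": rng.integers(0, 2, 110),      # 1 = sanctioned, 0 = clear
})

# Transform the student variable into quartiles (1-4) by its distribution.
df["transfer_q"] = pd.qcut(df["transfer_rate"], 4, labels=[1, 2, 3, 4]).astype(int)

# Binary logistic regression of clear/sanctioned status on the quartile variable.
X = sm.add_constant(df[["transfer_q"]])
fit = sm.Logit(df["sanctioned"], X).fit(disp=False)
print(fit.summary())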
Findings in Relationship to Research Questions
The findings gleaned from the statistical tests designed in chapter three are
described below according to the three primary research questions, utilizing the statistical
significance findings that were reported in the previous section.
Findings and Research Question One
In California community colleges reviewed by WASC-ACCJC since 2002, what
is the relationship, if any, between the specific accreditation action taken by the
commission—the assessment of “no findings,” “probation/sanction,” etc.—and several
common student outcomes variables cited by the literature—namely, graduation rate,
transfer, and retention?
Because this subject has not been substantially studied quantitatively for the sample of California community colleges, the true relationship between student variables and accreditation status was unknown. The literature, however, suggested an initial
hypothesis that there would likely be an association between student performance
measures and accreditation status due to the increasing alignment of accreditation with
outcome assessments as propelled by the movement for greater use of evidence and
assessment for accountability. Moreover, the WASC-ACCJC standards and the
California Community College Chancellor’s Office have both addressed student
performance, though not explicitly setting benchmark standards for performance
(CCCCO, 2011; WASC, 2002).
The initial chi-square test found that there was an association between twenty-one
percent of all unique student variables and accreditation status (see Table 4.13).
Moreover, all categories of initial variables were found to be significant in at least one
chi-square test of association. For research question one, however, it is most important to
note that the student variables of graduation, retention, and transfer were the most
significantly associated categories in the chi-square tests. These findings provide detail
about the existence and strength of association between student variables and
accreditation status (see Table 4.14).
The subsequent multinomial logistic regression analysis performed on the chi-
square-identified variables and categories (first, second, and third-step) also addressed
research question one by pinpointing which of the chi-square student variable
associations were also found by the regression analysis. Indeed, four percent of all variables were found significant on both chi-square and logistic regression tests, and of the student variables that had significant chi-square values, forty percent were also significant in the regression analysis, demonstrating, with respect to research question one, that there is a relationship between the defined variables and institutional accreditation status. Table
4.18, below, summarizes these findings according to statistical tests and in relationship to
research question one.
Table 4.18: Tabular Summary of Findings by Research Question One
Abbreviated Research Questions
Research Question One:
Relationship between California community college accreditation status* and defined
student outcomes variables**
Research Presented and
Statistical Analysis
Performed
Summarized Results
Descriptive Statistics:
Variable Case Summaries
Detailed case summaries for the extensive dataset of
student variables was provided as context for subsequent
research tests (see Table 4.5, Appendix D, and Appendix
E).
Chi-Square Analysis:
Student Variables
4 student variables in 3 categories were checked for
association by chi-square tests across 6 years with 2
annual visits (48 unique). 10 of 48 student variables,
representing 21%, were identified as significantly
associated (see Table 4.13).
Chi-Square Analysis:
Student Variable
Categories
Graduation, Retention, and Transfer were the most
common associations found among the student and
institutional variables (see Table 4.14).
Logistic Regression
Analysis: Student
Variables - First-Step
10 significant chi-square student variables were used in
regression as IVs with DV accreditation status. 4
variables were significant in the regression (for example,
graduation rate 2007, p=0.018, Cox & Snell=0.103),
representing 40% of significant chi-square student
variables (see Table 4.15).
Table 4.18, Continued
Research Presented and
Statistical Analysis
Performed
Summarized Results
Logistic Regression
Analysis: Student
Variables - Second-Step
All 21 significant chi-square variables (student and
institutional) were run through a second round of logistic
regression alongside of full/part-time retention,
graduation, and transfer (included/removed for each
case). Student variables with significant associations
were found in 15 cases, representing 71% of chi-square
significant variables (see Table 4.16).
Logistic Regression
Analysis: Student
Variables - Third-Step
Using categories of graduation, full- and part-time
retention, and transfer without other IVs (4 variables
over 6 years with 2 annual visits = 48 unique), a third-step logistic regression from 2004 to 2009 found 5 of 48 variables, representing 10%, to be significant, three of which were retention rates (such as part-time retention 2006, p=0.003, Cox & Snell=0.122). For full data, see Table 4.17.
Summary
This exploration provides new detail to significant
associations found between student variables and CCC
accreditation status, suggesting that the graduation,
retention, and transfer have some significance in WASC-
ACCJC accreditation across the period of this study.
*Status from termination through affirmation without findings, truncated into “clear” and “sanctioned”
categories (see Table 3.2).
**Graduation, transfer, and retention rate are among the most commonly cited variables in the culture of
evidence/assessment literature.
***Size (FTES and enrollment), budget, and staffing.
Findings and Research Question Two
In California community colleges reviewed by WASC-ACCJC since 2002, what
is the relationship (if any) between accreditation status and several common institutional
variables—namely size (FTES and enrollment), budget, and staffing?
Like the student variables detailed above for the first research question, there is
also limited information concerning the true relationship between institutional variables
and accreditation status. Accreditation standards suggested an initial hypothesis that there
may be a relationship between expenditure on institutional research or overall resources
of an institution such as staffing and budget levels. However, this underscores the need
for such an exploratory, quantitative analysis as the relationship was unknown.
The initial chi-square test found that there was an association between five
percent of institutional variables and accreditation status (see Table 4.13). For research question two, it is important to note that the institutional variables, combined into categories, lagged behind the categorized totals of student variables. This suggests that the
relationship between institutional variables and accreditation may not be as salient as the
relationship between student variables and accreditation. However, these chi-square
findings provide detail about the existence and strength of association between
institutional variables and accreditation status (see Table 4.13).
The multinomial logistic regression analysis that followed the chi-square test
identified significant variables and categories through first-, second-, and third-step regression tests, and also addressed research question two by pinpointing which of the chi-square institutional variable associations were also found by the regression analysis. Two
of the initial eleven institutional variables identified by the chi-square test were also
marked as significant by the first-step logistic regression, representing eighteen percent of
those variables that were significant both by chi-square and regression analysis (see Table
4.15). This lagged behind the forty percent of student variables likewise identified.
Accordingly, these tests address research question two by demonstrating that there is an
association between the defined variables and institutional accreditation status, and a
more frequent association between said status and student outcome variables (see Table
4.19).
Table 4.19: Tabular Summary of Findings by Research Question Two
Abbreviated Research Questions
Research Question Two:
Relationship between CCC accreditation status and defined institutional variables
Research Presented and
Statistical Analysis
Performed
Summarized Results
Descriptive Statistics:
Variable Case Summaries
Detailed case summaries for the extensive dataset of
institutional variables were provided as context for subsequent research tests (see Table 4.9, Table 4.10, Table 4.11, and Table 4.12).
Chi-Square Analysis:
Institutional Variables
20 institutional variables in 3 categories were checked
for chi-square association across 6 years with 2 annual
visits (240 unique). 11 of 240 institutional variables,
representing 5%, were identified as significantly
associated, such as IPEDS professional staffing 2006,
p=0.005 (see Table 4.13 for details).
Chi-Square Analysis:
Institutional Variable
Categories
Individual institutional variables made up 11 of 21
variables identified as significant by the chi-square
tests, but they ranged across dissimilar categories,
leaving them less significant as categories than the
previously identified student variables.
Logistic Regression Analysis:
Institutional Variables - First-
Step
11 significant chi-square institutional variables were
used in regression as IVs with DV accreditation
status. 2 variables (such as Carnegie Size 2009,
p=0.007, Cox & Snell=0.119) were significant in the
regression, representing 18% of significant chi-
square institutional variables (see Table 4.15).
Logistic Regression Analysis:
Institutional Variables -
Second-Step
All 21 significant chi-square variables (student and
institutional) were run through a second round of
logistic regression alongside retention, graduation,
and transfer (included/removed for each case).
Institutional budget was also included but not found
to be significant. Institutional variables with
significant association were found in 3 cases (such as
urbanization level 2007 with
graduation/retention/transfer, p=0.037, Cox &
Snell=0.231), representing 14% of chi-square
significant variables (see Table 4.16).
Logistic Regression Analysis:
Institutional Variables - Third-
Step
Second-step logistic regression was sufficient for
institutional variables due to the lack of significant
institutional categories identified by first- and
second-step analysis.
Summary
This exploration, while identifying some significant
institutional variables (see Table 4.13), nonetheless
suggests that institutional variable categories such as
size, budget, and staffing did not demonstrate
patterns of association with accreditation status in the
way suggested by the student variables detailed in
Table 4.14.
Findings and Research Question Three
Finally, in those California community colleges that were sanctioned by WASC-
ACCJC since 2002, what patterns emerge that may inform institutional knowledge about
the relationship between accreditation action and institutional measures?
The descriptive statistics provided are particularly helpful in addressing research
question three. Though anecdotally discussed, the true contours of WASC-ACCJC
accreditation actions have not been fully explored. Accordingly, the table detailing
institutional status (Appendix D), sanctions as a percentage of WASC-ACCJC actions
(Figure 4.4), and the overall percentage of CCCs that have been sanctioned since 2002
(Table 4.3) offer helpful, summative findings in response to research question three.
The chi-square tests and logistic regression analyses also addressed research
question three by finding that student outcomes variables such as graduation and
retention rates were more frequently significant in relationship to accreditation status, for
the defined period, than were institutional variables such as budget and staffing. This is
compatible with the literature review, which observed that the use of simple quantitative
measures to assess institutional performance for the purposes of accreditation—for better
or for worse—has become more common, supported by a rising
interest in accountability and assessment in higher education (Biswas, 2006; Morest &
Jenkins, 2007).
Table 4.20: Tabular Summary of Findings by Research Question Three
Abbreviated Research Questions
Research Question Three:
Patterns of relationship between accreditation and institutional/student variables
among sanctioned CCCs?
Research Presented
and Statistical
Analysis Performed
Summarized Results
Descriptive Statistics:
Tabular
Representation of
Accreditation Statuses
Marked increase in both the quantity of sanctions and the percent of
CCCs on sanction (see Figure 4.1 and Figure 4.2)
Descriptive Statistics:
Frequency
55% of all CCCs have been sanctioned at least once since
2002, and the counts of sanctions and overall actions by the
commission have increased (see Table 4.3, Figure 4.2, and
Figure 4.3)
Summary of Statistical
Analysis of Student
Variables
This exploration provides new detail on significant
associations found between student variables and CCC
accreditation status, suggesting that graduation, retention,
and transfer rates have some significance in WASC-ACCJC
accreditation across the period of this study.
Summary of Statistical
Analysis of
Institutional Variables
This exploration likewise provides new detail on significant
associations found between institutional variables and CCC
accreditation status, suggesting that the institutional variables
studied may be less associated with accreditation status than
several student variables.
Summary of Analysis and Findings
In addition to the specific findings, per variable and per research question,
described in the previous section, Table 4.21, below, provides a final summary of all of
the tests performed by this research project and a thumbnail sketch of the significant
findings that were identified.
Table 4.21: Summary of Variables and Statistical Analyses Performed
Summary of Variables and Tests
                           Categories   Variables   Years   ACCJC Reports/Year   Unique Cases (Variables/Year)
Student Variables:             3            4         6            2                        48
Institutional Variables:       3           20         6            2                       240
Totals:                        6           24         6           12                       288
Accreditation Status:          5            1         6            2                        12
Summary of Chi-Square Tests
Count, Tests Performed: 288
Total Tests Returning Sig. Values: 21
Overall Percent of Variables with Sig. Value: 7%
Count, Student Variables Checked for Chi-Square Association 48
Student Variables Returning Sig. Values 10
Percent of Student Variables with Sig. Value: 21%
Count, Institutional Variables Checked for Chi-Square Association 240
Institutional Variables Returning Sig. Values 11
Percent of Institutional Variables with Sig. Value: 5%
Count, Unique Sig. Variables: 9
Count, Sig. Variable Categories: 6
Summary of Multinomial Logistic Regression Tests, Student Variables
Count, Student Variables Identified as Sig. by Chi-Square 10
Count, Student Variables First-Step Regression 10
Significant First-Step Student Variables 4
Percent First-Step Student Variables with Sig. Values 40%
Count, Student Categories Second-Step Regression 4
Count, Chi-Square Items Used for Second-Step Regression 21
Significant Second-Step Student Variables 15
Percent Second-Step Student Variables with Sig. Values 71%
Count, Student Categories Third-Step Regression 4
Count, All Items Used for Third-Step Regression (4 var, 6 yrs, 2/yr) 48
Significant Third-Step Student Variables 5
Percent Third-Step Student Variables with Sig. Values 10%
Summary of Multinomial Logistic Regression Tests, Institutional Variables
Count, Institutional Variables Identified as Sig. by Chi-Square 11
Count, Institutional Variables First-Step Regression 11
Significant First-Step Institutional Variables 2
Percent First-Step Institutional Variables with Sig. Values 18%
Count, Institutional Categories Second-Step Regression 4
Count, Chi-Square Items Used for Second-Step Regression 21
Significant Second-Step Institutional Variables 3
Percent Second-Step Institutional Variables with Sig. Values 14%
Count, Institutional Categories Third-Step Regression* 0
Overall Summary of Multinomial Logistic Regression Tests
First-Step, Chi-Square Variables (1 IV): 21
First-Step Tests Returning Sig. Values: 6
Percent of Chi-Square Sig. Variables with Sig. Regression Values 29%
Second-Step, Chi-Square Variables (1+ IV): 21
Second-Step Tests Returning Sig. Values: 15
**Percent of Second-Step Variables with Sig. Values 71%
Third-Step Tests Using 4 Chi-Square Categories, Variables: 48
Third-Step Tests Returning Sig. Values: 5
Percent of Third-Step Variables with Sig. Values 10%
Percent of All Variables with Sig. Chi-Square and Regression Values 4%
*Second-step logistic regression was sufficient for institutional variables due to the lack of significant
institutional categories identified by first- and second-step analysis.
**This value includes student and institutional variables that were identified for the same
chi-square-identified items (full count 18, unique count 15)
The tests performed by this research project and the findings summarized in this
chapter explored an important sample of California community colleges accredited by the
WASC-ACCJC during a period of increasing sanctions and increasing scrutiny of
institutional performance. These tests explored the research questions concerning the
relationship between specific accreditation statuses, ranging from clear to sanctioned, and
variables drawn from several categories of commonly cited institutional and student
measures. Findings gleaned from the chi-square association tests and the subsequent
logistic regression analysis identified several patterns of association—especially with
student variables such as graduation, retention, and transfer—while also noting that for
many years no significant patterns of relationship were found (see Table 4.13). These
findings offer several potential insights to future researchers, institutional practitioners,
public policy agents, and a curious public, detailed in the concluding chapter.
CHAPTER FIVE: SUMMARY, CONCLUSIONS, AND IMPLICATIONS
Purpose of the Study
This study examined the underexplored subject of community college
accreditation under the WASC-ACCJC. Specifically, it examined the relationship
between accreditation standards that were changed in 2002 and the subsequent increase in
sanctions of California community colleges (CCCCO, 2011; Figure 4.1). This study
observed that sanctions issued by WASC-ACCJC in this period can be contextualized
within a vector of market, academic, and political forces increasingly concerned with
measures of performance and the maintenance of accountability. The rising scrutiny of
college performance has been met with caution by some institutional advocates, who are
concerned that trends like the “increasing focus on public returns on investment [in
education] may be incentivizing colleges and universities to be more discerning about
whom they enroll…[which] does not bode well for college access” (Mullin, 2012, p. 4).
This makes the exploration of WASC-ACCJC accreditation standards and the associated
institutional data of keen importance.
Given this sentiment, and the statement of the problem from chapter one, this
study set out to explore the relationship between accreditation actions and institutional
outcomes variables in order to enrich the literature concerning accreditation, explore the
recent case of WASC-ACCJC accreditation, and provide preliminary information that
could be the basis for future studies, institutional practice, and public policy. Since the
literature outlined how accountability, assessment, and a rising tide of scrutiny of
education have characterized the most recent discussions of accreditation, the research
questions of this study were designed to provide a better understanding of whether
accreditation status is trending toward alignment with common, quantitative,
frequently cited variables (Jones, 2002).
In order to provide additional detail on the specific sample of California
community colleges since 2002, while exploring the role of public data in relationship to
actions by the accreditation commission, this research study gathered a substantial dataset
of commonly cited and publicly salient institutional and student variables. These
variables were then compared to accreditation status using a series of statistical analyses,
namely chi-square tests of association and multinomial logistic regression, noting
several patterns of association.
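To make the second stage of this procedure concrete, a minimal sketch in Python (using
statsmodels) is offered below. All data and variable names are hypothetical placeholders,
and the numeric coding of accreditation status is assumed for illustration; the sketch
shows the general form of a multinomial logistic regression with a pseudo R-squared, not
the study's actual models or software.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: one row per college per review.
    rng = np.random.default_rng(0)
    n = 120
    retention = rng.normal(70, 8, n)    # percent retained (placeholder)
    graduation = rng.normal(30, 6, n)   # percent graduating (placeholder)
    # Status coded 0 = clear, 1 = warning, 2 = probation (assumed scheme).
    status = rng.integers(0, 3, n)

    # Independent variables screened by the chi-square step, plus a constant.
    X = sm.add_constant(np.column_stack([retention, graduation]))

    # Multinomial logistic regression of accreditation status on the IVs.
    result = sm.MNLogit(status, X).fit(disp=0)
    print(result.summary())

    # Cox & Snell pseudo R-squared, the effect-size statistic reported
    # alongside p-values in the findings above.
    cox_snell = 1 - np.exp(2 * (result.llnull - result.llf) / n)
    print(f"Cox & Snell R-squared = {cox_snell:.3f}")

In practice, each candidate independent variable, or combination of variables, would be
fit in turn against the status outcome, mirroring the first-, second-, and third-step
procedure summarized in chapter four.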
Summary of Findings
Findings by Research Question in Relationship to the Literature
The research questions for this study were as follows: In California community
colleges reviewed by WASC-ACCJC since 2002, what is the relationship, if any,
between the specific accreditation actions taken by the commission—the assessment of
“no findings,” “probation/sanction,” etc.—and several common student outcomes
variables cited by the literature—namely, graduation rate, transfer, and retention?
Initial findings for this research question came from the chi-square
association tests, which showed that four categories of student variables were
significantly associated with accreditation status for one or more of the years of
WASC-ACCJC accreditation actions (see Table 4.13). This means that all categories of
initial variables were found to be significant in at least one instance of the chi-square
test. These categories were transfer rate, graduation rate, and part- and full-time
retention rate. Additional examination of these student variables, through several rounds
of multinomial logistic regression (outlined in Figure 4.6), likewise returned significant
values for such student variables. Retention rate, whether part- or full-time, dominated
the first-step regression tests, making retention the student variable most consistently
triangulated as significant by both the chi-square and logistic regression tests.
The second-step regression tests found significant associations between
independent student variables and dependent accreditation status for fifteen of the
twenty-one items identified by the initial chi-square tests. Since the second regression
analysis added variables from the significant student categories (see Table 4.14 and Table
4.15) to the previously identified independent, single student variable, it suggested that
pairings of student variables can also be helpful in identifying associations with
accreditation status in many instances across the dataset. This is in keeping with WASC-
ACCJC standards, which suggest that student outcomes variables may parallel one
another and serve together as identifiers of overall institutional success (WASC, 2002).
The third-step regression returned to the entire set of WASC-ACCJC commission
action records collected following 2002 and sequentially included and removed the four
student category variables to identify significant associations with accreditation status.
This round of logistic regression identified only two additional items of significance that
were not previously identified by the initial logistic regression analyses, namely full-time
retention in June of 2006 (p=0.000 with Cox and Snell = 0.156) and part-time retention
(p=0.003 with Cox and Snell = 0.122), but these individual variables had already been
identified as associated with accreditation status in other years during the earlier
regression tests (see Table 4.17).
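For reference, the Cox & Snell statistic cited with these p-values is conventionally
defined from the likelihoods of the null and fitted models; it is stated here as general
background rather than as a derivation from this study's data:

    R^2_{CS} = 1 - \left( \frac{L(M_0)}{L(M_1)} \right)^{2/n}

where L(M_0) is the likelihood of the intercept-only model, L(M_1) is the likelihood of
the fitted model, and n is the number of cases; values such as 0.156 therefore indicate a
modest improvement of the fitted model over the null.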
Accordingly, these results reemphasize the importance of the first- and second-
step regression tests, which identified significant patterns of association between student
variables and institutional accreditation status. It would not be surprising to find that
student outcome measures are related to institutional accreditation status, both because of
the relationship of these variables to institutional mission and also because of the
influence of an increasingly dominant accountability culture oriented to evidence and
heuristic measures of performance (Strauss & Volkwein, 2004). However, it is also
telling that there was not a systemic, overarching association between these student
variables and accreditation from year to year, which suggests that the fears held by some
institutional agents—that quantitative performance measures would capture the sum of
accreditors' attention—may in fact have been alarmist (Moltz, 2011). While some
patterns of association between student variables and institutional accreditation can be
demonstrated statistically for the sanction-heavy WASC-ACCJC years in question, it
appears that many other factors beyond these specific measures inform accreditation
actions. This insight confirms the utility of an exploratory quantitative study when so
many variables exist and so little is known about the patterns of relationship that may
exist.
The second research question for this study was as follows: In California
community colleges reviewed by WASC-ACCJC since 2002, what is the relationship
between accreditation status and several common institutional variables—namely, size,
budget, and staffing?
Initial findings for the second research question were identified by the chi-square
association tests. Twenty institutional variables in three categories were tested for chi-
square associations across six years of accreditation data, representing over two hundred
unique variables. Eleven institutional variables, representing five percent of tested
associations, were found with p-values less than 0.05. These variables were spread across
a number of categories but demonstrated comparatively weaker significance and counts
than the assessed student variables (see Table 4.13 and Table 4.14). The literature
suggested that if any variables would be associated with accreditation status, student
outcomes measures such as graduation rate would be more likely than institutional
measures to be found significant. This initial chi-square finding affirmed that hypothesis.
The first-step regression analysis for institutional variables began with the eleven
data points identified by the chi-square analysis; only two of these variables were
significant in both the chi-square and regression tests. It appears that institutional
variables are less likely to be related to accreditation status than the student variables
identified for research question one. A second-step regression analysis that included
categories of student variables did find some significant values, but only when
institutional variables were paired with student variables (see Table 4.16). Institutional
budget was also included and removed as an independent variable, and did not return any
significant values. Indeed, this is a notable finding in this exploration of WASC-ACCJC
accreditation, because in a period of increasing scrutiny of accreditation, it was unknown
whether an institution’s budget allocations or staffing levels would be associated with
accreditation actions. In the context of the literature, this initial finding that budget and
staffing levels are not clearly associated with accreditation for the post-2002 period
provides an interesting contribution to the case of WASC-ACCJC accreditation and may
be heartening to critics of the rising co-alignment of accreditation with outcomes
measures because in this instance there was not a significant association found.
To reiterate this finding, it appears that for the data examined on
California community colleges, neither institutional budget nor size nor staffing was
strongly associated with accreditation status. This suggests that the presence or absence
of research staff, the comparative size of institutional budgets, and the overall size of the
institutions—while perhaps significant to the overall accreditation findings of the
commission—were not, in fact, statistically significant individually in the analysis
performed by this research project. Accordingly, a third step of the logistic regression
across all years was not performed for institutional variables because of their lack of
presence in the first two rounds of statistical tests. Nonetheless, future studies may wish
to explore this subset of the data—or additional institutional data—more closely to
examine whether other patterns of association between these institutional variables and
accreditation can be observed in this and other periods of scrutiny.
As was the case with research question one, these findings emphasize the
comparative importance of the first- and second-step regression tests using variables
identified as significant by the chi-square tests. For research question two, concerning the
relationship of institutional variables to accreditation status, some significant results were
found at discrete points in the data, but the pattern of significance was less noteworthy
than that of the student variables identified for research question one (see Table 4.14). As
with the tests performed for research question one, the findings for institutional variables
showed that there was not a systemic, overarching association between institutional
variables and accreditation status from year to year. Contextualized with the literature on
accountability and accreditation, this suggests that accreditation is still a nuanced,
individuated assessment of each institution and not something that can be strongly
predicted using institutional or student data alone. This finding is encouraging in the
context of fears that accreditation status would come to mirror simple quantitative
measures of institutional performance. That does not appear to be the case according to
this project's exploration of the data.
The final research question for this study was as follows: In those California
community colleges that were sanctioned by WASC-ACCJC since 2002, what patterns
emerge that may inform institutional knowledge about the relationship between
accreditation action and institutional measures?
Descriptive statistics for all variables included in this study,
represented both graphically and in tabular format, show a sizeable increase in sanctions
and in the percent of CCCs with sanctioned accreditation statuses. This finding helps to
answer research question three by first identifying the substantial number of institutions
with sanctioned statuses post-2002. Though local and institutional newspapers are the
traditional public sources for the most recent news on WASC-ACCJC accreditation
actions—alongside the biannual report newsletter published online by the
commission—this study assembled the current accreditation status of all California
community colleges in a singular entry, which is a useful reference. The tests of
association and the logistic regression analyses performed on the dataset from this
research project did note a number of patterns between institutional and student variables
and accreditation status (see Tables 4.13–4.17).
Most significantly, it found that of the extensive set of variables considered,
student outcomes measures such as graduation, retention, and transfer were most likely to
be found significantly associated with accreditation status in a given year. These
associations may guide future examinations of accreditation and serve as a starting point
for exploring new research questions, such as potential cause and effect relationships
between associated variables, which is also discussed in the concluding section of this
research study as an area warranting additional exploration. The identification of salient
associations between student outcomes measures and accreditation status is in line with
the literature review, which suggested that outcome variables might be significant in
review by accreditation commissions (Zis, Boeke, & Ewell, 2010). However, this
exploratory study also noted that there was an absence of strong patterns of association
year-to-year between community college variables and accreditation status. In answering
research question three, these findings suggest that there is not a consistent pattern of
ongoing associations for sanctioned institutions and related variables according to the
collected data. However, the existence of some significant associations and patterns in
given years suggests that additional, future examination of this and related data is
warranted.
While discrete variables were found significant in particular instances, the lack of
ongoing patterns of association suggests that these quantitative variables are only useful
in small measure in predicting an institution's likelihood of being sanctioned. Though a
strong pattern of relationship was not found for sanctioned institutions, this exploratory
quantitative study nonetheless identified some variables with patterns of association to
accreditation status. The study of this pivotal decade in WASC-ACCJC helps to address
the overall lack of research on accreditation and contextualizes the commission's actions
within a rising accountability movement.
Limitations
Several limitations exist for this study. First, the sample size of WASC-ACCJC
community colleges in California urges caution against overextending the significance
of the variables found. External validity may be limited because the sample is composed
of WASC-ACCJC institutions alone, but the exploratory nature of this study, designed to
better understand trends in this particular sample, still provides knowledge about the
intersection of accountability, assessment, and accreditation in the largest sector of higher
education in the United States. In spite of its limitations, this study may yet signal
trends for regional accreditation at large. Future studies may expand this sample to
include community colleges from multiple regional accreditors, or perhaps even
nationally. However, for the purposes of the research questions proposed by this study, it
was appropriate to limit the sample to California community colleges in order to assess
the particular case of increasing sanctions for this subsample population.
Secondly, the methodology of associative statistics and logistic regression is
helpful for identifying significance of relationship by comparing observed values against
those expected under the null hypothesis, but a caveat must be given concerning the
interpretation of those items found to be significantly related. Logistic regression relies
on multiple formulas and models to compare independent variables against the dependent
variable, and these models can be fallible (Hosmer & Lemeshow, 2000). Accordingly, the
significant findings have been offered with the caveat that future studies should continue
to plumb the data. Even so, the relationships identified by these statistical analyses can
still demonstrate patterns and areas for future exploration, and their agreement with
expectations gleaned from the literature lends support to the findings. Future research
may also wish to examine student outcome variables exclusively—expanding the list of
variables included in this study—for an even broader set of colleges. With these
limitations in mind, this study still offers greater insight into the interesting case of
increasingly sanctioned California community colleges.
Recommendations
The findings from this study suggest a number of avenues for future research.
First, the most significant pattern of relationships identified across the sizeable sample
of student and institutional variables collected fell within the category of student
outcomes variables such as graduation, retention, and transfer. Future studies may well
zero in on these categories, exploring the strengths and weaknesses of these values and
comparing data gathered under different reporting methodologies—for example,
graduation rate for community colleges may be reported according to different
percentages of normal time to completion (ex., 200%) and may contain
inclusion/exclusion criteria that change which student enrollment patterns are counted
(an illustrative sketch follows this paragraph). With this in mind, future research may
consider collecting various forms of these outcomes variables and comparing those to
accreditation status. Moreover, collection of institutional self-study data and
accreditation commission action letters, where available, may be coded for additional
insights into the reasons given by the accreditors for a particular institution's
accreditation status. In this way, a future study may be able to triangulate those
institutions where student success patterns were cited as a flaw and then examine the
patterns of student data for that particular subset of colleges. As with the limitations of
this study, however, such an examination would likely need to include a larger sample
size to ensure that any patterns of relationship could be considered significant.
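As a brief illustration of the reporting-window point raised above, the following sketch
computes a cohort graduation rate under 150% and 200% of normal time. The cohort
values and the four-semester normal time are hypothetical placeholders, not data from
this study:

    import pandas as pd

    # Hypothetical first-time, full-time cohort: semesters to completion
    # (NaN means the student had not completed by the end of tracking).
    cohort = pd.Series([4, 5, 6, 7, 8, 9, None, None, None, None])

    NORMAL_TIME = 4  # semesters for a two-year degree (assumed)

    def grad_rate(semesters: pd.Series, pct_of_normal: float) -> float:
        """Share of the cohort completing within pct_of_normal x normal time."""
        window = NORMAL_TIME * pct_of_normal
        # Non-completers (NaN) compare False and stay in the denominator.
        return (semesters <= window).mean()

    print(f"150% rate: {grad_rate(cohort, 1.5):.0%}")  # completes in <= 6 terms
    print(f"200% rate: {grad_rate(cohort, 2.0):.0%}")  # completes in <= 8 terms

The same hypothetical cohort yields a 30% rate at the 150% window and a 50% rate at
the 200% window, underscoring why comparisons of graduation rates must hold the
reporting definition constant.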
The findings presented by this research study are offered in terms of goodness-of-
fit and association between the variables, which is a useful strategy to begin exploring
and understanding the potential associations in a large dataset concerning a complicated
topic. For the topic of WASC-ACCJC accreditation in the period of increasing sanctions
since 2002, this study has offered new insights while simultaneously raising many new
questions. For instance, future studies may wish to build upon the initial findings of this
research by asking research questions that seek answers to potential cause and effect
relationships between institutional, student, and other variables and accreditation status.
Likewise, new questions may ask whether preexisting institutional factors
that lead to strong or weak results in any given variable—such as graduation rate—may
be the very factors violating accreditation standards and leading to sanction. One example
might be to examine a specific, narrow period—such as a twenty-four to thirty-six month
window during which many institutions were sanctioned—and then specifically cull
detailed data for those sanctioned institutions to be examined alongside of accreditation
commission reports concerning those sanctions. Accreditation reports with specific
narrative concerning commission actions, and institutional preemption and response to
these reports—such as self-studies, board minutes, and accessible institutional research
data—may provide new detail concerning the reason for sanction. While these sources
may be difficult to obtain or unavailable for all institutions in any given window of study,
they could offer rich new detail on overarching reasons for sanction, such as standard
deficiencies in leadership, fiscal, or instructional issues, which could then provide new
information to pair with any quantitative suggestion of association between variables and
accreditation status. Such a study would require full access to the detailed commission
reports that are not always publicly accessible, though local newspapers, institutional
websites, and institutional research offices may be sources for this material. This one
suggested avenue of exploration is by no means the only possibility for future studies, but
it does propose another angle of exploration of WASC-ACCJC accreditation sanctions.
Another recommendation for future study derived from this research would be to
embrace additional methodology, such as a qualitative survey or interview with
institutional and/or accreditation agents. Inclusion of text-rich qualitative data may help
explain the patterns of community college sanctions in a manner that is not possible with
quantitative study alone. Finally, given the context of the literature review on regional
accreditation commissions and their in-the-middle relationship between competing
accountability forces, future studies may wish to examine all of the regional accreditation
commissions together in order to contextualize the case of WASC-ACCJC within a
national picture of trends in accreditation. Ultimately, the findings from this study offer
some initial suggestions for further exploration, while underlining the need for additional
research of this underexplored topic.
Implications of the Findings
The findings from this study concerning the relationship of several student
outcomes variables to California community college accreditation status, and the
comparative lack of relationship between identified institutional variables, has a number
of implications for further research, theory, and practice.
First, there are substantial, initial implications for college practitioners, such as
faculty, staff, and administrators in a community college setting. Since institutional
agents have been among the most vocal critics of the rising alignment of accreditation
findings with quantitative outcomes measures, the findings from this study offer them
both encouragement and warning (Driscoll & De Noriega, 2006). The
suggestion that there is a relationship between graduation, retention, and transfer and
accreditation status is not entirely surprising, given the expectation that accreditors would
consider institutional performance, as it relates to college mission, in their assessment
of a college's accreditation standing. That said, it should be encouraging that no strong
association was found between these outcomes variables and accreditation over time for
all colleges. This implies that while increasing WASC-ACCJC sanctions may
have some relationship to student outcomes variables in a given year, there is not a
systemic pattern of parallel association between these measures and accreditation.
In the contemporary period when state budgets, increasing oversight and
accountability to multiple stakeholders, and enrollment challenges to student success
loom large for many colleges, these mixed findings can offer some consolation that
accreditation has not been hijacked by quantitative measures of success alone. On the
other hand, institutional agents such as faculty and administrators would do well to
understand and follow the patterns of their own institutional student data since the
increasing use of simple, heuristic measures of institutional performance—such as the
common student outcomes variables examined by this study—suggests that such data
will continue to be important.
Secondly, this study offers intriguing, initial detail concerning the lack of
association between institutional staffing and budgetary variables and accreditation (see
Table 4.16). The absence of clear, ongoing associations for the various institutional
variables on budget and staffing levels suggests that comparative institutional wealth or
staffing cannot serve as a shortcut for identifying successful institutions in terms of
accreditation status. This finding may be encouraging to anyone worried that institutions
with greater resources in these categories alone would fare better in an accreditation visit.
As expected with an initial exploration of a substantial dataset on accreditation, this
finding concerning a lack of association between institutional variables and accreditation
provides new insights while simultaneously raising new questions. Future studies on
accreditation may wish to plumb these variables more deeply by expanding the dataset,
period, or even including non-ACCJC institutions under alternate regional accreditors. As
reiterated, there are many avenues for additional exploration that subsequent scholars
may consider.
Additionally, there are implications for public policy and regional accreditation at
large. As the literature explained, colleges are increasingly located within a vector of
accountability that is betwixt and between national, state, and local forces with varying
degrees of control or influence upon college issues such as budget, enrollment funding,
institutional mission, and future goals. This study is an attempt to understand community
college accreditation within an environment of rising accountability. In California, recent
recommendations such as those of the Student Success Task Force are present-day
examples of the controversial nature of incentivizing performance at community colleges
while maintaining access and institutional autonomy (CCCCO, 2011; CC League, 2011).
study, therefore, can serve as preliminary context for policy makers and agents
responsible for community college agendas by giving them an initial account of current
relationships between institutional data and accreditation. Much more study is needed,
but this initial foray can yet be of use and may inform the use and study of college data
(Baker, 2002; Carey & Schneider, 2010).
Finally, it should be noted that accreditation of colleges in the instance of
California community colleges under WASC-ACCJC is still a peer-reviewed process, led
by institutional leaders from across the community college ecosystem. Recent
controversies concerning how trustees are appointed to serve on accreditation site teams,
and concerning the role of the commission's leadership, underscore the
fact that there is a human component to any accreditation decision (CCCCO, 2011;
WASC-ACCJC, 2011). Yet, for all the recent controversy about issues within this
accreditation commission, it is still encouraging to note that accreditation is not merely a
checklist of performance measures, but a conversation among peers about institutional
performance—at least in the ideal formulation of accreditation. In light of worries about
encroachment on institutional autonomy and increasing scrutiny of quantitative
benchmarks, accreditation for WASC-ACCJC colleges still maintains a personal human
element in spite of rising sanctions. The data on patterns of association between student
outcomes measures and accreditation is an initial offering in what can potentially be a
rich source of future studies of this important sector of higher education.
Conclusion
California community colleges remain a dynamic and vibrant sector of higher
education in the United States, even while institutional scrutiny has increased. While this
initial exploration of community college data suggests some significant associations
between outcomes such as graduation rate and accreditation status, a caution must be
offered against pursuing increased accountability without regard for unintended
consequences. The American Association of Community Colleges (AACC) echoed this
fear with a report warning that increased public attention to measures such as
graduation rates risks changing the nature of college institutions themselves, such that the
most vulnerable students will suffer: “The easiest way to raise graduation rates is to turn
away poorly prepared students…but that would be a mistake” (Mullin, 2012, p. 4). This
is well said, and it emphasizes the danger of overreliance upon singular measures of
institutional performance. Increased scrutiny of college performance, increased
accountability, and an ever more powerful interplay of accountability forces do not
appear to be diminishing anytime soon. This means that higher education and
those who accredit its institutions must adapt to work within a culture of
evidence and accountability while maintaining that the first-order imperative for
student success is student access.
In her brief to the Institute for Higher Education Leadership and Policy, Nancy
Shulock (2011) noted that community colleges are found at a difficult intersection of
accountability and assessment pressures, which requires balance: “The idea is that
colleges can find the means that work best for their students and states, and state systems
can avoid micro-managing colleges,” which could equally have been said about the
oversight of regional accreditation commissions (p. 4). Accordingly, the exploration of
the community college data offered by this research project suggests that further
examination of the interplay between accountability pressures, outcomes variables, and
accreditation status is warranted in order to ensure that college assessment does not
become an exercise in compliance but rather a genuine institutional reflex undertaken for
student good.
The examination of student and institutional data for WASC-ACCJC California
community colleges demonstrates the depth and complexity of understanding college
accreditation. Community colleges are vested with multipronged missions, complicating
the institutional practice of these colleges and challenging regional accreditors to weigh
appropriately the performance, planning, and success of these institutions. This study
provided an initial exploration of some common datasets for these institutions, as
presented through federal, state, and institutional data repositories. The noted increase in
WASC-ACCJC sanctions since 2002 raised questions about this regional commission and
the potential relationship between public data and commission actions.
As noted by the statistical tests run by this study, there were some salient
associations between these data points and accreditation status and several interesting
observations (see Tables 4.18–4.20). It is intriguing that institutional data, such as budget,
full-time equivalent students, and staffing levels, were not as influential as might have
been expected given the literature on rising public scrutiny of higher education and
the rising costs institutions invest in accreditation. One might have expected
patterns to emerge in accreditation status according to the size of an
institution, its staffing levels, or its comparative wealth or poverty—this, however,
was not demonstrated by this study.
Also of interest was the observation that graduation, retention, and transfer, three
of the most commonly cited variables in public and private discussion of
college performance, were periodically but not systematically associated with
accreditation status. The lack of ongoing, significant associations is heartening because it
demonstrates that these measures cannot stand in as shortcuts to evaluating institutions.
Future policy discussions, such as those concerning institutional performance funding,
would do well to consider this insight. This exploration of WASC-ACCJC suggests that
locally differentiated conditions exist at individual institutions that preclude the use of
singular categories of variables, alone, to assess institutional performance. In this vein,
future exploration is
warranted to seek additional clarity concerning the complicated nature of applying
regional accreditation standards to a diverse set of institutions.
GLOSSARY
Definitions, Terms, and Acronyms
Accountability Reporting for Community Colleges (ARCC): Peer-based
benchmark of California community college performance according to several delineated
categories. The ARCC reports are issued annually, and peer-group assignment is
determined by the California Community College Chancellor’s Office (CCCCO) and may
vary according to year and according to the variables measured.
Accreditation: A review-driven, voluntary process by which institutional peers in
higher education examine the quality of specific colleges and universities (most
commonly conducted by regional accreditors) to assess compliance with a core set of
accreditation standards. This review is conducted cyclically, typically on a six- or ten-year
basis. In the United States, accreditation has been conducted on the basis of self-
regulation, by which institutions agree to self-govern the standards and process of
accreditation review.
Accreditation bodies/agencies/organizations/commissions: Refers to the distinct
accreditation bodies that establish the standards and processes of review by which
institutions or programs (depending upon whether the accreditation body is regional or
programmatic/professional) are vetted for compliance with minimum quality standards
worthy of the imprimatur of the accreditation body. The terms
body/agency/organization are used interchangeably throughout this study.
Accreditors: Used periodically throughout this study, this term refers generally
to the collective commissions at the heart of each accreditation agency.
Benefits of accreditation: The perceived or real advantages acquired by virtue of
obtaining accreditation, such as prestige, access to financial aid, a mark of quality, etc.
California Community College Chancellor’s Office (CCCCO): Offers
overarching governance for community colleges in California.
California Community College Chancellor’s Office data mart: Compilation of
institutional data for California community colleges, including common measures of
institutional and student performance.
California Community Colleges (CCC): 112 colleges that comprise the single
largest system of higher education in the United States, educating close to three million
students per year.
“Clear” accreditation: In the case study of WASC-ACCJC, this term refers to
accreditation that is without public sanction, warning, or finding. Detail is provided in
Table 3.2 and in chapter one.
“Conditional” or “sanctioned” accreditation: In the case study of WASC-
ACCJC, this term refers to institutional accreditation that is in jeopardy of further
scrutiny and action by the commission, such as the public notice of warning or probation.
Detail is provided in Table 3.2 and in chapter one.
Costs of accreditation: The perceived or real disadvantages associated with the
process of accreditation assessment, review, institutional self-study, response to findings,
etc. These costs may include time, effort, financial costs, opportunity costs, and costs to
prestige, reputation, and organizational culture.
Council for Higher Education Accreditation (CHEA): National organization
composed of approximately three thousand institutions from over fifty accreditation
bodies (both regional and programmatic). As a national association—albeit without any
binding authority over regional accreditation bodies—CHEA offers a national voice for
the self-regulatory component of accreditation review.
Council of Regional Accrediting Commissions (C-RAC): An organization
composed of the six regional accreditation organizations.
Integrated Postsecondary Education Data System (IPEDS): National collection of
data on colleges and universities, administered by the National Center for Education
Statistics (NCES).
Middle States Commission on Higher Education (MSCHE): A regional
accreditation body that accredits institutions in the states and regions of Delaware, the
District of Columbia, Maryland, New Jersey, New York, Pennsylvania, Puerto Rico, and
the US Virgin Islands.
National Center for Education Statistics (NCES): Federal organization conducting
the collection of institutional data concerning higher education, most notably through the
Integrated Postsecondary Education Data System (IPEDS) data mart.
New England Association of Schools and Colleges Commission on Institutions of
Higher Education (NEASC-CIHE): This accreditation organization accredits institutions
in Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont.
NEASC also has accreditation authority over several international institutions.
North Central Association of Colleges and Schools, The Higher Learning
Commission (NCA-HLC): Most commonly known as “HLC,” this regional accreditation
body reviews institutions in Arkansas, Arizona, Colorado, Iowa, Illinois, Indiana, Kansas,
Michigan, Minnesota, Missouri, North Dakota, Nebraska, Ohio, Oklahoma, New Mexico,
South Dakota, Wisconsin, West Virginia, and Wyoming.
Northwest Commission on Colleges and Universities (NWCCU): This regional
accreditation organization accredits institutions from Alaska, Idaho, Montana, Nevada,
Oregon, Utah, and Washington.
Programmatic accreditation: Unlike regional accreditation, which governs
standards of overall institutional quality, programmatic accreditation recognizes
individual programs of study, including professional programs.
Regional accreditation: Accreditation offered by one of six regional accreditation
organizations (MSCHE, NEASC, NCA, NWCCU, SACS, WASC). Unlike programmatic
accreditation, regional accreditation does not vouch for a particular program or degree,
but rather for institution-wide compliance with regional accreditation standards set by the
geographically-defined accreditation body. For typical public and private undergraduate
institutions in the United States, regional accreditation is the de facto mark of quality and
a necessary step to obtain financial aid.
Sanctions: Used generically throughout this study to refer to public actions by
accreditation commissions that announce warnings, findings, or other areas of non-
compliance and/or concern. In the case study of WASC-ACCJC community colleges,
additional detail is provided in Table 3.2 and in chapter one.
Self-study: The self-study is a process of institutional review in preparation for a
visit by accreditors, by which individual institutions examine their compliance with
accreditation standards. This self-study culminates in a report prepared by the institution
for submission to the accreditation body.
Site visit: Conducted over a period of several days, the site visit is an essential
component of the accreditation review process, consisting of a visit by an accreditation
team to an institution for the purposes of assessing compliance with stated accreditation
standards.
Southern Association of Colleges and Schools Commission on Colleges (SACS):
This regional accreditation organization accredits institutions from Alabama, Florida,
Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee,
Texas, Virginia, and Latin America. Like NEASC, it also has authority over several
international institutions.
Specialized accreditation: Used interchangeably throughout this study with the
term programmatic accreditation, specialized accreditation refers to the recognition of
threshold quality by individual programs, in contrast with institution-wide accreditation.
Standards: Regional accreditation bodies delineate a series of standards in their
organizational articles and/or handbook. These standards provide member institutions
with a series of guidelines to be followed for clear accreditation.
United States Department of Education (USDE): Federal agency that does not
have direct authority over the private and voluntary regional accreditation bodies; the
USDE is nonetheless concerned with the systemic quality of education in US colleges and
universities and uses accreditation as a standard for the disbursement of federal funds.
Voluntary accreditation: In the context of accreditation, this refers to the optional
membership in accreditation review by institutions of higher education. While technically
voluntary, the literature on assessment and accountability, as well as governmental
embrace of accreditation as a heuristic for quality (and a requirement for Federal Title IV
financial aid), suggests that the “voluntary” nature of accreditation is not entirely
optional.
Western Association of Schools and Colleges (WASC): Regional accreditation
body responsible for institutional accreditation in California, Hawaii, Guam, and the
greater Pacific region. The overarching WASC organization consists of several sub-
groups that separately conduct the accreditation for specific sectors of higher education.
For example, area community colleges are accredited by the Accrediting Commission for
Community and Junior Colleges (ACCJC).
Western Association of Schools and Colleges’ Accrediting Commission for
Community and Junior Colleges (WASC-ACCJC): Typically referred to as ACCJC, this
sub-group of WASC accredits two-year, associate’s degree-granting institutions in
California. It also has authority over two-year institutions in the Pacific areas of Hawaii,
Guam, American Samoa, the Commonwealth of the Northern Mariana Islands, the
Republic of Palau, the Federated States of Micronesia, and the Republic of the Marshall
Islands.
Western Association of Schools and Colleges’ Accrediting Commission for Senior
Colleges and Universities (WASC-ACSCU): The body of WASC that accredits bachelor’s
degree-granting institutions in California, Hawaii, Guam, and the Pacific basin region.
Sources: CHEA, 2011; USDE, 2011.
REFERENCES
Adelman, C., & Silver, H. (1990). Accreditation: The American experience. Council for
National Academic Awards. London, England: Nuffield Foundation.
American Association of Community Colleges (AACC). (2009, October 19). Community
colleges and baccalaureate attainment (AACC Statement). Washington, DC: Author.
Association of American Colleges and Universities (AACU) (2011). Retrieved from
http://www.aacu.org.
American Association of Community Colleges (AACC). (2011). American Association
of Community Colleges: 2011 fact sheet. Washington, DC: Author. Available from
http://www.aacc.nche.edu/Aboutcc/Documents/FactSheet2011.pdf
American Council on Education (ACE) (2011). Retrieved from http://www.acenet.edu.
Atwell, R. H. (1994). Putting our house in order. Academe, 80(4), 9-12.
Bailey, T. R. & Alfonso, M. (2005). Paths to persistence: An analysis of research on
program effectiveness at community colleges. Lumina Foundation for Education,
6(1), 1-38.
Bailey, T. R. & Morest, V. S. (2004). The organizational efficiency of multiple missions
for community colleges. New York, NY: Alfred P. Sloan Foundation & Teachers
College, Columbia University.
Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation
principles and practices. The Journal of Academic Librarianship, 28(1-2), 3-7.
Banta, T. W., Black, K. E., Kahn, S., & Jackson, J. E. (2004). A perspective on good
practice in community college assessment. New Directions for Community Colleges,
2004(126), 5-16.
Banta, T. W., Pike, G. R., & Hansen, M. J. (2009). The use of engagement data in
accreditation, planning, and assessment. New Directions for Institutional Research,
2009(141), 21-34.
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the
individual college or university: Five trends and their implications. New Directions
for Higher Education, 2009(145), 47-58.
Benezet, L. T. (1981). A question of accreditation: Time for a general review. Change,
13(3), 6-8.
Benjamin, E. (1994). From accreditation to regulation: The decline of academic
autonomy in higher education. Academe, 80(4), 34-36.
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality
review. New Directions for Community Colleges, 2004(126), 65-72.
Bensimon, E. M. (1995). Total quality management in the academy: A rebellious reading.
Harvard Educational Review, 65(4), 593-612.
Bensimon, E. M., Polkinghorne, D. E., Bauman, G. L., & Vallejo, E. (2004). Doing
research that makes a difference. Journal of Higher Education, 75(1), 104–126.
Bernhardt, V. L. (1984). Evaluation processes of regional and national education
accrediting agencies: Implications for redesigning an evaluation process in
California.
Bers, T. H. (2008). The role of institutional assessment in assessing student learning
outcomes. New Directions for Higher Education, 2008(141), 31-39.
Biswas, R. R. (2006). A supporting role: How accreditors can help promote the success
of community college students. An Achieving the Dream Policy Brief.
Blauch, L. E. (1959). Accreditation in higher education. Washington, D. C.: US
Government Printing Office.
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA).
Phoenix, AZ: American Council on Education and the Oryx Press.
Bogue, E. G. (1998). Quality assurance in higher education: The evolution of systems
and design ideals. New Directions for Institutional Research, 1998(99), 7-18.
Bollag, B. (March 27, 2000). Education Department and accreditors get to the heart of
their differences at rule-making meeting. The Chronicle. Retrieved from
http://chronicle.com/article/education-department-and/121861.
Bragg, D. D. (2001). Community college access, mission, and outcomes: Considering
intriguing intersections and challenges. Peabody Journal of Education, 93-116.
Bresciani, M.J., Zelna, C.L., & Anderson, J.A. (2004). Techniques for assessing student
learning and development: A handbook for practitioners. Washington, DC: NASPA.
Brittingham, B. (2008). An uneasy partnership: Accreditation and the federal
government. Change: The Magazine of Higher Learning, 40(5), 32-39.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we
are? New Directions for Higher Education, 2009(145), 7-27.
Brossman, S. & Roberts, M. (1973). The California community colleges. Palo Alto, CA:
First Educational Publications.
Burke, J. C. & Minassians, H. P. (2002). The new accountability: From regulation to
results. New Directions for Institutional Research, 2002(116), 5-19.
Burke, J. C. & Minassians, H. P. (2004). Implications of state performance indicators for
community college assessment. New Directions for Community Colleges, 2004(126),
53-64.
Burke, J. C. (1998). Performance funding indicators: Concerns, values, and models for
state colleges and universities. New Directions for Institutional Research, 1998(97),
49-60.
Burke, J. C. (2005). The many faces of accountability. Achieving Accountability in
Higher Education: Balancing Public, Academic, and Market Demands, 1-24.
Burke, J. C., Minassians, H., & Nelson, A. (2002). Performance reporting: The
preferred “no cost” accountability program: The sixth annual report. Nelson A.
Rockefeller Institute of Government: State University of New York.
Cabrera, A. F. (1994). Logistic regression analysis in higher education: An applied
perspective. Higher Education: Handbook of Theory and Research, 10, 225-256.
California Community Colleges Chancellor’s Office (CCCCO). (2011). Retrieved from
http://www.cccco.edu.
California Master Plan for Higher Education (CA Master Plan) (1960). The University of
California, Office of the President. Retrieved from
http://www.ucop.edu/acadinit/mastplan/mp.htm.
Carey, K. & Schneider, M. (Eds.) (2010). Accountability in American higher education.
Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A
cross-state analysis. Educational Evaluation and Policy Analysis, 24(4), 305.
Cohen, A. M. & Brawer, F. B. (2008). The American community college (5th ed.). San
Francisco, CA: Jossey-Bass.
Commission on the Future of Higher Education (Spellings Commission Report) (2006).
US Department of Education. Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports.html.
Community College League of California (CC League) (2011). Retrieved from
http://www.ccleague.org.
Council for Higher Education Accreditation (CHEA) (2011). Directory of CHEA
recognized organizations. Washington, D.C.: Council for Higher Education
Accreditation.
Council for Higher Education Accreditation (CHEA) (1998). Recognition of accrediting
organizations. Policy and procedures. Washington, D. C.: Council for Higher
Education Accreditation.
Council for Higher Education Accreditation (CHEA) (2002). The fundamentals of
accreditation. What do you need to know? Washington, D. C.: Council for Higher
Education Accreditation.
Council for Higher Education Accreditation (CHEA) (2005). Results of a survey of
accrediting organizations on practices for providing information to the public.
Washington, D.C.: Council for Higher Education Accreditation.
Council for Higher Education Accreditation (CHEA) (2007). Reauthorization of the
higher education act. Washington, D. C.: Council for Higher Education Accreditation.
Retrieved from http://www.chea.org/pdf/2007_CHEA_reauthorization_agenda.pdf.
Council for Higher Education Accreditation (CHEA) (2011). Retrieved from
http://www.chea.org.
Crow, S. (2009). Musings on the future of accreditation. New Directions for Higher
Education, 145, 87-97.
Davenport, C. A. (2001). How frequently should accreditation standards change? New
Directions for Higher Education, 2001(113), 67-82.
Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. M. (1996). Accreditation &
academic quality assurance: Can we get there from here? Change, 28(5), 16-24.
Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea.
Defending the Community College Equity Agenda, 51–86.
Dougherty, K. J., & Townsend, B. K. (2006). Community college missions: A theoretical
and historical perspective. New Directions for Community Colleges, 2006(136), 5-
13.
Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems
for community colleges: Lessons for the voluntary framework of accountability for
community colleges. Community College Research Center. Columbia University,
NYC: Teachers College.
Dowd, A. C. (2007). Community colleges as gateways and gatekeepers: Moving beyond
the access "saga" toward outcome equity. Harvard Educational Review, 77(4),
407-419.
Dowd, A. C., & Lumina Foundation for Education. (2005). Data don't drive: Building a
practitioner-driven culture of inquiry to assess community college performance.
Lumina Foundation for Education.
Dowd, A. C., & Tong, V. P. (2007). Accountability, assessment, and the scholarship of
“best practice.” Higher Education: Handbook of Theory and Research, 57-119.
Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment
processes that promote institutional improvement and faculty engagement. Sterling,
VA: Stylus Publishing.
Dwyer, C. A., Millett, C. M., & Payne, D. G. (2006). A culture of evidence:
Postsecondary assessment and learning outcomes. Princeton, N.J.: Educational
Testing Service.
Eaton, J. S. (2001). Regional accreditation reform: Who is served? Change: The
Magazine of Higher Learning, 33(2), 38-45.
Eaton, J. S. (2003). Is accreditation accountable?: The continuing conversation between
accreditation and the federal government. Council for Higher Education
Accreditation (CHEA): Washington, D. C. Retrieved from
http://www.ctc.ca.gov/educator-prep/review/WG-2004-8-17/Purpose-8.pdf.
Eaton, J. S. (2007). Institutions, accreditors, and the federal government: Redefining their
“appropriate relationship.” Change: The Magazine of Higher Learning, 39(5), 16-23.
Eaton, J. S. (2008). Accreditation and recognition in the United States. Council for
Higher Education Accreditation. Retrieved from http://www.chea.org.
Eaton, J. S. (2008). Attending to student learning. Change: The Magazine of Higher
Learning, 40(4), 22-29.
Eaton, J. S. (2008). The future of accreditation? Inside Higher Education. Retrieved from
http://www.insidehighered.com/views/2008/03/24/eaton.
Eaton, J. S. (2009). Accountability: An ‘old’ issue in a new era. Inside Accreditation,
5(4).
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher
Education, 2009(145), 79-86.
Eaton, J. S. (2009). An overview of US accreditation. Council for Higher Education
Accreditation. Retrieved from
http://www.chea.org/pdf/2009.06_Overview_of_US_Accreditation.pdf.
Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe,
96(5), 21-24.
Edler, F. H. W. (2004). Campus accreditation: Here comes the corporate model. Thought
& Action, 19(2), 91-104.
El-Khawas, E. H. (1998). Accreditation's role in quality assurance in the United States.
Higher Education Management, 10, 43-56.
El-Khawas, E. H., & International Institute for Educational Planning. (2001).
Accreditation in the USA: Origins, developments and future prospects. International
Institute for Educational Planning.
Ewell, P. T. (1990). Assessment and the new accountability: A challenge for higher
education’s leadership. Denver, CO: Education Commission of the States.
Ewell, P. T. (1994). A matter of integrity: Accountability and the future of self-
regulation. Change, 26(6), 24-29.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of
departure. Washington, D.C.: The Council for Higher Education Accreditation
(CHEA).
Ewell, P. T. (2009). U.S. accreditation and the future of quality assurance: A tenth
anniversary report from the Council for Higher Education Accreditation.
Washington, D.C.: The Council for Higher Education Accreditation (CHEA).
Freedman, D. (2005). Statistical models: Theory and practice. Cambridge, England:
Cambridge University Press.
Geiger, L. G. (1970). Voluntary accreditation: A history of the North Central
Association, 1945-1970. North Central Association of Colleges and Secondary Schools.
Menasha, WI: G. Banta Co.
Gorard, S. (2003). What is multi-level modeling for? British Journal of Educational
Studies, 51(1), 46-63.
Graham, P. A., Lyman, R., & Trow, M. (1995). Accountability of colleges and
universities. Washington, DC: National Policy Board on Higher Education
Institutional Accreditation.
Gratch-Lindauer, B. (2002). Comparing the regional accreditation standards: Outcomes
assessment and other trends. The Journal of Academic Librarianship, 28(1-2), 14-25.
Greater Expectations Report (2002). American Association of Colleges and Universities.
Retrieved from http://www.greaterexpectations.org.
Greenwood, P. E. & Nikulin, M. S. (1996). A guide to chi-squared testing. New York,
NY: J. Wiley.
Grimm, L. G., & Yarnold, P. R. (2000). Reading and understanding more multivariate
statistics. Washington, D.C.: American Psychological Association.
Harcleroad, F. (1980). Accreditation: History, process and problems. Washington,
D.C.: American Association for Higher Education.
Haviland, D. (2009). Leading assessment: From faculty reluctance to faculty engagement.
Academic Leadership, 7(1).
Head, R. B., & Johnson, M. S. (2011). Accreditation and its influence on institutional
effectiveness. New Directions for Community Colleges, 2011(153), 37-52.
Hilbe, J. M. (2009). Logistic regression models. CRC Press.
Hirsch, D. (2000). Practitioners as researchers: Bridging theory and practice. New
Directions for Higher Education, 2000(110), 99-106.
Hoffman, A. J., & Wallach, J. (2008). The demise and resurrection of Compton
community college: How loss of accreditation can lead to a new beginning.
Community College Journal of Research and Practice, 32(8), 607-613.
Hosmer, D., & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York,
NY: J. Wiley & Sons.
Huffman, J., & Harris, J. (1981). Implications of the “input-outcome” research for the
evaluation and accreditation of educational programs. North Central Association
Quarterly, 56(1), 27-32.
Hunt, G. T. (1990). The assessment movement: A challenge and an opportunity. ACA
Bulletin, 72, 5-12.
Hutchings, P. (2010). Opening doors to faculty involvement in assessment. National
Institute for Learning Outcomes Assessment (NILOA). NILOA Occasional Paper.
Ikenberry, S. O. (2009). “Where do we take accreditation?” Proceedings from 2009
CHEA Annual Meeting. Washington, D. C. Retrieved from
http://www.chea.org/pdf/2009_AC_Where_Do_We_Take_Accreditation_Ikenberry.pdf
Integrated Postsecondary Education Data System (IPEDS) (2011). Retrieved from
http://nces.ed.gov/ipeds/datacenter.
Jones, D. P. (2002). Different perspectives on information about educational quality:
Implications for the role of accreditation. Council for Higher Education
Accreditation (CHEA). Washington, D. C.
Kershenstein, K. (2002). Some reflections on accreditors’ practices in assessing success
with respect to student achievement. Retrieved from http://www.aspa-usa.org.
King, R. P. (2007). Governance and accountability in the higher education regulatory
state. Higher Education, 53(4), 411-430.
Kuh, G. D. (2010). Risky business: Promises and pitfalls of institutional transparency.
Change: The Magazine of Higher Learning, 39(5), 30-35.
Kuh, G. D., & Ikenberry, S. (2009). More than you think, less than we need: Learning
outcomes assessment in American higher education. National Institute for Learning
Outcomes Assessment (NILOA).
Lay, S. (2011). Community college fast facts. The Community College League of
California. Retrieved from http://www.ccleague.org.
Lederman, D. (2009). Scrutiny for an accreditor. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/layout/set/popup/layout/set/popup/news/2009/12/18/hlc.
Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise?
American Council of Trustees and Alumni.
Leveille, D. E. (2005). An emerging view on accountability in American higher
education. Center for Studies in Higher Education, Research and Occasional Paper
Series. Berkeley: UC Berkeley.
Lubinescu, E. S., Ratcliff, J. L., & Gaffney, M. A. (2001). Two continuums collide:
Accreditation and assessment. New Directions for Higher Education, 2001(113), 5-
21.
Lucas, C. (1994). American higher education: A history. New York: St. Martin’s Press.
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the
institution. Sterling, VA: Stylus.
Malandra, G. H. (2008). Accountability and learning assessment in the future of higher
education. On the Horizon, 16(2), 57-71.
Marchese, T. (1995). Editorial: Accreditation: The next phase. Change, 27(6).
McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the
origins and spread of state performance-accountability policies for higher education.
Educational Evaluation and Policy Analysis, 28(1), 1.
Moltz, D. (2010). Angst for an accreditor. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/news/2010/06/07/california.
Moltz, D. (2010). Redefining community college success. Inside Higher Ed. Retrieved
from http://www.insidehighered.com/news/2011/06/06/u_s_panel_drafts_and_debates_measures_to_gauge_community_college_success.
Morest, V. S. (2009). Accountability, accreditation, and continuous improvement:
Building a culture of evidence. New Directions for Institutional Research,
2009(143), 17-27.
Morest, V. S., & Jenkins, D. (2007). Institutional research and the culture of evidence at
community colleges. Community College Research Center.
Mullin, C. M. (2012, February). Why access matters: the community college student body
(AACC Policy Brief 2012-01PBL). Washington, DC: American Association of
Community Colleges.
Neal, A. D. (2008). Dis-accreditation. Academic Questions, 21(4), 431-445.
Nettles, M. T., Cole, J. J. K., & Sharp, S. (1997). Assessment of teaching and learning in
higher education and public accountability. National Center for Postsecondary
Improvement (NCPI). Stanford University.
Newman, M. (1996). Agency of change: One hundred years of the North Central
Association of Colleges and Schools (NCA). Tempe, AZ: NCA.
Orlans, H. (1975). Private accreditation and public eligibility. Lexington, MA: Lexington
Books.
Peterson, M. W., & Augustine, C. H. (2000). External and internal influences on
institutional approaches to student assessment: Accountability or improvement?
Research in Higher Education, 41(4), 443-479.
Prager, C. (1993). Accreditation of the two-year college. San Francisco, CA: Jossey-Bass.
Provezis, S. (2010). Regional accreditation and student learning outcomes: Mapping the
territory. National Institute for Learning Outcomes Assessment (NILOA). NILOA
Occasional Paper no. 7.
Rogers, J. T. (2000). Quality and public accountability: accreditation’s appropriate role.
Paper presented at conference on Accountability and Financial Support of Public
Higher Education, Athens, Georgia, 31 May-1 June.
Ruiz, E. A. (2010). College accreditation: Accountability? The Journal of Applied
Research in the Community College, 17(2), 52-55.
Rusch, E. A., & Wilbur, C. (2007). Shaping institutional environments: The process of
becoming legitimate. The Review of Higher Education, 30(3), 301-318.
Salkind, N. J. (2007). Statistics for people who (think they) hate statistics. Thousand
Oaks, CA: Sage Publications.
Selden, W. K. (1960). Accreditation: A struggle over standards in higher education. New
York, NY: Harper.
Shibley & Volkwein (2002). Comparing the costs and benefits of re-accreditation
processes. Association for Institutional Research (AIR). AIR 2002 Forum Paper.
Shulock, N. (2011, May). Concerns about performance-based funding and ways that
states are addressing concerns. Sacramento, CA: California State University
Sacramento, Institute for Higher Education Leadership & Policy.
Skolits, G. J., & Graybeal, S. (2007). Community college institutional effectiveness.
Community College Review, 34(4), 302.
Strauss, L. C., & Volkwein, J. F. (2004). Predictors of student commitment at two-year
and four-year institutions. Journal of Higher Education, 203-227.
Stufflebeam, D. L., & Webster, W. J. (1980). An analysis of alternative approaches to
evaluation. Educational Evaluation and Policy Analysis, 2(3), 5-20.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San
Francisco, CA: Jossey-Bass.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics. New York, NY:
Harper Collins.
Townsend, B. K., & Twombly, S. (2000). Community colleges: Policy in the future
context. Greenwood Publishing Group.
Trivett, D. A. (1976). Accreditation and institutional eligibility. American Association
for Higher Education.
Troutt, W. E. (1981). Relationships between regional accrediting standards and
educational quality. New Directions for Institutional Research, 1981(29), 45-59.
United States Department of Education (USDE) (2011). Retrieved from
http://www2.ed.gov/admins/finaid/accred/index.html.
Vaughn, J. (2002). Accreditation, commercial rankings, and new approaches to assessing
the quality of university research and education programmes in the United States.
Higher Education in Europe, 27(4), 433-441.
Volkwein, J. F. (2010). The assessment context: Accreditation, accountability, and
performance. New Directions for Institutional Research, 2010(1), 3-12.
Volkwein, J. F., Lattuca, L. R., & Terenzini, P. T. (2008). Measuring the impact of
engineering accreditation on student experiences and learning outcomes. In W. E.
Kelly (Ed.), Assessment in Engineering Programs: Evolving Best Practices (Vol. 3).
Tallahassee, FL: Association for Institutional Research.
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the
impact of professional accreditation on student experiences and learning outcomes.
Research in Higher Education, 48(2), 251-282.
Walvoord, B. A. (2004). Assessment clear and simple: A practical guide for institutions,
departments and general education. San Francisco, CA: Jossey-Bass.
Wellman, J. (1998). Recognition of accreditation organizations. Washington, DC:
Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/RecognitionWellman_Jan1998.pdf.
Western Association of Schools and Colleges (WASC) (2002). Guide to using evidence
in the accreditation process: A resource to support institutions and evaluation
teams. Retrieved from www.wascweb.org/senior/Evidence_Guide.pdf
Western Association of Schools and Colleges (WASC) (2009). WASC resource guide for
‘good practices’ in academic program review. Retrieved from
http://www.wascsenior.org/findit/files/forms/WASC_Program_Review_Resource_G
uide_Sept_2009.pdf
Western Association of Schools and Colleges’ Accrediting Commission for Community
and Junior Colleges (WASC-ACCJC) (2002). Rubric for Evaluating Institutional
Effectiveness. Retrieved from http://www.accjc.org/wp-content/uploads/2010/09/Rubric%20for%20Evaluating%20Institutional%20Effectiveness.pdf.
Western Association of Schools and Colleges’ Accrediting Commission for Community
and Junior Colleges (WASC-ACCJC) (2010). Accreditation reference handbook.
Retrieved from http://www.accjc.org/wp-content/uploads/2010/09/Accreditation-Reference-Handbook-August-20101.pdf
Western Association of Schools and Colleges’ Accrediting Commission for Community
and Junior Colleges (WASC-ACCJC) (2011). Retrieved from http://www.accjc.org.
Wolff, R. A. (1992). Assessment and accreditation: A shotgun marriage? Proceedings
from fifth National Conference on Assessment in Higher Education: AAHE
Assessment Forum, Washington, D. C.
Wolff, R. A. (2005). Accountability and accreditation: can reforms match increasing
demands? In J.C. Burke, et al. (Eds.), Achieving accountability in higher
education (pp. 78-103). San Francisco, CA: Jossey-Bass.
Young, K. E. (1983) Understanding accreditation. San Francisco, CA: Jossey-Bass.
Zis, S., Boeke, M., & Ewell, P. (2010). State policies on the assessment of student
learning outcomes: Results of a fifty-state inventory. Boulder, CO: National
Center for Higher Education Management Systems (NCHEMS).
APPENDIX A: ACCREDITING ORGANIZATIONS
Table A.1: Accrediting Organizations Recognized by the Council for Higher Education
Accreditation (CHEA) and/or the US Department of Education (USDE)
Accreditation Organization, by Type          Recognized by CHEA    Recognized by USDE
Regional Accrediting Organizations
Middle States Association of Colleges
and Schools Middle States Commission
on Higher Education
Yes Yes
New England Association of Schools
and Colleges Commission on
Institutions of Higher Education
Yes Yes
New England Association of Schools
and Colleges Commission on Technical
and Career Institutions
Yes Yes
North Central Association of Colleges
and Schools The Higher Learning
Commission
Yes Yes
Northwest Commission on Colleges
and Universities
Yes Yes
Southern Association of Colleges and
Schools Commission on Colleges
Yes Yes
Western Association of Schools and
Colleges Accrediting Commission for
Community and Junior Colleges
Yes Yes
Western Association of Schools and
Colleges Accrediting Commission for
Senior Colleges and Universities
Yes Yes
Total Organizations by Category: 8 8
Total Organizations Recognized by Category: 8 8
National Faith-Related Accrediting
Organizations
Association for Biblical Higher
Education Commission on
Accreditation
Yes Yes
Association of Advanced Rabbinical
and Talmudic Schools Accreditation
Commission
Yes Yes
Commission on Accrediting of the
Association of Theological Schools in
the United States and Canada
Yes Yes
Transnational Association of Christian
Colleges and Schools Accreditation
Commission
Yes Yes
Total Organizations by Category: 5 5
Total Organizations Recognized by Category: 4 4
National Career-Related Accrediting
Organizations
Accrediting Bureau of Health
Education Schools
No Yes
Accrediting Commission of Career
Schools and Colleges
No Yes
Accrediting Council for Continuing
Education and Training
No Yes
Accrediting Council for Independent
Colleges and Schools
Yes Yes
Council on Occupational Education No Yes
Distance Education and Training
Council Accrediting Commission
Yes Yes
National Accrediting Commission of
Cosmetology Arts and Sciences, Inc.
No Yes
Total Organizations by Category: 8 8
Total Organizations Recognized by Category: 2 7
Programmatic Accrediting
Organizations
AACSB International–The Association
to Advance Collegiate Schools of
Business
Yes Yes
ABET, Inc. Yes Yes
Accreditation Commission for
Acupuncture and Oriental Medicine
No Yes
Accreditation Council for Business
Schools and Programs
Yes Yes
Accreditation Council for Midwifery
Education
No Yes
Accreditation Council for Pharmacy
Education
Yes Yes
Accreditation Review Commission on
Education for the Physician Assistant
Yes No
Accrediting Council on Education in
Journalism and Mass Communications
Yes Yes
American Academy for Liberal
Education
No Yes
American Association for Marriage and
Family Therapy Commission on
Accreditation for Marriage and Family
Therapy Education
Yes Yes
American Association of Family and
Consumer Sciences Council for
Accreditation
Yes No
American Bar Association Council of
the Section of Legal Education and
Admissions to the Bar
No Yes
American Board of Funeral Service
Education Committee on Accreditation
Yes Yes
American Council for Construction
Education
Yes Yes
American Culinary Federation’s
Education Foundation, Inc. Accrediting
Commission
Yes Yes
American Dental Association
Commission on Dental Accreditation
No Yes
American Dietetic Association
Commission on Accreditation for
Dietetics Education
Formerly Yes
American Library Association
Committee on Accreditation
Yes Yes
American Occupational Therapy
Association Accreditation Council for
Occupational Therapy Education
Yes Yes
American Optometric Association
Accreditation Council on Optometric
Education
Yes Yes
American Osteopathic Association
Commission on Osteopathic College
Accreditation
Yes Yes
American Physical Therapy
Association Commission on
Accreditation in Physical Therapy
Education
Yes Yes
American Podiatric Medical
Association Council on Podiatric
Medical Education
Yes Yes
American Psychological Association
Commission on Accreditation
Yes Yes
American Society for Microbiology
American College of Microbiology
No Yes
American Society of Landscape
Architects Landscape Architectural
Accreditation Board
Yes Yes
American Speech-Language-Hearing
Association Council on Academic
Accreditation in Audiology and
Speech-Language Pathology
Yes Yes
American Veterinary Medical
Association Council on Education
Yes Yes
Association for Clinical Pastoral
Education, Inc., Accreditation
Commission
No Yes
Association of Technology,
Management, and Applied Engineering
Yes Yes
Aviation Accreditation Board
International
Yes No
Commission on Accreditation of Allied
Health Education Programs
Yes Yes
Commission on Accreditation of
Healthcare Management Education
Yes Yes
Commission on Collegiate Nursing
Education
Formerly Yes
Commission on English Language
Program Accreditation
No Yes
Commission on Massage Therapy
Accreditation
No Yes
Commission on Opticianry
Accreditation
Yes Yes
Council for Accreditation of
Counseling and Related Educational
Programs
Yes No
Council for Interior Design
Accreditation
Yes Yes
Council on Accreditation of Nurse
Anesthesia Educational Programs
Yes Yes
Council on Chiropractic Education
Commission on Accreditation
Yes Yes
Council on Education for Public Health No Yes
Council on Naturopathic Medical
Education
No Yes
Council on Rehabilitation Education
Commission on Standards and
Accreditation
Yes Yes
Council on Social Work Education
Office of Social Work Accreditation
and Educational Excellence
Yes Yes
International Assembly for Collegiate
Business Education
Yes No
International Fire Service Accreditation
Congress Degree Assembly
Yes No
Joint Review Committee on Education
Programs in Radiologic Technology
Yes Yes
Joint Review Committee on
Educational Programs in Nuclear
Medicine Technology
Yes Yes
Liaison Committee on Medical
Education
No Yes
Midwifery Education Accreditation
Council
No Yes
Montessori Accreditation Council for
Teacher Education
No Yes
National Accrediting Agency for
Clinical Laboratory Sciences
Yes Yes
National Architectural Accrediting
Board, Inc.
No Yes
National Association of Nurse
Practitioners in Women’s Health
Council on Accreditation
No Yes
National Association of Schools of Art
and Design Commission on
Accreditation
Formerly Yes
National Association of Schools of
Dance Commission on Accreditation
Formerly Yes
National Association of Schools of
Music Commission on Accreditation
and Commission on Community/Junior
College Accreditation
Formerly Yes
National Association of Schools of
Public Affairs and Administration
Commission on Peer Review and
Accreditation
Yes No
National Association of Schools of
Theatre Commission on Accreditation
Formerly Yes
National Council for Accreditation of
Teacher Education
Yes Yes
National Environmental Health Science
and Protection Accreditation Council
No Yes
National League for Nursing
Accrediting Commission, Inc.
Yes Yes
National Recreation and Park
Association Council on Accreditation
of Parks, Recreation, Tourism, and
Related Professions
Yes No
Planning Accreditation Board Yes No
Society of American Foresters Yes Yes
Teacher Education Accreditation
Council Accreditation Committee
Yes Yes
United States Conference of Catholic
Bishops Commission on Certification
and Accreditation
No Yes
Total Organizations by Category: 69 69
Total Organizations Recognized by Category: 44 59
Overall Total of Accrediting Organizations: 90 90
Overall Total of Recognized Accrediting Organizations: 44 59
Source: CHEA, 2011.
APPENDIX B: WASC-ACCJC ACCREDITATION STANDARDS (2002)
Table B.1: WASC-ACCJC Accreditation Standards (2002)
Standard I: Institutional Mission and Effectiveness
  IA: Mission
  IB: Improving Institutional Effectiveness
Standard II: Student Learning Programs and Services
  IIA: Instructional Programs
  IIB: Student Support Services
  IIC: Library and Learning Support Services
Standard III: Resources
  IIIA: Human Resources
  IIIB: Physical Resources
  IIIC: Technology Resources
  IIID: Financial Resources
Standard IV: Leadership and Governance
  IVA: Decision-Making Roles and Processes
  IVB: Board and Administrative Organization
Source: WASC-ACCJC, 2002; WASC-ACCJC, 2010.
APPENDIX C: WASC-ACCJC CALIFORNIA COMMUNITY COLLEGES
Table C.1: WASC-ACCJC California Community Colleges, by Title
WASC-ACCJC California Community Colleges
Allan Hancock College
American River College
Antelope Valley College
Bakersfield College
Barstow College
Berkeley City College
Butte College
Cabrillo College
Cañada College
Cerritos College
Cerro Coso Community College
Chabot College
Chaffey College
Citrus College
City College of San Francisco
Coastline Community College
College of Alameda
College of Marin
College of San Mateo
College of the Canyons
College of the Desert
College of the Redwoods
College of the Sequoias
College of the Siskiyous
Columbia College
Contra Costa College
Copper Mountain College
Cosumnes River College
Crafton Hills College
Cuesta College
Cuyamaca College
Cypress College
DeAnza College
Diablo Valley College
East Los Angeles College
El Camino College
Evergreen Valley College
Feather River College
Folsom Lake College
Foothill College
Fresno City College
Fullerton College
Gavilan College
Glendale Community College
Golden West College
Grossmont College
Hartnell College
Imperial Valley College
Irvine Valley College
Lake Tahoe Community College
Laney College
Las Positas College
Lassen College
Long Beach City College
Los Angeles City College
Los Angeles Harbor College
Los Angeles Mission College
Los Angeles Pierce College
Los Angeles Southwest College
Los Angeles Trade-Tech College
Los Angeles Valley College
Los Medanos College
Mendocino College
Merced College
Merritt College
MiraCosta College
Mission College
Modesto Junior College
Monterey Peninsula College
Moorpark College
Moreno Valley College
Mt. San Antonio College
Mt. San Jacinto College
Napa Valley College
Norco College
Ohlone College
Orange Coast College
Oxnard College
Palo Verde College
Palomar College
Pasadena City College
Porterville College
Reedley College
Rio Hondo College
Riverside City College
Sacramento City College
Saddleback College
San Bernardino Valley College
San Diego City College
San Diego Mesa College
San Diego Miramar College
San Joaquin Delta College
San Jose City College
Santa Ana College
Santa Barbara City College
Santa Monica College
Santa Rosa Junior College
Santiago Canyon College
Shasta College
Sierra College
Skyline College
Solano Community College
Southwestern College
Taft College
Ventura College
Victor Valley College
West Hills College Coalinga
West Hills College Lemoore
West Los Angeles College
West Valley College
Woodland Community College
Yuba College
Source: CCCCO, 2011.
APPENDIX D: ACCREDITATION STATUS
Table D.1: Accreditation Status, by California Community College
College, followed by the action recorded at each semiannual review term, in order:
JUN-03, JAN-04, JUN-04, JAN-05, JUN-05, JAN-06, JUN-06, JAN-07, JUN-07, JAN-08,
JUN-08, JAN-09, JUN-09, JAN-10, JUN-10, JAN-11, JUN-11
Allan Hancock C. X XAXXXXXR XXR X X A X X
American River C. X AXAXXXR XXXXX A X X X
Antelope Valley C. R XXXXXXR XR XXX X X A X
Bakersfield C. X R XXXXXAXR XXX R X X X
Barstow C. R XXXXXAXXXXXR X R X X
Berkeley City C. X XXXXXXXR XR XA X P P W
Butte C. X XXXXXR XXXXXA X X X X
Cabrillo C. X R XXXXXXXAXXX X X X X
Cañada C. R XR XXXXXX W X A X R X X X
Cerritos C. X XXXXXXXXX W X A X R X X
Cerro Coso
Community C. X R X R X X X W XAXXX R X X X
Chabot C. X XXXXXR XXXXXX A X X X
Chaffey C. X XAXXXXXR XXXX X A X X
Citrus C. X AXXXXXR XXXXX A X X X
City C. of San
Francisco R XXXXXAXR XXXR X R X X
Coastline
Community C. R R X W XXXXAXR XR X R X X
C. of Alameda X X R WXAR XR XR X W X P P W
C. of Marin X X X X W W X W X P A X R X X A X
C. of San Mateo R XR XXXXXX W X A X R X X X
C. of the Canyons A XXXXR XXXXXAX X X X X
C. of the Desert X XXXAXXR XXR XR X X X A
C. of the
Redwoods R XXXX W WXP P W A W A X X X
C. of the Sequoias R R R XXXX W XAXR X R X X X
C. of the Siskiyous X XAXXR XXR XXXX X W X W
Columbia C. R XXR XAXXXR XR X X X X X
Contra Costa C. X XXXXXXXXXXAX R X X X
Copper Mountain
C. R XRXXXXXAX W X W X A X X
Cosumnes River C. X AXXXXXR XXXXX A X X X
Crafton Hills C. A R XXXR XR XR XP X P X A X
Cuesta C. A XXR XR XR X W A W X P X P X
Cuyamaca C. X XXXXR XXXAXXX R X X X
Cypress C. X XXXAXXR XXR XX X X X W
DeAnza C. R XXXXAXXXXXR X R X X X
Diablo Valley C. X XXXXXXXR R W S X P X A X
East Los Angeles
C. X XXXXXRXXXXX W W A X X
El Camino C.-
Compton X XXXXXXR XXXXX X X X X
El Camino
Community C. X X X WXXR XR R X W W A X X X
Evergreen Valley
C. X XXX W AXR XR XXX X X W X
Feather River C. R XXXXX WXAR XX W W X A X
Folsom Lake C. X R XXR XXR XXXXX A X X X
Foothill C. R XXXXAXXXXXR X R X X X
Fresno City C. R XXXX W X W A R X R X X X X X
Fullerton C. X XXXAXXR XXR XX X X X W
Gavilan C. X R XXXXXXAR XXX X R X X
Glendale
Community C. X XAXXXXXR XXXX X W X A
Golden West C. X R R R XXXXAXR XR X R X X
Grossmont C. X XXXXXXXXAXR X R X X X
Hartnell C. R R XXXXXXP W A X R X R X X
Imperial Valley C. R XR AR XXXX W X W X W A X X
Irvine Valley C. X XXXXR XR XR XR X X X W X
Lake Tahoe C. R X X W XXAXXR R X R X X X X
Laney C. X X RRX A RRRX RX A X P P W
Las Positas C. X XXXXXR XXXXXX A X X X
Lassen C. X X R R X X WPPPP W W A X X X
Long Beach City
C. A XXXXRXXXXX W X A X X X
Los Angeles City
C. X XXXXXR XXXXXP X A X X
Los Angeles
Harbor C. R XXXXXAXXXR R R X R X X
Los Angeles
Mission C. R XR XXXXR AXR XR X R X X
Los Angeles Pierce
C. X XR XXXXXAXR XR X R X X
Los Angeles
Southwest C. R XXXXXAXXXP XA X R X X
Los Angeles
Trade-Tech X XXXXXRXXXXXPX W X A
Los Angeles
Valley C. R XR XXXXXAXR R R X R X X
Los Medanos C. X XXXXXR XXXXAX R X X X
Mendocino C. X XXXXXXXXXAXR X R X X
Merced C. X X X W AXR XR XR XX X X X W
Merritt C. X WAXXAR XR XR X W X P P W
MiraCosta C. X XAR XXXXR W W X A X A X P
Mission C. X XXXXR XXR X W X W X A X X
Modesto Junior C. R XXXXAXXXP XAX X X X X
Monterey
Peninsula C. X XAAXXXXR XR XX X A X X
Moorpark C. X XXAXXXXXR XXX X X A X
Mt. San Antonio C. X XXXXXXXXR XXX R X A X
Mt. San Jacinto C. R XXR XAXXXR XR X X X X X
Napa Valley C. X AXR XXXR XXXXX A X X X
Ohlone C. X XXXXR XXXX W W A X X X X
Orange Coast C. X R XAXXXXAX W X A X R X X
Oxnard C. R XXXXR XR XR XR X X X W X
Palo Verde C. X XR XXXXXXX W W X A X X X
Palomar C. X XXR XXR XXXXX W X W X A
Pasadena City C. A XXXXR XXXXXX W X W A X
Porterville C. X R XXXXX W X W A X X R X X X
Reedley C. R XXR XAXR R XXR X R X X X
Rio Hondo C. X R XXXXXXXXX W X A X X X
Riverside City C. R XR XXXXXXAXR X W X A X
Sacramento City C. X AXAXXXR XXXXX A X X X
Saddleback C. X XXXXR XR XR XR X X X W X
San Bernardino
Valley C. A XR AXR XXXXXAX X X X X
San Diego City C. X XXAXXXXXR XXX X X A X
San Diego Mesa C. X XXAXXXXXR XXX X X A X
San Diego
Miramar C. X XXXXXXR XR XXX X X W X
San Joaquin Delta
C. X RR WXXXXA W W P A X R X W
San Jose City C. X X X R W AXR XR XR X X X P X
Santa Ana C. X XXXXR XXXXX W X A X X X
Santa Barbara City
C. A XXXXR XXXXXXX A X X X
Santa Monica C. X XAXR XXXR XXXX X A X X
Santa Rosa Junior
C. A XXXXR XXXXXXA X X X X
Santiago Canyon
C. R XXXXAXXXXX W X A X X X
Shasta C. R XXR XAXXX W X W A X X X X
Sierra C. X R XXXXXXX W X W X A X X X
Skyline C. R XR XXXXXXAXXX R X X X
Solano Community
C. R RXXXAXXR W X S P P X A X
Southwestern C. X XR XXXR XXXXXX P X P A
Taft C. X XXXXXR XXXXXX W X W X
Ventura C. X XXXXXXR XR XXX X X W X
Victor Valley C. X XXXAR R X W W W A X X X X P
West Hills C.
Coalinga X XXX W XAXXXR XX X X X A
West Hills C.
Lemoore X XR XXXR XXXR XR X X X A
West Los Angeles
C. R XXXXXAXR XR XR X R X X
West Valley C. X XXXXR XR XXXXR X X X X
Woodland
Community C. X XXXXXR XXXR XR X X X X
Yuba C. R XXXXAR R XXXR X R X X X
Note: Values reported for published and available WASC-ACCJC actions only.
Coding Key: No Status/Action = X; Show Cause = S; Probation = P;
Warning = W; Affirmed = A; Report = R.
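Read against the coding key, each row of Table D.1 can be collapsed into the dichotomous clear/sanctioned status used in the study's chi-square and logistic regression analyses. The following is a minimal sketch of that recode, assuming a hypothetical long-format transcription of the table; the file name and column names are illustrative, not part of the study.

    import pandas as pd

    # Hypothetical long-format transcription of Table D.1: one row per college
    # per review term, e.g. ("Allan Hancock C.", "JUN-03", "X").
    actions = pd.read_csv("table_d1_long.csv")  # columns: college, term, action

    # Sanction codes per the coding key above: Show Cause (S), Probation (P),
    # and Warning (W); Affirmed (A), Report (R), and X leave a college clear.
    SANCTIONS = {"S", "P", "W"}
    actions["sanctioned"] = actions["action"].isin(SANCTIONS)

    # Share of colleges sanctioned at least once over the window; the abstract
    # reports this figure as fifty-five percent.
    ever_sanctioned = actions.groupby("college")["sanctioned"].any()
    print(f"{ever_sanctioned.mean():.0%} of colleges sanctioned at least once")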
APPENDIX E: ONGOING ACCREDITATION STATUS
Table E.1: Ongoing Accreditation Status and Percentage of Institutions on Sanction
College, followed by the ongoing status recorded at each semiannual review term, in order:
JUN-03, JAN-04, JUN-04, JAN-05, JUN-05, JAN-06, JUN-06, JAN-07, JUN-07, JAN-08,
JUN-08, JAN-09, JUN-09, JAN-10, JUN-10, JAN-11, JUN-11
Allan Hancock C. CC C CCCCCC C C C CCC
American River C. C CC C CCCCCC C C C CCC
Antelope Valley C. C C
Bakersfield C. CCCC C C C CCC
Barstow C. CCCCC C C C CCC
Berkeley City C. C C S S S
Butte C. C C C C C
Cabrillo C. C C C C C C C C
Cañada C. S S C C C C C C
Cerritos C. S S C C C C C
Cerro Coso Community
C.
S S C C C C C C C C
Chabot C. C C C C
Chaffey C. CC C CCCCCC C C C CCC
Citrus C. C CC C CCCCCC C C C CCC
City C. of San Francisco CCCCC C C C CCC
Coastline Community C. S S S S S C C C C C C C C C
C. of Alameda S S CCCCCC C S S S S S
C. of Marin S SSSSSC C C C C C C
C. of San Mateo S S C C C C C C
C. of the Canyons C C C C C CCCCCC C C C CCC
C. of the Desert C CCCCCC C C C CCC
C. of the Redwoods SSSSSS C S C C C C
C. of the Sequoias S S C C C C C C C C
C. of the Siskiyous C C C CCCCCC C C C S S S
Columbia C. CCCCCC C C C CCC
Contra Costa C. C C C C C C
Copper Mountain C. C C S S S S C C C
Cosumnes River C. C CC C CCCCCC C C C CCC
Crafton Hills C. C C CC C CCCCCC S S S S C C
Cuesta C. C C CC C CCCCS C S S S S S S
Cuyamaca C. C C C C C C C C
Cypress C. C CCCCCC C C C CCS
DeAnza C. CCCCCC C C C CCC
Diablo Valley C. S S S S S C C
East Los Angeles C. S S C C C
El Camino C.-Compton*
El Camino Community C. S S SSSSSS S S C C C C
Evergreen Valley C. S CCCCCC C C C CS S
Feather River C. S S C C C C S S S C C
Folsom Lake C. C C C C
Foothill C. CCCCCC C C C CCC
Fresno City C. S S S C C C C C C C C C
Fullerton C. C CCCCCC C C C CCS
Gavilan C. C C C C C C C C C
Glendale Community C. CC C CCCCCC C C C S S C
Golden West C. C C C C C C C C C
Grossmont C. C C C C C C C C
Hartnell C. S S C C C C C C C
Imperial Valley C. C C CCCCS S S S S CCC
Irvine Valley C. S S
Lake Tahoe C. S S S CCCCC C C C CCC
Laney C. CCCCCC C C C S S S
Las Positas C. C C C C
Lassen C. SSSSS S S C C C C
Long Beach City C. C C C C C CCCCCC S S C CCC
Los Angeles City C. S S C C C
Los Angeles Harbor C. CCCCC C C C CCC
Los Angeles Mission C. C C C C C C C C C
Los Angeles Pierce C. C C C C C C C C C
Los Angeles Southwest C. CCCCS S C C CCC
Los Angeles Trade-Tech S S S S C
Los Angeles Valley C. C C C C C C C C C
Los Medanos C. C C C C C C
Mendocino C. C C C C C C C
Merced C. S C CCCCCC C C C CCS
Merritt C. S CC C CCCCCC C S S S S S
MiraCosta C. CC C CCCCS S S C C CCS
Mission C. S S S S C C C
Modesto Junior C. CCCCS S C C C CCC
Monterey Peninsula C. CC C CCCCCC C C C CCC
Moorpark C. C C CCCCCC C C C CCC
Mt. San Antonio C. C C
Mt. San Jacinto C. CCCCCC C C C CCC
Napa Valley C. C CC C CCCCCC C C C CCC
Ohlone C. S S C C C C C
Orange Coast C. C C CCCCCS S C C CCC
Oxnard C. S S
Palo Verde C. S S S C C C C
Palomar C. S S S S C
Pasadena City C. C C CC C CCCCCC C S S S C C
Porterville C. SSSC C C C C C C
Reedley C. CCCCCC C C C CCC
Rio Hondo C. S S C C C C
Riverside City C. C C C C S S C C
Sacramento City C. C CC C CCCCCC C C C CCC
Saddleback C. S S
San Bernardino Valley C. C C CC C CCCCCC C C C CCC
San Diego City C. C C CCCCCC C C C CCC
San Diego Mesa C. C C CCCCCC C C C CCC
San Diego Miramar C. S S
San Joaquin Delta C. S S SSSC SS S C C C C S
San Jose City C. S CCCCCC C C C CS S
Santa Ana C. S S C C C C
Santa Barbara City C. C C CC C CCCCCC C C C CCC
Santa Monica C. CC C CCCCCC C C C CCC
Santa Rosa Junior C. C C CC C CCCCCC C C C CCC
Santiago Canyon C. CCCCCC S S C CCC
Shasta C. CCCCS S S C C CCC
Sierra C. S S S S C C C C
Skyline C. C C C C C C C C
Solano Community C. CCCCS S S S S S C C
Southwestern C. S S S C
Taft C. S S S S
Ventura C. S S
Victor Valley C. C C C C S S S C C C C C S
West Hills C. Coalinga S S CCCCC C C C CCC
West Hills C. Lemoore C
West Los Angeles C. CCCCC C C C CCC
West Valley C.
Woodland Community C.
Yuba C. CCCCCC C C C CCC
Note: Values reported for published and available WASC-ACCJC actions only.
*Acceptance of closure/transfer report (see Appendix D).
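The title of Table E.1 refers to the percentage of institutions on sanction, which can be recovered from the status values for each review cycle. A minimal sketch follows, assuming a hypothetical long-format transcription of the table and the reading that S denotes a sanctioned term and C a clear one.

    import pandas as pd

    # Hypothetical long-format transcription of Table E.1: one row per college
    # per term where a status was published ("C" or "S").
    status = pd.read_csv("table_e1_long.csv")  # columns: college, term, status

    # Percentage of reporting institutions on sanction in each review cycle.
    pct_on_sanction = (
        status.assign(on_sanction=status["status"].eq("S"))
              .groupby("term")["on_sanction"]
              .mean()
              .mul(100)
              .round(1)
    )
    print(pct_on_sanction)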
APPENDIX F: EXPLORATION OF PEAK-SANCTION YEAR (2008)
Table F.1: Exploration of Peak-Sanction Year (2008) in Demonstration of Sample
Methods for Future Study
Correlation Matrices for Student Variables (Spearman's) – 2009

Variable             Value    Graduation  Full-Time Retention  Part-Time Retention  Transfer
Graduation           Coeff.*  1           -0.029               0.039                0.43
                     Sig.     .           0.76                 0.689                0
                     N        110         110                  110                  110
Full-Time Retention  Coeff.*  -0.029      1                    0.772                -0.231
                     Sig.     0.76        .                    0                    0.015
                     N        110         111                  111                  110
Part-Time Retention  Coeff.*  0.039       0.772                1                    -0.053
                     Sig.     0.689       0                    .                    0.584
                     N        110         111                  111                  110
Transfer             Coeff.*  0.43        -0.231               -0.053               1
                     Sig.     0           0.015                0.584                .
                     N        110         110                  110                  110
Correlation Matrices for Student Variables (Spearman's) – 2008

Variable             Value    Graduation  Full-Time Retention  Part-Time Retention  Transfer
Graduation           Coeff.*  1           0.361                0.223                0.365
                     Sig.     .           0                    0.02                 0
                     N        109         109                  109                  109
Full-Time Retention  Coeff.*  0.361       1                    0.612                -0.142
                     Sig.     0           .                    0                    0.142
                     N        109         110                  110                  109
Part-Time Retention  Coeff.*  0.223       0.612                1                    -0.11
                     Sig.     0.02        0                    .                    0.254
                     N        109         110                  110                  109
Transfer             Coeff.*  0.365       -0.142               -0.11                1
                     Sig.     0           0.142                0.254                .
                     N        109         109                  109                  109
Correlation Matrices for Student Variables (Spearman's) – 2007

Variable             Value    Graduation  Full-Time Retention  Part-Time Retention  Transfer
Graduation           Coeff.*  1           0.474                0.491                0.157
                     Sig.     .           0                    0                    0.104
                     N        109         109                  109                  109
Full-Time Retention  Coeff.*  0.474       1                    0.643                -0.051
                     Sig.     0           .                    0                    0.597
                     N        109         110                  110                  109
Part-Time Retention  Coeff.*  0.491       0.643                1                    -0.044
                     Sig.     0           0                    .                    0.649
                     N        109         110                  110                  109
Transfer             Coeff.*  0.157       -0.051               -0.044               1
                     Sig.     0.104       0.597                0.649                .
                     N        109         109                  109                  109
Correlation Matrices for Student Variables (Spearman's) – 2006

Variable             Value    Graduation  Full-Time Retention  Part-Time Retention  Transfer
Graduation           Coeff.*  1           0.288                0.382                -0.057
                     Sig.     .           0.003                0                    0.557
                     N        108         108                  108                  108
Full-Time Retention  Coeff.*  0.288       1                    0.724                -0.039
                     Sig.     0.003       .                    0                    0.691
                     N        108         109                  109                  108
Part-Time Retention  Coeff.*  0.382       0.724                1                    -0.121
                     Sig.     0           0                    .                    0.212
                     N        108         109                  109                  108
Transfer             Coeff.*  -0.057      -0.039               -0.121               1
                     Sig.     0.557       0.691                0.212                .
                     N        108         108                  108                  108
Correlation Matrices for Student Variables (Spearman's) – 2005

Variable             Value    Graduation  Full-Time Retention  Part-Time Retention  Transfer
Graduation           Coeff.*  1           0.241                0.215                -0.147
                     Sig.     .           0.012                0.025                0.128
                     N        108         108                  108                  108
Full-Time Retention  Coeff.*  0.241       1                    0.601                -0.141
                     Sig.     0.012       .                    0                    0.147
                     N        108         109                  109                  108
Part-Time Retention  Coeff.*  0.215       0.601                1                    0.01
                     Sig.     0.025       0                    .                    0.918
                     N        108         109                  109                  108
Transfer             Coeff.*  -0.147      -0.141               0.01                 1
                     Sig.     0.128       0.147                0.918                .
                     N        108         108                  108                  108
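The five panels above report year-by-year Spearman rank correlations among the four student outcome variables. The sketch below shows how one such panel could be reproduced; the file and column names (graduation, ft_retention, pt_retention, transfer) are hypothetical stand-ins for the study's variables.

    import pandas as pd
    from scipy import stats

    # Hypothetical per-college outcome rates for a single year (e.g., 2009).
    df = pd.read_csv("outcomes_2009.csv")
    cols = ["graduation", "ft_retention", "pt_retention", "transfer"]

    # pandas yields the Spearman coefficient matrix directly ...
    print(df[cols].corr(method="spearman").round(3))

    # ... while scipy also returns the two-sided p-values reported as "Sig."
    rho, pval = stats.spearmanr(df[cols], nan_policy="omit")
    print(pd.DataFrame(pval, index=cols, columns=cols).round(3))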
Student Variables by Quartiles
Graduation Rate
Percentiles 2004 2005 2006 2007 2008 2009
25 30 30 29 19 19 18.75
50 34 35 33.5 23 22 23
75 39 39 39 27 27 30
Transfer Rate
Percentiles 2004 2005 2006 2007 2008 2009
25 18 24 11 18.5 15 14
50 22 31 15 24 17 17
75 26 37.75 22.75 30 20 20.25
Full-Time Retention Rate
Percentiles 2004 2005 2006 2007 2008 2009
25 62 60.5 59 59 60.75 43
50 67 65 65 66 67 49
75 71 70 69 70 71.25 59
Part-Time Retention Rate
Percentiles 2004 2005 2006 2007 2008 2009
25 33.5 34 35.5 34.75 29 22
50 41 40 39 41 36 27
75 46 45 45 44 41 34
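The quartile panels can be generated in the same setting; a brief sketch follows, again assuming a hypothetical per-college file with one column of graduation rates per year, 2004 through 2009.

    import pandas as pd

    # Hypothetical wide file: one graduation-rate column per year, 2004-2009.
    grad = pd.read_csv("graduation_rates.csv", index_col="college")

    # 25th, 50th, and 75th percentiles per year, as in the panels above.
    print(grad.quantile([0.25, 0.50, 0.75]).round(2))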
Binomial Logistic Regression for 2008 with Student Variables and Accreditation
Status**

IV                        DV              Year   Model Sig.***   Cox and Snell   Variable Sig.***
Graduation Rate           Accred. Status  2008   0.550           0.036           0.406
Transfer Rate             Accred. Status  2008   0.550           0.036           0.489
Full-Time Retention Rate  Accred. Status  2008   0.550           0.036           0.338
Part-Time Retention Rate  Accred. Status  2008   0.550           0.036           0.296
*Correlation Coefficient.
**Accreditation truncated to sanctioned/clear values only.
***All values non-significant for this narrowed test.
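Because the model-level statistics are identical across the four rows, the panel is consistent with a single binomial logistic model containing all four predictors, with per-variable significance reported separately. A minimal sketch of that reading with statsmodels, reusing the hypothetical column names from the sketches above:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical 2008 per-college data: a binary flag (1 = sanctioned,
    # 0 = clear) plus the four outcome-rate predictors of Table F.1.
    df = pd.read_csv("colleges_2008.csv")
    predictors = ["graduation", "transfer", "ft_retention", "pt_retention"]

    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["sanctioned"], X).fit(disp=0)

    # Likelihood-ratio p-value for the whole model ("Model Sig.") and Wald
    # p-values per predictor ("Variable Sig.").
    print(f"Model sig.: {model.llr_pvalue:.3f}")
    print(model.pvalues[predictors].round(3))

    # Cox and Snell pseudo R-squared, as reported in the table.
    cox_snell = 1 - np.exp(2 * (model.llnull - model.llf) / model.nobs)
    print(f"Cox and Snell: {cox_snell:.3f}")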
Abstract

The purpose of this study was to conduct a quantitative exploration of accreditation actions issued by the Western Association of Schools and Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) since 2002, in order to examine their relationship to a dataset of common student and institutional variables. This exploration was based on the observation that WASC-ACCJC sanctions have greatly increased since 2002 within a rising culture of accountability, prompting speculation about the association between outcomes variables and institutional resources and ongoing accreditation status. Initial data collection found that fifty-five percent of all California community colleges have been sanctioned at least once since 2002. In exploring the variables and accreditation statuses of these colleges, this study sought to provide more information concerning the presence or absence of significant patterns of association between public institutional data and accreditation status, providing additional insight into this underexplored period of community college accreditation and suggesting avenues for future study.

The research questions that guided this study are as follows: In California community colleges reviewed by WASC-ACCJC since 2002, what is the relationship between the specific accreditation action taken by the commission and the most common student outcomes variables cited by the literature, namely graduation rate, transfer, and retention? In California community colleges reviewed by WASC-ACCJC since 2002, what is the relationship between accreditation status and several common institutional variables, namely full-time equivalent students (FTES), budget, staffing, and size? Finally, in those California community colleges that were sanctioned by WASC-ACCJC since 2002, what patterns emerge that may inform institutional knowledge about the relationship between accreditation action and institutional measures?

This study examined institutional accreditation status according to categories of “clear” and “sanctioned” accreditation and compared these statuses to common student and institutional variables for the years matching the accreditation status. Using chi-square tests of association and a subsequent set of logistic regression analyses, this study noted several statistically significant associations between college data and accreditation status, including student graduation rates, full- and part-time retention rates, and transfer rates, which were found significant in multiple instances of the post-2002 data. Student outcomes variables with significant associations were found in greater number than institutional variables. These findings offer some initial insights into recent WASC-ACCJC accreditation actions, suggesting that some quantitative measures of student outcomes, such as graduation rate, are associated with accreditation status. However, this study noted that year-to-year comparisons did not demonstrate an ongoing pattern of significant associations. These findings suggest that accreditation status has not become co-opted by outcomes measures alone, while likewise suggesting the need for further exploration of the rising rate of sanctions by WASC-ACCJC in order to understand this new trend in community college accreditation.