LEARNING OUTCOMES ASSESSMENT AT AMERICAN LIBRARY ASSOCIATION
ACCREDITED MASTER'S PROGRAMS IN LIBRARY AND INFORMATION STUDIES
by
Winyuan Shih
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2015
Copyright 2015 Winyuan Shih
ACKNOWLEDGEMENTS
I would like to express my sincere gratitude to my committee chair, Professor Robert
Keim, for his valuable guidance and unfailing encouragement throughout the process. I am
equally grateful to my committee members, Dr. Patricia Tobey and Dr. P. J. Woolston, for their
continuous support, motivation, enthusiasm, and immense knowledge of outcomes assessment
and higher education accreditation. It is also important to mention Dr. Ken Haycock of USC
Marshall School of Business and Karen O’Brien of ALA Accreditation Office for their insightful
comments and feedback.
I would never have been able to finish my dissertation without the mutual support,
sharing, and comradery from my dissertation cohort members, Ben, Deborah, Dinesh,
Jacqueline, Jennifer, Jill, Kris, Nathan, Richard, and Rufus. Furthermore, I was fortunate to
benefit from my invaluable network of supportive and forgiving friends: Tina, Mark, Andy,
Lynn, Rick, Martha, and Sally.
Finally, I cannot finish without thanking my family.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER ONE: OVERVIEW OF THE STUDY
Accreditation in the United States
Council for Higher Education Accreditation (CHEA)
Accreditation as Gatekeeper
A Changing Environment
Student Learning Outcomes
Growth in Literature
Growth in Professional Organizations
Growth in Assessment-Related Initiatives
Assessment Practices
Statement of the Problem
Purpose of the Study
Significance of the Study
Student Enrollment
New Entrants
Job Market
Online Programs
CHEA's Recognition of the ALA Committee on Accreditation
Importance of the Study
Definitions
CHAPTER TWO: LITERATURE REVIEW
History of Accreditation
Early Institutional Accreditation
Regional Accreditation, 1885 to 1920
Regional Accreditation, 1920 to 1950
Regional Accreditation, 1950 to 1985
Regional Accreditation, 1985 to Present
Future Direction of Accreditation
Effect of Outcomes Assessment and Accreditation
Trend toward Learning Assessment
Framework for Learning Assessment
Benefits of Accreditation on Learning
Organizational Effects of Accreditation
Future Assessment Recommendations
Challenges to Student Learning Outcomes
Conclusion
American Library Association Accreditation
Early History of Library Education
Accreditation of Library Training Programs
ALA Standards and Outcomes Assessment
Accreditation Process, Policies, and Procedures
Committee on Accreditation
Costs of ALA Accreditation
Outcomes Assessment Practice at MLIS Programs
Conclusion
CHAPTER THREE: METHODOLOGY AND RESEARCH DESIGN
Research Design
Population and Sample
Survey, Quantitative Research
Content Analysis, Qualitative Research
Instrumentation
Survey, Quantitative Research
Content Analysis, Qualitative Research
Reliability and Validity
Data Collection and Analysis
Survey, Quantitative Research
Response
Content Analysis, Qualitative Research
Data Analysis
Survey, Quantitative Research
Content Analysis, Qualitative Research
Limitations and Delimitations
Assumptions
Limitations
Delimitations
CHAPTER FOUR: RESULTS
Characteristics of MLIS Programs
Description of Respondents
Program Goals and Measurable Objectives
Presence of Formalized Learning Outcomes and Assessment Plan (ALA Standards I.2, I.3, II.1, II.7)
Diverse Knowledge and Competencies Statements (ALA Standard II.5)
Assessment Practices
Major Driver of Outcomes Assessment Practice
Systematic and On-going Assessment Process (ALA Standard I.1)
Assessment Practice Drivers and Participants (ALA Standards II.7, IV.4)
Assessment Measures and Use of Assessment Results
Multiple Assessment Approaches (ALA Standard IV.4)
Survey Results
Content Analysis Results
Direct Measures at Course Level
Direct Measures at Program Level
Final Project
Indirect Measures at Course Level
Indirect Measures at Program Level
Summative and Formative Assessment
Benchmarking or Peer Review
Applications of Assessment Evidence (ALA Standards II.7, IV.6)
Survey Results
Content Analysis Results
Organizational Supports
Assessment Personnel
Faculty Engagement
Benefits of Assessment and Accreditation
Assessment Practice
University Mandate
Increasing Importance and Work in Progress
Best Practice
Be Aware of Issues
Benefits of Accreditation
Accreditation as a Means for Self-Evaluation and Program Improvement
Accreditation as a Seal of Approval
Faculty Participation
External Connection
Accountability
Summary
CHAPTER FIVE: DISCUSSION
Discussion of Findings
Theme 1: Outcomes Assessment Has Taken Hold at MLIS Programs
Theme 2: Accreditation Is the Primary Driver for MLIS Assessment Efforts, While Program Directors, Faculty, and Curriculum Committees Provide Leadership in Its Practice
Theme 3: MLIS Programs Employed a Diverse Range of Metrics for Measuring Learning Outcomes
Theme 4: MLIS Programs Applied Assessment Results Extensively
Theme 5: MLIS Programs Conduct Outcomes Assessment with Limited Resources
Theme 6: MLIS Programs and Faculty Recognize the Intrinsic Value of Assessment and Accreditation
Implications for Practice
Using Multiple Measures
Involving and Supporting Faculty
Sustaining Assessment Efforts
Future Research
Conclusions
REFERENCES
APPENDICES
Appendix A Master's Programs in Library and Information Studies in Canada and the United States
Appendix B Survey Invitation Letter
Appendix C Survey Invitation Follow-Up
Appendix D Survey Instrument
Appendix E Survey on Learning Outcomes Assessment at MLIS Program Codebook
Appendix F Program Presentation Analysis Coding Scheme
LIST OF TABLES
Table 1: Geographical Distribution of the Programs Studied
Table 2: MLIS Program Identifier, Year Last Accredited, and Type of Parent Institutions
Table 3: Questionnaire and Corresponding ALA Standards
Table 4: ALA-Accredited MLIS Programs (n = 57)
Table 5: Formal Titles of Respondents
Table 6: Implementation of Learning Goals and Assessment Plan
Table 7: Learning Outcomes Statements from the 12 MLIS Program Presentations
Table 8: Alignment of Program Objectives with Student Learning Outcomes Assessment Components, School G
Table 9: Adoption of ALA Core Competencies of Librarianship
Table 10: Knowledge and Competencies Statements from Relevant Professional Organizations or State Standards from Survey
Table 11: Knowledge and Competencies Statements from Relevant Professional Organizations or State Standards from Content Analysis
Table 12: Forces Driving the Development of Assessment Plan and Practice
Table 13: Composition of Assessment and Curriculum Committees
Table 14: Assessment Committee Names of the 12 MLIS Program Presentations Reviewed
Table 15: Key Committee Charges of 12 MLIS Programs
Table 16: Participants in the Assessment Process at MLIS Programs
Table 17: Examples of Direct and Indirect Measures for Assessment
Table 18: Mean Score and Standard Deviation (in Parentheses) of Each Assessment Measure Used to Assess Student Learning, Based on a 6-Point Likert Scale
Table 19: Direct and Indirect Measures for Assessment at MLIS Programs (n = 12)
Table 20: Final Project Requirements by Program Count from Content Analysis
Table 21: Applications of Assessment Results Based on a 4-Point Likert Scale, from MLIS Program Survey
Table 22: Applications of Assessment Results, from MLIS Program Presentations (N = 12)
Table 23: Faculty Involvement in Assessment Activities
LIST OF FIGURES
Figure 1: Growth of assessment literature, 1970-2008
Figure 2: ALA-Accredited Program Enrollment, 1979-2011
ABSTRACT
There is an increasing emphasis on learning outcomes assessment in the accreditation
process in higher education in general and in library education specifically. This mixed methods
study investigated the practice of outcomes assessment at master’s programs in library and
information studies accredited by the American Library Association in the United States and
Canada. Six salient themes emerged from the survey responses of Accreditation Liaison Officers and the content analysis of 12 MLIS program presentations. First, outcomes assessment has taken hold at MLIS programs: 93% of programs have adopted a common set of learning goals and outcomes, and 79% have developed a written assessment plan.
Second, accreditation is the primary driver for MLIS assessment efforts, while program
directors, faculty, and assessment and curriculum committees provide leadership in its practice.
Third, MLIS programs employed a diverse range of tools for measuring learning outcomes.
Course assignments, course evaluations, rubrics, internship ratings, portfolios, and surveys are the most commonly used direct and indirect measures. Fourth, MLIS programs applied assessment
results extensively for improving program, curriculum, course, and instruction; enhancing
student services; and preparing for accreditation. Fifth, MLIS programs conduct outcomes
assessment with limited resources. Two-thirds of the MLIS programs surveyed have no
dedicated assessment personnel to support assessment efforts. Sixth, MLIS programs and faculty
recognized the intrinsic value of assessment and accreditation. The findings suggest that MLIS
programs can sustain assessment efforts by combining direct and indirect assessment measures,
providing adequate faculty support, and further integrating assessment in the infrastructure and
culture of the program.
CHAPTER ONE: OVERVIEW OF THE STUDY
Higher education confronts shifting economic, political, societal, and technological trends
requiring thoughtful response and realignment of mission, business model, and practices.
Today’s academic institutions face increasing calls for effective student learning, efficient
operation, and increased productivity (ACE, 2012; Arum & Roksa, 2014; Dowd, 2005; Dowd &
Tong, 2007; Golden, 2006). As the guardian of higher education quality and the facilitator for
quality improvement, accreditation garners more visibility, public scrutiny, and demand for
fundamental change (Eaton, 2013b; Gillen, Bennett, & Vedder, 2010; Leef & Burris, 2002;
Wolff, 2013). One of the loudest and most serious of critiques comes from the 2006 report by
the Commission on the Future of Higher Education, appointed by then Secretary of Education
Margaret Spellings (USDE, 2006). After assessing the landscape of American higher education,
the Commission identified several critical deficiencies and called for urgent reform across four
areas: access, affordability, accountability, and quality. Accreditation, the report pointed out, has
“significant shortcomings” and requires a transformation (p. 15). To improve student learning
and to create a culture of accountability and transparency, the report demands that
“[p]ostsecondary education institutions should measure and report meaningful student learning
outcomes.” (p. 24) [emphasis added]. Furthermore, “[a]ccreditation agencies should make
performance outcomes, including completion rates and student learning, the core of their
assessment as a priority over inputs or processes,” as well as establish a framework that
facilitates “comparisons among institutions regarding learning outcomes and other performance
measures.” (p. 25) [emphasis added].
Accreditation in the United States
For more than a century, accreditation has been a key mechanism of assuring and
improving the quality of American higher education and has served the public well. In the
United States, accreditation is carried out by non-governmental, independent accrediting
organizations that develop standards to ensure that institutions or programs meet the threshold of
academic quality. There are four types of accreditation organizations in the US: (1) regional accreditors for public and private, degree-granting, two- and four-year institutions; (2) national faith-related accreditors for religiously affiliated institutions; (3) national career-related accreditors for career-based, single-purpose institutions; and (4) programmatic accreditors for specific programs and professional schools in areas such as law, medicine, health, engineering, business, and library and information studies (Eaton, 2012a).
Accreditation is a self-regulatory and collegial process that typically involves three steps.
First, institutions or programs assess themselves against a set of accreditation standards in order
to identify their strengths and areas of concern. They are then visited by a team of external peer
reviewers. Lastly, an accrediting agency makes the accreditation judgment and final decision
based on the peer review team's report and recommendations. To facilitate on-going quality
improvement, accreditation is a recurring process. After initial accreditation, institutions and
programs undergo periodic review by accreditors in order to maintain their accredited status
(Brittingham, 2009; Eaton, 2012a). With no direct intervention from federal and state
government in the process, accreditors operate under three essential academic principles: institutional autonomy, academic freedom, and commitment to institutional mission (Eaton, 2011).
Council for Higher Education Accreditation (CHEA)
The Council for Higher Education Accreditation (CHEA) is a private, non-profit
organization that coordinates accreditation activities in the United States. Representing more
than 3,000 colleges and universities and 60 national, regional, and programmatic accreditors,
CHEA aims to “[promote] academic quality through formal accreditation of higher education
accrediting bodies” and “coordinate and work to advance self-regulation through accreditation”
(CHEA, 2013a, p. 2).
One of the key purposes of CHEA is to review accrediting agencies to assure that their
accreditation standards, processes, and operations comply with the quality, improvement, and
accountability expectations established by CHEA (CHEA, 2010a). Every 10 years, each CHEA-
recognized accrediting agency undergoes a review wherein the CHEA Committee on
Recognition examines the agency’s standards and activities (CHEA, 2014a).
With its unique non-governmental, self-regulatory, and voluntary nature, the American
accreditation enterprise, under CHEA’s leadership and coordination, is regarded as cost-effective
and rigorous (Brittingham, 2009; Eaton, 2011, 2012b; Wolff, 2013; Woolston, 2012). Currently,
more than 8,300 degree-granting and non-degree-granting institutions and more than 24,000
programs are accredited by over 90 accreditors recognized either by CHEA or by the United
States Department of Education, or both (CHEA, 2015). Accreditation organizations rely heavily
on volunteers to carry out peer review activities. For example, during the 2010-2011 period,
20,761 volunteers provided in-kind support that amounted to $21,474,551 for the accreditation
process (CHEA, 2014c).
Accreditation as Gatekeeper
Accreditation in the United States also performs a gatekeeping role for higher education’s
access to federal and state funds. Since the 1950s, the federal government has relied on the
accreditation organizations’ assessments of colleges and universities in order to funnel funds for
student loans and research grants (ACE, 2012; Brittingham, 2008, 2009; Eaton, 2010; Ewell,
2008a; Leef & Burris, 2002). Consequently, any college or university in the United States has to
be accredited in order to be eligible for federal funds, student grants and loans, and research or
program moneys. Moreover, in recent years, the federal government's investment in higher education has increased rapidly. For instance, between fiscal years 2003-04 and 2013-14, total
federal aid in financing postsecondary education increased 76% in constant dollars (Baum,
Elliott, & Ma, 2014). In fiscal year 2013-14, the federal government injected $167.38 billion
into higher education in the form of student financial aid (“Federal Higher,” 2014). As a result,
the federal government now plays a more authoritative and influential role in overseeing the
accreditation process and policy making (Brittingham, 2009; Eaton, 2013b; Ewell, 2008b).
A Changing Environment
While the federal investment in higher education has increased, state spending on higher
education has been in decline over the last three decades (Fischer & Stripling, 2014; McLendon,
Hearn, & Mokher, 2009). Since 2008, the average tuition at four-year public colleges increased
27% to $6,809 (Field, 2013a). However, most of the tuition increase was absorbed by students.
In 2000, there were only three states where students paid more for tuition than the amount of
subsidy universities and colleges received from their states. In 2012, the number of states with
higher student tuition contributions jumped to 24 (“Who Pays,” 2014). As students’ share of
tuition expanded, student loan debt edged out both auto loans and credit card debt to become the
largest form of consumer debt next to mortgages (“Borrowing for College,” 2014; “Student
Loan,” 2013). Meanwhile, the default rate on federal student loans rose steadily, and the amount
of federal student loan defaults reached $98.1 billion in the third quarter of FY 2014 (“Federal
Student,” 2014).
Another troubling sign of higher education performance and accountability is the quality
of student learning. The six-year graduation rate for full-time, first-time undergraduate students
at four-year institutions in 2012 remained low at 59% (USDE, 2014a). For two-year colleges,
the three-year completion rate stood at 31% in 2012 (USDE, 2014a), much lower than President
Obama’s goal of 50% completion rate in community colleges by the year 2020 (AACC, 2010).
For underrepresented minorities, the completion rates are significantly lower. At a global level,
the United States’ graduation rate falls behind that of other industrial countries. Among the 29
countries in the Organization for Economic Cooperation and Development, the United States
ranked 22nd in college graduation in 2012 (OECD, 2014). With US higher education losing its
preeminence, there are growing concerns about America’s ability to participate in an
increasingly dynamic and competitive global marketplace.
All of these indicators, including skyrocketing tuition, rising student loan debt and default
rates, increasing federal investment in higher education, and less-than-impressive graduation
rates, coupled with rapid technological development and innovation, prompted calls for further
reform of higher education and for addressing questions about the efficacy of the traditional
accreditation system. Among other things, accreditors are criticized for not being tough enough, for setting the bar of accreditation standards too low, and for not rigorously enforcing existing standards. Additionally, accreditors have been accused of being inattentive to the evidence of student achievement, of conflicts of interest with institutions of higher education, of lacking public transparency, of being parasitic, of being too expensive, and of inhibiting innovation.
As Congress prepares for the reauthorization of the Higher Education Act of 1965,
accreditors are under close scrutiny and criticism (Field, 2013a, 2013b; Kelderman, 2013). For
example, according to Senator Tom Harkin of Iowa, only four institutions out of 7,000 lost their
regional accreditation in 2010 (Field, 2013b). The peer-review system, financed by participating
members and staffed by volunteers from peer institutions, is said to be self-perpetuating, self-
serving, prone to abuse, and ineffective (DeMillo, 2013; Field, 2013b; Gaston, 2014; Leef &
Burris, 2002). Additionally, accreditation decisions are not always available online for public
review. As a result, the credibility of the accreditation enterprise has reached an all-time low.
Amid criticism and debates, President Obama, in 2013, proposed a plan for a revised or
an alternative accreditation system that will allocate federal student aid based on a college’s
ratings on value, affordability, and student outcomes (White House, 2013a). In addition, the
President considered creating a national accreditor for emerging models of higher education,
such as massive open online courses (MOOCs) and other online courses offered by non-college
providers (Fain, 2014). The President’s plan, in a way, echoes the deficiencies and
recommendations identified in the 2006 Spellings’ Commission report on the Future of Higher
Education. Access, affordability, accountability, quality and value, and transparency remain the
centerpiece that the higher education and accreditation community needs to address and resolve.
Focusing on learning outcomes, presenting evidence of student learning, and increasing the
transparency of accreditation will be ever more critical and expected in future accreditation
practices.
Student Learning Outcomes
For the last two decades, student learning outcomes have been increasingly required by
federal, state, and accreditation agencies as a performance barometer of an institution, school, or
academic program. Outcome assessment serves two major purposes: quality improvement and
external accountability (Bresciani, 2006). Internally, outcomes-based assessment enhances an
institution’s teaching and learning; externally, it demonstrates the institution’s effectiveness to
policy makers, accreditors, and the public (Ewell, 2009). By sharing assessment results, colleges
and universities demonstrate accountability and transparency in meeting society's expectations.
The development of learning outcomes assessment in the last three decades is reflected in
several dimensions, including the growth in assessment research and publications, professional
organizations, large-scale assessment initiatives, and the assessment profession. Ewell (2002)
commented that assessment was “an emerging scholarship” (p. 3) in which the core values of
assessment become part of the dominant culture, but the practice of assessment is not fully
institutionalized. However, less than a decade later, learning outcomes assessment grew to be an
important practice for higher education and to have a research focus. Maki (2010), in reviewing
the assessment movement in the last decade, pointed out that accreditors, professional and
disciplinary organizations, and foundations had progressively “produced resources and offered
workshops, institutes, symposia, and conferences on assessment, disseminating knowledge about
assessment practices in undergraduate and graduate education at institution, program, and
department levels" (p. xvii). Hutchings (2010) further attributed the recent development of a more hospitable environment for faculty involvement in assessment to several factors. First,
there is growing attention to teaching and learning in general, as witnessed by the "huge rise in
the number of campus events, conferences, special initiatives, funded projects, journals, online
forums, and multimedia resources shining a light on faculty’s work as teachers” (p. 10). At the
same time, there is growth in the scholarship of teaching and learning with more than 250
campuses involved in the Carnegie Academy for the Scholarship of Teaching and Learning (p.
11). Finally, the assessment movement is invigorated by the emergence of new tools, such as the
Collegiate Learning Assessment, National Survey of Student Engagement, and the Community
College Survey of Student Engagement, and new technologies, including electronic portfolios and
online data management systems for assessment.
Growth in Literature
The proliferation of literature on student learning outcomes in higher education is
reflected in the growing number of publication venues. Two new scholarly journals, Educational
Assessment, Evaluation and Accountability (since 2009) and Research and Practice in
Assessment (since 2008), were added in recent years to the already established body of scholarly
journals in this area. Other scholarly journals in assessment include Assessment and Evaluation
in Higher Education (since 1975), Assessment in Education: Principles, Policy & Practice (since
1994), Assessment Update (since 1989), and Practical Assessment, Research & Evaluation
(since 1999). In addition, the topics covered in these publications are diverse and broad (Hernon
& Schwartz, 2013).
The substantial body of literature on learning outcomes assessment emerged as a key
research area in education, and its growth is evident. Figure 1, drawn from the Google Books
project, illustrates the growth trends of two phrases “student learning assessment,” and “student
learning outcomes assessment” appearing in publications from 1970 to 2008. The Google Books
Ngram charts the yearly count of terms found in 8 million books digitized by Google (Lin et al.,
2012; Michel et al., 2011). The Google project allows researchers to examine the frequency of
words or phrases in publications and track the rise and fall of their usage over time. The term
“student learning assessment” first appeared in publications in 1973 and “student learning
outcomes assessment” in 1987. Since the mid-1990s, the use of these two terms have picked up.
Since 2000, the usage of “student learning assessment” continues to grow exponentially.
Figure 1. Growth of assessment literature, 1970-2008
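The kind of trend analysis behind Figure 1 can be approximated directly from Google's raw Ngram data exports, which are distributed as tab-separated files of phrase counts per year. The sketch below is illustrative only: it assumes the relevant phrase counts have already been downloaded into a local file (the filename ngram_counts.tsv is hypothetical), and it simply filters and plots the yearly match counts for the two phrases over the study's window.

```python
# Minimal sketch of a Figure 1-style analysis, assuming a local download of
# Google Books Ngram data in its raw tab-separated layout
# (ngram, year, match_count, volume_count). The filename is hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

cols = ["ngram", "year", "match_count", "volume_count"]
df = pd.read_csv("ngram_counts.tsv", sep="\t", names=cols)

# Restrict to the two phrases and the 1970-2008 window used in the study.
phrases = ["student learning assessment", "student learning outcomes assessment"]
df = df[df["ngram"].isin(phrases) & df["year"].between(1970, 2008)]

# Plot yearly match counts for each phrase to show the growth trend.
for phrase, grp in df.groupby("ngram"):
    plt.plot(grp["year"], grp["match_count"], label=phrase)

plt.xlabel("Year")
plt.ylabel("Yearly match count")
plt.title("Growth of assessment terminology, 1970-2008")
plt.legend()
plt.show()
```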
Growth in Professional Organizations
Another indicator of the growing importance and interest in student learning outcomes
practice is the founding in the last few years of two national organizations explicitly focused on
assessment (Maki, 2010). The National Institute for Learning Outcomes Assessment (NILOA)
was established in 2008 with the mission to “discover and disseminate ways that academic
programs and institutions can productively use assessment data internally to inform and
strengthen undergraduate education, and externally to communicate with policy makers, families
and other stakeholders” (NILOA, 2012, para. 1). Since its establishment, NILOA has published
10 research reports and 23 occasional papers as well as developed a research library database of
a variety of assessment literature and websites. All of these resources are freely available from
its website. The second organization, the New Leadership Alliance for Student Learning and
Accountability, was co-founded by the Association of American Colleges and Universities
(AAC&U) and the Council for Higher Education Accreditation (CHEA) in 2009 with financial
support from the Teagle Foundation (AAC&U, 2008a). The New Leadership Alliance "leads and supports voluntary and cooperative efforts to move the higher education community towards gathering, reporting on, and using evidence to improve student learning in American undergraduate education" (New Leadership Alliance, 2014, para. 1). Its publications are available from CHEA's website.
Growth in Assessment-Related Initiatives
In the last few years, several large-scale assessment projects were initiated by professional
organizations or foundations (Arum & Roksa, 2014; Jankowski & Provezis, 2011). These
projects aim to develop the framework for institutions to measure the quality of learning. The
AAC&U has two interrelated initiatives. The first initiative, the Liberal Education and America’s
Promise (LEAP), was launched in 2005 and is “a national advocacy, campus action, and research
initiative that champions the importance of a twenty-first century liberal education” (AAC&U,
2014a, para. 1). The key piece of this project is the Essential Learning Outcomes, which consists
of a set of skills, knowledge, and personal qualities important for a liberal education in four
broad areas: knowledge of human cultures and the physical and natural world, intellectual and
practical skills, personal and social responsibilities, and integrative and applied learning
(AAC&U, n.d., 2008b; Hernon & Schwartz, 2013; Maki, 2010).
The second AAC&U initiative, Valid Assessment of Learning in Undergraduate
Education (VALUE), was launched in 2007 as a collaborative project involving faculty and
educational professionals from more than 100 institutions to develop a set of institutional-level
rubrics corresponding with the 16 LEAP Essential Learning Outcomes. The VALUE rubrics are
grouped in three categories: intellectual and practical skills, personal and social responsibility,
and integrative and applied learning. They reflect faculty expectations for essential learning
across the nation regardless of the type of institution, its mission, size or location (AAC&U,
2008b, 2014b). AAC&U also maintains an online assessment resource portal that brings
relevant assessment information together from diverse sources.
The Voluntary System of Accountability (VSA) is another nation-wide, large-scale effort
to assess student learning outcomes. Jointly supported by the Association of Public and Land-
grant Universities and the American Association of State Colleges and Universities, VSA is a voluntary initiative aiming to support participating institutions in measuring education outcomes and improvement efforts as well as demonstrating accountability and stewardship to the general public. VSA's chief product is called College Portrait, a web portal of the profiles of
over 300 colleges for prospective students. In addition to basic school information, such as costs
of attendance, degree offerings, and student experiences, the site also presents student learning
outcomes from participating public four-year institutions (Jankowski et al., 2012; VSA, 2014). The instruments used by VSA-participating institutions to measure and report student learning outcomes are three standardized tests: ACT's Collegiate Assessment of Academic Proficiency, ETS' Proficiency Profile, and the Collegiate Learning Assessment. The three key qualities measured by these tests are critical thinking, writing, and analysis/problem solving (Rhodes, 2012). Other
similar initiatives that publicly report the evidence and activities on college campuses related to
student outcomes include University and College Accountability Network (U-CAN),
Transparency By Design (TbD), Achieving the Dream (ATD), and Voluntary Framework of
Accountability (VFA) (Jankowski & Provezis, 2011).
The third major assessment project, the Degree Qualifications Profile (DQP) by the
Lumina Foundation, offers a qualifications framework that “illustrates clearly what students
should be expected to know and be able to do once they earn their degrees” (Lumina Foundation,
2011, p. 1). The DQP's specific learning outcomes are separated into three academic degree levels, from associate's to bachelor's to master's, regardless of a student's field of study.
Learning outcome frameworks, including DQP, Essential Learning Outcomes, and
VALUE, offer tools for educators to track a student's development and achievement over time as well as to identify gaps in the knowledge, skills, and qualities that employers deem critical in the job market (Hart Research Associates, 2006, 2008, 2010).
Assessment Practices
Assessment of student learning has gradually become a common practice across
campuses. In 2009, the National Institute for Learning Outcomes Assessment (NILOA) conducted its first study of assessment activities at regionally accredited, undergraduate-degree-granting, two- and four-year, public, private, and for-profit institutions. Of the 1,518 responding
institutions, 74% indicated that they had adopted common learning outcomes for all
undergraduate students (Kuh & Ikenberry, 2009). In a subsequent NILOA survey of the same
population in 2014, the proportion increased to 84% (Kuh, Jankowski, Ikenberry, & Kinzie,
2014). The 2008 survey by AAC&U also reported that 78% of its 433 member institutions have
a common set of learning outcomes for all of their undergraduate students and 68% of the
institutions also assess learning outcomes at the departmental level (Hart Research Associates,
2009a). In another survey on the assessment practices at department or program level in 2011,
NILOA reported that more than 80% of 982 responding academic programs established intended
learning outcomes (Ewell, Paulson, & Kinzie, 2011).
Statement of the Problem
Incorporating a student learning outcome assessment program in the accreditation process
has become an acceptable way to increase accountability efforts for consumers, politicians, the government, and the general public (Palmer, 2012). As a result, almost all accreditors now have
standards that require institutions and programs to set clear expectations of student learning
outcomes, to systematically collect and analyze evidence of student achievement, and to inform
the public of how students perform and of what needs improvement.
Programmatic or specialized accreditation, with its emphasis on specific knowledge and
skills that students have to master, pays particular attention to student learning outcomes.
According to a 2002 CHEA survey, 50 of the 59 specialized accreditors set up competency-
based standards (CHEA, 2002, p. 2). Several authors reported that student learning outcomes assessment has been implemented and integrated into the programmatic accreditation process in various
disciplines, including engineering (Fakhry, 2012; Goda & Reynolds, 2010; Kim, Yue, Al-
Mubaid, Hall, & Abeysekera, 2012; Volkwein, Lattuca, Harper, & Domingo, 2007), health care
(Bouldin & Wilkin, 2000; Brosseau & Fredrickson, 2009; Dalrymple & Scherrer, 1998;
Kirschenbaum, Brown, & Kalis, 2006), business (Inamdar & Roldan, 2013; McCoy,
Chamberlain, & Seay, 1994; Pringle & Michel, 2007; Trapnell, 2007), sociology (Weiss, 2002),
and library and information studies (Applegate, 2006; Perrault, Gregory, & Carey, 2002).
Despite greater understanding and acceptance by administrators, accreditors, and faculty,
implementing learning outcomes assessment in higher education is still a complex and time-
consuming process. The extent of successful implementation of learning outcomes assessment
not only depends on the individual institution, but also on how accreditors enforce it in the
accreditation process. The process is further complicated by the emerging models of higher
education offerings and new instructional technology.
At an institutional level, implementing outcomes assessment involves changes in
organizational culture, structure, mission, and priority. It further requires collaborative learning
among administrators, faculty, and supporting staff. Maki (2010) aptly stated that the most
important determining factor is how assessment is viewed and owned at the institution:
How we situate assessment as a process of collective inquiry matters. Driven solely by
external forces, such as legislators or accreditors, assessment probably resides on the
margins of our institutions, eliciting periodic attention. This peripheral location divorces
us from our institutional missions and values and the education practices that translate
our intentions into multiple contexts for learning. Driven by internal curiosity about the
nature of our work, assessment becomes a core institutional process, embedded into
definable process, decisions, structures, practices, forms of dialogue, channels of
communications, and rewards. (p. 29)
Other salient issues of implementing outcomes assessment within an institutional context
are faculty buy-in, institutional commitment and investment, integration into existing practice,
transparency, outcome equity, and managing the tension between quality improvement and
external accountability. These issues are discussed in detail in Chapter Two of this study.
There are also degrees of difference among accreditors in enforcing outcome-based standards and assessment practice. According to Ewell (2008b), accreditors have been slow to rigorously review institutional and program performance against learning outcomes standards. Some accreditors have not fully committed to the practice of
judging the “actual levels of student learning” based on evidence collected (p. 141). Instead,
accreditors still focus more on input, process, and quantitative measures, as “[it] is a lot easier to
measure than the quality of learning outcomes” (p. 142).
Another challenge for specialized accreditors is that there is no across-the-board, single
template or measure for outcomes assessment. Disciplines such as fine arts or performance require a very different approach than other professional schools (ACE, 2012; Gaston, 2014). Disciplines that do not have a connection with professional licensure require a thorough and
clear protocol as well as guidelines on how student learning outcomes should be measured and
evidence collected. Learning outcomes cannot be narrowly defined by, or reduced to, graduation or placement rates.
Some professional associations developed competency statements regarding desired
knowledge, skills, and qualities for practitioners in the field. These statements serve as guidance
for degree program planning or curriculum development. Although these competency statements
complement and supplement the accreditation standards, it can be a complicated task to connect
these competencies with student learning and to measure the outcomes. For instance, the
American Library Association developed the Core Competences of Librarianship statement in
2009, which “define[s] the knowledge to be possessed by all persons graduating from ALA-
accredited master’s programs in library and information studies” (ALA, 2009b). The list
consists of 41 realms of knowledge, skills, and attitudes in eight broad categories. However,
there is no clear connection between the Core Competences of Librarianship statement and
ALA’s accreditation standards or process. There is also no official ALA policy or guidelines on
how these core competences should be applied in program design or curriculum development.
Technology and innovation present another challenge in accreditation and learning
outcomes assessment. The rapid development and growth of information technology has introduced
a slew of new forms of instructional delivery, including distance, online, asynchronous, blended,
hybrid, and self-paced learning (Griffiths, Chingos, Mulhern, & Spies, 2014; Lack, 2013; Wu,
2015). According to Gaston (2014), accreditation and outcomes assessment of distance education are more complicated along four dimensions: (1) the range of discipline programs offered; (2) the variety of program lengths and requirements; (3) the array of credentials that may be sought; and (4) the choice among program providers (p. 169).
The innovation of educational offerings such as massive open online courses (MOOCs), flipped
or hybrid classrooms, Mozilla Open Badges, stackable credentials, and competence-based
credentialing requires different approaches for assessing student learning and assuring education
quality than does the traditional format.
Technology further improves precision in assessing student performance and learning outcomes. For example, computerized adaptive testing adjusts the difficulty of questions based on a student's responses throughout the assessment process (Arum & Roksa, 2014). The Integrated Planning
and Advising Services (IPAS) system provides aggregated learning analytics by mining data
across dispersed campus learning management systems, including education planning/progress
tracking systems, academic advising systems, and early-alert systems. Still in its infancy, IPAS holds substantial promise to offer students, faculty, and staff a comprehensive view of a student's performance throughout the educational journey. By leveraging analytics
technologies, IPAS establishes assessment points based on a student’s learning objectives, tracks
and improves learning progress, identifies early warning signs, and promotes higher rates of
learning outcomes and degree completion (Brooks, 2014; Grajek, 2014, 2015; Johnson, Adams
Becker, Estrada, & Freeman, 2015).
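As a concrete illustration of the adaptive-testing behavior described above, the sketch below implements one toy selection rule: each question is chosen to match the current ability estimate, and the estimate moves up or down with each response. This is an expository simplification under assumed names (Item, run_adaptive_test), not the algorithm of any particular testing product, which would typically rest on item response theory.

```python
# Toy sketch of computerized adaptive testing: the difficulty of the next
# question tracks the test-taker's running ability estimate. Real systems
# use item response theory; this loop only illustrates the "adjust
# difficulty based on responses" behavior described in the text.
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str
    difficulty: float  # higher = harder, on an arbitrary scale

def run_adaptive_test(items, answer_fn, n_questions=5):
    """Administer up to n_questions items, adapting difficulty to performance.

    answer_fn(item) -> bool reports whether the response was correct.
    Returns the final ability estimate.
    """
    ability = 0.0          # start at the middle of the scale
    step = 1.0             # how far the estimate moves per response
    remaining = list(items)
    for _ in range(min(n_questions, len(remaining))):
        # Choose the unused item closest in difficulty to the estimate.
        item = min(remaining, key=lambda it: abs(it.difficulty - ability))
        remaining.remove(item)
        correct = answer_fn(item)
        ability += step if correct else -step
        step *= 0.8        # shrink adjustments as evidence accumulates
    return ability

# Example usage with a scripted test-taker who answers easier items correctly.
bank = [Item(f"Q{i}", d) for i, d in enumerate([-2, -1, 0, 1, 2], start=1)]
estimate = run_adaptive_test(bank, lambda it: it.difficulty <= 0.5)
print(f"Estimated ability: {estimate:.2f}")
```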
Competency-based education, which focuses on what a student knows and can do as demonstrated through individual assessment, is another example of leveraging advances in learning technology, and it requires a different assessment framework. In this emerging education model, students learn
through multiple avenues, and they receive support, advice, and coaching virtually, and earn a
credential by demonstrating their learning through performance-based assessment instead of by
accumulated credit hours (Klein-Collins, 2013). Regional and specialized accreditors
particularly need to know if any of these new education models or offerings is appropriate for
their institution or discipline and how learning can be validated and outcomes assessed.
With the increasing demand for effective student learning and transparency in higher
education, accreditors, federal and state legislators, funding agencies, and other stakeholders will
continue to stress outcomes assessment. Evaluation of student performance is resource-intensive
and requires great efforts from all participants, whereas improvement can take time to manifest.
Commitment to and integration of outcomes assessment by MLIS programs might vary due to institutional capacity and priority, as some MLIS programs might be able to allocate more resources to the process while other programs might have such practices in place due to institutional or legislative requirements. Some programs might have greater buy-in from faculty
members while other programs might have established a culture that places greater emphasis on
such practice. What is unknown is whether this is, in fact, the case, and, if it is, how that practice
and commitment might vary across the MLIS programs.
Purpose of the Study
The American Library Association (ALA) has been the accreditation body for the
librarian profession since 1924. The current ALA standards for accrediting Master of Library
and Information Studies (MLIS) programs, adopted in 2008, shift the accreditation practice from
a prescriptive approach to a more qualitative evaluation with emphasis on the assessment of
student learning outcomes. The purpose of this study was to investigate how the new emphasis
of learning outcomes assessment was integrated in the operation of ALA-accredited MLIS
programs in the United States and Canada. Specifically, the study aimed to address the
following research questions:
1. To what extent is the practice of outcomes assessment implemented at ALA-accredited
MLIS programs?
2. What types of outcomes assessment measures and approaches are employed at these
programs?
3. How have student learning outcomes assessment results been used in programs'
improvement efforts?
4. How do program administrators perceive the value and importance of outcomes assessment?
Significance of the Study
In the last ten years, MLIS education has experienced several major changes in the areas of student enrollment, new program offerings, job market conditions, and the proliferation of online-based programs. The collective impact of these trends not only affects the fundamental operation of library programs but also demands adjustments in accreditation practices.
Student Enrollment
The number of students pursuing an MLIS degree has fluctuated in recent years. Figure 2
shows the enrollment trend of ALA-accredited master’s degree programs from 1979 to 2011, the
latest statistical report available from the Association for Library and Information Science
Education. The line chart displays a steady growth from 1999 through 2006, followed by a roller
coaster pattern in recent years. According to ALA's Office for Accreditation, enrollment dropped in both 2012 and 2013, and the 2013 figure fell below 17,000 for the first time since 2004.
Figure 2. ALA-Accredited Program Enrollment, 1979-2011.
(Adapted from Library and Information Science Education Statistical Report 2012 by D. P.
Wallace, 2012, p. 11.)
New Entrants
On the other hand, there was an increase in new MLIS programs during the last decade.
Since 2004, the number of ALA-accredited programs increased from 62 to 63, while one
program lost its accreditation status. In addition, there are three new programs in the pipeline to
be accredited within the next two years.
Job Market
The trend of declining student enrollment, coupled with the expanding capacity of
available MLIS programs, is further complicated by the profession’s not-so-positive job outlook.
According to the 2014 U.S. Bureau of Labor Statistics, employment of librarians is projected to
grow only 7% between 2012 and 2022, slower than the 11% average growth rate of other
occupations (United States Department of Labor, 2014). The annual placement statistics,
compiled by Library Journal, a trade magazine, offer another unpleasant picture. Of the 2,023
MLIS graduates in 2013, only 57% reported that they had a permanent professional position a
year after graduation. Five percent were in temporary professional positions, 9% in non-
professional positions, and 10% in positions outside of librarianship altogether (Maata, 2014).
In another report, librarianship, with its low pay and below-average growth prospects,
is ranked as the worst master’s degree for jobs (Smith, 2012). This report was further echoed by
an editorial in the Library Journal by a recent MLIS graduate about the value of an MLIS degree
(Kelly, 2013). The author strongly questioned the return on investment, the low salary, and the justification for requiring an MLIS degree for librarian positions, stirring overwhelming response and discussion within the profession.
Online Programs
The MLIS market is not only more crowded, but also more competitive due to the
widespread trend of online-only MLIS programs. From 2010 to 2014, the number of ALA-COA
online-only programs increased 56% from 16 to 25 programs. An online-only program allows
library schools to expand geographically to regions and states with no library programs and to
expand internationally as well. Furthermore, library schools can differentiate their online
programs by offering unique or specialized concentration or courses not available from their
competitors. For example, the online-only program at the University of Southern California is the
first MLIS program in affiliation with a business school (Schwartz, 2013).
CHEA’s Recognition of ALA Committee of Accreditation
As the national coordinating body for institutional and programmatic accreditation,
CHEA reviews accrediting agencies to assure that their accreditation standards, processes, and
operations comply with CHEA quality, improvement, and accountability expectations (CHEA,
2010b). Every 10 years, each CHEA-recognized accrediting agency undergoes a review process
in which the CHEA Committee on Recognition examines the agency’s standards, practices, and
activities (CHEA, 2014a).
ALA was first recognized by CHEA as an accrediting body in 2001. In 2010, ALA applied for reaffirmation of recognition and underwent CHEA's comprehensive review process in 2011.
In January 2012, CHEA informed ALA that it deferred the decision and asked ALA to provide
“more sufficient evidence that ALA has and implements a specific accreditation standard or
policy that requires institutions or programs routinely to provide reliable information to the
public on their performance, including student achievement as determined by the institution or
program” (CHEA, 2013b, p. 2). Specifically, CHEA requested that ALA provide substantial
evidence on enforcing the 2006 CHEA Standard 12B.1, which states:
To be recognized, the accrediting organization provides evidence that it has implemented:
accreditation standards or policies that require institutions or programs routinely to
provide reliable information to the public on their performance, including student
achievement as determined by the institution or program. (CHEA, 2006b)
ALA subsequently supplied CHEA with additional information and explanation. In
September 2012, CHEA issued a second statement, reiterating its decision to defer recognition
and requesting ALA to continue its efforts in improving its procedures and policies to
demonstrate its public accountability. After ALA provided additional examples, CHEA finally reaffirmed its recognition of ALA-COA in January 2013 (CHEA, 2013b).
ALA-COA is among the 48 programmatic accrediting organizations recognized by CHEA
(CHEA, 2014b).
Importance of the Study
With concerns raised from the field about ALA accountability and accreditation
practices, O’Connor and Mulvaney (2013) urged ALA and library programs to collect more
outcome-based data and to make these measures, program self-studies, and peer review reports
freely available to the general public. Specifically, they compiled a list of 10 performance
indicators that show a program’s ability to meet ALA standards and can be used for comparison
among its peers. They concluded that “[c]ollecting and evaluating more data about ALA-
accredited programs library schools could provide more credibility for the MLS and its
recipients” (p. 40).
Given the changing landscape of MLIS programs and the firmer accountability requirements set by CHEA, this study will be useful to ALA-COA and MLIS program directors because it sheds light on the current practice of assessing student learning. Furthermore, it provides information on the common practices of assessing student learning and the reasons for them. Finally, it offers insight into the value of ALA accreditation by providing answers regarding why programs do not apply for ALA accreditation.
Definitions
Accreditation: A quality review process conducted by professional peers whereby an
institution or program is evaluated to determine whether it has a minimum level of adequate
quality.
Accreditation Liaison Officer (ALO): A designated institutional representative who is
chiefly responsible for coordinating the accreditation effort with the accrediting agency.
American Library Association (ALA): Founded in 1876, ALA is the oldest and largest
library association in the world. Its mission is to provide leadership for the development,
promotion, and improvement of library and information services and the profession of
librarianship in order to enhance learning and ensure access to information for all.
American Library Association, Committee on Accreditation (ALA-COA): A standing
committee of the American Library Association responsible for the implementation of the
accreditation of library and information studies programs. The COA develops and formulates
standards of education for library and information studies and policies and procedures for ALA
accreditation.
Assessment: Identification, collection, and preparation of data to evaluate the attainment
of student learning outcomes. Effective assessment uses relevant direct, indirect, qualitative and
quantitative measures appropriate to the outcome being measured.
Benefits of accreditation: The advantages an institution gains by having accreditation.
Cost of accreditation: The institutional commitment in terms of budgetary spending
(direct costs) and time contributed (indirect costs) by the various campus constituencies to the
accreditation effort.
Council for Higher Education Accreditation (CHEA): The national, private, nonprofit
body coordinating advocacy efforts for accreditation and performing the function of recognizing
accrediting entities; CHEA reviews the effectiveness of accrediting bodies and primarily assures
the academic quality and improvement within institutions.
External Review Panel (ERP): A group of library and information studies educators and
practitioners appointed by the ALA-COA through the Office for Accreditation to visit a program
and verify information in the Program Presentation. Panelists are also vetted by the program.
Gatekeeper: The role of accreditation with respect to federal funding; in order for an
institution or program to qualify for the receipt of federal funds it must be accredited by a
recognized accrediting organization; thus accreditation serves as a gatekeeper for those funds.
Institutional accreditation: Recognition of a minimum level of adequate quality at the
institutional level and without respect to individual programs of study.
Learning outcomes: see Student learning outcomes.
National accreditation: Quality review at either the institutional level or the
programmatic level conducted on a national scope rather than on a regional or state scope.
Peer Review: The concept governing accreditation whereby the actual review of the self-
study is conducted by knowledgeable professionals from like institutions in order to root the
decision in legitimacy and credibility.
Programmatic accreditation (or specialized accreditation): Recognition of a minimum
level of adequate quality at the level of the individual program of study without respect to the
rest of the institution as a whole.
Regional accreditation: Quality review at the institutional level conducted on a regional
scope rather than on a national or state scope.
Self-regulation: A concept whereby entities agree to govern themselves and establish
mechanisms and processes to do so; accreditation exemplifies the concept of self-regulation.
Self-study: A comprehensive review usually lasting approximately a year and a half to
two years resulting in a culminating document in which an institution or program considers every
aspect of its operation in order to determine whether it has adequate resources at all levels to
fulfill its clearly defined mission.
Site visit: Generally, a two to three day period in which knowledgeable professionals
from like institutions visit an institution after reviewing its self-study to ascertain the accuracy of
the self-study and identify any concerns; subsequent to the site visit, the visiting team makes an
accreditation recommendation to the accrediting body after which the accrediting body
announces a formal decision.
Specialized accreditation (or programmatic accreditation): Recognition of a minimum
level of adequate quality at the level of the individual program of study without respect to the
rest of the institution as a whole.
Student learning outcomes: Statements that describe the knowledge, skills, and behaviors that students acquire as they progress through a program.
U.S. Department of Education: The arm of the federal government concerned with
education quality and access nationally.
Voluntary association: An organization in which membership is optional; accrediting bodies began as voluntary associations and, strictly speaking, continue to be so classified; however, because eligibility for federal funding is tied to accreditation, many professionals question whether accreditation is truly voluntary.
CHAPTER TWO: LITERATURE REVIEW
This chapter surveys the body of literature written on three broad areas germane to the
scope of study:
1. The development of accreditation in the United States and its future direction;
2. The effects of learning outcome assessments in the United States, including the
development of the outcomes assessment movement in higher education, its framework,
benefits, and challenges;
3. The accreditation of library education in the United States, covering a brief history of
library education and its accreditation system, the current library accreditation
standards and their emphasis on outcomes assessment, and assessment practices in library education.
The Development of Accreditation and the Effect of Outcomes Assessment and Accreditation
sections of this chapter were authored by students in the thematic group on accreditation and
outcomes assessment under the direction of Dr. Robert G. Keim, chair of the thematic group.
The authors for each segment of these two sections are Nathan Barlow and Rufus Cayetano for History of Accreditation, Jill Richardson for Future Direction of Accreditation, and Benedict Dimapindan and Winyuan Shih for Effect of Outcomes Assessment and Accreditation. The
purpose of this literature review is to contextualize the current state of accreditation and
assessment practices in the field of library and information studies.
History of Accreditation
Early Institutional Accreditation
Accreditation has a long lineage among the universities and colleges of the United States, dating to the self-initiated external review of Harvard in 1642. This external review, done
only six years after Harvard’s founding, was intended to ascertain the rigor of its courses through review by peers from universities in Great Britain and Europe (Brittingham, 2009; Davenport, 2000). This review was not only the first example of peer review in America; it also highlights the need for self- and peer regulation in a U.S. educational system that lacked federal governmental regulation. This absence of federal intervention in the evaluation of educational institutions is a main reason accreditation in the U.S. developed as it did (Brittingham, 2009).
While the federal government does not directly accredit educational institutions, the first accrediting body arose through a state government. In 1784, the New York Board of Regents was established as the first regionally organized accrediting organization. The Board was set up like a corporate office, with the educational institutions as its franchisees, and it mandated standards that each college or university had to meet in order to receive state financial aid (Blauch, 1959).
Not only did Harvard pioneer accreditation in the U.S. with an early external review of its own courses, but its president also initiated a national movement in 1892 when he organized and chaired the Committee of Ten, an alliance of educators (mostly college and university presidents) seeking standardization of educational philosophies and practices in the U.S. through a system of peer approval (Davis, 1945; Shaw, 1993).
Around this same time, various associations and foundations undertook accreditation reviews of educational institutions in the U.S. based on their own standards. Associations such as the American Association of University Women, the Carnegie Foundation, and the Association of American Universities would, for a variety of reasons and clienteles (e.g., gender equality, professorial benefits), evaluate institutions and generate lists of approved or accredited schools. These associations were responding to their constituents’ desire for accurate information regarding the validity and efficacy of the different colleges and universities (Orlans, 1974; Shaw, 1993).
Regional Accreditation, 1885 to 1920
When these associations declined to broaden or continue their accrediting practices,
individual institutions united to form regional accrediting bodies to assess secondary schools’
adequacy in preparing students for college (Brittingham, 2009). Colleges were then measured
by the quality of the students they admitted, judged against secondary school standards set by the accrediting agency. The regional accrediting agencies also began to focus on
creating a list of colleges that were good destinations for incoming freshmen. If an institution
was a member of the regional accreditation agency, it was considered an accredited college.
More precisely, the institutions that belonged to an accrediting agency were considered colleges
while those that did not belong were not (Blauch, 1959; Davis, 1932; Ewell, 2008b; Orlans,
1974; Shaw, 1993).
Regional accrediting bodies were formed in the following years: New England
Association of Schools and Colleges (NEASC) in 1885, the Middle States Association of
Colleges and Secondary Schools (MSCSS and Middle States Commission on Higher Education
[MSCHE]) in 1887, the North Central Association of Colleges and Schools (NCA) and the
Southern Association of Colleges and Schools (SACS) in 1895, the Northwest Commission on
Colleges and Universities (NWCCU) in 1917, and, finally, the Western Association of Schools
and Colleges (WASC) in 1924 (Brittingham, 2009).
Regional accrediting associations created instruments for establishing unity and standardization with regard to entrance requirements and college standards (Blauch, 1959). For example, in 1901, MSCHE and MSCSS created the College Entrance Examination Board to standardize college entrance requirements. The NCA also published its first set of standards for its higher education members in 1909 (Brittingham, 2009).
Although there were functioning regional accreditation bodies in most of the states, in 1910 the Department of Education created its own national list of recognized (accredited) colleges. Because of public pressure to keep the federal government from controlling higher education directly, President Taft blocked publication of the list, and the Department of Education discontinued its active pursuit of accrediting schools. Instead, it reestablished itself as a resource for the regional accrediting bodies with regard to data collection and comparison (Blauch, 1959; Ewell, 2008b; Orlans, 1974).
Regional Accreditation, 1920 to 1950
With the regional accrediting bodies in place, ideas of what constituted an accredited college became more diverse (e.g., vocational colleges, community colleges). Out of the greater differences among schools in type and institutional purpose arose a need for more qualitative measures and a focus on high rather than minimum outcomes (Brittingham, 2009). Once qualitative standards became the norm, school visits by regional accreditors became necessary when a school demonstrated struggles. The regional organizations began to measure success (and, therefore, grant accredited status) based on whether an institution met the standards outlined in its own mission, rather than on a predetermined set of criteria (Brittingham, 2009). In other words, if a school did what it said it would do, it could be accredited. The accreditation process later became a requirement for all member institutions.
Self- and peer-reviews, which became a standard part of the accreditation process, were
undertaken by volunteers from the member institutions (Ewell, 2008b).
Accrediting bodies began to be challenged as to their legitimacy in classifying colleges as accredited or not. The Langer Case of 1938 established the standing of accrediting bodies in the United States: Governor William Langer of North Dakota lost a legal challenge to the NCA’s denial of accreditation to North Dakota Agricultural College. This ruling carried over to other legal cases in which the decision that accreditation was a legitimate and voluntary process was upheld (Fuller & Lugg, 2012; Orlans, 1974).
In addition to the regional accrediting bodies, there arose other associations meant to
regulate the accrediting agencies themselves. The Joint Commission on Accrediting was formed
in 1938 to validate legitimate accrediting agencies and discredit questionable or redundant ones.
After some changes to the mission and the membership of the Joint Commission on
Accreditation, the name was changed to the National Commission on Accrediting (Blauch,
1959).
Regional Accreditation, 1950 to 1985
The period from 1950 to 1985 has been called the golden age of higher education and was marked by increasing federal regulation. During this period, key developments in the accreditation
process included the standardization of the self-study, the execution of the site visit by
colleagues from peer institutions, and the regular, cyclical, visitation of institutions (Woolston,
2012). With the passage of the Veterans’ Readjustment Assistance Act of 1952, the U.S.
Commissioner of Education was required to publish a list of recognized accreditation
associations (Bloland, 2001). This act provided for education benefits to veterans of the Korean
War directly rather than to the educational institution they attended, increasing the importance of
accreditation as a mechanism for recognition of legitimacy (Woolston, 2012).
A more “pivotal event” occurred in 1958 with the National Defense Education Act’s
(NDEA) allocation of funding for NDEA fellowships and college loans (Weissburg, 2008).
NDEA limited participating institutions to those that were accredited (Gaston, 2014). In 1963,
the U.S. Congress passed the Higher Education Facilities Act requiring that higher education
institutions receiving federal funds through enrolled students be accredited.
Arguably the most striking expansion in accreditation’s mission coincided with the passage of the Higher Education Act (HEA) in 1965 (Gaston, 2014). Title IV of this legislation expressed the intent of Congress to use federal funding to broaden access to higher education. According to Gaston (2014), having committed to this much larger role in encouraging college attendance, the federal government found it necessary to affirm that institutions benefitting from such funds were worthy of them. Around the same time, the National Committee of Regional Accrediting Agencies (NCRAA) became the Federation of Regional Accrediting Commissions of Higher Education (FRACHE).
The Higher Education Act, signed into law in 1965, strengthened the resources available to higher education institutions and provided financial assistance to students enrolled at those institutions. The law was especially important to accreditation because it forced the U.S. Department of Education (USDE) to determine and list a much larger number of institutions eligible for federal programs (Trivett, 1976). In 1967, the North Central Association (NCA) revoked Parsons College’s accreditation, citing “administrative weakness” and a $14 million debt. The college appealed, but the courts denied the appeal on the basis that the regional accrediting associations were voluntary bodies (Woolston, 2012).
The need to deal with a much larger number of potentially eligible institutions led the U.S. Commissioner of Education to create the Accreditation and Institutional Eligibility Staff (AIES), with an advisory committee, within the Bureau of Higher Education. The purpose of the AIES, which was created in 1968, was to administer the federal recognition and review process involving the accrediting agencies (Dickey & Miller, 1972). In 1975, the National Commission on Accrediting and FRACHE merged to form a new organization called the Council on Postsecondary Accreditation (COPA). The newly created national accreditation association encompassed an astonishing array of types of postsecondary education, including community colleges, liberal arts colleges, proprietary schools, graduate research programs, bible colleges, trade and technical schools, and home-study programs (Chambers, 1983).
Regional Accreditation, 1985 to Present
Since 1985, accountability has been of paramount importance in the field of education. According to Woolston (2012), key developments in the accreditation process during this period include rising costs in higher education, which resulted in high student loan default rates, and mounting criticism of accreditation for a number of apparent shortcomings, most notably a lack of demonstrable student learning outcomes. At the same time, accreditation has been increasingly and formally defended by various champions of the practice. Congressional hostility reached a crisis stage in 1992 when Congress, in the midst of debates on the reauthorization of the Higher Education Act, threatened to bring the role of the accrediting agencies as gatekeepers for financial aid to a close.
During the early 1990s, the federal government grew increasingly intrusive in matters directly affecting accrediting agencies (Bloland, 2001). As a direct consequence, Subpart 1 of Part H of the Higher Education Act amendments involved an increased role for the states in determining the eligibility of institutions to participate in the student financial aid programs of the aforementioned Title IV. For every state, this meant the creation of a State Postsecondary Review Entity (SPRE) that would review institutions the USDE secretary identified as having triggered such review criteria as high default rates on student loans (Bloland, 2001). The SPREs were short-lived and were abandoned in 1994, largely because of a lack of adequate funding. The 1992 reauthorization also created the National Advisory Committee on Institutional Quality and Integrity (NACIQI) to replace the AIES.
For several years, the regional accrediting agencies entertained the idea of pulling out of
COPA and forming their own national association. Based on dissatisfaction with the
organization, regional accrediting agencies proposed a resolution to terminate COPA by the end
of 1993, and following a successful vote on the resolution, COPA was effectively terminated
(Bloland, 2001). A special committee, generated by the COPA plan of dissolution of April 1993,
created the Commission on Recognition of Postsecondary Accreditation (CORPA) to continue
the work of recognizing accrediting agencies (Bloland, 2001). However, CORPA was formed
primarily as an interim organization to continue national recognition of accreditation. In 1995,
national leaders in accreditation formed the National Policy Board (NPB) to shape the creation
and legitimation of a national organization overseeing accreditation. National leaders in
accreditation were adamant that the new organization should reflect higher education’s needs
rather than those of postsecondary education. Following numerous intensive meetings, a new
organization named the Council for Higher Education Accreditation (CHEA) was formed in
1996 as the official successor to CORPA (Bloland, 2001).
In 2006, the Spellings Commission “on the future of higher education” delivered the verdict that accreditation “has significant shortcomings” (USDE, 2006, p. 7) and accused
accreditation of being both ineffective and a barrier to innovation. Since the release of the Spellings Commission report, the next significant event on the subject of accreditation came
during President Barack Obama’s State of the Union Address on February 12, 2013. In
conjunction with the president's address, the White House released a nine-page document titled
The President's Plan for a Strong Middle Class and a Strong America. The document stated that the President would call on Congress to consider value, affordability, and student outcomes in making determinations about which colleges and universities receive access to federal student aid, either by incorporating measures of value and affordability into the existing accreditation system, or by establishing a new, alternative system of accreditation that would provide pathways for higher education models and colleges to receive federal student aid based on performance and results (White House, 2013b).
Future Direction of Accreditation
Accreditation in higher education is at a crossroads. Since the 2006 Spellings Commission report called for more government oversight of accreditation to ensure public accountability, the government and critics have scrutinized a system that had been nongovernmental and autonomous for several decades (Eaton, 2012b). The U.S. Congress is
currently in the process of reauthorizing the Higher Education Act (HEA), and it is expected to
address the accreditation issue. All the while, CHEA and other accreditation supporters attempt
to convince Congress, the academy, and the public at large of accreditation’s current and future
relevance in quality higher education.
In anticipation of the HEA’s reauthorization, NACIQI had the charge of providing the
U.S. Secretary of Education with recommendations on recognition, accreditation, and student aid
eligibility (NACIQI, 2012). The committee advised that accrediting bodies should continue their
gatekeeping role for student aid eligibility, but also recommended some changes to the
accreditation process. These changes included more communication and collaboration among
accreditors, states, and the federal government to avoid overlapping responsibilities; moving
away from regional accreditation and toward sector or mission-focused accreditation; creating an
expedited review process and developing more gradations in accreditation decisions; developing
more cost-effective data collection and consistent definitions and metrics; and making
accreditation reports publicly available (NACIQI, 2012).
However, two members of the committee did not agree with the recommendations and submitted a motion to include the Alternative to the NACIQI Draft Final Report, which suggested eliminating accreditors’ gatekeeping role; creating a simple, cost-effective system of quality assurance that would revoke financial aid from campuses that are not financially secure; eliminating the current accreditation process altogether as a means of reducing institutional expenditures; breaking the regional accreditation monopoly; and developing a user-friendly, expedited alternative for the re-accreditation process (NACIQI, 2012). The motion did not pass, however, and the alternative view was not included in NACIQI’s final report. As a result, Hank Brown, the former U.S. Senator from Colorado and founding member of the American Council of Trustees and Alumni, drafted a report seeking accreditation reform and reiterating the alternatives suggested above, arguing that accreditation had “failed to protect consumers and taxpayers” (Brown, 2013, p. 1).
The same year the final NACIQI report was released, the American Council on Education’s (ACE) Task Force on Accreditation released its own report identifying challenges and potential solutions for accreditation (ACE, 2012). The task force made six recommendations: 1. increase transparency and communication; 2. increase the focus on student success and institutional quality; 3. take immediate and noticeable action against failing institutions; 4. adopt a more expedited process for institutions with a history of good performance; 5. create common definitions and a more collaborative process among accreditors; and 6. increase cost-effectiveness (ACE, 2012). The task force also suggested that higher education “address perceived deficiencies decisively and effectively, not defensively or reluctantly” (ACE, 2012, p. 8).
President Obama also recently spoke out regarding accountability and accreditation in
higher education. In his 2013 State of the Union address, Obama asked Congress to “change the
Higher Education Act, so that affordability and value are included in determining which colleges
receive certain types of federal aid” (White House, 2013c, para. 39). The address was followed
by The President’s Plan for a Strong Middle Class and a Strong America, which suggested
achieving the above change to the HEA “either by incorporating measures of value and
affordability into the existing accreditation system; or by establishing a new, alternative system
of accreditation that would provide pathways for higher education models and colleges to receive
federal student aid based on performance and results” (White House, 2013b, p. 5). Furthermore,
in August 2013, President Obama called for a performance-based rating system that would
connect institutional performance with financial aid distributions (White House, 2013a). Because accreditation was not specifically mentioned in the plan, it is not clear whether the intention is to replace accreditation with this new rating system or to utilize both systems simultaneously (Eaton, 2013b).
The President’s actions over the last year are of concern to CHEA and other supporters of
nongovernmental accreditation. Calling it the “most fundamental challenge that accreditation
has confronted to date,” Eaton (2012b) expressed concern over the standardized and increasingly
regulatory nature of the federal government’s influence on accreditation. Astin (2014) also
stated that, if the U.S. government creates its own process for quality control, the U.S. higher
education system is “in for big trouble” (para. 9), like the government-controlled Chinese higher
education system.
Though many agree there will be an inevitable increase in federal oversight after the
reauthorization of the HEA, supporters of the accreditation process have offered recommendations for minimizing the effect. Gaston (2014) provides six categories of suggestions for implementation: consensus and alignment, credibility, efficiency, agility and creativity, decisiveness and transparency, and a shared vision. The categories maintain the aspects of accreditation that have worked well and are sought around the world – nongovernmental status and peer review – while addressing the areas receiving the most criticism. Eaton (2013a) adds that accreditors and
institutions must push for streamlining the federal review of accreditors as a means to reduce
federal oversight, better communicate the accomplishments of accreditation and how quality
peer-review benefits students, and anticipate any further actions the federal government may
take.
While the HEA undergoes the process of reauthorization, the future of accreditation remains uncertain. There have been many reports and opinion pieces on how accreditation should change or remain the same, many of them with overlapping themes. Only time will tell whether the accreditors, the states, and the federal government will reach an acceptable and functional common ground that ensures the quality of U.S. higher education into the future.
Effect of Outcomes Assessment and Accreditation
This section of the literature review examines the effects of accreditation, focusing
primarily on the assessment of student learning outcomes. Specifically, outcome assessment
serves two main purposes — quality improvement and external accountability (Bresciani, 2006;
Ewell, 2009). Over the years, institutions of higher education have made considerable strides
regard to learning assessment practices and implementation. Yet, despite such progress, key
challenges remain.
Trend toward Learning Assessment
The shift within higher education accreditation toward greater accountability and student
learning assessment began in the mid-1980s (Beno, 2004; Ewell, 2001; Shavelson, 2007;
Wergin, 2005, 2012). During that time, higher education was portrayed in the media as “costly,
inefficient, and insufficiently responsive to its public” (Bloland, 2001, p. 34). The impetus
behind the public’s concern stemmed from two sources: the perception that students were underperforming academically and the demands of the business sector (Ewell, 2001). Employers
and business leaders expressed their need for college graduates who could demonstrate high
levels of literacy, problem-solving ability, and collaborative skills in order to support the emerging knowledge economy of the 21st century. In response to these concerns, institutions of
higher education started emphasizing student learning outcomes as the main process of
evaluating effectiveness (Beno, 2004).
Framework for Learning Assessment
Accreditation is widely considered to be a significant driving force behind advances in
both student learning and outcomes assessment. According to Rhodes (2012), in recent years,
accreditation contributed to the proliferation of assessment practices, lexicon, and even products
such as e-portfolios, which are used to show evidence of student learning.
Kuh and Ikenberry (2009) surveyed provosts or chief academic officers at all regionally
accredited institutions granting undergraduate degrees and found that student assessment was
driven more by accreditation than by other external pressures such as government or employers.
Another major finding was that most institutions planned to continue their assessment of student
learning outcomes despite budgetary constraints. They also found that gaining faculty support
and involvement remained a major challenge — an issue examined in more depth later in this
section.
Additionally, college and university faculty and student affairs practitioners stressed how students must now acquire proficiency in a wide scope of learning outcomes to adequately address the unique and complex challenges of today’s ever-changing, economically competitive, and increasingly globalized society. In 2007, AAC&U published a report focusing on the aims and outcomes of a 21st-century collegiate education, with data gathered through surveys, focus
groups, and discussions with postsecondary faculty. Emerging from the report were four
“essential learning outcomes.” These were 1. knowledge of human cultures and the physical and
natural world through study in science and mathematics, social sciences, humanities, history,
languages, and the arts; 2. intellectual and practical skills, including inquiry and analysis, critical
and creative thinking, written and oral communication, quantitative skills, information literacy,
and teamwork and problem-solving abilities; 3. personal and social responsibility, including
civic knowledge and engagement, multicultural competence, ethics, and foundations and skills
for lifelong learning; and 4. integrative learning, including synthesis and advanced
understanding across general and specialized studies (AAC&U, 2007, p. 12). With the adoption
of such frameworks or similar tools at institutions, accreditors can be well-positioned to connect
teaching and learning and, as a result, better engage faculty to improve student learning
outcomes (Rhodes, 2012).
Benefits of Accreditation on Learning
Accreditation and student performance assessment have been the focus of various
empirical studies, with several pointing to benefits of the accreditation process. Ruppert (1994)
conducted case studies in 10 states – Colorado, Florida, Illinois, Kentucky, New York, South
Carolina, Tennessee, Texas, Virginia, and Wisconsin – to evaluate different accountability
programs based on student performance indicators. The report concluded that “quality indicators
appear most useful if integrated in a planning process designed to coordinate institutional efforts
to attain state priorities” (p. 155).
Furthermore, research also demonstrated how accreditation helps shape outcomes inside
college classrooms. Specifically, Cabrera, Colbeck, and Terenzini (2001) investigated classroom
practices and their relationship with the learning gains in professional competencies among
undergraduate engineering students. The study involved 1,250 students from seven universities
and found that the expectations of accrediting agencies may encourage more widespread use of
effective instructional practices by faculty.
A study by Volkwein, Lattuca, Harper, and Domingo (2007) measured changes in student
outcomes in engineering programs following the implementation of new accreditation standards
by the Accreditation Board for Engineering and Technology (ABET). Based on the data
collected from a national sample of engineering programs, the authors noted that the new
accreditation standards were indeed a catalyst for change, finding evidence that linked the
accreditation changes to improvements in undergraduate education. Students experienced
significant gains in the application of knowledge of mathematics, science, and engineering;
usage of modern engineering tools; use of experimental skills to analyze and interpret data;
designing solutions to engineering problems; teamwork and group work; effective
communication; understanding of professional and ethical obligations; understanding of the
societal and global context of engineering solutions; and recognition of the need for life-long
learning. The authors also found accreditation also prompted faculty to engage in professional
development-related activity. Thus, the study showed the effectiveness of accreditation as a
mechanism for quality assurance (Volkwein et al., 2007).
Organizational Effects of Accreditation
Beyond student learning outcomes, accreditation also has considerable effects on an
organizational level. Procopio (2010) noted that the process of acquiring accreditation
influences perceptions of organizational culture. According to the study, administrators are more
satisfied than are staff members – and especially more so than faculty – when rating
organizational climate, information flow, involvement in decisions, and utility of meetings.
“These findings suggest institutional role is an important variable to consider in any effort to
affect organizational culture through accreditation buy-in” (p. 10). Similarly, a study by
Wiedman (1992) described how the two-year process of reaffirming accreditation at a public university drove changes in institutional culture.
Meanwhile, Brittingham (2009) explains that accreditation offers organizational-level
benefits for colleges and universities. The commonly acknowledged benefits include students’
access to federal financial aid funding, legitimacy in the eyes of the public, consideration for
foundation grants and employer tuition credits, positive reflection among peers, and government
accountability. However, Brittingham (2009) points out that there are “not often recognized”
benefits as well (p. 18). For example, accreditation is cost-effective, particularly when
contrasting the number of personnel needed to carry out quality assurance procedures in the U.S. with the number required internationally, where quality assurance is far more regulated. Second, “participation in accreditation is
good professional development” because those who lead a self-study come to learn about their
institution with more breadth and depth (p. 19). Third, self-regulation by institutions – if done
properly – is a better system than government regulation. Lastly, “regional accreditation gathers
a highly diverse set of institutions under a single tent, providing conditions that support student
mobility for purposes of transfer and seeking a higher degree” (p. 19).
Future Assessment Recommendations
Many higher education institutions have developed plans and strategies to measure student
learning outcomes, and such assessments are already in use to improve institutional quality
(Beno, 2004). For future actions, CHEA, in its 2012 Final Report, recommends further
enhancing the commitment to public accountability:
Working with the academic and accreditation communities, explore the adoption and
implementation of a small set of voluntary institutional performance indicators based on
mission that can be used to signal acceptable academic effectiveness and to inform
students and the public of the value and effectiveness of accreditation and higher
education. Such indicators would be determined by individual colleges and universities,
not government. (p. 7)
In addition, Brittingham (2012) outlines three developments that have the capacity to
influence accreditation and increase its ability to improve educational effectiveness. First,
accreditation is growing more focused on data and evidence, which strengthens its value as a
means of quality assurance and quality improvement. Second, “technology and open-access
education are changing our understanding of higher education” (p. 65). These innovations –
such as massive open online courses – hold enormous potential to open up higher education
resources. As a result, this trend will heighten the focus on student learning outcomes. Third,
“with an increased focus on accountability – quality assurance – accreditation is challenged to
keep, and indeed strengthen, its focus on institutional and programmatic improvement” (p. 68).
This becomes particularly important amid the current period of rapid change.
Challenges to Student Learning Outcomes
Assessment is critical to the future of higher education. As noted earlier, outcome
assessment serves two main purposes: quality improvement and external accountability
(Bresciani, 2006; Ewell, 2009). The practice of assessing learning outcomes has been widely adopted by colleges and universities since its introduction in the mid-1980s, and assessment is
also a requirement in the accreditation process. However, outcomes assessment in higher
education is still a work in progress, and a fair number of challenges remain (Kuh & Ewell,
2010).
Organizational learning challenges. First, there is the issue of organizational culture and learning. Assessment, as clearly stated by the American Association for Higher Education, “is not an end in itself but a vehicle for educational improvement” (AAHE, 1992, p. 1). The process of assessment is not a means unto its own end; rather, it provides an opportunity for continuous organizational learning and improvement (Maki, 2010). Too often, institutions assemble and report mountainous sets of data just to comply with federal or state accountability policy or an accreditation agency’s requirements. Once the report is submitted, the evaluation team has left, and the accreditation is confirmed, there are few incentives to act on the findings for further improvement. The root causes of identified deficiencies are rarely followed up on, and real solutions are never sought (Ewell, 2005; Fulcher, Good, Coleman, & Smith, 2014; Wolff, 2005).
Another concern pointed out by Ewell (2005) is that accreditation agencies tend to emphasize the process, rather than the outcomes, once the assessment infrastructure is established. The accreditors are satisfied with formal statements and goals of learning outcomes but do not inquire further into how, how appropriately, and to what degree these learning goals are applied in the teaching and learning process. As a result, the process tends to be single-loop learning, in which changes remain at a surface level, rather than double-loop learning, in which changes are incorporated into practices, beliefs, and norms (Bensimon, 2005).
Lack of faculty buy-in. A lack of faculty buy-in and participation is another hurdle in the adoption of assessment practice (Banta & Pike, 2012; Hutchings, 2010; Kuh & Ewell, 2010). In a 2009 survey by the National Institute for Learning Outcomes Assessment, two-thirds of all 2,809 surveyed schools noted that more faculty involvement in learning assessment would be helpful (Kuh & Ikenberry, 2009). According to Ewell (1993, 2002, 2005), there are several reasons that faculty are disinclined to be directly involved in the assessment process. First, faculty view teaching and curriculum development as their domain. Assessment of their teaching performance and of student learning outcomes by external groups can be viewed as an intrusion upon their professional authority and academic freedom. Second, faculty are deterred by the extra effort and time required for engaging in outcome assessment and by its unconvincing perceived added value. Furthermore, the compliance-oriented assessment requirements are imposed by external bodies, and most faculty members participate in the process only indirectly. Faculty might also hold a view of the definitions and measures of “quality” that differs from that of the institution or accreditors (Perrault, Gregory, & Carey, 2002, p. 273). Finally, the assessment process incurs a tremendous amount of work and resources. To cut costs, the majority of the work is done by administration at the institution. Faculty consequently perceive assessment as an exercise performed by administrators for external audiences and do not embrace the process.
Hutchings (2010) further echoed Ewell’s observation by enumerating four formidable obstacles to fuller faculty engagement in assessment. First, assessment is an area with which most faculty are not familiar. The language and activities of assessment, including accounting, testing, evaluation, measurement, total quality management, and benchmarking, are foreign to many faculty members. In addition, assessment is neither an area in which faculty were trained during their education nor one in which they are inclined to invest professional development effort. The third obstacle is that there is no incentive for faculty to be involved in assessment: it is not tied to institutional reward systems, including promotion and tenure deliberations. Finally, faculty have not seen sufficient evidence that assessment makes a difference. When most faculty are stressed and pressed by increasing demands in teaching and research, assessment becomes a low priority.
Lack of institutional investment. A shortage of resources and institutional support is another challenge in the implementation of assessment practice. As Beno (2004) commented, “[d]eciding on the most effective strategies for teaching and for assessing learning will require experimentation, careful research, analyses, and time” (p. 67). With continuously dwindling federal and state funding over the last two decades, higher education, particularly at public institutions, has been stripped of the resources to support such an endeavor. A case in point is the recession of the early 1990s: budget cuts forced many states to abandon the state assessment mandates that originated in the mid-1980s and to switch to process-based performance indicators as a way to gain efficiency in large public institutions (Ewell, 2005). The 2009 National Institute for Learning Outcomes Assessment survey showed that a majority of the surveyed institutions were undercapitalized in the resources, tools, and expertise needed for assessment work. Twenty percent of respondents indicated they had no assessment staff, and 65% had two staff members or fewer
(Kuh & Ewell, 2010; Kuh & Ikenberry, 2009). The resource issue is further described by Beno
(2004):
A challenge for community colleges is to develop the capacity to discuss what the results
of learning assessment mean, to identify ways of improving student learning, and to make
institutional commitments to that improvement by planning, allocating needed resources,
and implementing strategies for improvement. (p. 67)
Difficulty with integration into local practice. Integrating the value of assessment and institutionalizing its practice into daily operations can be another tall order at many institutions. In addition to redirected resources, leadership involvement and commitment, faculty participation, and adequate assessment personnel contribute to the success of cultivating a sustainable assessment culture and framework on campus (Banta, 1993; Kuh & Ewell, 2010; Kurzweil, 2015; Lind & McDonald, 2003; Maki, 2010). Furthermore, assessment activities imposed by external authorities tend to be implemented as an addition to, rather than as an integral part of, institutional practice (Ewell, 2002; Maki, 2010). Assessment, like accreditation, is often viewed as a special process with its own funding and committee, instead of as part of regular business operations. Finally, the work of assessment, program reviews, self-study, and external accreditation at the institutional and academic program levels tends to be handled by various offices on campus, and coordinating the work can be another challenge (Perrault, Gregory, & Carey, 2002).
Colleges also tend to adopt an institutionally isomorphic approach, modeling themselves after peers perceived as more legitimate or successful in dealing with similar situations and adopting practices widely used in order to gain acceptance (DiMaggio & Powell, 1983). As reported by Ewell (1993), institutions are prone to “second-guess” and adopt the type of assessment practice acceptable to external agencies as a safe approach, instead of adopting or customizing one appropriate to local needs and situations. Institutional isomorphism offers a safer and more predictable route for institutions to deal with uncertainty and competition, to conform to government mandates or accreditation requirements, or to abide by professional practices (Bloland, 2001). However, the strategy of following the crowd might hinder in-depth inquiry within the institutional context, as well as the opportunity for innovation and creativity. Furthermore, decision makers may be unintentionally trapped in a culture of doing what everyone else is doing without carefully examining the unique local situation or the logic, appropriateness, and limitations behind the common practice (Miles, 2012).
Lack of standards and terminology. A lack of assessment standards and clear terminology presents another challenge in assessment and accreditation practice (Ewell, 2001). With no consensus on vocabulary, methods, and instruments, assessment practices and outcomes can have limited value. As reported by Ewell (2005), the absence of outcome metrics makes it difficult for state authorities to aggregate performance across multiple institutions and to communicate the outcomes to the public; it also makes benchmarking impossible. Bresciani (2006) stressed the importance of developing a conceptual definition, framework, and common language at the institutional level.
Outcome equity. Outcomes assessment that focuses on students’ academic performance while overlooking equity and disparity within a diverse student population, as well as student engagement and campus climate, is another area of concern. In discussing local financing of community colleges, Dowd and Grant (2006) stressed the importance of including “outcome equity” in addition to performance-based budget allocation. Outcome equity pays special
attention to the equal outcomes of educational attainment among populations of different social,
economic, and racial groups (Dowd, 2003).
Tension between improvement and accountability. The tension between the two equally important goals of outcomes assessment, quality improvement and external accountability, can be another factor affecting outcomes assessment practice. According to Ewell (2008a, 2009), assessment practice evolved over the years into two contrasting paradigms. The first paradigm, assessment for improvement, emphasizes constantly evaluating and enhancing processes and outcomes, while the other paradigm, assessment for accountability, demands conformity to a set of established standards mandated by the state or accrediting agencies. The strategies, the instrumentation, the methods of gathering evidence, the reference points, and the ways results are utilized in these two paradigms tend to sit at opposite ends of the spectrum (Ewell, 2008a, 2009). For example, in the improvement paradigm, assessment is mainly used internally to address deficiencies and enhance teaching and learning; it relies on periodic evaluation and formative assessment to track progress over time. In the accountability paradigm, on the other hand, assessment is designed to demonstrate institutional effectiveness and performance to external constituencies and to comply with predefined standards or expectations; the process tends to be performed on a set schedule as a summative assessment. The conflicting nature of these two paradigms can create tension within an institution. Consequently, an institution’s assessment program is unlikely to achieve both objectives. Ewell (2009) concluded that “when institutions are presented with an intervention that is claimed to embody both accountability and improvement, accountability wins” (p. 8).
Transparency challenges. Finally, for outcome assessment to be meaningful and accountable, the process and information need to be shared and open to the public (Ewell, 2005). Accreditation has long been criticized as mysterious or secretive, with little information shared with stakeholders (Ewell, 2010b). In a 2006 survey, the Council for Higher Education Accreditation reported that only 18% of the 66 accreditors surveyed publicly provide information about the results of individual reviews; fewer than 17% of accreditors provide a summary on student academic achievement or program performance; and just over 33% of accreditors offer a descriptive summary of the characteristics of accredited institutions or programs (CHEA, 2006a).
Progress in disclosing assessment outcomes has been slow. In a 2014 Inside Higher Ed survey, only 9% of the 846 college presidents indicated that it is very easy to find student outcomes data on their institution’s website, and only half of the respondents agreed that it is appropriate for the federal government to collect and publish data on the outcomes of college graduates (Jaschik & Lederman, 2014). With the public disclosure requirements of the No Child Left Behind Act, there is an impetus for higher education and accreditation agencies to be more open to the public and to policy makers. It is expected that further openness will contribute to more effective and accountable business practices as well as to the improvement of educational quality.
Conclusion
It has been three decades since the birth of the assessment movement in U.S. higher education, and a reasonable amount of progress has been made (Ewell, 2005). Systematic assessment of student learning outcomes is now a common practice at most institutions, as reported by three nationwide surveys. A 2008 survey performed by AAC&U reported that 78% of the 433 surveyed institutions have a common set of learning outcomes for all their undergraduate students, and 68% of the institutions also assess learning outcomes at the
departmental level (Hart Research Associates, 2009a). Similarly, the 2009 National Institute for Learning Outcomes Assessment (NILOA) survey found that 74% of the 1,518 surveyed institutions adopted common learning outcomes for all undergraduate students and that most institutions conduct assessments at both the instructional and the program level (Kuh & Ikenberry, 2009). In a subsequent NILOA survey in 2014, the proportion of institutions with common outcomes assessment jumped to 84% (Kuh et al., 2014).
As public concern about the performance and quality of American colleges and universities continues to grow, it is more imperative than ever to embed assessment in the everyday work of teaching and learning. Assessment outcomes should be used to improve practice, to inform decision makers, to communicate effectively with the public, and to hold institutions accountable for preparing the nation’s learners for the knowledge economy. With effort, transparency, continuous improvement, and responsiveness to society’s demands, higher education institutions will be able to regain the public’s trust.
American Library Association Accreditation
Early History of Library Education
In 1887, the first formal library training program in the U.S. opened its doors at the School of Library Economy of Columbia College in New York City (Miksa, 1988). However, the development of the library profession, its standards of operation, and its instructional needs can be traced back to the Librarians’ Conference of 1853 (Lynch, 2008). According to the U.S. Bureau of Education’s survey on public libraries, there were already 3,647 public libraries in the country in 1876 (Vann, 1961); hence, a substantial library workforce already existed during that period. Prior to the first library school, library workers acquired the necessary knowledge and skills through hands-on experience and inquiry, through the reading of literature, and through the
activities sponsored by the American Library Association (ALA), which was founded in 1876 (Vann, 1961). Such a learning-by-doing, vocational, in-service training and apprenticeship system also reflected the educational models of other professions at that time, including the legal profession (Parsons, 1975) and the medical profession (Flexner, 1910). The apprentice model continued long after the opening of the first library school (Lynch, 2008). By the turn of the twentieth century, three additional library schools had been established (Parsons, 1975). The first set of standards for library training and entrance requirements was issued in 1905 by the ALA Committee on Library Training (Seavey, 1989).
Accreditation of Library Training Programs
The development and evolution of library education, its associated standards, and its accreditation followed patterns similar to those of the medical and legal professions (Parsons, 1975). The American Library Association assumed the role of accreditation agency for library education in 1924. However, at least three large-scale assessments of library training programs, curricula, and facilities took place prior to the establishment of an accreditation program for library education at ALA. The first study was commissioned and funded by the ALA Committee on Library Training to examine whether library schools met the committee’s 1905 standards. The investigation consisted of two parts. First, two surveys were sent to employers of library school graduates and to recent library school graduates, seeking their opinions on the training standards and programs. Second, site visits to ten library schools were conducted by Mary Esther Robbins, a library educator, using an instrument called the Test of College Efficiency. Robbins submitted her final report to the Committee in 1915. Unfortunately, this first investigation of library education by ALA was never published, and no accreditation action was taken (Seavey, 1989).
The second study was conducted by Alvin Saunders Johnson under commission from the Carnegie Corporation, which had also funded the construction of 1,689 library buildings in the
the United States. The study reviewed the service effectiveness of those Carnegie libraries and
examined the adequacy of library training. Although most of Johnson’s recommendations were
rejected by ALA, the report did identify several training issues, including the dismal quality of
training programs, trainers, and facilities (Lynch, 2008; Vann, 1961).
In 1919, Charles C. Williamson of the New York Public Library, commissioned by ALA and also funded by the Carnegie Corporation, presented a report to ALA on his extensive study of 15 library schools, covering their organization, curricula, faculty, teaching, financial status, and entrance requirements. In this landmark report, Williamson made several recommendations, including elevating library programs to the graduate level, upgrading curricula and training methods, and raising admission requirements. Most importantly, however, Williamson advocated that ALA take a leadership role in coordinating the efforts of library education and operate a nationwide, voluntary accrediting body. This new entity would be charged with the authority to formulate and enforce library education standards, recruit qualified students, certify graduates without examination, and promote all types of library training (Lynch, 2008; Parsons, 1975; Wilson & Hermanson, 1998).
From 1920 to 1923, ALA appointed working groups to assess Williamson’s recommendations and to develop a plan for establishing a centralized accreditation agency. In 1924, five years after the submission of the Williamson Report, ALA formed the Board of Education for Librarianship, charged with overseeing development in three areas: library education standards, accreditation of library programs, and promotion of library education. The Board, with its new responsibilities and authority, issued the first standards, the 1925 Minimum Standards for Library Schools. Under these first ALA standards, fourteen library schools were accredited (Parsons, 1975).
In 1956, the ALA Board of Education for Librarianship was replaced by a standing committee, the Committee on Accreditation (COA), which continues to lead the organization’s accreditation operations today (Moran, 2013). Over the years, the charge of COA has remained the same: “to be responsible for the execution of the accreditation program of the ALA, and to develop and formulate standards of education …” (ALA, 2014a, para. 1).
As the official accreditation body of ALA, COA maintains close ties and a budgetary connection with its parent organization. With limited direct input from other library professional organizations, including the American Association of Law Libraries, the Association for Information Science and Technology, the Association for Library and Information Science Education, the Medical Library Association, and the Special Libraries Association, COA has long been criticized for its lack of independence and autonomy under the umbrella of ALA.
As a programmatic accreditor recognized by CHEA, COA is required to “[demonstrate] independence from any parent entity, or sponsoring entity, for the conduct of accreditation activities and determination of accreditation status” (CHEA, 2010b, p. 3). As a result, COA issued a memorandum of understanding in 2010 to articulate its relationship with ALA and to avoid conflicts of interest. The statement delineates that COA, as “a distinct and autonomous profession,” maintains “sustaining relationships on behalf of the profession with … Council for Higher Education Accreditation (CHEA), private accreditation agencies, institutions of higher education, and the public” (ALA, 2010, p. 1). Furthermore, ALA and COA mutually recognize that “COA must have autonomy as specified herein to ensure that its professional accreditation functions are carried out independent of improper influence by ALA” and “ALA shall respect the
confidentiality of reports issued by programs seeking accreditation and COA decision
documentation in order to protect the consultative, developmental nature of the accreditation
process” (ALA, 2010, p. 1).
In the U.S., most librarian positions require applicants to hold an MLIS degree from an ALA-accredited program. Students attending MLIS programs not accredited by ALA-COA may severely limit their employment opportunities (ALA, 2014b). Several studies analyzing the job postings for various librarian positions concluded that an ALA-accredited MLIS degree is the most important academic qualification (Du, Stein, & Martin, 2007; Haycock, 2010; Lynch & Smith, 2001; Robinson, 1993; White, 2000).
ALA Accreditation Standards
Since they were first published in 1925, the ALA accreditation standards have been updated and revised to reflect changes in the profession, the shifting demands of society, and the evolution of library education. Six sets of ALA standards guiding the accreditation process have been issued: in 1926, 1933, 1951, 1972, 1992, and 2008. Between editions, the standards are reviewed every five years (Moran, 2013).
Some of the major changes to the accreditation standards over the years include the following:
The 1926 and 1933 Standards were based on quantitative requirements and were
designed to accredit library schools. Starting with the 1951 standards, the entity of
accreditation changed from library schools to library education programs leading to a
master's degree (Yungmeyer, 1984).
Starting in the 1972 Standards, the evaluation focus shifted from standards to the
missions and objectives of the library program and its parent institution (Lynch, 2008).
Accreditation standards evolved over the years from quantitative and prescriptive
requirements to more qualitative and outcome-based ones, with a focus on accountability
(Moran, 2013; Swigger, 2010).
The 2008 Standards. The current 2008 revision, titled Standards for Accreditation of
Master's Programs in Library & Information Studies, incorporates several emerging issues,
including diversity, student learning outcomes, distance education, systematic planning,
globalization, ethics, multiple degree programs, and values that were absent from the 1992
version of the standards (ALA, 2008). Specifically, the 2008 version of the standards stresses
continuous assessment of student learning as well as aligning and realigning assessment results
with the MLIS program’s mission, goals, objectives, and learning outcomes. It further
emphasizes the need for engaging assessment activities and communicating results with
constituents. Finally, the 2008 Standards stress that MLIS programs should embed assessment
in curriculum development, teaching and learning practices, and program review.
The ALA 2008 Standards begin with a statement of the purpose of accreditation in which
student learning outcomes are clearly stressed (ALA, 2008, p. 1):
Accreditation serves as a mechanism for quality assessment and quality enhancement
with quality defined as the effective utilization of resources to achieve appropriate
educational objectives and student learning outcomes.
The six standards used in the accreditation process to assess specific aspects of MLIS
programs are [emphasis added]
1. Mission, Goals, and Objectives, with a requirement that “[p]rogram objectives are stated
in terms of student learning outcomes” (ALA, 2008, p. 6).
2. Curriculum that should be evaluated according to “assessment of students’ achievement
and their subsequent accomplishments” (ALA, 2008, p. 8).
3. Faculty who should “demonstrate skill in academic planning and assessment” (ALA,
2008, p. 9).
4. Students: “Assessment of an application is based on combined evaluation of academic,
intellectual, and other qualifications as they relate to the constituencies served by a
program, a program’s goals and objectives, and the career objectives of the individual.”
“The school applies the results of evaluation of student achievement to program
development” (ALA, 2008, p. 10).
5. Administration and Financial Support should include systematic planning and evaluation
“for ongoing appraisal to make improvements and to plan for the future” (ALA, 2008, p.
12).
6. Physical Resources and Facilities “that are sufficient to the accomplishment of its
objectives” (ALA, 2008, p. 12).
Scope of the 2008 Standards. The ALA accreditation process and the 2008 Standards
have the following unique aspects:
The 2008 Standards is applied to both on-site and online courses “regardless of forms or
locations of delivery of a program” (ALA, 2008, p. 3).
MLIS programs seeking ALA accreditation must be affiliated with “a degree-granting
authority of regionally accredited institutions” (ALA, 2012, p. 8).
ALA, under an agreement with the Canadian Library Association, also accredits MLIS
programs at Canadian institutions (ALA, 2012).
ALA is recognized by CHEA and the Association of Specialized and Professional
Accreditors as an accrediting agency, but not by USDE. In 1992, ALA voluntarily withdrew
from USDE recognition because the 1992 Higher Education Act limited recognition to
accreditation agencies playing a gatekeeper role in establishing eligibility for federal
funding (ALA, 2008).
The 2008 Standards only applies to master’s degree programs in library and information
studies, not doctoral, bachelor's, or associate-level library education programs (ALA, 2012, p.
8).
Accreditation is performed solely for programs leading to the master's degree, not
at the school or institutional level. As a result, a library school may seek
accreditation for more than one graduate program of education in library and
information studies leading to a master’s degree. As of 2014, the following five
institutions have two ALA-COA accredited MLIS programs:
o Florida State University: Master of Science and Master of Arts
o Indiana University: Master of Library Science and Master of Information
Science
o University of Kentucky: Master of Science in Library Science and Master of
Arts
o University of North Carolina at Chapel Hill: Master of Science in Library
Science and Master of Science in Information Science
o Texas Woman's University: Master of Science and Master of Arts in Library
Science
ALA Standards and Outcome Assessment
The two major documents that guide the ALA accreditation process are the Standards for
Accreditation of Master’s Programs in Library & Information Studies (ALA, 2008) and the
Accreditation Process, Policies, and Procedures (AP3) (ALA, 2012). Both are available
from ALA's website.
Standards for Accreditation of Master’s Programs in Library & Information
Studies. In the 2008 Standards, the term “assessment” appears 12 times throughout the
document while the term “outcome” is used eight times (ALA, 2008). Specifically, in the
Mission, Goals, and Objectives section, the 2008 Standards stipulates that “Program objectives
are stated in terms of student learning outcomes …” [emphasis added] (p. 6). In the Curriculum
section the 2008 Standards specifies,
The curriculum is continually reviewed and receptive to innovation; its evaluation is used
for ongoing appraisal, to make improvements, and to plan for the future. Evaluation of
the curriculum includes assessment of students' achievements and their subsequent
accomplishments [emphasis added]. (Section II.7, ALA, 2008, p. 8)
In the Faculty section, the 2008 Standards states that faculty should “demonstrate skill in
academic planning and assessment, have a substantial and pertinent body of relevant
experience …” [emphasis added] (Section III.6, ALA, 2008, p. 9). Finally, in the Student
section, the 2008 Standards stresses the importance of a holistic assessment process based on
student performance [emphasis added]:
The school applies the results of evaluation of student achievement to program
development. Procedures are established for systematic evaluation of the degree to which
a program's academic and administrative policies and activities regarding students are
accomplishing its objectives. Within applicable institutional policies, faculty, students,
staff, and others are involved in the evaluation process. (Section IV.6, ALA, 2008, p. 10)
Accreditation Process, Policies, and Procedures. ALA’s Accreditation Process,
Policies, and Procedures (ALA, 2012) serves as the manual to the ALA 2008 Standards for the
accreditation process. It provides an operational definition of outcome assessment and detailed
instructions on how it should be implemented. In general, the manual stresses that outcomes
assessment should be an integral part of the planning, evaluation, and improvement process. In the
Guidelines for the Program Presentation section of the manual, a subsection (II.3) is devoted to
outcomes assessment and sources of data for measuring outcomes. Specifically, it states
[emphasis added],
The Standards and the current accreditation process emphasize ongoing planning, self-
evaluation, and the use of program-level outcomes assessment by ALA-accredited
programs... The results of developing and evaluating outcomes assessments will be a
unique set of measures of what constitutes success for that school and program.
Under the Standards, programs should use outcomes assessment as part of the ongoing
planning and evaluation process.
Furthermore,
Outcomes assessment provides the Dean and faculty with information to make useful
decisions about program improvement and to develop strategies for continuous
improvement… The process of outcomes assessment ultimately results in revision of the
objectives and goals of a school and program… Effective outcomes assessment means
that the school and program have established and use broad-based, continuous program
planning, development, assessment, and improvement. (ALA, 2012, p. 38)
Accreditation Process, Policies, and Procedures
As mentioned earlier, ALA's Committee on Accreditation (COA) is charged with the
responsibility of performing the accreditation of library programs and developing library
education standards. COA is a standing committee of the ALA and is supported by the ALA Office of
Accreditation. COA operations are guided by the following principles (ALA, 2012):
The accreditation of MLIS programs is coordinated through a single agency that
represents the interests of the members of the profession;
Accreditation enhances the quality of library and information services through the
improvement of the professional education available for librarians and related
information professionals;
The spirit of accreditation lies in its constructive and continual evaluation and assessment
of LIS educational programs.
As with other accreditation agencies, ALA accreditation is carried out by volunteer
practitioners, educators, and members of the public, rather than solely by educators. The active participants of
ALA accreditation program come from two groups: COA committee members and members of
the External Review Panel.
Committee on Accreditation
ALA COA consists of 12 members appointed by the ALA president-elect. Among them,
10 members represent educators and practitioners. Since ALA accredits library education
programs in Canada, one of the COA members must be a Canadian librarian or educator who
represents the interests of the Canadian Library Association. The remaining two members are
representatives of the public and represent the public interest. COA meets quarterly, and each
meeting takes about two to two-and-a-half days. Two of these meetings are in conjunction with
ALA’s summer annual conference and Midwinter meeting. The spring and fall meetings are
held at ALA’s headquarter in Chicago. These meeting are not open to the public (ALA, 2012;
Dare, 2011).
External Review Panel. The External Review Panel (ERP), the second active
participating group in the accreditation process, is a group of library and information studies
educators and practitioners appointed by the COA. ERP performs two major functions: serving
as the agent of COA during a site visit and analyzing the program presentation, also known as a
self-study, to verify the information it contains. Site visits typically take two
business days (Dare, 2012). After the visit, the ERP submits a report to the COA, stating the
strengths and limitations of the program, areas not in compliance with the Standards, and areas for
improvement. Based on the report, COA then makes its accreditation decision (ALA, 2012).
ERP “[d]oes not determine whether or not a program is meeting the Standards and does not make
– or even recommend – an accreditation decision” (Dare, 2012, p. 7).
A typical External Review Panel consists of six members: three educators and three
practitioners. To serve as an external review panelist, a volunteer will first complete an
application form and specify areas of specialization from a list of 28 choices. Once accepted, the
person must go through training at an ALA conference. ALA’s Office of Accreditation
maintains a database of about 280 profiles of panelists in the ERP pool; about 60 of them are
inactive members (ALA, 2012; Dare, 2010). On average, ERPs conduct 10 accreditation site
visits per year.
Costs of ALA Accreditation
The ALA accreditation process can be an expensive undertaking for accredited programs
and their institutions (Berry, 1995). In 1991, 10 library schools expressed their dissatisfaction
with the value of ALA accreditation and pointed out that it was not worth the cost, work, and time
incurred (Berry, 1991a, 1991b). Although these schools ultimately did not leave the program,
their complaints represent a common concern among accredited programs about the expenses
incurred in the process.
According to ALA’s Accreditation Process, Policies, and Procedures (AP3), accredited
programs are responsible for the direct costs associated with the accreditation process. ALA’s
Office of Accreditation is responsible for assessing and reviewing the costs. There are five types
of fees: pre-candidacy, candidacy, accreditation, comprehensive or progress review, and late
fees. The comprehensive fee includes all review-related expenses, including the
preparation and distribution of documents, conference calls, and site-visit costs (ALA, 2012). In
addition, the indirect costs of preparing for the process and the release time granted by external
reviewers' institutions can be quite large. Yungmeyer (1984) estimated the dollar value of
volunteer time involved in ALA accreditation work at nearly a quarter of a million
dollars for the 1980-1981 period.
Outcomes Assessment Practice at MLIS Programs
As a supporting unit on campus, college and university libraries experience increasing
demands for greater accountability in providing quality services and resources to support the
educational and research needs of their institutions (Taylor & Heath, 2012; White &
Blankenship, 2007). In a survey of member opinions and current library literature, the
Association of College and Research Libraries (ACRL, 2010) identified increased demands for
accountability and assessment as one of the 2010 top 10 trends in academic libraries. In ACRL’s
2014 survey, measuring and demonstrating student success outcomes and educational
accountability continues to be among the top trends (ACRL, 2014). This section highlights
assessment-related studies and practices in the field of MLIS.
Assessment model. As outcomes assessment is now a prerequisite for accreditation or
the state funding process at both institutional and MLIS program levels, there is a gradual growth
of literature on this topic by library educators and practitioners. However, most of the research
is exploratory in nature, surveying current assessment practice. Only a handful of
authors have discussed large-scale efforts to integrate outcomes assessment practice at the
program and curriculum level.
In describing the development of an outcomes assessment model at the University of South
Florida School of Library and Information Science, Perrault, Gregory, and Carey (2002)
discussed how outcomes goals are connected with a school’s and university’s mission and goals;
how student learning outcomes of each library course are linked with teaching effectiveness; and
how assessment results are looped back for program improvement. The authors further
delineated that this continuous-improvement cycle is “more than the act of measuring outcomes;
it is a system that includes defining programmatic outcomes, measuring programmatic effects,
and applying results for guiding and improving programs” (Carey, Perrault, & Gregory, 2001, p.
79).
Current assessment practice. Several authors surveyed the current library education
landscape on the practice of outcomes assessment. Applegate (2006) set out to investigate the
outcomes measures used by 15 ALA-accredited MLIS programs by analyzing their program
presentations prepared for accreditors. The results show that most schools conducted
indirect measures of student learning outcomes, using student surveys, exit surveys, alumni
surveys, and employer surveys. Only 53% of the schools employed more objective and direct
measures, such as class assignments, comprehensive exams, capstone projects, portfolios, and
job placement rates.
In another study on assessment practices, Tammaro (2006) surveyed 160 library schools
worldwide on their practices in quality assurance. The results show that 90% of the schools have
quality assurance processes in place. The commonly used quality assurance processes,
performance indicators, and outcomes assessments include resources allocated, student drop-out
rates, student recruitment practices, student evaluation, and employer surveys. In discussing
higher education’s eroding accountability and public trust, Cox, Mattern, Mattock, Rodriguez,
and Sutherland (2012) proposed a measure to assess the performance of iSchools, a group of
MLIS schools with concentration on information science and technology. The four major
proposed measures for assessment of the quality of iSchool programs are reputation rankings,
scholarly publishing and research, student evaluation of teaching, and student satisfaction. The
means of measurement of each area were further discussed. In another exploratory study,
Stansberry (2006) assessed students’ online discourse to determine student learning and
knowledge construction using two theoretical frameworks, Interaction Analysis Model and
Framework for Designing Questions for Online Learning. The results show that the combination
of these two frameworks offers a holistic assessment of the student learning progress.
Pre-program assessment. Clearly stated competencies and expectations required in
MLIS programs contribute to the preparedness of incoming students. As the nature of
librarianship becomes more technically demanding, programs and curricula incorporate more
information technology components and expect students to be well-versed in a variety of
technologies. Two studies examined the technology competency requirements of incoming
students at ALA-accredited MLIS programs. Scripps-Hoekstra, Carroll, and Fotis (2014)
analyzed the websites of 58 ALA-accredited MLIS programs and reported wide variations on
skill requirements, methods of evaluation, and remedial support across these programs. Also
using content analysis of MLIS program websites, Kules and McDaniel (2010) identified 29
technical competencies required among these programs. The three common methods identified
for assessing students' technical skills are checklists or self-assessments, pre-admission
tests, and post-admission tests. Both studies confirmed that there is little consistency in
practices across these programs in terms of technical skill prerequisites, in-program supports,
best practices, and evaluation. In another study, Smith (2014) reported the implementation of
leadership pre-assessment in the admission process of a school librarianship certificate program
with a focus on leadership training. After completing the program, the participants took a
post-assessment survey. The author found a significant improvement in leadership
capacity after the training. The results contributed to the revision of the curriculum and the
improvement of teaching.
End of program assessments. End of program assessment (EPA) or final assessment
before the completion of a degree program provides direct evidence of what and how well
students learned and accomplished during their enrollment. Burke and Snead (2014) surveyed
the opinions of 125 MLIS faculty members on their use of a set of EPA practices and their
associated benefits and drawbacks. The results show that two-thirds of library programs required
an EPA, and an academic portfolio was the most favored requirement, followed by student
theses. In another study, Latrobe and Lester (2000) assessed how effectively students developed
an academic portfolio throughout the school media certification program at the University of
Oklahoma. The portfolio serves as a summative assessment tool as well as a component of the
accreditation process. Finally, Shannon (2008) surveyed recent graduates and internship supervisors about
their perceived effectiveness of the school library media preparation program at the University of
South Carolina. Areas surveyed include program quality, preparation of students for
employment, intern’s knowledge, competencies, and skills, and areas for improvement. The
results contribute to the ongoing improvement and enhancement of the program.
Core competencies. Professional organizations and disciplinary associations compile
statements of competencies that stipulate expectations and requirements for the preparation of
entrants into the profession (Lester & Fleet, 2008). Competency statements specify “desired
knowledge, skills, and attitudes evidenced by practitioners and promulgated by national
associations whose missions support and advance the professions related to the discipline…”
(Suskie, 2009, p. 46). Also known as competency-based or criterion-referenced standards, such
professional statements also define the standards for certification or licensure required by certain
occupations (Suskie, 2009).
With the influx of new information technology and digital content in the late 1990s and
the constant change of user’s information seeking behavior, there was a growing concern about
the gap between the skills taught at the MLIS programs and the real skills needed as practitioners
(Auld, 1990; Holley, 2003; Kniffel, 1999). As a result, ALA organized its first Congress on
Professional Education in 1999 to “examine the initial preparation of professional librarians as
the first step in studying the broader issues of education and training for librarians and other
library workers” (Simmons, 2000, p. 43), to “[i]dentify the core competencies for the
profession,” and to “[d]escribe the competencies of the generalist of the future” (Gorman, 2005,
pp. 2-3). Over the next six years, a draft of this set of core competencies was discussed,
commented on, redrafted, and reviewed by various ALA committees and library professionals. In
2006, the Task Force on Library Education was appointed by then ALA president, Leslie Burger,
and chaired by former ALA President Carla Hayden, to finalize the list (Hicks & Given, 2013).
Among its charges, the Task Force was asked to develop recommendations for a core
curriculum for MLIS programs and to recommend specific changes to ALA accreditation
standards based on the concerns of practitioners, employers, educators, and students (ALA,
2009b).
In 2009, the final version of this statement was approved and adopted by the ALA Council at
its Midwinter meeting. The set of eight core competencies and the associated 41 traits that a
graduate from an ALA-accredited MLIS program should possess are (ALA, 2009a):
Foundations of the profession
Information resources
Organization of recorded knowledge and information
Technological knowledge and skills
Reference and user services
Research
Continuing education and lifelong learning
Administration and management
The Task Force further stated that “the core competencies are the bedrock of the curricula
of accredited programs” and ALA should “incorporate the core competencies and ALA’s Core
Values of Librarianship into its Standards for Accreditation of Master’s Programs in Library and
Information Studies” (Rettig, 2009, p. 6). The Task Force also recommended that the
accreditation standards be worded in a prescriptive, instead of indicative, manner and in an
imperative, active voice. The standards should concentrate on documented student learning
outcomes, and the assessment processes should provide objective evidence to demonstrate that those
outcomes are achieved by all graduates (ALA, 2009c).
ALA also maintains a second competencies list, called Knowledge and Competencies, on
its website. The site contains a compilation of 20 statements on specialized knowledge and
competencies developed by relevant professional organizations, including American Association
of Law Libraries, the Medical Library Association, the Music Library Association, and the
Special Libraries Association. These lists provide guidelines for individuals interested in
developing relevant skill sets in those specialized information professions, as well as for MLIS
programs and educators to develop training and programs in these specialized areas (ALA,
2014c).
Several authors researched how competencies are used in MLIS programs. In a content
analysis study, Lester and Fleet (2008) surveyed the program presentation documents of 55
ALA-accredited MLIS programs. They found competency statements and standards from
various professional organizations were incorporated extensively into the mission statements and
curriculum development of MLIS programs. Their study also shows that the library schools
adhere to the expectations and perspectives of the professions. Their findings further reveal that
such competency statements are also used by employers in hiring and in the professional
development process and by state libraries in formulating standards of practice or certification
requirements.
In another study, Church, Dickinson, Everhart, and Howard (2012) analyzed sets of
competencies, guidelines, and standards issued by professional organizations and state agencies
for guiding the preparation program of school librarians. They identified common themes and
guidelines to assist library programs in curriculum development and their training focus. Finally,
McKinney (2006) compared the course catalog and online syllabi of 56 ALA-accredited
programs with ALA core competencies. Her findings show that 94.6% of the MLIS programs
address all eight core competencies in their curricula. The results lend further support to the
value of ALA's statement of core competencies.
Transparency. Both CHEA and USDE require accreditation agencies to disclose certain
accreditation information to the public. USDE demands that accreditation agencies “must
maintain and make available to the public, upon request, written materials describing” the types
of accreditation it grants, its accreditation standards and procedure, institutions and programs
accredited, as well as allow third parties to comment on programs or institutions being accredited
(USDE, 2014b, para. 602.23). To demonstrate accountability, CHEA’s Recognition Policy and
Procedures also specifically requires that
To be recognized, the accrediting organization provides evidence that it has implemented:
accreditation standards or policies that require institutions or programs routinely to
provide reliable information to the public on their performance, including student
achievement as determined by the institution or program. (CHEA, 2010a, p. 5)
ALA-COA, as a member of CHEA, is required to provide information on accreditation
decisions to the public. The ALA accreditation manual states that “institutions seeking
accreditation … have an obligation to … make those [outcomes assessment] results public” and
accredited programs “are expected to make publicly available the results of their evaluation of
education effectiveness” (ALA, 2012, p. 12). ALA’s Office of Accreditation, which coordinates
and supports activities related to COA’s accreditation, is responsible for “[p]roviding
information to … the general public about the accreditation process, policies, and procedures, as
well as the accreditation status of specific graduate LIS programs” (ALA, 2012, p. 11).
The Confidentiality and Disclosure section of the ALA accreditation manual provides
guidelines for the distribution of various documents related to the accreditation process:
Program Presentation: ALA-COA encourages every school to make its Program
Presentation available publicly. The Office [of Accreditation] will make the Program
Presentation available for educational purposes with permission of the school. (ALA,
2012, p. 30)
External Review Panel Report: ALA-COA treats the report of the ERP as confidential,
but encourages schools to make the report available publicly.
Decision Document: ALA-COA treats its Decision Document letter as confidential
correspondence … The COA encourages the school to broadly distribute the
accreditation decision and the reasons behind the decision.
Annual Statistical Reporting: The Office [of Accreditation] makes a trended summary of
annual statistical reporting publicly available.
The Decision Document contains the following elements:
The name of the institution and school;
The name of all ALA-accredited programs offered by the school;
Accreditation status of the program(s) and the date(s) when this status was
granted;
The date of the next comprehensive review;
Issues or concerns regarding compliance with the standards, a list of required
reports, and a schedule for submission of those reports. (ALA, 2012, p. 24)
In addition, COA meeting reports and the Office for Accreditation’s semi-annual
newsletter, Prism, are available to the general public on ALA's website. Although accreditation
decisions are announced in these official publications, no detailed information on the reasons behind
the decisions is disclosed on the website or in the publications. ALA also maintains a website
with links to sample program presentations from ALA-accredited programs. However, currently,
only 25 of the 63 accredited programs have made their program presentations available through
this ALA website (ALA, 2014e).
Despite all of the efforts to make the accreditation process more open, ALA-COA has been
criticized over the years for its lack of transparency and objectivity (Berry, 1976, 1995, 2000;
Berry, Blumenstein, & DiMattia, 1999; Cronin, 2000; Saracevic, 1994). When ALA applied for
reaffirmation of CHEA’s recognition in 2010, its request was deferred twice due to the lack of
“sufficient evidence that ALA has and implements a specific accreditation standard or policy that
requires institutions or programs routinely to provide reliable information to the public on their
performance, including student achievement as determined by the institution or program”
(CHEA, 2013b, p. 2). This is a clear indication that ALA and its accredited programs should more
willingly share their assessment results.
Challenges. Cultivating a culture of assessment within an MLIS program is not easy,
especially because the student enrollments and resources of MLIS programs are
relatively small. Cost, time, added faculty workload, and the diversion of faculty from teaching or
research activities are commonly mentioned obstacles to implementing outcomes assessment in
MLIS programs (Applegate, 2006; Burke & Snead, 2014; Cox et al., 2012). Assessment
measures, including extra coursework and projects, can also place an extra burden on students
(Applegate, 2006; Burke & Snead, 2014).
Conclusion
The growing body of literature, in both quantity and scope, on the topic of outcomes
assessment illustrates the increasing awareness, interest, acceptance, and understanding of such
practices in the MLIS education and library community. However, most of the studies reviewed
are exploratory in nature, and few employ in-depth assessment theories. Furthermore,
few studies have fully explored the connection among assessment, accountability, and quality
improvement in MLIS education. It is hoped that this study will contribute additional
knowledge in this area.
CHAPTER THREE: METHODOLOGY AND RESEARCH DESIGN
There is increasing emphasis on the importance of outcomes assessment in the
accreditation process in higher education in general and in library education specifically. The
ALA 2008 Standards and its companion manual heavily stress the integration of outcomes
assessment in the missions, curricula, teaching, and student evaluation of MLIS programs.
However, there exists little research on the role of outcomes assessment practice in the
accreditation of these programs. Given the growing emphasis and requirement on student
learning outcomes assessment in the latest ALA 2008 Standards, this study aimed to investigate
the practice of outcomes assessment at 62 MLIS programs. It addressed the
following research questions:
1. To what extent is the practice of outcomes assessment implemented at ALA-accredited
MLIS programs?
2. What types of outcomes assessment measures and approaches are employed at these
programs?
3. How have student learning outcomes assessment results been used in programs'
improvement efforts?
4. What are the value and importance of outcomes assessment as perceived by program
administrators?
Research Design
This mixed-methods research employs a combination of qualitative and quantitative
approaches. There are several benefits of and attraction to mixed-methods research (Creswell,
2009; Creswell & Clark, 2011). First, it takes advantage of the strengths of both qualitative and
quantitative research and, thus, offsets the weaknesses of both research methodologies. Second, a
mixed-methods approach can better address complex research topics, especially for research
questions that cannot be answered by either qualitative or quantitative methods. Additionally,
more insight and evidence can be acquired from the combination of these two approaches.
Finally, mixed methods provide an opportunity to view issues from multiple worldviews, instead
of limiting them to certain paradigms associated with a specific research methodology.
The proposed research strategy for this mixed-methods research consists of a concurrent
triangulation design, conducting a survey for quantitative data and a content analysis for
qualitative data simultaneously. The datasets collected from both methodologies will then be
compared to determine whether there are convergences, differences, and correlations. The survey in
this study encompasses all 62 MLIS programs, while the content analysis consists of only a
subset of the research population. As a result, the data from the survey is more representative
and, thus, weighted more heavily in the analysis. The data from content analysis is embedded as
secondary data within the primary data from the survey, as it plays a supportive role to
complement and supplement the findings from the survey.
Population and Sample
This mixed-methods research consists of both a survey and a content analysis of MLIS-
related programs. The size of the available sample for each methodology differs.
Survey, Quantitative Research
The population for this study consists of 62 master’s degree programs in the area of
Library and Information Studies. Among them, 53 programs are accredited by the American
Library Association’s Committee on Accreditation (ALA-COA); five are in the conditional
status, indicating the programs are required to make changes to comply with the Standards; two
are in the process of acquiring ALA-COA accreditation; one program lost accreditation by ALA
in 2013; and one program is not accredited by ALA. Geographically, eight programs are offered
by Canadian institutions, one program is located in Puerto Rico, and 53 programs are based in 33
states and the District of Columbia in the United States. These programs are delivered through a
wide range of means, from the traditional on-site instruction to online-only to a combination of
both forms. Table 1 shows a breakdown of programs by geographical region. The full list of the
programs and the degree offered are available in Appendix A.
Table 1
Geographical Distribution of the Programs Studied
State/Province Count

United States
Alabama 1
Arizona 1
California 3
Colorado 1
Connecticut 1
District of Columbia 1
Florida 2
Georgia 1
Hawaii 1
Illinois 3
Indiana 1
Iowa 1
Kansas 1
Kentucky 1
Louisiana 1
Maryland 1
Massachusetts 1
Michigan 2
Minnesota 1
Mississippi 1
Missouri 1
New Jersey 1
New York 8
North Carolina 3
Ohio 1
Oklahoma 1
Pennsylvania 3
Puerto Rico 1
Rhode Island 1
South Carolina 1
Tennessee 1
Texas 3
Washington 1
Wisconsin 2

Canada
Alberta 1
British Columbia 1
Nova Scotia 1
Ontario 3
Quebec 2

TOTAL 62
Adapted from American Library Association. (2014b). Directory of Institutions Offering Accredited Master's
Programs.
Content Analysis, Qualitative Research
The sample for the content analysis includes 12 ALA-accredited MLIS programs. Their
program presentations are publicly available from ALA's website. Three programs were re-
accredited in 2014, five in 2013, two in 2012, and two in 2011. Table 2 lists these programs and
the year of their last accreditation.
Table 2
MLIS Program Identifier, Year Last Accredited, and Type of Parent Institutions
Program Year Last Accredited Type
A 2014 Public, 4-Year
B 2014 Public, 4-Year
C 2014 Public, 4-Year
D 2013 Public, 4-Year
E 2013 Public, 4-Year
F 2013 Public, 4-Year
G 2013 Public, 4-Year
H 2013 Public, 4-Year
I 2012 Public, 4-Year
J 2012 Public, 4-Year
K 2011 Public, 4-Year
L 2011 Public, 4-Year
Instrumentation
This mixed-methodology study employed both survey and content analysis to collect data
on student outcomes assessment at individual MLIS programs for further analysis. Concurrent
triangulation of data from both quantitative and qualitative research methods ensures a
comprehensive approach to address the research questions. The collection of data was
performed simultaneously, and there is limited interaction between the two processes of data
collection. However, findings from the analysis of two sets of data can complement or
contradict each other through triangulation (Morse, 1991).
Survey, Quantitative Research
Surveys are a cost-effective and convenient method “to collect information from or about
people to describe, compare, or explain their knowledge, feelings, values, and behavior” (Fink,
2013, p. 1). In this study, an online survey is the best means to collect data about outcomes
assessment practices at MLIS programs that are geographically spread out. The participants in
the survey are accreditation liaison officers of individual MLIS programs, as they are most
knowledgeable about the accreditation process of their programs, the accreditation standards of
ALA or other professional organizations, and the core competency requirements of the
profession.
The design of the survey instrument is critical to valid survey findings (Fink, 2013). In
this study, the survey instrument is primarily based on an existing questionnaire developed by
Peter D. Hart Research Associates, Inc. In 2008, AAC&U commissioned Peter D. Hart Research
Associates, Inc. to survey its member institutions on “the prevalence of specified learning
outcomes … and to document recent trends in curricular change, specifically in the areas of
general education and assessment” (Hart Research Associates, 2009b, p. 1). The Hart Research
questionnaire consists of 27 multiple-choice and open-ended questions. In addition to collecting
demographic information, the survey focuses on four areas: learning goals or outcomes,
assessment practice, assessment tools used, and general education trends. Although the AAC&U
survey is primarily designed to study assessment practice of undergraduate education, it does
contain some general questions applicable to specialized graduate programs. To enhance the
reliability and validity of the instrument, the current study also consulted surveys from other
similar studies on assessment practice at graduate and professional programs.
The survey questions were carefully chosen to ensure full representation of the research
questions. Each question is worded to avoid confusion, bias, and ambiguity. The response
selections are mutually exclusive and collectively exhaustive. Draft surveys were reviewed by
experts in this area, including four faculty members from MLIS programs and the Director of the
ALA Office of Accreditation, to ensure content validity. It was also pilot-tested to ensure the
instructions, questions, and scales were clear; the online instrument functioned properly; and the
responses were recorded and collected properly.
Content Analysis, Qualitative Research
Krippendorff (2010) described content analysis as “a research technique for making
replicable and valid inferences from texts (or other meaningful matter) to the context of their
use” (p. 234). Originally employed in the study of mass communications, content analysis is now
used widely in social science research. In addition, it can be applied equally to qualitative, quantitative, and
mixed modes of research design and employs a wide range of analytical techniques, including
coding, categorization, mapping, and clustering (White & Marsh, 2006). Through the analysis of
written, verbal, or visual documentation, researchers can identify key themes, concepts and their
relationships that can be used to make inferences or test hypotheses (Wilson, 2011).
Reliability and Validity
To ensure quality, the study paid special attention to the reliability and validity of the
survey instrument. The reliability of the survey was enhanced by minimizing the four types
of survey errors identified by Dillman, Smyth, and Christian (2009). First, sampling error was
avoided because this study surveyed the entire population of MLIS programs accredited or in the
process of being accredited by ALA. Measurement error, the degree of imperfection in how
survey data are collected, was reduced by carefully composing and wording the survey questions.
Coverage error, caused by improper representation of the underlying population, was avoided by
identifying the ALOs of MLIS programs through ALA. The survey invitation letter asked them
to forward the survey to the appropriate person in their program if they were not the right person
to respond. Finally, non-response error was reduced by
sending a follow-up reminder after the initial invitation as well as offering survey results to
respondents as an incentive.
Reliability of qualitative research refers to the consistency and stability of research
approach and procedure throughout the study and across different researchers (Creswell, 2009).
To ensure the reliability of content analysis, the study follows several reliability procedures:
Review each program presentation twice to acquire a fuller familiarity with the
current practice, diversity, and accreditation standard of learning outcomes
assessment in each MLIS program.
Establish a mechanism of verifying the data accuracy in the research and coding
process through checking, confirming, and self-correcting.
Consult MLIS program websites and supporting documents to verify particular
statements that are not clear in the program presentation.
Examine research findings from similar studies in the past and assess the degree to
which this study’s results are congruent with those of the previous ones, as well as the
findings derived from the MLIS program survey.
Validity means “how well a test measures what it is designed to measure” (Kurpius &
Stafford, 2006, p. 141). A valid survey is able to collect information that accurately reflects
respondents’ knowledge, attitudes, values, and behavior (Fink, 2013). To ensure the validity of
the study, the following provisions and strategies were made:
The survey instruments were reviewed by four faculty members from three MLIS
programs as well as by the Director of ALA Office for Accreditation. The four
faculty experts either led or participated in the accreditation process of their program.
One faculty member also had chaired the ALA Committee on Accreditation. Based
on their comments and suggestions, the survey was revised to enhance content
validity.
The researcher gained familiarity with accreditation practice in MLIS education by
attending training sessions for External Review panelists provided by the ALA Office
of Accreditation at the ALA Annual Conference in June 2014.
Data collected from the MLIS program survey and content analysis was triangulated;
evidence was examined to verify themes that emerged.
Thick and detailed descriptions of the key themes identified were provided to promote
the credibility of the study.
Data Collection and Analysis
The data for this mixed-methods research were acquired via a survey of accreditation
liaison officers of MLIS programs as well as from program presentations from selected
accredited MLIS programs. Thorough planning for data management contributes to the
accuracy and integrity of data collection and analysis and, hence, the validity and reliability of
the findings. This study employed the following process and data analysis instruments to collect
and then examine the data.
Survey, Quantitative Research
Following the approval of this study by the Institutional Review Board (IRB), the survey
was distributed to accreditation liaison officers of 62 MLIS programs in the fall of
2014. A list of accreditation liaison officers and their contact information was acquired from
ALA’s Directory of Institutions Offering Accredited Master’s Programs (ALA, 2014b). The
survey participants received a personalized invitation email with a link to an online survey
hosted by Qualtrics, an online survey service (Appendix B). The invitation provides a brief
overview of the study and its goals, with emphasis on the contribution to the understanding of
accreditation practice in the profession. Participants were assured that their identity, contact
information, and responses would be kept confidential. To encourage participation, the
invitation letter offered to provide the results to anyone interested. To increase response rate, a
follow-up email reminder with survey link included was sent to non-respondents two weeks after
the initial invitation (Appendix C). Those who did not wish to be contacted were filtered out of
the reminder.
The survey questionnaire contains 18 questions and sub-questions and consists of six
sections (Appendix D). The Demographic Information section, consisting of questions 1 to 4,
asks for the affiliation (i.e., public or private/independent) of the parent institution of the MLIS
programs and the official position title of the respondents. It further clarifies whether the respondent is
the accreditation liaison officer of the program.
The Program Goals and Measurable Objectives section contains questions 5 to 8 and
asks respondents whether the MLIS program has developed a set of learning goals and an associated
outcomes assessment plan. According to ALA Standards, sections I.2, I.3, II.1, and II.2, the
objectives of an MLIS program should be expressed in terms of student learning outcomes and
reflect a set of ten essential characteristics, knowledge, skills, qualities, values, and traits associated
with the information profession (ALA, 2008). The survey questionnaire also seeks input on the
driving forces for developing an assessment plan. Next, the survey inquires whether the MLIS
program incorporates ALA’s Core Competencies of Librarianship or competencies statements
from other professional organizations in its curriculum. As specified in section II.5 of the ALA
Standards, the curriculum of an MLIS program should take “into account the statements of
knowledge and competencies developed by relevant professional organizations” in the design of
learning experiences (ALA, 2008, p. 7).
Section three of the survey, Assessment Practices, includes questions 9 and 10 and elicits
detailed information on the actors of assessment practices at the MLIS program. First, the
respondents are asked whether a designated assessment committee and a curriculum committee exist in
their program. Then, they are asked about the composition of these two committees and the
headcounts by membership type. The answers from this section provide insight as to the degree
of participation from key stakeholders. It corresponds to ALA Standards, sections I.1, I.3, II.7,
and IV.6, which specify that the evaluation process should involve those served by the program,
including students, faculty, employers, alumni, and other constituents.
Section four of the survey, Assessment Measures and Use of Assessment Results,
consists of three Likert-style questions (11 to 13). Question 11 asks respondents to rate, on a
six-point ordinal scale (1 = None and 6 = All), how many students participate in a list of ten direct
and indirect assessment activities, including capstone project, portfolio, comprehensive
examination, thesis, internship, certification examination, interview, and survey. Question 12
lists eight types of assessment methods and asks respondents to indicate the degree to which they
are employed on a four-point ordinal scale, ranging from Not at all to Very much. ALA Standards,
sections II.6 and IV.4 stipulate that the MLIS program should systematically assess students’
performance through multifaceted methods and offer continuous guidance and counseling. In
addition, faculty members should be skillful in instructional planning and assessment. Question
13 seeks opinions from respondents on the degree to which the assessment results are used in
making changes on a four-point ordinal scale, ranging from Not at all to Very much. According to ALA
Standards, sections II.7 and IV.6, curriculum evaluation should include the assessment of
students’ achievements and subsequent accomplishments. Furthermore, evaluation results
contribute to the ongoing program appraisal, improvements, and planning for the future.
Section five of the survey, Organizational Supports, explores the resources allocated to
assessment activities and contains questions 14 and 15. The respondents are asked to supply the
number of full-time equivalent (FTE) personnel dedicated to assessment activities and the time
faculty and staff invest in assessment activities. ALA Standards, sections III.7 and V.4,
specify that the MLIS program should provide adequate support to carry out its program of
teaching and service.
The sixth and final section of the survey consists of three optional, open-ended questions
(16-18) that solicit the respondents’ comments on the value and program benefits of
accreditation. Question 16 seeks participants' opinions on the benefits of the ALA accreditation
process, while Question 17 asks participants to list additional organizations by which the program is
accredited. The last question asks the respondents to share any relevant information about
their student learning outcomes assessment practice.
Table 3 summarizes the structure of the survey questionnaire and the pertinent sections in
ALA Standards corresponding to each question.
Table 3
Questionnaire and Corresponding ALA Standards
No. Description Data Type ALA Standards
Section 1: Demographic Information
1 Institution affiliation Nominal
2 Official title Nominal
3 Accreditation liaison officer? Nominal
4 Position at last accreditation review Nominal
Section 2: Program Goals and Measurable Objectives
5 Learning goals defined? Nominal I.2; I.3; II.1; II.2
6 Written assessment plan developed? Nominal I.2; I.3
6a Assessment plan adopted as a policy? Nominal I.2; II.1; IV.6
6b Drivers for the policy development Nominal
7 Degree of adoption of ALA Core Competencies Ordinal II.5
8 Adoption of other professional organizations' competencies statements Nominal II.5
Section 3: Assessment Practice
9 Assessment Committee? Nominal
9a Composition of Assessment Committee Nominal I.1; I.3; II.7; IV.6
10 Curriculum Committee? Nominal
10a Composition of Curriculum Committee Nominal I.1; I.3; II.7; IV.6
Section 4: Assessment Measures and Use of Assessment Results
11 Student involvement in 10 assessment tools Ordinal III.6; IV.4
12 Degree of use of 8 assessment methods Ordinal III.6; IV.4
13 15 types of usage of assessment results Ordinal II.7; IV.6
Section 5: Organizational Supports
14 Assessment personnel? Nominal III.7; V.4
14a Assessment personnel status Nominal III.7
15 Degree of faculty/staff involvement in assessment Ordinal I.1; I.3; II.7; IV.6
Section 6: Benefits of Assessment and Accreditation
16 ALA accredited? Nominal
16a Accreditation benefit Nominal
17 Other accreditation? Nominal
18 Additional information on outcomes assessment Nominal
Response
As mentioned in the previous section, an email invitation with a hyperlink to the online
survey was disseminated to the accreditation liaison officers at 62 MLIS programs in the United States and
Canada. At the end of October 2014, a total of 34 surveys (55%) had been received. Five
surveys were not completed and were, thus, omitted from the pool. The responses from the remaining
29 (47%) valid surveys were used in the data analysis. As none of the questions on the survey
were forced-response questions, some respondents chose to complete only certain portions of the
survey and address only certain topics. Consequently, the response rate varies from question to
question.
Respondents
Of the 29 respondents who completed surveys, 27 (93%) are from public institutions and
2 (7%) are affiliated with private or independent institutions. Among those programs, 21 (91%)
are currently accredited by ALA. One program is in the pre-candidacy status, and one is in the
process of applying for ALA accreditation. In addition to ALA accreditation, 11 programs (48%)
are also accredited by other organizations, including the Association to Advance
Collegiate Schools of Business (AACSB), the Council for the Accreditation of Educator Preparation (CAEP),
National Association of Schools of Art and Design (NASAD), National Council for
Accreditation of Teacher Education (NCATE), and locally developed quality assurance
programs.
Content Analysis, Qualitative Research
A program presentation is a self-study prepared by an MLIS program as part of the review
process for ALA accreditation. Its intended readers are ALA's external review panelists.
The document offers an overview of the program; explains in detail how the program meets the
accreditation standards and the mission, goals, and objectives of the program; provides analysis
on the strengths, weaknesses, and challenges of the program; and outlines plans and goals for
future development and compliance with the accreditation standards (ALA, 2012). Since ALA-
COA standards stress the importance of outcomes assessment, the assumption is that program
presentations would include any pertinent data on such practice. As Applegate (2006) noted, the
strength of content analysis is that “a program presentation is a non-reactive methodology: the
data exists before being examined” (p. 327). In contrast, survey responses can be
subjective and biased, as respondents may desire to depict their program in a certain manner.
Program presentations from 12 ALA-accredited MLIS programs were downloaded from ALA’s
website for analysis.
Data Analysis
The research design of this study employs the concurrent mixed methods approach. The
data collected from the separate research methodologies will be analyzed individually using the
appropriate processes and tools most relevant to each dataset. There will be neither direct
interaction during the processes nor a merging of the two datasets.
Survey, Quantitative Research
Organizing and managing survey data is an essential part of data analysis. Based on the
survey questionnaire, a code book (Appendix E) was constructed and tested. The code book
contained the variable names, variable labels, and response values. With the value assigned to
each possible response, Qualtrics, the survey software, automatically generated reports and
descriptive statistics. To ensure the integrity of the results, the data collected from the
participants were exported from Qualtrics to an Excel spreadsheet for manual review and
validation. Answers for open-ended questions were also derived from the Excel spreadsheet for
further analysis.
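As a rough illustration only (the study itself relied on Qualtrics reports and an Excel spreadsheet), the following minimal Python sketch shows how an exported response file might be summarized with pandas. The file name and variable names (survey_export.csv, Q5_learning_goals, Q1_affiliation, Q12_capstone_use) are hypothetical stand-ins for code book entries, not the study's actual instrument.

import pandas as pd

# Load the exported survey responses; "survey_export.csv" is an assumed file name.
responses = pd.read_csv("survey_export.csv")

# Drop responses that left a key item blank, a rough proxy for the manual
# screening of incomplete surveys; "Q5_learning_goals" is a hypothetical variable.
valid = responses.dropna(subset=["Q5_learning_goals"])

# Frequency table for a nominal item (e.g., institutional affiliation).
print(valid["Q1_affiliation"].value_counts(normalize=True).round(2))

# Descriptive statistics for an ordinal Likert-style item coded 1 to 4.
print(valid["Q12_capstone_use"].describe())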
Content Analysis, Qualitative Research
To analyze the 12 program presentations, a broad coding scheme (Appendix F) was
developed first. The 10 broad categories correspond to the research questions of the study and
the pertinent sections in the 2008 ALA Accreditation Standards. To avoid ambiguity, a brief
description and examples of each category were included in the coding scheme. An Excel
spreadsheet was used to record all the evidence during the content analysis process.
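As an illustration of this record keeping, the short Python sketch below tallies hypothetical coding-log rows by broad category and counts the distinct programs supplying evidence for each; the file name and column names are assumptions for the example, not the study's actual spreadsheet layout.

    from collections import Counter
    import csv

    # Each row of the hypothetical coding log records one piece of evidence:
    # the program, the broad category from the coding scheme, and an excerpt.
    with open("coding_log.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Frequency of evidence recorded under each broad category.
    by_category = Counter(row["category"] for row in rows)
    for category, count in by_category.most_common():
        print(f"{category}: {count}")

    # Number of distinct program presentations contributing to each category.
    programs_per_category = {
        category: len({row["program"] for row in rows if row["category"] == category})
        for category in by_category
    }
    print(programs_per_category)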
Limitations and Delimitations
There are several important assumptions, limitations, and delimitations in this study.
Assumptions
The study makes the following assumptions about the survey participants:
1. They are a representative sample of accreditation liaison officers.
2. They have a clear understanding of learning outcomes assessment, a full knowledge of assessment practice at their program, a complete familiarity with the ALA 2008 Accreditation Standards and its companion manual, and a thorough awareness of and involvement in the ALA-COA accreditation process.
3. They are directly involved in the accreditation process of their program.
4. They have the ability to report their perceptions accurately.
Limitations
There are several important limitations in this study. First, it is assumed that the
participants’ survey answers will reflect what they truly believe. It is possible that their
responses might include subjective views and, hence, not be representative of the actual practice
at their program. Furthermore, the participants may provide guarded answers on sensitive
questions even though the survey is conducted anonymously. Second, there are limitations
related to the design of the survey. The reliability and validity of the survey can have a serious
impact on the data collected and, thus, the findings. Another limitation related to the content
analysis of this study concerns the availability of program presentations of ALA-accredited
programs. Not all 62 MLIS programs make their program presentations publicly available.
Schools that do not disclose their program presentations may have reasons unknown to the researcher. Due to this limitation, the findings from the content analysis might not be
representative of the whole sector. Finally, the study is limited to MLIS programs in Canada and the United States, so the generalizability of its findings is limited. Given this limitation, as well as
the unique characteristics of MLIS programs, the history of ALA-COA accreditation, and the
development of the librarian profession, the conclusions may not be easily transferrable to other
programmatic accreditation practices.
Delimitations
Three delimitations narrow the scope of this study. First, this study only includes MLIS
programs in Canada and the United States, the domain of ALA-COA accreditation. Second, the
study only investigates master's-level library education, although more than half of the ALA-accredited library programs also offer bachelor's, certificate, or doctoral programs as well as
courses and workshops for students who are not seeking a library degree. According to the
Association for Library and Information Science Education, non-MLIS degree students accounted for 31% of total student enrollment in courses offered by ALA-COA accredited programs in 2011 (Wallace, 2012). Finally, all of the MLIS programs surveyed in this study are affiliated with institutions that are accredited by regional accreditors. This study does not examine the
outcomes assessment practices implemented at the institutional level of the MLIS programs, although institution-wide practices may have direct and indirect influences on assessment practice at the school or program level.
CHAPTER FOUR: RESULTS
The purpose of this study was to understand outcomes assessment practice, at both program and course levels, at MLIS programs accredited by the American Library Association (ALA) and the value of outcomes assessment as perceived by program Accreditation Liaison Officers (ALOs).
To gain first-hand accounts and perspectives of current assessment efforts, 62 MLIS program
leaders in the United States and Canada were invited to respond to the survey. To further
contextualize the practice on the ground, 12 MLIS program presentations prepared for ALA
accreditation were also reviewed and analyzed. This chapter presents the results of both
approaches. This study aimed to address the following research questions:
1. To what extent is the practice of outcomes assessment implemented at ALA-accredited
MLIS programs?
2. What types of outcomes assessment measures and approaches are employed at these
programs?
3. How have student learning outcomes assessment results been used in programs' improvement efforts?
4. What are the perceived value and importance of outcomes assessment among program administrators?
The survey instrument is provided in Appendix C. The survey questions were framed to
gain a fuller picture of the state of assessment efforts at MLIS programs. The survey also sought
to better understand the primary drivers of assessment practices, faculty involvement, and the
support of assessment activities at MLIS programs.
Characteristics of MLIS Programs
MLIS programs tend to be among the smaller professional programs on campus. Across the 57 MLIS programs accredited by ALA, the average number of enrolled students (FTE) is 238, while the average numbers of full-time faculty and support staff are 16.7 and 10.2, respectively.
Table 4 provides an overview of the staffing, enrollment, and financial data of ALA-accredited
MLIS programs. Due to the relatively small size of these programs, the program director usually serves as the accreditation liaison officer and, thus, received the invitation to participate in this
survey.
Table 4
ALA-Accredited MLIS Programs (n = 57)

                              n     Mean        Median      Mode   Range
Full-time Faculty             57    16.7        12          12     4 – 48
Support Staff (Total FTE)     56    10.2        6           4      1 – 42.9
MLIS Students (Total FTE)     56    237.6       214         108    47.67 – 1,253
Revenue                       57    $4,858,594  $2,924,727  -      $505,045 – $33,772,677
Expenditure                   54    $5,156,600  $2,393,612  -      $473,945 – $38,400,127

Source: Wallace, D. P. (Ed.). (2012). Library and information science education statistical report 2012.
Description of Respondents
Of the 29 participants, the vast majority (26 or 90%) indicated they currently served in
the capacity of accreditation liaison officer (ALO) or coordinator at their program. Seventeen
(59%) of the respondents also identified as the program director or chair of the MLIS program
(Table 5). As the person most knowledgeable regarding the accreditation practice at their
program, ALOs offer the most pertinent answers to the survey questions. Consequently, they
contribute to the high quality of the survey data and its strong representation of the subject matter.
Table 5
Formal Titles of Respondents
Formal Title Count Percentage
Chair, Dean, Program Director 17 58.6%
Associate Dean, Director 3 10.3%
Associate Director for Assessment / Assessment Coordinator 2 6.9%
Associate Professor 2 6.9%
Academic Services Administrator 1 3.4%
Accreditation Assistant 1 3.4%
Director of Program Effectiveness, Marketing & Recruitment 1 3.4%
No response 2 6.9%
TOTAL 29 100.0%
Of the 29 respondents, however, only about one-third (10 or 34%) indicated they served in this same capacity at the program's last accreditation review. As ALA typically re-accredits MLIS
programs every five to seven years, the low retention rate of the ALOs is noteworthy. The high
turnover of personnel or assignments between accreditation reviews signals that knowledge and experience might be lost in transition. It further reveals the
episodic nature of the program review process.
Program Goals and Measurable Objectives
The Program Goals and Measurable Objectives section of the survey asks whether respondents' MLIS programs have developed a set of learning goals and an associated outcomes assessment plan. The survey also asks if the MLIS program incorporates ALA's Core Competencies
of Librarianship or competencies statements from other professional organizations in its
curriculum.
Presence of Formalized Learning Outcomes and Assessment Plan (ALA Standards I.2, I.3,
II.1, II.7)
Successful assessment of student learning begins with well-articulated and achievable
learning goals and objectives. The 2008 ALA Standards stipulate that an MLIS program should have clearly defined learning goals and objectives expressed in terms of student learning outcomes (I.2 and I.3). In addition, these goals and objectives should be used for curriculum development and evaluation as well as for the assessment of students' achievements (II.1 and II.7).
When asked if their programs have a common set of learning goals and outcomes, 26 (92.9%)
respondents answered positively (Table 6). In addition, 22 (78.6%) respondents pointed out that
their program adopted a written assessment plan with stated goals, desired learning outcomes,
and measuring mechanisms. Most (21 or 75.0%) respondents also indicated that such an assessment plan is formally adopted as a policy in their program.
Table 6
Implementation of Learning Goals and Assessment Plan

                                                                      Yes     Under development   No
A common set of learning goals and outcomes                           92.9%   3.6%                3.6%
Written assessment plan with stated goals, desired learning outcomes, and measuring mechanisms   78.6%   21.4%   0.0%
Assessment plan adopted as a policy                                   75.0%   21.4%               3.6%
The high prevalence of established learning goals and outcomes and of implemented assessment plans at MLIS programs coincides with the findings from similar studies. In a survey
of the assessment practice at two- and four-year, public, private, and for-profit institutions, Kuh
and Ikenberry (2009) reported that about three-quarters of them adopted common learning
outcomes for undergraduate students. In another study, Ewell, Paulson, and Kinzie (2011) found
that more than 80% of 982 surveyed departments or programs in the two- and four-year sectors established an agreed-upon set of student learning outcomes at the departmental or program level. In a focus group study of assessment practices with 45 campus leaders, Kinzie (2010) reported that assessment has not only taken root but is also woven into the structures and processes on many campuses.
A review of the program presentations of the 12 MLIS programs provides additional
evidence on the integration of student learning outcomes in program objectives. Table 7 summarizes the learning outcomes statements derived from section I.2 of the 12 program presentations. Only one program, School J, did not explicitly use the phrase “learning outcomes” in its program presentation. However, School J's program objectives are unmistakably phrased as assessable learning outcomes and defined in terms of the specific knowledge and skills expected of students. In addition, School J's program presentation provides a table that
illustrates how its program objectives are linked to each of the ten competencies specified in
section I.2 of the 2008 ALA Standards.
Table 7
Learning Outcomes Statements from the 12 MLIS Program Presentations

Program   Learning Outcomes Statement [emphasis added]
A   Five learning outcomes that complied with SACS accreditation standards and served as the basis for evaluating students' achievement of learning outcomes
B   Three program goals and 12 objectives, stated in terms of student learning outcomes
C   15 core competencies that serve as program learning outcomes and are explicitly integrated into every course and syllabus
D   Four program objectives, designated as learning outcomes and matched to ALA accreditation standards
E   Six program objectives, integrated into core courses and stated in terms of student learning outcomes
F   Six program objectives in the form of student learning outcomes and integrated into course objectives
G   Four program objectives built upon student learning outcome assessments (SLOAs) with nine direct assessment components
H   Seven program objectives expressed as student learning outcomes
I   Outcomes-based evaluation measured in terms of five program goals
J   Seven program objectives, reflected throughout the degree programs
K   Six program goals; four program objectives in the form of student learning outcomes and integrated into each course
L   Three program objectives with a focus on learning outcomes at the program level
The program at School G illustrates a tightly knit structure of program objectives,
learning outcomes, and assessment program. With the help of two assessment specialists, the
faculty at School G’s MLIS program revised its program objectives to incorporate a systematic
evaluation cycle in 2012. The new learning outcomes matrix (Table 8) shows the close relationship between the school's four program objectives, stated in terms of learning outcomes, and their associated outcomes assessment measures, including nine direct assessments and three indirect
assessments. Each of the four program objectives, shown in column one, contains a clearly defined educational statement with expected learning outcomes. Columns two and three show the sources and measures of direct and indirect evidence. The results from these assessment
cycles are then used to make program-related adjustments on an on-going basis. This framework
exemplifies a systematic, active, and continuous planning process. It further serves as a well-
defined summative roadmap of the expected knowledge and skills of its students.
Table 8
Alignment of Program Objectives with Student Learning Outcomes Assessment Components, School G

Four program objectives (graduates of the MLIS program will:), each paired with its direct performance assessment components (nine in total) and indirect assessment components (three surveys):

PO 1. Perform administrative, service, and technical functions of professional practice in libraries and information centers by demonstrating skills in information resources; reference and user service; administration and management; organization of recorded knowledge and information.
  Direct: SLOA* 1.1 Applied Library Experience Notebook (ALEN); SLOA 1.2 Collection Development Project
  Indirect: Student Survey (at program exit); Alumni Survey (1 year after graduation); Employer Survey (every 3 years)

PO 2. Use existing and emerging technologies to meet needs in libraries and information centers.
  Direct: SLOA 2.1 Social Cataloging Technology Project; SLOA 2.2 Reference Transactions Assessment; SLOA 2.3 Career e-Portfolio Website
  Indirect: Student Survey (at program exit); Alumni Survey (1 year after graduation); Employer Survey (every 3 years)

PO 3. Integrate relevant research to enhance their work in libraries and information centers.
  Direct: SLOA 3 Research Proposal
  Indirect: Student Survey (at program exit); Alumni Survey (1 year after graduation); Employer Survey (every 3 years)

PO 4. Demonstrate professionalism as librarians or information specialists.
  Direct: SLOA 4.1 MLIS Foundational Knowledge Articulation Assessment; SLOA 4.2 Ethics Project Report; SLOA 4.3 Career e-Portfolio
  Indirect: Student Survey (at program exit); Alumni Survey (1 year after graduation); Employer Survey (every 3 years)

*SLOA: Student Learning Outcome Assessment
At School C, the program objectives correspond to a set of 15 comprehensive learning
outcomes expected of its students. Stated in terms of core competencies, these learning
outcomes are explicitly integrated into every course, syllabus, and assignment. The school also developed a core competency database to help students map competencies to courses.
Diverse Knowledge and Competencies Statements (ALA Standards II.5)
Professional organizations and disciplinary associations compile statements of
competencies that stipulate expectations and requirements for the preparation of entrants into the
profession (Lester & Fleet, 2008). The 2008 ALA Accreditation Standards specify that MLIS
programs’ curriculum should take into account the statements of knowledge and competencies
developed by relevant professional organizations (II.5). Survey respondents were asked to what
degree their curriculum and learning objectives adopted competencies statements developed by
ALA and other professional organizations. As Table 9 shows, more than 90% of the MLIS
programs either extensively or fully integrated ALA Core Competencies into their curriculum
and course development.
Table 9
Adoption of ALA Core Competencies of Librarianship

Degree of adoption   Percentage
Very much            64%
Quite a bit          28%
Some                 4%
Not at all           4%
Total                100%

Additionally, MLIS programs also incorporate a wide range of knowledge and competencies statements or guidelines compiled by relevant professional organizations in defining learning outcomes for curriculum and courses. As shown in Table 10, survey respondents reported their programs took into account competencies statements from 14 professional groups in curriculum development.

Table 10
Knowledge and Competencies Statements from Relevant Professional Organizations or State Standards from Survey

Professional Organization or State Standards                                Frequency
ALA Core Competencies of Librarianship (ALA)                                24
American Association of School Librarians (AASL)                            8
Society of American Archivists (SAA)                                        6
Special Libraries Association (SLA)                                         5
Medical Library Association (MLA)                                           4
American Society for Information Science and Technology (ASIST)             3
Canadian Association of Research Libraries (CARL)                           3
Council for the Accreditation of Educator Preparation (CAEP), formerly the National Council for Accreditation of Teacher Education (NCATE)   2
Individual state standards                                                  2
International Society for Technology in Education (ISTE)                    2
Academy of Certified Archivists                                             1
American Association of Law Libraries (AALL)                                1
Association for Library and Information Science Education (ALISE)           1
Association of Records Managers and Administrators (ARMA)                   1
Young Adult Library Services Association (YALSA)                            1
An even wider array of competencies statements is documented in the content analysis
study. A total of 32 unique competencies statements were mentioned in the 12 program presentations (Table 11). For each MLIS program, the total number of competencies statements
mentioned ranges from three to 17, with an average of six statements per program. The wide
scope and sheer size of competencies statements adopted by MLIS programs illustrate the
interdisciplinary nature of the library and information field as well as the diversity of curriculum
and program tracks, job options, and career paths.
The organizations listed in Tables 10 and 11 overlap by about 66%. It is also notable that competencies from organizations that require certification, credentialing, or licensure of entrants are cited more often on both lists. For example, the American Association of School Librarians (AASL) and the Society of American Archivists (SAA) rank high on both lists, as both certify graduates aiming to become school librarians or archivists. In addition, competencies statements issued by individual states also rank high because, in addition to an MLIS degree, a state examination or credential is required to work as a school librarian in those states.
Table 11
Knowledge and Competencies Statements from Relevant Professional Organizations or State Standards from Content Analysis

Professional Organization or State Standards                                              Frequency
Individual state standards                                                                10
ALA Core Competencies of Librarianship (ALA)                                              9
Society of American Archivists (SAA)                                                      6
Special Libraries Association (SLA)                                                       6
American Association of Law Libraries (AALL)                                              5
American Association of School Librarians (AASL)                                          5
American Society for Information Science and Technology (ASIST)                           3
Medical Library Association (MLA)                                                         3
Association for Computing Machinery (ACM)                                                 2
Association for Library Service to Children (ALSC)                                        2
Association of College & Research Libraries (ACRL)                                        2
Rare Books and Manuscripts Section of the Association of College and Research Libraries (RBMS)   2
Reference and User Services Association (RUSA)                                            2
WebJunction Competency Index for the Library Field                                        2
Young Adult Library Services Association (YALSA)                                          2
ACRL Diversity Standards                                                                  1
American Medical Informatics Association (AMIA)                                           1
American Theological Library Association (ATLA)                                           1
Accreditation Board for Engineering and Technology (ABET)                                 1
Association for Library and Information Science Education (ALISE)                         1
Computing Research Association (CRA)                                                      1
Federal Librarian Competencies, Federal Library and Information Center Committee (FLICC)  1
International Federation of Library Associations and Institutions (IFLA)                  1
Library of Congress Working Group on Cataloging                                           1
Music Library Association Core Competencies                                               1
Public Library Association (PLA)                                                          1
Although the ALA Core Competencies of Librarianship rank high on both lists, three (25%) of the 12 MLIS programs in the content analysis did not mention them in their curriculum development or assessment programs. One key reason is that the knowledge and skills specified in the ALA Core Competencies are defined narrowly and in a highly prescriptive manner (Case, 2009). They are either too broad and general for a specialized field, such as archival work, or largely inappropriate or irrelevant for professions in the technology sector. As one survey
respondent commented:
We have not formally adopted ALA's Core Competencies. They are somewhat
traditional and limited for a forward-looking program that incorporates [standards for]
archives and other information professions beyond libraries.
Another survey respondent added,
We look at specialized professional organization competencies for curriculum area and
individual course content standards.
A statement from one program presentation further elaborated that, for courses in emerging areas with no established competencies, such as data curation and digital libraries, faculty employed several approaches to inform curriculum development, including expert advisory boards, analysis of job advertisements, surveys of employers, and interviews.
Assessment Practices
The Assessment Practices section of the survey elicits detailed information on the
practice of learning outcomes assessment at the MLIS program, including the driving forces for
the development of an assessment plan and the participants in the process.
Major Driver of Outcomes Assessment Practice
The next set of survey questions aimed to identify the factors stimulating outcomes assessment activities at MLIS programs, as multiple studies have consistently established that regional and programmatic accreditation plays a catalytic role in the implementation of assessment practices across all institutional types and disciplines (Kinzie, 2010; Kuh & Ikenberry, 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014). At the program and departmental levels, Ewell, Paulson, and Kinzie (2011) reported that academic programs with accreditation requirements engage in more assessment activities and efforts than do non-accredited programs. Similar differences between accredited and non-accredited programs also existed in assessment activities undertaken to meet the expectations of state and governing boards, legislative mandates, and general accountability. Accordingly, programmatic accreditation is said not only to exert a direct and specific impact on a program's operations but also to carry a “halo effect” of much broader influence across the spectrum of assessment activities and perceptions (Ewell, Paulson, & Kinzie, 2011, p. 14).
The survey results of the current study are consistent with those of the above-mentioned
research. Most (77.3%) ALOs indicated that the ALA Accreditation Standards are an important stimulus for their assessment programs (Table 12). Additionally, 18.2% of ALOs selected another accreditation body as a key driver of their assessment programs. Internally, both
the program chair and faculty are viewed as key driving forces, while the influence of other factors,
such as students and staff, seems to be more limited.
Table 12
Forces Driving the Development of Assessment Plan and Practice
Assessment Drivers (multiple responses allowed) Percentage
ALA Accreditation Standards 77.3%
MLIS Dean, Chair, Program Director 50.0%
Faculty 50.0%
Other accreditation body 18.2%
Administrative officer 13.6%
Students 9.1%
Others 9.1%
Accreditation is often viewed as an external enforcer of assessment programs and the key
requester and consumer of assessment results. A commonly mentioned concern is that assessment is motivated by a compliance mentality and that assessment results are not proactively utilized for program improvement and resource allocation. The results of this study, however,
reveal that MLIS program directors or faculty members do show interest in and recognize the
intrinsic value of outcomes assessment.
Several MLIS programs leveraged a negative accreditation review to institute changes
and improvements. In 2010, the MLIS program at School G was placed on Conditional
Accreditation status by ALA as a result of a lack of evidence of programmatic planning centered
on student learning outcomes. The program used this opportunity to recruit two assessment
specialists to work with faculty in overhauling its mission, goals, and objectives with a rigorous
planning process grounded in the assessment of student learning outcomes. As a result, ALA
accepted the program’s new plan and removed its conditional accreditation in 2012. Under the
new plan, student learning outcomes assessment plays a central part in program review,
curriculum development, and amending courses to better prepare students for the profession.
In another example, the MLIS program at School K also went through a thorough review of its
assessment practice and developed student learning outcomes for all of its courses after its last
accreditation review. An ad hoc evaluation task force was also formed to assess how to utilize
student learning outcomes in the revision of program goals and curriculum.
Systematic and On-going Assessment Process (ALA Standards I.1)
An effective assessment program is characterized by an on-going review and
improvement process in which assessment results are used on a regular basis “to improve student
learning by informing decisions about pedagogy, classroom instruction, curriculum, learning
resources, library resources and services, or student services” (Lopez, 2002, p. 357). Maki
(2010) also prescribed a systematic assessment cycle that begins with identifying the expected
outcomes based on the learning goals and objectives, followed by gathering and analyzing data,
interpreting evidence to evaluate teaching effectiveness, determining if progress was made, and
then implementing any changes required. The iterative process then starts all over again. Banta
and Palomba (2015) further pointed out that the assessment cycle is a continuum with four steps:
define, measure, reflect, and improve. It forms the basis of an effective assessment plan.
Reflecting this framework, the 2008 ALA Standards, especially in the Introduction
section and section I.1, stipulate that an accredited program should continuously and
systematically review, assess the attainment of, and revise its program’s vision, mission, goals,
objectives, and learning outcomes as well as communicate its policies, process, and assessment
activities and results to constituents (ALA, 2008). Given the interdisciplinary nature of the library and information discipline, the constant evolution of knowledge and skills in the profession, and the rapidly changing technology landscape, it is even more critical for MLIS programs to have a systematic process for reviewing and revising their curricula.
Several MLIS programs created an elaborate assessment plan with a specific timeline or matrix of activities relevant to their program objectives and curriculum. Faculty at School J use
an annual assessment timeline with clearly outlined activities and defined responsibilities. The
cycle starts each fall when faculty review school-wide goals and career trends in the library and
information profession. In the spring, the Dean, Associate Deans, and standing committees
present the school’s progress report to faculty. In the summer, the Dean and Associate Deans
gather and synthesize data on goals and progress and work with standing committees to establish
new goals for the fall. Throughout the year, faculty provide input and updates to Associate
Deans and appropriate standing committees. In this model, planning is integrated into the
school’s structure, class schedule, and assessment activities.
Some MLIS programs maintain a multiple-year assessment program that links assessment
results to other existing educational processes, such as curriculum review, strategic planning, and
budgeting. The program planning framework at School F is a good case in point. The school's
framework clearly defines the scope of its planning process, identifies the activities and actors
involved as well as the instruments required to carry the plan out, and establishes the
responsibilities, performance indicators, and timeline of the process. Operating on a rotating
four-year cycle, the planning process focuses on a specific area and assessment activities each
year: year one, mission, goals, and objectives with attention on strategic planning; year two,
teaching and learning, including a comprehensive curriculum review; year three, research; and,
year four, community engagement.
School I operates a three-year curriculum review cycle. In year one of the cycle, the
program’s Masters and Specialist Committee appraises the six core courses and their syllabi
according to program goals, objectives, and ALA accreditation standards. In the second year, the
committee reviews the specialization courses, following the same format as in year one. In the
third year, the committee incorporates the results from the previous two years and recommends, as well as implements, changes to individual courses or course sequences.
At School C, the Curriculum and Program Development Committee
coordinates a systematic review of the entire curriculum, also on a three-year cycle, with input
from relevant stakeholders. Meanwhile, the school holds quarterly two-day faculty retreats to
review program and curriculum as well as to make any adjustments. The on-going and
systematic assessment cycle allows School C’s MLIS program to identify areas for improvement
and develop strategies for implementation or adjustment responsively. Furthermore, the school conducts an annual environmental scan of the job market for MLIS professionals to identify emerging trends and, accordingly, makes changes to the curriculum and existing courses or develops new courses.
Assessment Practice Drivers and Participants (ALA Standards II.7, IV.4)
Assessment as an on-going and systematic process works best when it is woven into
the existing infrastructure and operations of the program. At many campuses, a committee of faculty, staff, and students representing multiple interests and opinions leads the assessment effort. All survey respondents reported that a curriculum committee is in place at their program, while 77% of them further indicated that they had a committee responsible for assessment efforts.
Additionally, several of the respondents pointed out that their curriculum committee is also
responsible for learning outcomes assessment. Two respondents further commented that all of
their faculty are members of the assessment committee.
The ALA Accreditation Standards stipulate that the assessment of students’ achievement
should involve those served by the program (II.7, IV.4). A committee with a diverse
membership allows a variety of perspectives to be included and multiple voices to be heard in the
discussion. Table 13 depicts the composition of both the Assessment and Curriculum
Committees based on ALOs’ survey responses. Faculty members, with the largest presence, lead
the assessment efforts and curriculum development while students and practitioners provide
insight and interpretation from their respective viewpoints. Although the charges and roles of
these two committees are different, their composition and size are almost identical.
Table 13
Composition of Assessment and Curriculum Committees

Membership                                                  Range    Assessment Committee Mean   Curriculum Committee Mean
Faculty                                                     0 - 60   6.5                         6.2
Students                                                    0 - 20   1.4                         1.7
Practitioners                                               0 - 20   1.1                         1.0
Administrators                                              0 - 3    0.7                         0.8
Others (staff, teaching assistants, external evaluators)    0 - 2    0.4                         0.4
Total                                                                10.1                        10.1
The content analysis also reveals that each of the 12 MLIS programs has a designated
committee which manages the outcomes assessment practice in the program. As can be seen in
Table 14, seven of those 12 committees (58%) have the term “curriculum” in their name. These
curriculum committees, as described in program presentations, oversee the tightly related
domains of curriculum guidance and outcomes assessment. For smaller MLIS programs, it can
be more efficient and effective to have a single committee administer both areas. Therefore, the
absence of an assessment committee at some MLIS programs does not signify a lack of
assessment activities, commitment, or efforts.
For the remaining five MLIS programs, a designated program committee is responsible
for all business, including assessment and curriculum, related to the MLIS program. For
example, the MLIS Program Committee at School H is charged with general oversight of the
program, including the courses and curricular matters, academic policies, and future program
planning. Membership of the committee includes faculty, staff, and student representatives. At
School D, its MLIS Committee is responsible for evaluating student achievement and the extent
to which the program is accomplishing its objectives. Its membership includes a student
representative, faculty members, and staff members who meet monthly. The committee also
created several subcommittees to focus on issues related to program learning outcomes
development and program specialization. One of its subcommittees is the Graduate Outcomes
Assessment Subcommittee. Its membership includes a representative from Student Services, the
MLIS Program Director, and the MLS Specialization Directors. It is charged with the tasks of
assessing the attainment of the program’s learning outcomes and reviewing the rubrics used for
such measurement.
Table 14
Assessment Committee Names of the 12 MLIS Program Presentations Reviewed
Program Committee Name
A Master's Committee
B Curriculum Committee
C Curriculum & Program Development Committee
D MLS Committee, including Graduate Outcomes Assessment Subcommittee
E Curriculum Committee
F Curriculum Advisory Committee
G Curriculum Committee
H MLIS Program Committee
I Master's and Specialist Committee
J Curriculum Steering Committee
K Professional Program Committee
L Curriculum Committee
The committee charges mentioned in program presentations of the 12 MLIS programs are
summarized in Table 15. As these committee names imply, the most common committee
charges are curriculum- and course-related, including curriculum review and revision; review,
approval, and development of new courses and final projects; course scheduling, and
implementation of the changes. Student learning outcomes assessment is the next key
responsibility. Program-related tasks, including review of program goals, objectives, and policy,
and establishing degree requirements represent another cluster of responsibilities.
Table 15
Key Committee Charges of 12 MLIS Programs

Charge                                                                  Count
Curriculum review and revision                                          9
New courses and specialization review, approval, and development        6
Student learning outcomes assessment                                    6
Program goals and objectives review                                     5
Monitoring changes and implementation of curriculum or courses          4
Development of comprehensive exam and portfolio                         3
Establishment of degree requirements                                    2
Maintenance of course rotation schedule                                 1
Policy review                                                           1
As stipulated in the ALA Standards, the program presentations of the 12 MLIS programs specifically mention efforts to include a wide range of constituencies in the assessment process. As Table 16 shows, the assessment process at all 12 programs involves alumni, employers, faculty, and students. In eight of the programs, the committees also actively seek input from other program committees as well as from practitioners associated with the program. For example, School C's Curriculum &
Program Development Committee, consisting of 10 members who are full-time and part-time
faculty, alumni, and student representatives, coordinates a systematic review of the entire
curriculum on a three-year cycle. The committee also incorporates input from the School’s
Alumni Board, International Advisory Council (comprising leaders in the profession), Program
Advisory Board (comprising practitioners), and Student Association. At School A, the Master’s
Committee, composed of three faculty members, two student representatives, two program
coordinators, and the manager of student services, gathers opinions from the school's
Administrative Board, the Board of Visitors, and colleagues and collaborators on campus as well
as from other peer MLIS programs.
The inclusion of students and practitioners in a program’s assessment operation can be
significant. Their direct participation reinforces the vital role of students in improving teaching
and learning processes. In addition, students, as the contributors of the assessment data, can
offer accurate interpretations of assessment results (Ewell, 2010a). Students can also contribute
to the awareness of the importance of outcomes assessment and the dissemination of assessment
results. To enhance students' experience of and enthusiasm for the capstone project, which requires students to draw on knowledge and skills derived from multiple courses, School H hosts an annual Capstone Event. Based on student presentations, a panel of judges from the professional community awards prizes for projects demonstrating the greatest impact in two categories: social impact and commercial impact. Audience Choice Awards are also distributed
to projects voted on by the attendees. At School L, several student awards, judged by faculty
members, are presented at the School’s annual convocation ceremony. The program at School B
cosponsors an annual student paper competition with its alumni association and awards the best
paper as judged by a panel of alumni and faculty members. These activities proactively engage
students in the learning and assessment process.
Table 16
Participants in the Assessment Process at MLIS Programs

Participants                                                                No. of Programs
Alumni                                                                      12
Employers                                                                   12
Faculty, including adjuncts                                                 12
Students                                                                    12
Other program committees                                                    8
Practitioners, information professionals, corporate representatives, specialists   8
Administrative staff                                                        5
Internship supervisors                                                      4
Campus colleagues                                                           3
Colleagues from other LIS programs                                          2
LIS librarians                                                              2
Professional associations                                                   2
Prospective students                                                        1
Several MLIS programs also devised innovative ways to expand interaction with various
stakeholders. These activities not only allow participants to voice their needs, but also
strengthen their ties with the program. The MLIS program at School A hosted a series of
“Practitioner Summits” around the state to solicit insights from alumni, practitioners, and
potential employers about future changes, organizational challenges, innovative practices, and
skills required of new employees. The school reported that information collected was invaluable
for revising curriculum, course contents, and program focus. At School L, the school enhanced
the quality and quantity of face-to-face interaction with its stakeholders by sponsoring or co-sponsoring receptions at professional organization meetings, staffing the school's booth in the exhibit area at the ALA annual conference, hosting programs in areas with a large presence of
alumni, and visiting alumni workplaces. In another example, School F sponsored annual meetings of the local chapter of the Association of Records Managers and Administrators to promote
its program, expand its network of contacts into information management communities, and
establish connections with federal government departments and agencies. School F’s program
also conducted site visits and interviews with supervisors of its experiential learning project
sponsors on a regular basis.
Assessment Measures and Use of Assessment Results
In the Assessment Measures section, survey respondents reported the frequencies of
direct and indirect assessment measures employed in their program. They also described the
degree to which assessment results are applied in improving program performance and student
learning.
Multiple Assessment Approaches (ALA Standards IV.4)
Students arrive at an academic program or a class with diverse backgrounds and
experiences. Their learning occurs over time, at different paces, both inside and outside the classroom, and throughout the course of the program. Students do not respond equally to the same type of pedagogy or educational experience. Relying on a single assessment instrument, such as class assignments or a student survey, is inadequate to discern the vast set of skills,
competencies, behaviors, values, and attitudes that students are expected to acquire. Effective
assessment plans, thus, should employ a combination of direct, indirect, formative, summative,
and other assessment methods to gather both qualitative and quantitative evidence of student
learning. The data can then be triangulated to reduce discrepancies and to provide more accurate
and credible information for decision-making (Banta & Palomba, 2015; Lopez, 2002; Maki,
2010).
The major difference between direct and indirect assessment hinges on how the evidence
of learning is collected and assessed. Direct evidence of student learning is “tangible, visible,
self-explanatory, and compelling evidence of exactly what students have and have not learned,”
while indirect evidence contains “proxy signs that students are probably learning” (Suskie, 2009,
p. 20). Although indirect methods are further removed from examining student work directly,
they offer a different lens through which to view how learning occurs. The indirect evidence of
student learning complements the findings from the direct measures and further contributes to a
deeper understanding of student learning.
Section IV.4 of the 2008 ALA Standards specifies that students should receive
systematic, multifaceted evaluation of their achievements. Table 17 summarizes a compilation
of commonly used direct and indirect assessment measures at both course level and program
level. In this two-by-two matrix, the vertical axis represents the two possible levels for
conducting assessment, either at individual course level or at program level. The horizontal axis
denotes the two types of measures, direct and indirect. The matrix, thus, represents four possible
combinations or cells of assessment measures.
Table 17
Examples of Direct and Indirect Measures for Assessment

Course level
Direct measures:
  Class discussion participation or presentation
  Course assignments, term papers, reports, research projects
  Examinations, quizzes, standardized tests
  Rubric
  Summaries and assessments of electronic class discussion threads
Indirect measures:
  Course evaluations
  Course grades, if not accompanied by a rubric or scoring criteria
  Grades that are not based on explicit criteria related to clear learning goals
  Number of student hours spent on homework, service learning
  Percent of class time spent in active learning
  Test blueprints (outlines of the concepts and skills covered on tests)

Program level
Direct measures:
  Capstone projects
  Comprehensive examination
  Employer and internship supervisor ratings of student performance
  Feedback from computer-simulated tasks
  Pass rates or scores on licensure, certification exams, or subject area tests
  Portfolios of student work
  Student publications, conference presentations
  Student reflections on their values, attitudes, and beliefs
  Thesis
Indirect measures:
  Alumni perceptions of their career responsibilities and satisfaction
  Benchmarking, comparing to programs at other institutions
  Curriculum mapping
  Department or program review data
  Focus group interviews with alumni, employers, faculty, and students
  Honors, awards, and scholarships earned by students and alumni
  Job placement rates, data
  Quality and reputation of graduates and programs
  Registration or course enrollment information
  Retention or graduation rates
  Scores on tests required for further study, such as GPA, GRE
  Student participation in faculty research, publications, conference presentations
  Student ratings of their knowledge and skills and reflections on what they have learned over the course of the program
  Surveys of alumni, employers, faculty, and students
  Voluntary gifts from alumni and employers

Adapted from Banta & Palomba (2015), Maki (2010), Middle States Commission (2007), Suskie (2009)
Survey Results
Survey respondents were asked to indicate the degree, ranging from None (value = 1) to All (value = 6), to which 17 assessment measures were used to assess their students' learning at their program. Student course evaluation, an indirect course-level assessment instrument, was not included in the survey questionnaire, as it is common practice at all MLIS programs for students to fill out course evaluations at the end of each term. Table 18 lists the 17 assessment tools, with their mean scores and standard deviations in parentheses, ordered by popularity within the framework of the assessment measure matrix.
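For illustration, the minimal Python sketch below computes the mean and sample standard deviation reported in Table 18 for one hypothetical set of responses coded on this 6-point scale; the response values are invented for the example.

    import statistics

    # Hypothetical responses for one measure, coded on the survey's scale
    # (None = 1, Few = 2, Some = 3, About half = 4, Most = 5, All = 6).
    rubric_use = [6, 5, 4, 5, 6, 4, 5, 3, 5, 6]

    mean = statistics.mean(rubric_use)
    sd = statistics.stdev(rubric_use)  # sample standard deviation
    print(f"Rubric: {mean:.2f} ({sd:.2f})")  # mean (SD) format used in Table 18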
Eight of the 17 assessment measures scored higher than 4 points, which indicates that, on average, they are used with more than half of the students at the participating MLIS programs. Five of
these methods are indirect measures used at the program level: (1) Exit Survey or interview; (2)
student surveys, interviews, focus group, forums; (3) curriculum mapping; (4) alumni surveys,
interviews, focus groups; and (5) Retention, graduation rates. The remaining four are direct
measures at either course level ([1] class assignments, exams, research projects; [2] rubric) or
program level ([1] internship supervisor evaluations; [2] capstone, culminating projects).
Table 18
Mean Score and Standard Deviation, in Parentheses, of Each Assessment Measure Used to Assess Student Learning Based on a 6-Point Likert Scale*

Course level
Direct:
  Class assignments, exams, research projects: 5.92 (0.28)
  Rubric: 4.76 (1.05)
Indirect:
  Student course evaluation**

Program level
Direct:
  Internship supervisor evaluations: 4.70 (0.90)
  Capstone, culminating projects: 4.08 (0.28)
  Portfolio: 3.87 (2.18)
  Certification exams: 2.43 (1.24)
  Comprehensive exams: 2.08 (1.93)
  Thesis: 1.74 (0.86)
Indirect:
  Exit survey or interview: 4.67 (1.58)
  Student surveys, interviews, focus groups, forums: 4.52 (1.20)
  Curriculum mapping: 4.38 (0.97)
  Alumni surveys, interviews, focus groups: 4.37 (0.95)
  Retention, graduation rates: 4.25 (1.20)
  Employer surveys, interviews, focus groups: 3.95 (0.82)
  Internship supervisor surveys, interviews, focus groups: 3.87 (1.10)
  Job placement rates: 3.87 (0.93)
  National student surveys (e.g., NSSE, CSEQ, SSI): 1.77 (1.30)

*Degree of use: None = 1; Few = 2; Some = 3; About half = 4; Most = 5; All = 6
**Student course evaluation, used by all MLIS programs, was not included in the survey
Content Analysis Results
The measures mentioned in the 12 program presentations illustrate an even richer and more diverse set of means that MLIS programs employ to gather assessment evidence. Among these 12 programs, the number of tools adopted varied widely, from 10 to 37, with an average of 25 tools per program. Table 19 lists an inventory of the 38 instruments and the number of adopting programs, in parentheses, derived from the program presentations. Similar to the findings from the survey, more than half (52.6%) of these tools collect indirect evidence at the program level, followed by tools measuring direct evidence at the program level (23.7%). Course evaluation,
internship supervisor rating, and surveys of alumni and students are used by all 12 MLIS
programs. At the program level, internships and portfolios represent the most commonly used end-of-program tools to assess students' overall learning directly. The capstone, another popular tool reported in the survey, was mentioned less often here. Surveys, interviews, and meetings involving key stakeholders are the most popular indirect measures at the program level. Another popular category of tools is the numeric data related to graduation, retention, graduate employment, and student grades. These statistics are usually maintained for reporting to parent institutions, federal agencies, or professional organizations.
At the course level, similar to the survey findings, rubrics and course-based assignments
are commonly used to collect direct evidence. All programs rely on student course evaluations
to collect indirect evidence on student learning.
Table 19
Number of Programs, in Parentheses, Employing Direct and Indirect Measures for Assessment

Course level
Direct measures:
  Rubric (9)
  Course assignments, term papers, reports, research projects, examinations, quizzes (8)
  Class discussion participation or presentation (7)
  Completion of prerequisite exams, non-credit courses (3)
Indirect measures:
  Course evaluations (12)

Program level
Direct measures:
  Employer and internship supervisor ratings of student performance (12)
  Portfolios of student work (8)
  Student publications, conference presentations (7)
  Thesis (5)
  Capstone projects (3)
  Comprehensive examination (3)
  Student reflections on their values, attitudes, and beliefs (3)
  Pass rates or scores on licensure, certification exams, or subject area tests (1)
Indirect measures:
  Alumni surveys (12)
  Student surveys (12)
  Employer surveys (10)
  Focus group interviews with alumni, employers, faculty, and students; town hall meetings (10)
  Honors, awards, and scholarships earned by students and alumni (10)
  Benchmarking, comparing to programs at other institutions, program ranking (8)
  Internship supervisor surveys (8)
  Scores on tests required for further study, such as GPA, GRE, grades of core courses (8)
  Graduation rates (7)
  Placement data, rates (7)
  Retention data, rates (7)
  Exit interviews (6)
  Inputs from advisory committees (6)
  Alumni performance (5)
  Curriculum mapping; number of new or revised courses, joint programs, specializations (5)
  Faculty surveys (5)
  Student participation in faculty research, campus committees, professional organizations (5)
  Registration, enrollment rate, or course enrollment information (4)
  Number of internships, completion of thesis defense (3)
  Annual reports, department or program review data, accreditation assessment report (2)

Other measures
  Environmental scan, job market reviews (2)
  Student background, current positions (2)
  Advising, technology, writing support (1)
  Faculty-student ratio (1)
Direct Measures at Course Level
A rubric is the most popular tool at the course level. For students, rubrics provide a list of
criteria to guide them in completing their assignments. For instructors, rubrics serve as scoring
guides to evaluate student work or performance. At School D, rubrics are also used to assess the
performance of classes. For each class, the school’s curriculum committee developed an
instructor rubric based on the learning outcomes specified in its syllabus. At the end of each
term, the instructors assess the outcomes of each student using the rubrics. The MLS Committee
then reviews these completed rubrics in the spring and fall semesters to determine the attainment of the course's learning outcomes and makes necessary adjustments. Course-embedded assignments are the second most used tool to assess student learning at MLIS programs.
Direct Measures at Program Level
As professional degree programs, all MLIS programs offer an internship as a required or elective course. Internships offer students direct exposure to real-life settings and problems. Under the guidance of practitioners, students get to test their academic skills, gain hands-on experience, and explore the profession. The internship, based on stated program objectives, is the most commonly used direct measure. Both School D and School E use the internship as the final project for their students.
Several programs place strong value on internship. The curriculum at School K includes
a compulsory service learning component as part of a required core course on ethics, diversity,
and change. Students are matched to a site based on their skills and professional interests. The
emphasis of this course is to expose students to issues related to diversity, plurality, ethics,
accountability, change, leadership, and social justice. The students are required to perform a
minimum of 20 hours of service learning over a 10-week period. They have to keep a reflective journal and attend regular class meetings to debrief their experiences and discuss issues encountered, especially those related to ethics and diversity.
At School C, site supervisors are invited to complete an online survey to help the
program improve the field experience. Students also submit a final report on the lessons learned
and issues encountered. Based on this feedback, the program revised its site supervisor
guidelines and modified the schedule for interaction between site supervisors and faculty
advisors.
At School L, the internship is part of a series of extracurricular engagement options in which students are encouraged to participate. Other opportunities for students to build competencies
include graduate assistantship, alternative spring break placement, participation in student
chapters of professional organizations, attendance at school and campus workshops, seeking
mentorship, involvement in school research groups, giving presentations at school, and serving
on school and campus committees.
Final Project
The end-of-program project, or final project, is a performance-based measure that offers students the opportunity to demonstrate how they apply what they have learned throughout the
program in an area of their interest. Projects such as comprehensive exam, capstone project,
portfolio, and thesis allow students to integrate the knowledge, skills, and values they have
developed over the course of the program through course work, field experience, program
activities, and community engagement. By focusing on the areas of their interest or program
concentration, students can further reflect on the specialized knowledge and skills in relation to their
future career paths and what they might bring to the profession. A final project offers direct
evidence of student learning at the program level. It serves as a tool to gauge how well the
program is doing with achieving its objectives.
Most of the MLIS programs examined require students to complete at least one final
project at the end of their study. Among the 12 MLIS programs reviewed, only two programs
lacked a final project requirement. Eight programs require one final project, and two programs
require two final projects. Four programs allow students to choose their final project from a set of options, including a comprehensive exam, portfolio, and thesis. For example, in
order to graduate, students at School A are required to complete two final projects: first, pass a
closed-book, seven-hour comprehensive exam; second, submit a master’s research paper that
includes sections of literature review, research questions, methodology, collection and analysis
of data, evaluation results, and synthesis of findings. On the other hand, students at School B can choose among three options for their final project: a comprehensive exam, thesis, or portfolio project. Table 20 lists the possible options and the number of programs that offer them. Similar to the survey findings, the portfolio was the most common type of final project, followed by the thesis and capstone projects. The comprehensive exam is another popular type, but it is often used in combination with another type of final project.
Table 20
Final Project Requirements by Program Count from Content Analysis
Final Project(s) Count
Portfolio 5
Thesis 4
Capstone 2
No requirement 2
Comprehensive exam 1
Practicum 1
Comprehensive exam and practicum 1
Comprehensive exam and thesis 1
Indirect Measures at Course Level
All 12 MLIS programs employ course evaluations to collect feedback from students on
their learning. Some programs use standardized surveys operated by the university, while
others develop their own surveys tailored to local situations and needs. At School A, students offer
feedback through a two-part course evaluation process. The first part is the state’s online course
evaluation survey that collects quantitative data based on students’ ratings on a five-point scale
in the areas of course/instructor characteristics, overall course assessment, feedback to students,
and instructor’s command of the material. The second part is the school’s own evaluation form,
originally developed by its student association, with a focus on providing qualitative and
actionable feedback to the instructors. The survey consists of open-ended questions seeking
feedback on course organization, course readings, most and least useful learning experiences,
degree of challenges, and faculty/student interaction.
Indirect Measures at Program Level
A wide range of indirect measures are employed at program level among all 12 MLIS
programs. Locally developed surveys are the most popular tools used to gather feedback from
key stakeholders, including students, alumni, faculty, employers, and internship supervisors.
One major issue with surveys is that, when used alone, they capture only participants'
opinions of how much students have learned; they do not collect direct, supporting
evidence of learning. Therefore, surveys should be used in combination with direct measures of
student learning (Lopez, 2002). Several MLIS programs invest extra efforts to synthesize
assessment data from multiple sources to reach a more concrete understanding of student and
program performance. The Master’s Committee at School A aggregates data collected from
multiple measures at the macro level, including exit interviews, comprehensive exam mean scores,
student exit surveys, alumni surveys, and town hall meetings, to assess student perceptions of the
quality of the program. The feedback is analyzed and applied for program improvement,
student services, and curriculum changes. At School F, a comprehensive curriculum review
incorporates a wide range of measurement results, including student course evaluations, work
placement reports, surveys of students, alumni, and employers, focus group meetings, interviews
with CO-OP employers and experiential learning supervisors, and annual reports on student
achievement. Meanwhile, results from capstone evaluations provide a means of
assessing learning outcomes as students complete the program. The program at School K
follows a similar approach in assessing the attainment of goals, objectives, and learning
outcomes. A variety of instruments, including course assignments, focus-group meetings, town-
hall meetings, surveys of students, course evaluation, and external professional participants on
portfolio review panels are used in the process.
Summative and Formative Assessment
Several programs reported that students are assessed for progress throughout their
educational journey. At School C, students are evaluated at three transition points in the
program to ensure that the curriculum and courses are designed to prepare them for a successful
learning experience. At the beginning of the program, all new students must take and pass a
prerequisite course that prepares them for the school’s online learning environment. The second
checkpoint is when students complete the three required core courses, which serve as
prerequisites for other courses later in the program. Students have to earn at least a “B” grade in
each of the core courses in order to move on; otherwise, they can re-take the course. The
culminating experience course, either the e-portfolio project or thesis, serves as the final point for
assessing student achievement.
School F surveys the student learning experience at various stages in the program: at the
beginning of the program, at the mid-way point, when they take the CO-OP option, and at their
exit from the program. The school reported that the survey data were valuable in assessing
curriculum, student services, and course content. For example, based on student comments, the
program increased the leadership component in the curriculum as well as improved the rotation
of courses offered in the evening sessions. At School B, students receive a wide range of
formative and summative evaluation throughout their advancement in the program. Through the
program’s course management system, faculty keep students informed of their progress in
individual courses. In addition, students are required to complete an annual self-evaluation
survey to determine their academic advancement in the program. Faculty then use students’ self-
reports, as well as their program plans and transcripts, to monitor their progress and make
recommendations.
Benchmarking or Peer Review
In addition to conducting surveys of alumni, several MLIS programs employ various tools to
compare their outcomes and performance with those of their peer group. Of the 12 MLIS
programs reviewed, Schools A, E, I, J, K, and L used the MLIS program ranking compiled by
U.S. News & World Report or by other professional organizations as an indicator of the quality
of their programs. Another benchmarking tool mentioned in the MLIS program presentations is
the Workforce Issues in Library & Information Science (WILIS) 2 project. Funded by the
Institute of Museum and Library Services, with 39 MLIS programs participating, WILIS 2 is a
project to study the educational, workplace, career, and retention issues faced by MLIS
graduates. The survey asked graduates to rate how well their program prepared them to enter
the profession. Specifically, the survey sought their opinions on their learning outcomes in the
areas of knowledge, skills, and preparedness for employment, technology expertise, capstone
experiences, and suggestions for program improvement (Marshall et al., 2010). Several of the 12
MLIS programs mentioned in their program presentation that they employed the WILIS 2 data to
assess their program performance as well as to compare themselves with peer institutions. For
example, School K reported that its WILIS 2 survey data showed its alumni’s general
satisfaction with the program. However, some of School K’s graduates also expressed concerns
about the strong theoretical focus of the program. Furthermore, the survey data showed that
School K’s alumni were more involved in experiential education than were alumni from other
MLIS programs.
School C also compares its WILIS 2 survey results with those of other participating
MLIS programs. School C’s graduates reported that they were well prepared for their first job
and more than two-thirds of the graduates landed a job within three months of their graduation.
School L also reported that, in the WILIS 2 survey, its alumni rated the program higher than
graduates from other participating programs in terms of preparedness for first jobs, participation
in professional activities, and perceived skills acquired.
At School C, an online-only program, two quality control tools are used to benchmark
and evaluate the quality of its online offerings. At the program level, the school adopted the
Sloan-C Quality Scorecard, now called the OLC Quality Scorecard, from the Online Learning
Consortium to assess the quality of its program. The Quality Scorecard provides a set of 70
indicators for measuring the quality of an online program in eight areas: institutional support,
technology support, course development / instructional design, course structure, teaching and
learning, social and student engagement, student support, and evaluation and assessment.
According to School C’s program presentation, its online MLIS program earned the 2013 Sloan
Consortium Effective Practice Award for innovation in online education. At the course level,
School C’s program applied the Quality Matters program, consisting of a rubric with eight
general standards, to measure and certify the design and quality of its online courses.
Assessment and measurement is one of the eight standards.
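To illustrate, a scorecard of this kind reduces to a simple tally of indicator ratings by area. The following minimal Python sketch assumes, for illustration only, a 0-3 rating per indicator; the area names come from the scorecard described above, but the scores and the rating scale are hypothetical rather than the OLC's actual scoring rules.

# A minimal sketch of tallying scorecard-style indicator ratings by area.
# The eight area names come from the scorecard described in the text; the
# 0-3 rating scale and all scores below are illustrative assumptions.
AREAS = [
    "institutional support", "technology support",
    "course development / instructional design", "course structure",
    "teaching and learning", "social and student engagement",
    "student support", "evaluation and assessment",
]

def summarize(scores_by_area):
    """Return the percentage of possible points earned in each area."""
    summary = {}
    for area, scores in scores_by_area.items():
        max_points = 3 * len(scores)  # assumes each indicator is rated 0-3
        summary[area] = 100 * sum(scores) / max_points
    return summary

if __name__ == "__main__":
    example = {area: [3, 2, 3] for area in AREAS}  # hypothetical ratings
    for area, pct in summarize(example).items():
        print(f"{area}: {pct:.0f}% of possible points")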
Applications of Assessment Evidence (ALA Standards II.7, IV.6)
Section II.7 of the 2008 ALA Standards states that MLIS programs should use
program and student evaluation for ongoing appraisal and for improving student achievement and
subsequent accomplishments. In Section IV.6, the Standards further ask programs to apply the
results of student evaluation to program development.
Survey Results
How are MLIS programs using the results of outcomes assessment? Respondents were
asked to rate their program’s use of assessment findings for various purposes, using a four-point
scale ranging from 1 for “not at all” to 4 for “very much.” Based on the mean usage scores and
the purposes of the application, the 15 activities associated with the use of assessment data can
be roughly grouped into three clusters. Table 21 displays the 15 applications of assessment
results, ranked by respondents’ appraisal within each cluster.
Applications in Group 1 are related to accreditation activities, either at an institutional or
programmatic level. Preparing for programmatic accreditation is cited as the most common use
of assessment data. This finding echoes respondents’ statement that ALA accreditation is the
prime motivation for outcomes assessment practice (Table 12). Other researchers also reached a
similar conclusion. In three large-scale surveys conducted by the National Institute for Learning
Outcomes Assessment, meeting accreditation expectations was cited as the primary reason for
the use of assessment results by U.S. colleges and universities (e.g., Ewell, Paulson, & Kinzie,
2011; Kuh & Ikenberry, 2009; Kuh et al., 2014).
The next cluster of applications of assessment results, with mean scores between 3 and
3.50, is associated with improvement-related activities that include evaluation of program
performance, improving decision and planning process, revision of program goals and
curriculum, and enhancement of instruction. The third cluster of applications contains activities
on using assessment data for making operational decisions about admissions, student services, IT
and library support, faculty and staff performance, resources, and other matters. These activities
scored less than 3.0 and, thus, are less common among all MLIS programs.
Table 21
Applications of Assessment Results Based on a 4-Point Likert Scale From MLIS Program Survey
Applications (Mean / Standard Deviation)
Group 1
Preparing program presentation or self-studies for programmatic or specialized accreditation (e.g., ALA accreditation)  3.67 / 0.64
Preparing program presentation or self-studies for regional institutional accreditation (e.g., North Central Association of Colleges and Schools)  3.08 / 1.06
Group 2
Evaluating overall program performance  3.50 / 0.83
Improving program decision making and planning  3.42 / 0.88
Revising program learning goals  3.38 / 0.92
Reviewing or revising program curriculum  3.25 / 0.94
Improving instruction or pedagogy  3.13 / 0.80
Group 3
Designing more effective student orientation or support  2.58 / 0.88
Increasing the connections between in-class and field learning  2.54 / 0.98
Supporting budget requests or allocation  2.46 / 1.14
Improving IT support  2.08 / 0.88
Determining student readiness for admission to the program  1.92 / 0.93
Evaluating faculty or staff performance  1.88 / 0.95
Enhancing library resources  1.83 / 0.94
Determining student readiness for later courses in the program  1.75 / 0.74
Degree of use: Not at all = 1; Some = 2; Quite a bit = 3; Very much = 4
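The summary behind Table 21 is straightforward to reproduce: compute a mean and standard deviation per application from the 1-4 ratings, then group the applications. A minimal Python sketch follows; the response data are hypothetical, and the mean-score thresholds are only an approximation, since the study grouped applications by purpose as well as by score.

# A minimal sketch of the summary behind Table 21: mean and standard
# deviation per application on the 1-4 scale, then a rough bucketing into
# the three clusters discussed above. All response data are hypothetical.
from statistics import mean, stdev

responses = {
    "Preparing for programmatic accreditation": [4, 4, 3, 4, 3],
    "Evaluating overall program performance": [4, 3, 4, 3, 3],
    "Improving IT support": [2, 2, 3, 1, 2],
}

def cluster(m):
    # Approximation only: the study grouped by purpose as well as score.
    if m > 3.5:
        return "Group 1 (accreditation-related)"
    if m >= 3.0:
        return "Group 2 (improvement-related)"
    return "Group 3 (operational decisions)"

for application, scores in responses.items():
    m, sd = mean(scores), stdev(scores)
    print(f"{application}: {m:.2f} / {sd:.2f} -> {cluster(m)}")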
Content Analysis Results
Results from the review of the 12 MLIS program presentations provide a much richer
picture of the application of assessment results. Program presentations portray not only a more
diverse array of activities, but also depict the context of their use in detail. A total of 36 types of
applications of assessment results were identified from reviewing the 12 program presentations.
Based on the nature of their use, they were further grouped into 14 broad categories (Table 22).
Unlike the survey findings where external requirements, such as accreditation, are cited as the
most frequent use of assessment results, the results from content analysis indicate that most of
the 14 applications are related to internal use for program improvement or enhancement.
Furthermore, the data also reveal that MLIS programs apply the assessment data extensively and
diversely, ranging from course content to student support, program development, and facility
improvement. On average, an MLIS program documented almost 10 different types of use of
assessment data.
Table 22
Applications of Assessment Results, From MLIS Program Presentations (N = 12)
Applications (No. of Programs)
Revising course content, assignments, projects  12
Revising curriculum, student competencies, course sequence  11
Enhancing student advising, orientation programs, placement services, and timely feedback  9
Developing new courses, new concentrations, workshops; cooperating with other academic units  8
Improving technology support; incorporating new technology tools  7
Adjusting course schedule and class size  7
Changing policy on admission, degree, prerequisites, retention, technology requirements; revising recruitment practice  7
Identifying gaps and challenges of the program; monitoring trends in the field  6
Evaluating faculty or staff performance  5
Improving pedagogy, faculty skills  5
Enhancing communications with students; holding regular meetings with students; revising program websites  5
Preparing for accreditation reports or work  4
Improving onsite facility  2
Revising policy and procedures  2
Mean of applications per MLIS program: 9.75; Range: 3 – 19; Mode = 12; Median = 8.5
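The descriptive statistics in the note to Table 22 can be reproduced with a few lines of Python. The per-program counts below are hypothetical values chosen to match the reported summary; the study did not publish the underlying counts.

# A minimal sketch reproducing the descriptive statistics reported under
# Table 22. The per-program counts are hypothetical values chosen to match
# the reported summary; the underlying counts were not published.
from statistics import mean, median, mode

counts = [3, 4, 6, 7, 8, 8, 9, 12, 12, 12, 17, 19]  # one count per program

print(f"Mean:   {mean(counts):.2f}")             # 9.75
print(f"Range:  {min(counts)} - {max(counts)}")  # 3 - 19
print(f"Mode:   {mode(counts)}")                 # 12
print(f"Median: {median(counts)}")               # 8.5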
The following examples derived from program presentations showcase salient instances
of applying assessment results at MLIS programs. These sample assessment practices,
particularly those that involved stakeholders in meaningful ways and contributed to program
improvement, are indicators of the diverse use of assessment results and its integration into the
systematic and ongoing program review process.
Program development and planning. In 2009, the MLIS program at School D underwent a
comprehensive review and revamping process to realign the program’s mission, goals,
and curriculum with the university’s new strategic plan. An ad hoc committee,
consisting of faculty, staff, and a student representative, engaged in a rigorous process of soliciting
input from key stakeholders. They surveyed alumni, current students, and faculty; conducted a
panel discussion with employers at a faculty retreat; researched current employment requirements
of information professionals; reviewed programs at peer institutions; consulted competency
statements and standards from relevant professional organizations and ALA accreditation
standards; and studied the program’s learning outcomes assessment results as well as student
course evaluations. The new mission statement, goals and objectives, core curriculum, areas of
focus, admissions requirements, and technology requirements recommended by the committee
were accepted by the faculty and implemented in 2010.
Before developing new program concentrations in health librarianship and youth services
librarianship, the MLIS program at School G surveyed library directors, employers, and alumni
to assess the demand for these two specializations. The positive feedback resulted in the
school’s establishment of these two program tracks as well as an amendment of the cataloging
and classification track’s curriculum.
Enhancing student services. The feedback from the annual alumni surveys at School L
indicated inadequacy in academic advising. Consequently, the program created the position of
Advising Coordinator to enhance advising support for students. School D also hired a student
advisor and rented additional office space at its new campus based on the feedback from
current students. To improve retention and graduation rates, School G surveyed the students
who had dropped out and incorporated awareness-building activities into its orientation program.
At School J, feedback, comments, and recommendations gathered from surveys, “pizza with the
dean” sessions, the alumni board, internship supervisors, and employers led to the creation of the
Career Services Office.
Revising curriculum or course. Based on assessment results, MLIS programs make
adjustments in the curriculum. These changes include adding a new course, revising existing
courses substantially, adding new degree requirements, developing new concentration, and
modifying course sequences. At School A, the alumni survey revealed a concern that some
new students lacked adequate basic information technology skills. Hence, the program
now requires all incoming students to pass an information technology competency test. Those
who fail are required to take a non-credit remedial class. In the same vein, the faculty teaching
technology-intensive courses at School L expressed a concern regarding the technology expertise
of some students. In collaboration with student representatives on the Curriculum Committee and
the Student Advising Coordinator, faculty developed an introductory technology course for new
students.
In another case, School B experienced an unprecedented failure rate on the student
comprehensive exams. The school surveyed its faculty on revising the exams and subsequently
held a meeting to redesign the exam as a take-home comprehensive examination. At School F,
the leadership component of the program was augmented in the curriculum based on the
feedback from alumni and current students.
In 2009, the MLIS program at School H went through a curriculum review process
led by the MLIS Program Committee. To collect as much input as possible, the Committee
surveyed a wide variety of stakeholder groups, including current students, alumni, employers,
and faculty as well as held open meetings and interviews. Based on the feedback received, the
Committee recommended eliminating the professional portfolio and thesis options for the
degree’s final project to better align with the program goals, student learning outcomes, and the
expectations of employers. The capstone project became the sole final project requirement for
completing the program. In addition, to better prepare students for the capstone experience, a
project management course was added as a prerequisite.
Improving communication with students. At School B, the feedback from an alumni
survey indicated that the school had not done enough to encourage student participation in
professional activities and organizations. As a result, the school launched a series of efforts,
including disseminating a weekly digest to all students, inaugurating a Facebook site, and
redesigning its website to improve the communication with its students. At School C, the
feedback from the online Student Opinion of Teaching Effectiveness survey prompted the
faculty to make several changes in communicating and working with the students, including
more prompt responses to student inquiries, more timely assignment feedback, clearer
assignment instructions, and more precise grading expectations.
Preparing for accreditation. At School E, learning outcomes assessment is closely
associated with the evaluation of the MLIS program. Students’ end-of-program examination
forms the basis for learning outcomes assessment, and its results are then reported to the Southern
Association of Colleges and Schools (SACS). At School A, the comprehensive exams at the
end of the program are graded using the program’s learning outcomes as a rubric. The results of
the exams are analyzed and presented as part of the school’s annual SACS Accreditation
Review. At School K, the program was well positioned, due to its active assessment efforts, to
anticipate the work required for both ALA and Western Association of Schools and Colleges accreditation.
Organizational Supports
In this section, survey respondents reported the number of FTE personnel, if any,
dedicated to assessment activities as well as the status of the personnel. They also reported the
degree of faculty and staff involvement in assessment work.
Assessment Personnel
Assessment is resource-intensive, and its return on investment can take time to manifest.
Given current financial circumstances, previous studies reported that many campuses conduct
assessment on a shoestring budget (Ewell, Paulson, & Kinzie, 2011; Kuh & Ikenberry, 2009).
When asked about the resources currently devoted to their assessment infrastructure, only one-
third (33%) of the respondents indicated they had at least a part-time position dedicated to
assessment work in their program. The personnel commitment ranges from 0.25 to 3 FTE.
Among those dedicated assessment personnel, 20% were tenured faculty; 30% were non-tenure
track faculty; and 50% were staff members. Compared with a 2011 study by Ewell, Paulson,
and Kinzie, in which 69% of the 2,719 department heads surveyed indicated they had at least one
part-time assessment staff member, MLIS programs seem to be undercapitalized in their assessment efforts.
Given the relatively small size of personnel and budgets at MLIS programs in general,
the work and responsibility involved in assessment might be perceived as an add-on to faculty
or staff members’ regular duties. For example, at School K, the MLIS program leveraged the expertise of a
faculty member whose background was in instructional design, and the information literacy
librarian, who had been involved in evaluation of instructional programs. School K’s MLIS
program, as part of the Graduate School of Education & Information Studies, also benefited from
its association with the university’s reputable research center on the evaluation of educational
programs. In addition, School K held regular meetings and training sessions on curriculum design for
its faculty and doctoral students. At School C, the school’s dean provides administrative and
financial leadership in assessment, while the associate director is responsible for preparing
assessment reports for the Western Association of Schools and Colleges and ALA as well as for
public disclosure.
Receiving temporary guidance and support from outside experts on assessment work is
another approach to leveraging external assessment resources. For example, the MLIS program
at School G recruited two assessment specialists from the University’s College of Education to
work with its faculty in creating and implementing a systematic evaluation and planning cycle
for its assessment plan. At School L, an outside consultant was brought in to lead a series of in-
depth discussions with program constituencies, including students, faculty, alumni, and staff,
about the opportunities and challenges faced by the program. Subsequently, six working teams,
including one on student experience and one on curriculum, were formed to further address these
issues.
Increasingly, institutions are setting up teaching and learning centers on campus that offer
a wide range of instructional support and help faculty to connect instruction and assessment
(Hutchings, 2010; Kinzie, 2010). Relying on the teaching and learning center and assessment
specialists on campus to facilitate and improve assessment practice is another approach used by
MLIS programs to enhance assessment work. Faculty at School E can consult with the
university’s teaching center in the creation, design, implementation, and assessment of online
courses. The Center also sponsors seminars and workshops throughout the year for faculty and
teaching assistants. In addition, the department also provides a course release to new faculty
members during their first year of employment to allow them to develop adequate skills for
designing and teaching online courses.
Although the majority of the 12 MLIS programs rely on assessment or curriculum
committees to lead and carry out outcomes assessment operations, a few were able to create a
dedicated assessment position. School I established a director of assessment position
after its reorganization. In this capacity, the director assumed the leadership role in instituting a
sustainable solution at the school to evaluate student learning outcomes and ensure its
incorporation into the review processes. In addition, the director is responsible for guiding the
school’s compliance with the university’s method for assessing institutional effectiveness and
ALA’s accreditation standards for ongoing outcomes assessment and quality improvement. The
school was further committed to hiring an additional full-time staff member to support the
initiative to develop a sustainable and holistic solution for evaluating student learning outcomes
and ensuring their incorporation into the review processes.
School D hired its former associate dean as special assistant to the dean to lead
accreditation preparation and other special projects. The person was advised and assisted by a
committee consisting of faculty, staff, alumni, and student representatives. Although this is not
an assessment position, the incumbent is directly and indirectly involved in outcomes assessment
in relation to the preparation for school’s accreditation.
Faculty Engagement
In addition to organizational support and dedicated resources, faculty participation is
critical for a meaningful assessment practice. Several library authors pointed out that cost, time,
added workload, and the diversion of faculty from teaching or research activities are common
challenges in engaging library faculty in assessment work (Applegate, 2006; Burke & Snead,
2014; Cox et al., 2012).
When asked about faculty and staff involvement in assessment activities, 67% of
the survey respondents reported that most or all of their faculty and staff are involved in
assessment activities (Table 23). This account affirms the earlier report that faculty, together
with the director or dean of the MLIS program, are the second driving force, after accreditation,
in the development of assessment plans and practices at MLIS programs (Table 11).
Table 23
Faculty Involvement in Assessment Activities
Faculty involvement Percentage
All 42%
Most 25%
About Half 13%
Some 17%
Few 4%
None 0%
Faculty are at the center of assessment practice, as mentioned in the program presentations
of several MLIS programs. MLIS faculty members typically chair and lead the assessment
committees or curriculum committees as well as drive assessment initiatives or innovative
practice. For example, faculty members at School J experimented with various ways to assess
student learning at course and program levels. Faculty implemented pre- and post-tests in
several required courses to measure students’ initial mastery and subsequent gains with regard to
program learning objectives. Faculty members also pilot-tested an electronic assessment matrix
in conjunction with student-designed e-portfolios. Results from these experiments were
discussed at the School’s curriculum committee and changes in course and curriculum were
eventually adopted. Faculty at School L organized two retreats on online pedagogy.
Participants, including faculty, administrative staff, doctoral students, and instructional
designers, discussed the best practices in the online learning environment. An online community
was later created for sharing the outcomes of the retreat and for continuing the dialogue among
program participants.
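Pre- and post-test results such as those piloted at School J are commonly summarized as a raw gain (post minus pre) or as a normalized gain that scales the raw gain by the room left for improvement. A minimal Python sketch, with hypothetical scores:

# A minimal sketch of summarizing pre-/post-test scores. Raw gain is post
# minus pre; normalized gain scales the raw gain by the room left for
# improvement. All scores below are hypothetical.

def normalized_gain(pre, post, max_score=100.0):
    """Fraction of the possible improvement actually realized."""
    if pre >= max_score:
        return 0.0  # no room left to improve
    return (post - pre) / (max_score - pre)

students = [(55, 80), (70, 85), (40, 75)]  # (pre, post) pairs per student

for pre, post in students:
    print(f"pre={pre} post={post} raw gain={post - pre} "
          f"normalized gain={normalized_gain(pre, post):.2f}")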
Benefits of Assessment and Accreditation
In the final section of the survey, participants offered their thoughts on the value and
benefits of assessment practice and ALA accreditation, as well as additional comments on this
topic.
Assessment Practice
The survey participants were given the opportunity to offer additional insight on their
assessment practices in areas not covered in the survey questions. Eleven participants responded
to this open-ended question. The following common topics emerged from the responses.
University Mandate
Three respondents pointed out that assessment at their program is part of a large-scale,
institution-wide assessment practice and mandate. One respondent described clearly that
“[a]ssessment in the School is very closely integrated with a comprehensive University-wide
assessment program that includes both academic and support units.” Another respondent echoed
in detail that “Our program is part of a larger university initiative on assessment. There are
annual assessment reports that are required from each unit.” The third participant also reported
that “This is very much an institutionalized process at our university.” However, one respondent
pointed out a concern that an institutionalized and top-down approach might lack flexibility to
include individual programs’ unique needs and differences as “[e]ach unit may develop its own
assessment plan, but university reporting is according to a template.”
Increasing Importance and Work in Progress
Three participants identified the increasing importance of the role of outcomes
assessment. One indicated that outcomes assessment “is an area that we are paying increased
attention to.” Another pointed out that “ALA says that this is its most important measure for
accreditation.” One respondent mentioned that they are making progress in this area by
reporting, “They’re very much under development. Program level learning outcomes are in
place and we’re at the beginning of the process of developing course-level learning outcomes
and assessments.” Meanwhile, another respondent pointed out that assessment practice is in
place: “That we have them. That we measure student progress towards achieving them in all our
aspects of all our courses.”
Best Practice
Two respondents shared their current practices in detail. One reported,
You might like to know how we do it! 1) We ask faculty to show how course objectives
(intended learning outcomes at course level) map to our Goals for Graduates of the MLIS
program; 2) We are required by the Provost each year to take two of our Goals for
Graduates and assess the extent to which a sample of students in two of our core courses
have met these goals; 3) We use the Goals for Graduates as an evaluative framework for
the reflective report that students participating in our year-long work experience program
are required to complete at the end of the year.
Another respondent stated that “The rubric we use to evaluate student portfolios is
directly connected to the learning outcomes and a major source of data for revisions and updates
to the learning outcomes.”
Be Aware of Issues
While recognizing the importance of learning outcomes assessment, two respondents
cautioned about the potential issues of the practice. One warned that “While quantifying
learning is a popular thing to do and does provide a way of measuring what we do, [one needs to]
retain a healthy skepticism of the process.” Another one pointed out that current ALA
accreditation practice still “focuses more on inputs than these outputs/outcomes.”
Benefits of Accreditation
The survey invited participants to comment on the benefits of accreditation. Eighteen
respondents took additional time to respond to this open-ended question. Most of the
respondents pointed out that ALA approval is not the be-all and end-all of the accreditation
process. They contended that there is a diverse set of values and externalities derived from the
process. Themes arising from the response analysis reflect many conclusions discussed in the
literature review.
Accreditation as a Means for Self-Evaluation and Program Improvement
Two-thirds of the 18 respondents identified the opportunities and benefits of ongoing
self-reflection and improvement fostered by accreditation. One respondent articulated this
improvement aspect clearly: “The self-study that comprises the accreditation process provide us
with a measurement of our program’s current state, and allows us a re-evaluation of our methods
of assessing student learning, services to students, and other important measures.” Specifically,
respondents pointed out the following aspects of accreditation [emphasis added]:
An internal process: five respondents described accreditation as a “self-
evaluation,” “self-examination,” “self-study,” and “self-improvement” process.
An on-going process: four respondents reported that accreditation “support[s] for
continuous planning and assessment” and for “continuous evaluating and
self[-]improvement.” It provides “[c]ontinuous program review” and “[t]he opportunity
to review our progress on a continuous basis.”
Reflection: “It forces us to thoroughly review what we are doing and gives us outside
evaluation of how effectively we are accomplishing it.” It is a “[r]enewed impetus for
curriculum development;” a “[m]omentum for curriculum review and development.”
Quality assurance: accreditation provides “affirmation of quality,” “[v]alidation in the
field,” and “[d]iscovering the strengths of our program and areas where we need to
improve.”
Accreditation as a Seal of Approval
Six respondents highlighted the intrinsic professional value of accreditation. Five
respondents stated plainly the value of recognition or validation of being accredited:
ALA accreditation indicates that the program has undergone a self-evaluation process,
been reviewed by peers, and meets the Standards established by the American Library
Association’s Committee on Accreditation. We value this experience for the benefits for
students of graduating from an accredited program.
Other comments were “our students … can report that they graduated from an accredited
program[,] other than that the process is expensive,” and “[s]tudent and employer recognition of
the basis of the curriculum.” Two respondents specifically pointed out that accreditation is
“[i]mportant for student recruitment” and it offers the “ability to attract students.” Three
respondents also pointed out that graduating from an accredited program is critical for
employment.
Faculty Participation
Accreditation can be leveraged to help faculty work in unity toward common goals. One
respondent elaborated that accreditation allows “[t]eam-building for faculty and staff” and
“[d]eveloping shared understanding of our program strengths and weaknesses.” Another
respondent expanded that accreditation affords “[f]aculty coming together to complete the self-
study and prepare for review.” Another participant added that accreditation fosters “a
greater shared understanding of the program and its goals” among faculty members.
External Connection
Accreditation is also closely connected with demands from a program’s parent
organization and the profession. One respondent reported that accreditation offers “coordination
with University strategic efforts and regional accreditation.” Another participant saw it as providing
a “connection of the profession.”
Accountability
Finally, one respondent pointed out that, with increasing demand for transparency and
accountability, accreditation serves a function of “[r]esponsiveness to a broad-based
constituency.” It furnishes the “[v]alidation in the field” and “[r]ecognition from … academia.”
Although respondents revealed many advantages of accreditation, one respondent begged
to differ by pointing out that the current ALA accreditation practice “does not support
incremental improvement by schools.” This echoes the criticism that the ALA accreditation
system is liminal in nature: its standards are designed as a threshold for
MLIS education and are meant to be inclusive (Mulvaney & O’Connor, 2014). Unlike education
ranking systems, such as those produced by U.S. News and World Report, ALA accreditation
pays less attention to quality improvement within a program and shies away from identifying
quality differences among MLIS programs.
Based on the responses, MLIS ALOs clearly agree that accreditation serves as an
important incentive for assuring program quality, assessing student learning, identifying areas for
improvement, and meeting accreditation requirements. On the other hand, accreditation
continues to serve its most practical purpose as an endorsement of program recognition by a
professional organization. To MLIS programs, being ALA-accredited guarantees a steady flow
of incoming students and revenue, as well as a seal of approval for their graduates in the job market.
Summary
This inquiry was designed to identify the current state of assessment practice and efforts
at Master of Library and Information Studies programs in the United States and Canada.
Responses from an online survey of Accreditation Liaison Officers and contents from 12 MLIS
program presentations offer rich, relevant, and diverse data for analysis. The findings from these
two data sets are highly consistent and complementary. Through triangulation, the final results
adequately inform the research questions.
With the strong emphasis on student outcomes assessment in the 2008 ALA
Accreditation Standards, MLIS programs instituted a solid assessment plan and practice. About
93% of the surveyed MLIS programs adopted a common set of learning goals and outcomes for
all their students while 79% developed a written assessment plan with clearly articulated learning
goals, outcomes, and measures. MLIS programs also adopted competency statements from a
wide range of professional organizations in defining learning outcomes for curriculum and
courses. Each program, on average, adopts six competency statements.
Accreditation remains the major catalyst for the assessment of student learning, although
faculty and program directors have gained interest, as reported by survey respondents. To gather
evidence of learning, MLIS programs employ a wide range of direct and indirect assessment
measures at both course and program levels. On average, each MLIS program employs 25
different types of assessment measures. At the program level, capstone projects, internships, and
portfolios are the most commonly used performance-based measures to assess students’ overall
achievement, whereas surveys of key stakeholders are the most popular indirect measure for
capturing the perceived value of the learning experience. Rubrics and course-based assignments are
commonly used measures at the course level. Results from multiple measures are triangulated and
applied to a broad range of areas by MLIS programs, including preparation for accreditation,
program development, curriculum or course revision, faculty and staff evaluation, and instruction
improvement.
Due to their relatively small size, most of the MLIS programs conduct learning outcomes
assessment on a shoestring budget. Only one-third of the survey respondents have at least a part-
time person dedicated to assessment work in their program. MLIS programs rely on resources
on campus, such as the center for learning and teaching, for assessment support. At most of the
MLIS programs, the curriculum committee, chaired by faculty members, assumes the role of
leading assessment activities and efforts.
Survey respondents further reported that outcomes assessment at their schools is
associated with a campus-wide initiative or program. Assessment, in their view, continues to
gain importance due to requirements from accreditors or from the parent institution. On the
other hand, accreditation offers several additional benefits. It provides a means for continuous
self-evaluation and program improvement. It encourages faculty to work as a team and to
develop a shared understanding of program goals, strengths, and areas for improvement. Finally,
a program gains legitimacy through accreditation. An accredited program vastly increases its
ability to recruit more and better-qualified students, command resources, elevate program quality
and recognition, and enhance the employability of its graduates.
CHAPTER FIVE: DISCUSSION
The research problem driving this study was to investigate how the new emphasis on
learning outcomes assessment in the 2008 ALA Accreditation Standards was integrated into the
operation of MLIS programs in the United States and Canada. Four specific research questions
were addressed:
1. To what extent is the practice of outcomes assessment implemented at ALA-accredited
MLIS programs?
2. What types of outcomes assessment measures and approaches are employed at these
programs?
3. How have student learning outcomes assessment results been used in programs’
improvement efforts?
4. What are the perceived value and importance of outcomes assessment by program
administrators?
This chapter includes a discussion of the results and implications of the study. It then presents
areas worth further investigation based on the discoveries from the study.
Discussion of Findings
Guided by four research questions, the study employed a mixed-methods design
consisting of an online survey of accreditation liaison officers and an analysis of 12 MLIS
program presentations. Through the triangulation of findings from both data sets, six prominent
themes emerged across the survey responses and content analysis of program presentations.
Collectively, the six themes, listed below, adequately address the study’s four research questions.
1. Outcomes assessment has taken hold at MLIS programs (addressing research question 1)
2. Accreditation is the primary driver for MLIS assessment efforts, while program directors,
faculty, and curriculum committees provide leadership in its practice (addressing research
question 1)
3. MLIS programs employed a diverse range of tools for measuring learning outcomes
(addressing research question 2)
4. MLIS programs applied assessment results extensively (addressing research question 3)
5. MLIS programs conduct outcomes assessment with limited resources (addressing
research question 1)
6. MLIS programs and faculty recognize the intrinsic values of assessment and accreditation
(addressing research question 4)
Theme 1: Outcomes Assessment Has Taken Hold at MLIS Programs
With the prominent emphasis on student learning outcomes in 2008 ALA Accreditation
Standards, the practice of outcomes assessment became the norm at MLIS programs. About
93% of the surveyed MLIS programs adopted a common set of learning goals and outcomes for
all their students, while 79% developed a written assessment plan with clearly articulated
learning goals, outcomes, and measures. In addition, learning goals and objectives at MLIS
programs are tightly integrated with their curriculum, course syllabi, assignments, and
assessment plan. MLIS programs also incorporate competency statements from a wide range
of professional organizations in defining learning outcomes for curriculum and courses. On
average, each MLIS program adopts six competency statements. The study also revealed that
assessment practices at some MLIS programs are an integral part of an assessment framework
and mandate from its university, state, or regional accreditation agencies.
The program presentations further disclose that many MLIS programs conduct ongoing
outcomes assessment in a systematic manner. Assessment practice is further integrated into the
school’s planning and curriculum development process, as well as its organizational structure.
These observations address Research Question 1 about the current practice of outcomes
assessment at MLIS programs.
Theme 2: Accreditation Is the Primary Driver for MLIS Assessment Efforts, While
Program Directors, Faculty, and Curriculum Committees Provide Leadership in Its
Practice
Accreditation keeps MLIS programs viable by allowing them to recruit students effectively, receive
external funding and grants legitimately, and certify their graduates for employment.
Seventy-seven percent of survey respondents identified ALA accreditation as the primary force
for the assessment activities at their program, whereas faculty and MLIS directors exert critical
influence on programs’ assessment efforts. Curriculum and assessment committees chaired by
faculty members coordinate the assessment work, initiate new efforts, set achievable targets,
interpret results, and implement the changes at MLIS programs. Faculty, staff, and students are
well-included and represented in these committees.
The study further shows that assessment efforts at MLIS programs are substantively
influenced by the two leading, but sometimes opposing, purposes of outcomes assessment:
accountability and program improvement. However, many MLIS programs find a balance
between compliance with accreditation standards and the desire to improve the quality of student
learning with no obvious tension. These observations suffice to address Research Question 1
about the current practice of outcomes assessment at MLIS programs.
Theme 3: MLIS Programs Employed a Diverse Range of Metrics for Measuring Learning
Outcomes
MLIS programs employ more than 37 direct and indirect measures to gather qualitative
and quantitative evidence of student learning formatively as well as summatively at both
program and course levels. In addition to the commonly used course assignments and course
evaluations, rubrics, internship ratings, portfolios, and surveys are the most popular assessment
mechanisms cited by MLIS programs. To capture a fuller picture of student learning,
MLIS programs combine multiple assessment tools, such as rubrics with portfolios and student
surveys with internships, in assessing student learning. Evidence from multiple sources is then
triangulated to reach a richer conclusion. MLIS programs also solicit opinions from a wide
spread of constituencies, ranging from current students, faculty, and alumni, to employers,
external reviewers, and practitioners, on student learning. Likewise, they take into account
recognized student achievements, including honors, scholarships, awards, presentations, and
publications, in assessing program effectiveness. Longitudinally, MLIS programs rely on direct
and indirect evidence related to enrollment, retention, grades, passing rates, graduation, and job
placement to monitor program performance and success.
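As a rough illustration of this multi-measure triangulation, evidence can be organized per learning outcome and summarized separately for direct and indirect measures before the results are compared. The outcome name, measures, and scores in the following Python sketch are hypothetical.

# A minimal sketch of triangulating assessment evidence per learning
# outcome. The outcome, measures, and scores are hypothetical; scores are
# assumed to share a common 1-4 scale for comparability.
evidence = {
    "information organization": {
        "direct": {"portfolio rubric": 3.4, "comprehensive exam": 3.1},
        "indirect": {"exit survey": 3.0, "alumni survey": 3.3},
    },
}

for outcome, by_type in evidence.items():
    print(outcome)
    for kind, measures in by_type.items():
        avg = sum(measures.values()) / len(measures)
        print(f"  {kind} evidence (n={len(measures)}): mean {avg:.2f}")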
Externally, MLIS programs actively track the development of the library and information
profession, the advances in technologies, and the employment trends in order to keep the
program and curriculum relevant and agile. They form advisory boards composed of
practitioners, employers, industry leaders, and state officers so as to tap into the environmental
changes. To monitor the employment trends and skill requirements, they analyze job postings
and interview employers. To maintain as well as expand connections, they host regional
meetings, sponsor workshops, visit employers and alumni, and participate in professional
conferences. To benchmark performance against their peers, they participate in library education
projects, such as WILIS (Workforce Issues in Library and Information Science), and track the
educational rankings. These observations address research question 2 about the types of
outcomes assessment measures and approaches employed at MLIS programs.
Theme 4: MLIS Programs Applied Assessment Results Extensively
Although accreditation agencies are the primary consumers of assessment results,
MLIS programs also apply assessment results extensively for program improvement and student learning
enhancement. Assessment evidence contributes to the development and alignment of
program goals and objectives, the revision of curriculum and courses, the launch of new
specializations or program tracks, and the formation of partnerships and collaborations on projects.
Assessment outcomes are also used to improve instruction, student services, facility, resources,
and class scheduling as well as to evaluate the performance of faculty and staff. In many cases,
MLIS programs are responsive to the feedback from stakeholders, and changes are
implemented in a timely manner. By leveraging accreditation, MLIS programs make assessment
more meaningful and use accreditation recommendations to stimulate changes. These
observations address research question 3 about the application of outcomes assessment results at
MLIS programs.
Theme 5: MLIS Programs Conduct Outcomes Assessment with Limited Resources
Due to the relatively small size of MLIS programs, two-thirds of the programs surveyed
have no dedicated assessment personnel to support assessment efforts. Instead, most of the
programs depend on assessment committees or curriculum committees to conduct assessment
work, with additional support from regular staff members. Additionally, MLIS programs rely on
campus resources, such as university centers for teaching and learning and assessment experts
from other departments, to complete assessment work. As for faculty support, only a few
programs reported that they offer professional development, stipends, technical assistance, or
release time for faculty involved in assessment work. These observations address research
question 1 about the current practice of outcomes assessment at MLIS programs.
Theme 6: MLIS Programs and Faculty Recognize the Intrinsic Value of Assessment and
Accreditation
While meeting the compliance requirements from accreditors, the university, or the state,
MLIS programs continue to gain interest in the value of assessment for the purposes of program
improvement and student learning enhancement. There are indications of more conscientious
measures of student learning and the use of assessment data in program planning and curriculum
development at MLIS programs. However, there are also indications that more effort is required
to move assessment work beyond the designated committees to all faculty members and to
expand assessment practice at the program level to all courses. These observations address
Research Question 4 about the perceived values and importance of outcomes assessment at
MLIS programs.
Implications for Practice
Building upon previous research on this topic, this study sheds additional light on the
current assessment practices at MLIS programs. It provides a more nuanced and detailed picture
of different assessment approaches, applications, and implications across a diverse group of
MLIS programs. It is hoped that, with additional knowledge of the magnitude and shape of these
variations, MLIS program leaders, educators, and assessment personnel will have a better idea of
what their peers are doing and will be better informed on assessment planning and accreditation
groundwork. It is further hoped that this study may generate additional interest, dialogue,
research, and collaboration among educators, practitioners, and accreditation agencies on this
topic.
Using Multiple Measures
Learning outcomes assessment is generally used either for internal quality improvement
or external accountability. Although they are not mutually exclusive, tension between these two
imperatives tends to exist due to differences in their intended use, target audience, measuring
approach, and reference points (Ewell, 2009). While this study did not identify any obvious
tension, there is no doubt that the priority of assessment efforts at MLIS programs is to meet
external requirements, whether programmatic accreditation, regional accreditation, or a university mandate.
Surveys and other indirect measures are used intensively by MLIS programs to report assessment
practices and learning outcomes. Evidence from indirect measures is also used to make program
and curriculum changes. On the other hand, the practice of employing direct measures in
combination with evidence from indirect measures is less common among MLIS programs
and varies widely across programs. Not all assessment measures are created equal, and no single
measure can fully capture the broad spectrum of student learning outcomes (Jankowski et al.,
2012). The evidence collected from different types of measures should not be treated the same.
It is imperative for accreditors and accreditation standards to clearly explain the difference
between these two types of measures and the potential deficiencies of heavily relying on a single
tool for assessment. Emphasis should also be placed on establishing principles for judging the
adequacy of evidence, encouraging the use of multiple direct and indirect measures, and
triangulating data from multiple sources to capture wider range and deeper data.
Involving and Supporting Faculty
Assessment is furthered when it is integrated into organizational structure, culture, and
process (Hersh & Keeling, 2013; Kinzie, 2010; Kurzweil, 2015). There is limited evidence from
this study that MLIS programs allocate adequate resources for assessment operations or in
sufficient supporting faculty involvement. To grow assessment into the operation of an MLIS
program, to increase faculty engagement and use of assessment results, to advance assessment at
the course level, and to continue to build a sustainable assessment infrastructure, MLIS programs
need to invest additional resources and provide faculty professional development opportunities
and support. Direct faculty involvement and ownership is essential to meaningful assessment
programs, and MLIS administrators should strive for a positive environment and climate with
adequate resources to recognize and reward faculty contributions in this area, to encourage
innovative practice, and to increase discussions on assessment issues at regular meetings.
As leaders in library education and the programmatic accreditation process, both the
Association for Library and Information Science Education and ALA should continue to work
with other relevant professional organizations in fostering a dedicated community of learning,
sharing, discussing, and collaborating around this topic. Special interest groups should be
formed to sponsor activities, such as training, professional development, conferences, online
forums, projects, research, and publications, in this area.
Sustaining Assessment Efforts
Applications of assessment evidence appear to expand beyond an act of compliance at
MLIS programs, evident both from the survey responses and the content analysis of 12 MLIS
program presentations. Nevertheless, this study also reveals nuances in the
attitudes, acceptance, and enthusiasm toward assessment practices and efforts across MLIS
programs. Without buy-in and commitment from all participants, assessment stands as an
obligatory response to external demand, and the results of outcomes assessment might not lead to
desirable action and improvement. For MLIS programs to demonstrate their value and
effectiveness to external stakeholders, learning outcomes assessment must be an ongoing and
iterative process and should be tightly connected with program goals and objectives. Assessment
work needs to be embedded within the curricular review function and strategic planning process
as well as woven into the culture and structure of the program. In order to sustain assessment
efforts, assessment results must be applied toward improving instruction and learning and
translated into resource allocation. Student success stories and best practices must be shared
among MLIS programs.
Future Research
This study only begins a full examination of assessment practices at MLIS programs.
As noted earlier, to date, very few empirical studies have been published in this area.
While the analyses of this study have broad implications and applications, there are still areas and
gaps that warrant further exploration or clarification.
MLIS programs vary in size, program concentration, and delivery mechanism (i.e.,
online, onsite, or hybrid). Among the 57 ALA-accredited MLIS programs, student enrollment
ranges from 48 to 1,253 FTE, the number of full-time faculty varies from four to 48, and annual
revenue spans from $5 million to $33 million (Wallace, 2012).
In addition, there is a wide variety of program concentrations and curricular focuses. With
these variations, a one-size-fits-all assessment approach can hardly work well, and accreditation
standards have to be flexible in order to accommodate the wide range of differences from
program to program. One possible direction for further research is to determine the differences
in assessment practices among MLIS programs by size, institutional type, assessment support,
delivery mechanism, and other major characteristics.
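As one hypothetical sketch of how such a comparative analysis might be operationalized, the Python fragment below stratifies invented program records into enrollment bands and summarizes a simple assessment-practice indicator within each band. The records, size cutoffs, and indicator are illustrative assumptions, not findings of this study.

from collections import defaultdict

# Invented records: enrollment (FTE) and the number of distinct
# assessment measures a program reports using.
programs = [
    {"fte": 60, "measures": 3},
    {"fte": 95, "measures": 4},
    {"fte": 420, "measures": 7},
    {"fte": 1100, "measures": 9},
]

def size_band(fte):
    # Hypothetical cutoffs separating small, medium, and large programs.
    if fte < 200:
        return "small"
    if fte < 600:
        return "medium"
    return "large"

by_band = defaultdict(list)
for program in programs:
    by_band[size_band(program["fte"])].append(program["measures"])

for band in ("small", "medium", "large"):
    values = by_band[band]
    if values:
        print(f"{band}: mean measures used = {sum(values) / len(values):.1f}")

The same stratification logic could be extended to institutional type, delivery mechanism, or assessment support once actual program-level data were collected.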
Another avenue worthy of further investigation is related to making student learning
evidence transparent. Several studies on institutional disclosure of assessment activities and
results reported that institutions still struggle to communicate their outcomes assessment efforts
to external audiences (Jankowski et al., 2012; Jankowski &
Makela, 2010; Jankowski & Provezis, 2011). Although the ALA Office for Accreditation
encourages each MLIS program to make its program presentation available publicly, there is no
specific instruction for the mechanism of disclosure or rules on exactly what to share. As a
result, not all schools provide their program presentation on their websites. Even if the program
presentation is posted online, the report is not always complete. In some cases, the appendix
section containing critical data is removed from the original program presentation, or hyperlinks
to websites for further details in program presentations lead to password-protected pages.
As reported in Chapter Two, ALA was cited by CHEA for failing to require accredited
“institutions or programs routinely to provide reliable information to the public on their
performance, including student achievement as determined by the institution or program”
(CHEA, 2013b, p. 2). As MLIS programs focus more on assessment, it is even more
critical to examine how the programs effectively communicate assessment results to multiple
internal and external audiences. Some possible research questions include, “How transparent are
the programs in terms of their assessment activities and learning outcomes results? What
content is currently disclosed or withheld? What are the concerns and obstacles surrounding
such practices? What have programs done to explain the meaning and use of the results to external
users?”
Another important area for further study pertains to identifying innovative assessment
practices and technology adopted by MLIS programs and how effective and reliable they are.
For example, with the rapid development of technology, integrated planning and advising
services (IPAS) now aggregate and mine data from discrete learning support systems, offer
students new insight and advice on their learning, and provide faculty and student advisors with
just-in-time, personalized information to improve student learning and success (Hrabowski III,
Suess, & Fritz, 2011; Richman & Ariovich, 2013). The adoption of IPAS and the application of
learning analytics will be an area for further exploration. Projects like the Workforce Issues in
Library and Information Science (WILIS) study, which offer continuous, longitudinal data for
benchmarking performance across MLIS programs, are another area deserving further analysis.
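As a rough sketch of the IPAS idea described above, the Python fragment below joins records from two separate learning-support systems on a student identifier and applies a simple rule to surface a just-in-time advising flag. The system names, fields, thresholds, and rule are illustrative assumptions only; production IPAS platforms apply far more sophisticated models.

# Hypothetical records pulled from two discrete systems.
lms_logins = {"s001": 2, "s002": 14}      # LMS logins in the past two weeks
gradebook = {"s001": 71.0, "s002": 88.5}  # current course average (percent)

def advising_flag(student_id):
    # Combine signals that live in separate systems into one timely alert.
    low_activity = lms_logins.get(student_id, 0) < 5
    low_grade = gradebook.get(student_id, 100.0) < 75.0
    if low_activity and low_grade:
        return "contact now"
    return "no action needed"

for sid in sorted(lms_logins):
    print(sid, advising_flag(sid))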
Conclusions
Assessment has made great strides across higher education and throughout academic
disciplines. The responses to the survey, the additional comments by ALOs, and the contents
of the program presentations provide an in-depth understanding of the current state of student
learning assessment at MLIS programs. Considerable progress is being made at MLIS programs
in assessment practices, including the incorporation of learning outcomes into program goals
and objectives, the employment of a wide range of assessment measures, and engagement in
more assessment activities. MLIS programs also aptly apply assessment results to program
improvement, in addition to complying with accreditation requirements. There is evidence of
substantial involvement by faculty members, as well as increasing interest in and support of
assessment work. However, some programs are more advanced in assessment practices than
others, and the support and resources dedicated to assessment work remain relatively limited.
Challenges remain, and there is still room for further improvement.
As a professional degree, the MLIS focuses strongly on the acquisition of technical
skills, the accumulation of hands-on experience through real-world practice, and the attainment
of professional know-how. MLIS programs share many similarities with the principles of
competency-based learning, emphasizing and expecting students to apply learned knowledge
and skills in real-life settings. The ability of MLIS programs to integrate outcomes assessment
effectively at the course and program levels and to leverage assessment results for program
improvement determines the quality of the degree, the employability of graduates, and the
future of the profession. With no standardized mechanism, such as licensure or certification
examinations, to audit the quality of new entrants, an efficacious assessment program is all the
more critical to the successful careers of graduates and the health and prosperity of the
profession.
REFERENCES
AAC&U. (n.d.). The essential learning outcomes. Washington, DC: Association of American
Colleges and Universities. Retrieved from
http://www.aacu.org/leap/documents/EssentialOutcomes_Chart.pdf
AAC&U. (2007). College Learning for the New Global Century. Washington, DC: Association
of American Colleges and Universities. Retrieved from
http://www.aacu.org/leap/documents/GlobalCentury_final.pdf
AAC&U. (2008a). New Leadership for Student Learning and Accountability: A statement of
principles, commitments to action. Washington, DC: Association of American Colleges
and Universities. Retrieved from
http://www.aacu.org/resources/assessment/documents/New_Leadership_Statement.pdf
AAC&U. (2008b). Our students’ best work: A framework for accountability worthy of our
mission. Washington, DC: Association of American Colleges and Universities. Retrieved
from http://www.aacu.org/about/statements/2008/assessment
AAC&U. (2014a). Liberal Education and America’s Promise (LEAP). Washington, DC:
Association of American Colleges and Universities. Retrieved from
http://www.aacu.org/leap/index.cfm
AAC&U. (2014b). VALUE: Valid Assessment of Learning in Undergraduate Education.
Washington, DC: Association of American Colleges and Universities. Retrieved from
http://www.aacu.org/value/index.cfm
AACC. (2010). Democracy’s colleges: A call to action. Washington, DC: American Association
of Community Colleges. Retrieved from
http://www.aacc.nche.edu/About/completionchallenge/Documents/calltoaction.pdf
AAHE. (1992). 9 principles of good practice for assessing student learning. North Kansas City,
MO: American Association for Higher Education.
ACE. (2012). Assuring academic quality in the 21st century: Self-regulation in a new era.
Washington, DC: American Council on Education. Retrieved from
http://www.acenet.edu/news-room/Documents/Accreditation-TaskForce-revised-
070512.pdf
ACRL. (2010). 2010 top trends in academic libraries: A review of the current literature. College
& Research Libraries News, 71(6), 286-292. Retrieved from
http://crln.acrl.org/content/71/6/286.full
ACRL. (2014). Top trends in academic libraries: A review of the trends and issues affecting
academic libraries. College & Research Libraries News, 75(6), 294-302. Retrieved from
http://crln.acrl.org/content/75/6/294.full
ALA. (2008). Standards for Accreditation of Master’s Programs in Library & Information
Studies. Chicago, IL: American Library Association. Retrieved from
http://www.ala.org/accreditedprograms/sites/ala.org.accreditedprograms/files/content/sta
ndards/standards_2008.pdf
ALA. (2009a). ALA’s core competences of librarianship. Chicago, IL: American Library
Association. Retrieved from
http://www.ala.org/educationcareers/careers/corecomp/corecompetences
ALA. (2009b). Final Report, Library Education Task Force. Chicago, IL: American Library
Association. Retrieved from
http://www.ala.org/offices/sites/ala.org.offices/files/content/accreditation/ebd12_30.pdf
ALA. (2010). Memorandum of understanding between the American Library Association and the
Committee on Accreditation. Chicago, IL: American Library Association. Retrieved from
http://www.ala.org/offices/sites/ala.org.offices/files/content/accreditation/Memo%20of%
20Understandin.pdf
ALA. (2012). Accreditation process, policies, and procedures (AP3) (3rd ed.). Chicago, IL:
American Library Association. Retrieved from
http://www.ala.org/accreditedprograms/sites/ala.org.accreditedprograms/files/content/sta
ndards/AP3_third_edition_may2012.pdf
ALA. (2014a). ALA handbook of organization. Chicago, IL: American Library Association.
Retrieved from http://www.ala.org/groups/committees/ala/ala-coa
ALA. (2014b). Directory of institutions offering accredited master’s programs. Chicago, IL:
American Library Association. Retrieved from
http://www.ala.org/accreditedprograms/sites/ala.org.accreditedprograms/files/content/dire
ctory/pdf/LIS_directory_2-2014.pdf
ALA. (2014c). Graduate programs in library and information studies that are not accredited by
the American Library Association (ALA) Committee on Accreditation as of 2005.
Chicago, IL: American Library Association. Retrieved from
http://www.ala.org/offices/hrdr/educprofdev/nonalaaccredited
ALA. (2014d). Knowledge and competencies statements developed by relevant professional
organizations. Chicago, IL: American Library Association. Retrieved from
http://www.ala.org/educationcareers/careers/corecomp/corecompspecial/knowledgecomp
etencies
ALA. (2014e). Sample program presentations. Chicago, IL: American Library Association.
Retrieved from
http://www.ala.org/accreditedprograms/resourcesforprogramadministrators/onlinepp
American Association of School Librarians. (2014). CAEP/AASL school librarianship programs.
Chicago, IL: American Library Association. Retrieved from
http://www.ala.org/aasl/education/ncate/programs
Applegate, R. (2006). Student learning outcomes assessment and LIS program presentation.
Journal of Education for Library and Information Science, 47(4), 324-336.
Arum, R., & Roksa, J. (2014). Aspiring adults adrift: Tentative transitions of college graduates.
Chicago, IL: The University of Chicago Press.
Astin, A.W. (2014, February 18). Accreditation and autonomy. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/views/2014/02/18/accreditation-helps-limit-government-
intrusion-us-higher-education-essay
Auld, L. W. S. (1990). Seven imperatives for library education. Library Journal, 115(8), 55-59.
Banta, T. W. (1993). Summary and conclusion: Are we making a difference? In T. W. Banta
(Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp.
357-376). San Francisco, CA: Jossey-Bass.
Banta, T. W., & Palomba, C. A. (2015). Assessment essentials: planning, implementing, and
improving assessment in higher education (2
nd
ed.). San Francisco, CA: Jossey-Bass.
Banta, T. W., & Pike, G. R. (2012). The bottom line: Will faculty use assessment findings? In C.
Secolsky & D. B. Denison (Eds.), Handbook on measurement, assessment, and
evaluation in higher education (pp. 47-56). New York, NY: Routledge.
Baum, S., Elliott, D. C., & Ma, J. (2014). Trends in student aid, 2014. New York, NY: The
College Board. Retrieved from https://secure-
media.collegeboard.org/digitalServices/misc/trends/2014-trends-student-aid-report-
final.pdf
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New
Directions for Community Colleges, 2004(126), 65-72.
Bensimon, E. M. (2005). Closing the achievement gap in higher education: An organizational
learning perspective. New Directions for Higher Education, 2005(131), 99-111.
Berry, J. N., III. (1976). Open the faculty club. Library Journal, 101(15), 1679.
Berry, J. N., III. (1991a). Don’t leave it to educators: Practicing librarians must be part of
accreditation. Library Journal, 116(18), 6.
Berry, J. N., III. (1991b). Big 10 deans oppose ALA accreditation: Indiana’s Cronin leads attack,
library directors go along. Library Journal, 116(18), 18.
Berry, J. N., III. (1995). Accreditation: Benefits & costs. Library Journal, 120(16), 6.
Berry, J. N., III. (2000). For openness in ALA accreditation: Operating in secret, COA
undermines the process. Library Journal, 125(4), 6.
Berry, J. N., III, Blumenstein, L., & DiMattia, S. (1999). Move accreditation apart from ALA?
Independent accreditation recommended by Education Congress. Library Journal,
124(10), 16, 20.
Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States
Government Printing Office. Retrieved from
http://hdl.handle.net/2027/mdp.39015007036083
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA).
Phoenix, AZ: Oryx Press.
Borrowing for college: The state of higher education in California. (2014). The Campaign for
College Opportunity. Retrieved from
http://www.collegecampaign.org/index.php/download_file/view/1069/334/
Bouldin, A. S., & Wilkin, N. E. (2000). Programmatic assessment in U.S. schools and colleges
of pharmacy: A snapshot. American Journal of Pharmaceutical Education, 64(4), 380-
387.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A
compilation of institutional good practice. Sterling, VA: Stylus.
Brittingham, B. (2008). An uneasy partnership: Accreditation and the federal government.
Change, 40(5), 32-38.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are?
New Directions for Higher Education, 2009(145), 7-27. doi:10.1002/he.331
Brittingham, B. (2012). Higher education, accreditation, and change, change, change: What’s
teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence,
and excellence: The promise and practice of quality assurance (pp. 59-75). Washington,
DC: Teacher Education Accreditation Council.
Brooks, D. C. (2014). IPAS implementation issues: Data and systems integration. Louisville,
CO: ECAR. Retrieved from https://net.educause.edu/ir/library/pdf/ERS1404.pdf
Brosseau, L., & Fredrickson, A. (2009). Assessing outcomes of industrial hygiene graduate
education. Journal of Occupational and Environmental Hygiene, 6(5), 257-266.
Brown, H. (2013). Protecting Students and Taxpayers: The Federal Government’s Failed
Regulatory Approach and Steps for Reform. Washington, DC: American Enterprise
Institute, Center on Higher Education Reform. Retrieved from
http://www.aei.org/files/2013/09/27/-protecting-students-and-
taxpayers_164758132385.pdf
Burke, S. K., & Snead, J. T. (2014). Faculty opinions on the use of master’s degree end of
program assessments. Journal of Education for Library and Information Science, 55(1),
26-39.
Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for
assessing classroom teaching practices and student learning: The case of engineering.
Research in Higher Education, 42(3), 327-352.
Carey, J. O., Perrault, A. H., & Gregory, V. L. (2001). Linking outcomes assessment with
teaching effectiveness and professional accreditation. Academic Exchange, 5(1), 79-86.
Case, D. O. (2009). On the controversy regarding proposed changes to ALA Standards. Bulletin
of the American Society for Information Science and Technology, 35(6), 3.
CAT. (2012). Welcome to the Coding Analysis Toolkit (CAT). Pittsburgh, PA: University of
Pittsburgh. Retrieved from http://cat.ucsur.pitt.edu/default.aspx
Chambers, C. M. (1983). Council on Postsecondary Education. In K. E. Young, C. M.
Chambers, & H. R. Kells (Eds.), Understanding accreditation (pp. 289-314). San
Francisco, CA: Jossey-Bass.
CHEA. (2002). Specialized accreditation and assuring quality in distance learning. Washington,
DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/mono_2_spec-accred_02.pdf
CHEA. (2006a). CHEA survey of recognized accrediting organizations: Providing information
to the public. Washington, DC: Council for Higher Education Accreditation. Retrieved
from http://www.chea.org/pdf/CHEA_OP_Apr06.pdf
CHEA. (2006b). Recognition of accrediting organizations: Policy and procedure. Washington,
DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/Recognition_Policy-June_28_2010-FINAL.pdf
CHEA. (2010a). Recognition of accrediting organizations: Policy and procedures. Washington,
DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/Recognition_Policy-June_28_2010-FINAL.pdf
CHEA. (2010b). The value of accreditation. Washington, DC: Council for Higher Education
Accreditation. Retrieved from
http://www.chea.org/pdf/Value%20of%20US%20Accreditation%2006.29.2010_buttons.
pdf.
CHEA. (2012). The CHEA initiative final report. Washington, DC: Council for Higher
Education Accreditation. Retrieved from
http://www.chea.org/pdf/TheCHEAInitiative_Final_Report8.pdf
CHEA. (2013a). Accreditation tool kit. Washington, DC: Council for Higher Education
Accreditation. Retrieved from
http://www.chea.org/accreditation_toolkit/accreditation_toolkit.pdf
CHEA. (2013b). Summary of recognition status: American Library Association Committee on
Accreditation (ALA-COA). Washington, DC: Council for Higher Education
Accreditation. Retrieved from
http://www.chea.org/pdf/Recognition/Summaries_2013/ALA-
COA%20March%202013.pdf
CHEA. (2014a). 2012-2013 annual report. Washington, DC: Council for Higher Education
Accreditation. Retrieved from http://www.chea.org/pdf/2012-
2013%20Annual%20ReportB.pdf
CHEA. (2014b). 2013-2014 Directory of CHEA-recognized organizations. Washington, DC:
Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/pdf/2013-
2014_Directory_of_CHEA_Recognized_Organizations.pdf
CHEA. (2014c). CHEA almanac online. Washington, DC: Council for Higher Education
Accreditation. Retrieved from http://www.chea.org/Almanac%20Online/index.asp
CHEA. (2015). Database of institutions and programs accredited by recognized United States
accrediting organizations. Washington, DC: Council for Higher Education Accreditation.
Retrieved from http://www.chea.org/search/default.asp
Church, A. P., Dickinson, G. K., Everhart, N., & Howard, J. K. (2012). Competing standards in
the education of school librarians. Journal of Education for Library and Information
Science, 53(3), 208-217.
Cox, R. J., Mattern, E., Mattock, L., Rodriguez, R., & Sutherland, T. (2012). Assessing iSchools.
Journal of Education for Library and Information Science, 53(4), 303-316.
Creswell, J. W. (2009). Research design: Qualitative, quantitative and mixed methods
approaches (3rd ed.). Los Angeles, CA: Sage Publications, Inc.
Creswell, J. W., & Clark, V. L. (2011). Designing and conducting mixed methods research
(2nd ed.). Los Angeles, CA: Sage Publications, Inc.
Cronin, B. (2000). Accreditation: Retool it or kill it. Library Journal, 125(11), 54.
Dare, L. (2010). Spotlight on process and policy: Formation of External Review Panels. PRISM,
18(1), 6-7.
Dare, L. (2011). The Committee on Accreditation: What goes on behind those closed doors?
PRISM, 19(1), 6-8.
Dare, L. (2012). External Review Panel role and responsibilities. PRISM, 20(2), 6-8.
Dalrymple, P. W., & Scherrer, C. S. (1998). Tools for improvement: A systematic analysis and
guide to accreditation by the JCAHO. Bulletin of the Medical Library Association, 86(1),
10-16.
Davenport, C. A. (2000). Recognition chronology. Retrieved from http://www.aspa-
usa.org/sites/default/files/inserts/Davenport.pdf
Davis, C. O. (1932). The North Central Association of Colleges and Secondary Schools: aims,
organization, activities. [Chicago, IL]: The Association. Retrieved from
http://hdl.handle.net/2027/mdp.39015031490645
Davis, C. O. (1945). A history of the North Central Association of Colleges and Secondary
Schools 1895-1945. Ann Arbor, MI: The North Central Association of Colleges and
Secondary Schools. Retrieved from
http://www.archive.org/details/historyofthenort009847mbp
DeMillo, R. (2013, January 16). Accreditation – or real quality assurance? [Online commentary].
Retrieved from
http://www.popecenter.org/commentaries/article.html?id=2792#.U2F7kxBJ6wQ
Dickeson, R. C. (2006). The need for accreditation reform. Issue paper (The Secretary of
Education’s Commission on the Future of Higher Education). Washington, DC: USDE.
Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/dickeson.pdf
Dickey, F., & Miller, J. (1972). Federal involvement in nongovernmental accreditation.
Educational Record, 53(2), 138.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review, 48(2), 147-
160.
Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems for
community colleges: Lessons for the voluntary framework of accountability for
community colleges. New York, NY: Columbia University. Retrieved from
http://ccrc.tc.columbia.edu/media/k2/attachments/performance-accountability-
systems.pdf
Dowd, A. C. (2003). From access to outcome equity: Revitalizing the democratic mission of the
community college. Annals of the American Academy of Political and Social Science,
586(1), 92-119.
Dowd, A. C. (2005). Data don’t drive: Building a practitioner-driven culture of inquiry to
assess community college performance. Indianapolis, IN: Lumina Foundation. Retrieved
from http://www.luminafoundation.org/publications/datadontdrive2005.pdf
Dowd, A. C., & Grant, J. L. (2006). Equity and efficiency of community college appropriations:
The role of local financing. The Review of Higher Education, 29(2), 167-194.
Dowd, A. C., & Tong, V. P. (2007). Accountability, assessment, and the scholarship of “Best
Practices.” In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp.
57-90). Dordrecht, Netherlands: Springer. Retrieved from
http://cue.usc.edu/tools/Dowd%20and%20Tong_Accountability%2C%20Assessment%2
0and%20the%20Scholarship%20of%20Best%20Practice.pdf
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys:
The tailored design method (3rd ed.). Hoboken, NJ: Wiley & Sons.
Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment
processes that promote institutional improvement and faculty engagement. Sterling, VA:
Stylus Publishing.
Du, F., Stein, B., & Martin, R. S. (2007). Content analysis of an LIS job database: A regional
prototype for a collaborative model. Libri, 57(1), 17-26.
Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe, 96(5), 21-
24.
Eaton, J. S. (2011). U.S. accreditation: Meeting the challenges of accountability and student
achievement. Evaluation in Higher Education, 5(1), 1-20.
Eaton, J. S. (2012a). An overview of U.S. accreditation. Washington, DC: Council for Higher
Education Accreditation. Retrieved from
http://chea.org/pdf/Overview%20of%20US%20Accreditation%2003.2011.pdf
Eaton, J. S. (2012b). The future of accreditation: Can the collegial model flourish in the context
of the government’s assertiveness and the impact of nationalization and technology?
How? Planning for Higher Education, 40(3), 8-15. Retrieved from
http://www.chea.org/pdf/The%20Future%20of%20Accreditation_Planning_HE_JE.pdf
Eaton, J. S. (2013a). Accreditation and the next reauthorization of the Higher Education Act.
Inside Accreditation with the President of CHEA, 9(3). Retrieved from
http://www.chea.org/ia/IA_2013.05.31.html
Eaton, J. S. (2013b). The changing role of accreditation: Should it matter to governing board?
Trusteeship, 21(6). Retrieved from http://www.chea.org/pdf/Eaton-
Changing_Role_Accreditation.pdf
Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In T. W.
Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher
education (pp. 339-356). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of
departure. Washington, DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/award/StudentLearningOutcomes2001.pdf
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta
(Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C.
Burke (Ed.), Achieving accountability in higher education: Balancing public, academic,
and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2008a). Assessment and accountability in America today: Background and context.
New Directions for Institutional Research, 2008(S1), 7–17.
Ewell, P. T. (2008b). U.S. accreditation and the future of quality assurance: A tenth anniversary
report from the Council for Higher Education Accreditation. Washington, DC: The
Council for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension.
Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf
Ewell, P. T. (2010a). Foreword. In P. Hutchings (Ed.), Opening doors to faculty involvement in
assessment (pp. 4-5). Champaign, IL: National Institute for Learning Outcomes
Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/PatHutchings_000.pdf
Ewell, P. T. (2010b). Twenty years of quality assurance in higher education: What’s happened
and what’s different? Quality in Higher Education, 16(2), 173-175.
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and in: Assessment practices at the program
level. Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved
from
http://www.learningoutcomesassessment.org/documents/NILOAsurveyreport2011.pdf
Fain, P. (2014, May 9). Ideas take shape for new accreditors aimed at emerging online providers.
Inside Higher Ed. Retrieved from
http://www.insidehighered.com/news/2014/05/09/ideas-take-shape-new-accreditors-
aimed-emerging-online-providers#sthash.dt9CrJf3.dpbs
Fakhry, H. (2012). Investigating the dynamic behavior of introducing outcomes assessments in
information systems programs for accreditation compliance. Review of Business
Information Systems, 16(3), 125-144.
Federal higher education programs – overview. (2014, September 23). New York, NY: New
America Foundation. Retrieved from http://febp.newamerica.net/background-
analysis/federal-higher-education-programs-overview
Federal student loan default rate. (2014, October 28). New York, NY: New America Foundation.
Retrieved from http://febp.newamerica.net/background-analysis/federal-student-loan-
default-rates
Field, K. (2013a, August 12). 5 years on, renewed Higher-Ed Act has lost its luster. The
Chronicle of Higher Education. Retrieved from http://chronicle.com/article/5-Years-On-
Renewed-Higher-Ed/141043/
Field, K. (2013b, December 13). Senators say accreditors are ineffective and beset by conflict of
interest. The Chronicle of Higher Education. Retrieved from
http://chronicle.com/article/Senators-Say-Accreditors-Are/143589/
Fink, A. (2013). How to conduct surveys: A step-by-step guide (5th ed.). Los Angeles, CA: Sage
Publications, Inc.
Fischer, K., & Stripling, J. (2014, March 3). An era of neglect: How public colleges were
crowded out, beaten up, and failed to fight back. The Chronicle of Higher Education.
Retrieved from http://chronicle.com/article/An-Era-of-Neglect/145045/
Flexner, A. (1910). Medical education in the United States and Canada: A report to the Carnegie
Foundation for the advancement of teaching. Washington, DC: Science & Health
Publications.
Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014). A simple model for
learning improvement: Weigh pig, feed pig, weigh pig. Champaign, IL: National Institute
for Learning Outcomes Assessment. Retrieved from
http://learningoutcomesassessment.org/documents/Occasional_Paper_23.pdf
Fuller, M. B., & Lugg, E. T. (2012). Legal precedents for higher education accreditation. Journal
of Higher Education Management, 27(1), 47-88. Retrieved from
http://www.aaua.org/journals/pdfs/JHEM_-_Vol_27_Web_Edition_2012.pdf
Gaston, P. L. (2014). Higher education accreditation: How it’s changing, why it must. Sterling,
VA: Stylus Publishing, LLC.
Gillen, A., Bennett, D. L., & Vedder, R. (2010). The inmates running the asylum? An analysis of
higher education accreditation. Washington, DC: Center for College Affordability and
Productivity. Retrieved from
http://www.centerforcollegeaffordability.org/uploads/Accreditation.pdf
Goda, B. S., & Reynolds, C. (2010). Improving outcome assessment in information technology
program accreditation. Journal of Information Technology Education: Innovations in
Practice, 9(2010), 49-59.
Golden, D. (2006, November 13). Colleges, accreditors seek better ways to measure learning.
The Wall Street Journal. Retrieved from
http://online.wsj.com/article/SB116338508743121260.html
Gorman, M. (2005). A paper on education for librarianship and ALA’s Standards for
Accreditation of Master’s Program in Library & Information Studies, 1992. Retrieved
from http://mg.csufresno.edu/PAPERS/Accreditation_standards.pdf
Grajek, S. (2014). Top 10 IT issues, 2014: Be the change you see. EDUCAUSE Review,
49(March/April), 10-54. Retrieved from
https://net.educause.edu/ir/library/pdf/ERM1421.pdf
Grajek, S. (2015). Top 10 IT issues 2015: Inflection point. EDUCAUSE Review, 50(1), 10-48.
Retrieved from https://net.educause.edu/ir/library/pdf/ERM1511.pdf
Griffiths, R., Chingos, M., Mulhern, C., & Spies, R. (2014). Interactive online learning on
campus: Testing MOOCs and other platforms in hybrid formats in the University System
of Maryland. New York, NY: ITHAKA S+R. Retrieved from
http://sr.ithaka.org/sites/default/files/reports/S-
R_Interactive_Online_Learning_Campus_20140710.pdf
Hart Research Associates. (2006). How should colleges prepare students to succeed in today’s
global economy? Based on surveys of employers and recent college graduates. Washington,
DC: Author. Retrieved from
http://www.aacu.org/leap/documents/Re8097abcombined.pdf
Hart Research Associates. (2008). How should colleges assess and improve student learning?
Employers’ views on the accountability challenge. Washington, DC: Author. Retrieved
from http://www.aacu.org/leap/documents/2008_Business_Leader_Poll.pdf
Hart Research Associates. (2009a). Learning and assessment: Trends in undergraduate
education (A survey among members of the Association of American College and
Universities). Washington, DC: Author. Retrieved from
https://www.aacu.org/membership/documents/2009MemberSurvey_Part1.pdf
Hart Research Associates. (2009b). Trends and emerging practices in general education: Based
on a survey among members of The Association of American Colleges and Universities.
Washington, DC: Author. Retrieved from
http://www.aacu.org/membership/documents/2009MemberSurvey_Part2.pdf
Hart Research Associates. (2010). Raising the bar: Employers’ views on college learning in the
wake of the economic downturn. Washington, DC: Author. Retrieved from
https://www.aacu.org/leap/documents/2009_EmployerSurvey.pdf
Haycock, K. (2010). Predicting sustainability for programs in library and information studies:
Factors influencing continuance and discontinuance. Journal of Education for Library
and Information Science, 51(3), 130-141.
Hernon, P., & Schwartz, C. (2013). Literature on assessment for learning. In P. Hernon, R. E.
Dugan, & C. Schwartz (Eds.), Higher education outcomes assessment for the twenty-first
century (pp. 19-27). Santa Barbara, CA: Libraries Unlimited.
Hersh, R. H., & Keeling, R. P. (2013). Changing institutional culture to promote assessment of
higher learning. Champaign, IL: National Institute for Learning Outcomes Assessment.
Retrieved from
http://www.learningoutcomeassessment.org/documents/occasionalpaperseventeen.pdf
Hicks, D., & Given, L. M. (2013). Principled, transformational leadership: Analyzing the
discourse of leadership in the development of librarianship’s core competences. The
Library Quarterly, 83(1), 7-25.
Holley, R. P. (2003). The ivory tower as preparation for the trenches: The relationship between
library education and library practice. College and Research Libraries News, 54(3), 172-
175.
Hrabowski III, F. A., Suess, J., & Fritz, J. (2011). Assessment and analytics in institutional
transformation. EDUCAUSE Review, 46(5), 14-28. Retrieved from
https://net.educause.edu/ir/library/pdf/ERM1150.pdf
Hutchings, P. (2010). Opening doors to faculty involvement in assessment. Champaign, IL:
National Institute for Learning Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/PatHutchings_000.pdf
Inamdar, S. N., & Roldan, M. (2013). The MBA capstone course: Building theoretical, practical,
applied and reflective skills. Journal of Management Education, 37(6), 747-770.
Jankowski, N., & Makela, J. P. (2010). Exploring the landscape: What institutional websites
reveal about student learning outcomes assessment activities. Champaign, IL: National
Institute for Learning Outcomes Assessment. Retrieved from
http://www.learningoutcomesassessment.org/documents/NILOAwebscanreport.pdf
Jankowski, N. A., Ikenberry, S. O., Kinzie, J., Kuh, G. D., Shenoy, G. F., & Baker, G. R. (2012).
Transparency & accountability: An evaluation of the VSA College Portrait Pilot.
Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from
http://www.learningoutcomesassessment.org/documents/VSA_000.pdf
Jankowski, N. A., & Provezis, S. J. (2011). Making student learning evidence transparent: The
state of the art. Champaign, IL: National Institute for Learning Outcomes Assessment.
Retrieved from
http://www.learningoutcomesassessment.org/documents/TransparencyOfEvidence.pdf
Jaschik, S., & Lederman, D. (2014). The 2014 Inside Higher Ed survey of college & university
presidents. Washington, DC: Inside Higher Ed.
Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2015). NMC horizon report: 2015
higher education edition. Austin, TX: The New Media Consortium. Retrieved from
http://cdn.nmc.org/media/2015-nmc-horizon-report-HE-EN.pdf
Kelderman, E. (2013, December 2). Accreditors now find themselves under critical review. The
Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Accreditors-
Now-Find/143325/
Kelley, M. (2013). Can we talk about the MLS? A profession based more on apprenticeship
might work better. Library Journal, 138(8), 8.
Kim, D. J., Yue, K., Al-Mubaid, H., Hall, S. P., & Abeysekera, K. (2012). Assessing information
systems and computer information systems programs from a balanced scorecard
perspective. Journal of Information Systems Education, 23(2), 177-192.
Kinzie, J. (2010). Perspectives from campus leaders on the current state of student learning
outcomes assessment: NILOA focus group summary 2009-2010. Champaign, IL: National
Institute for Learning Outcomes Assessment. Retrieved from
http://learningoutcomesassessment.org/documents/FocusGroupFinal.pdf
Kirschenbaum, H. L., Brown, M. E., & Kalis, M. M. (2006). Programmatic curricular outcomes
assessment at colleges and schools of pharmacy in the United States and Puerto Rico.
American Journal of Pharmaceutical Education, 70(1), 1-12.
Klein-Collins, R. (2013). Sharpening our focus on learning: The rise of competency-based
approaches to degree completion. Champaign, IL: National Institute for Learning
Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/Occasional%20Paper%2020.pdf
Kniffel, L. (1999). Practitioners, educators seek library’s place in professional education.
American Libraries, 30(6), 12-15.
Krippendorff, K. (2010). Content analysis. In Encyclopedia of research design (pp. 234-239).
Thousand Oaks, CA: Sage Publications, Inc.
Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United
States. Higher education management and policy, 22(1), 1-20.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes
assessment in American higher education. Champaign, IL: National Institute for Learning
Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know
and can do: The current state of student learning outcomes assessment in U.S. colleges
and universities. Champaign, IL: National Institute for Learning Outcomes Assessment.
Retrieved from
http://www.learningoutcomeassessment.org/documents/2013%20Survey%20Report%20
Final.pdf
Kules, B., & McDaniel, J. (2010). LIS program expectations of incoming students’ technology
knowledge and skills. Journal of Education for Library and Information Science, 51(4),
222-232.
Kurpius, S. E. R., & Stafford, M. E. (2006). Testing and measurement: A user-friendly guide.
Thousand Oaks, CA: Sage Publications, Inc.
Kurzweil, M. (2015). Making assessment work: Lessons from the University of Pittsburgh. New
York, NY: ITHAKA S+R. Retrieved from
http://sr.ithaka.org/sites/default/files/reports/SR_Case_Study_Making_Assessment_Work
_Pitt_01-29-15.pdf
Lack, K. A. (2013). Current status of research on online learning in postsecondary education.
New York, NY: ITHAKA S+R. Retrieved from
http://sr.ithaka.org/sites/default/files/reports/ithaka-sr-online-learning-postsecondary-
education-may2012.pdf
Latrobe, K., & Lester, J. (2000). Portfolio assessment in the MLIS program. Journal of
Education for Library and Information Science, 41(3), 197-206.
Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise?
Washington, DC: American Council of Trustees and Alumni. Retrieved from
http://www.goacta.org/images/download/can_accreditation_live_up_to_its_promise.pdf
Lester, J., & Fleet, C. V. (2008). Use of professional competencies and standards documents for
curriculum planning in schools of library and information studies education. Journal of
Education for Library and Information Science, 49(1), 43-69.
Lin, Y., Michel, J., Aiden, E. L., Orwant, J., Brockman, W., & Petrov, S. (2012). Syntactic
Annotations for the Google Books Ngram Corpus. Proceedings of the 50th Annual
Meeting of the Association for Computational Linguistics, 2, 169-174. Retrieved from
http://www.petrovi.de/data/acl12b.pdf
Lind, C. J., & McDonald, M. (2003). Creating an assessment culture: A case study of success
and struggles. In S. E. Van Kollenburg (Ed.), A collection of papers on self-study and
institutional improvement: Vol. 3. Promoting student learning and effective teaching
(pp. 21-23). (ERIC Document Reproduction Service No. ED 476 673). Retrieved from
http://files.eric.ed.gov/fulltext/ED476673.pdf#page=22
Lopez, C. L. (2002). Assessment of student learning: Challenges and strategies. The Journal of
Academic Librarianship, 28(6), 356-367.
Lu, C., & Shulman, S. W. (2008). Rigor and flexibility in computer-based qualitative research:
Introducing the Coding Analysis Toolkit. International Journal of Multiple Research
Approaches, 2(1), 105-117.
Lumina Foundation. (2011). The degree qualifications profile. Indianapolis, IN: Author.
Lynch, B. P. (2008). Library education: Its past, its present, its future. Library Trends, 56(4),
931-953.
Lynch, B. P., & Smith, K. R. (2001). The changing nature of work in academic libraries. College
& Research Libraries, 62(5), 407-420.
Maatta, S. L. (2014). Placements & salaries 2014: Renaissance librarians. Library Journal,
139(17), 26-33.
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the
institution (2nd ed.). Sterling, VA: Stylus Publishing, LLC.
Marshall, J. G., Morgan, J. C., Rathbun-Grubb, S., Marshall, V. W., Barreau, D., Moran, B. B.,
Solomon, P., & Thompson, C. A. (2010). Toward a shared approach to program
evaluation and alumni career tracking: Results from the Workforce Issues in Library and
Information Science 2 study. Library Trends, 59(1-2), 30-42.
McCoy, J. P., Chamberlain, D., & Seay, R. (1994). The status and perceptions of university
outcomes assessment in economics. The Journal of Economic Education, 25(4), 358-366.
McKinney, R. D. (2006). Draft proposed ALA core competencies compared to ALA-accredited,
candidate, and precandidate program curricula: A preliminary analysis. Journal of
Education for Library and Information Science, 47(1), 52-77.
McLendon, M. K., Hearn, J. C., & Mokher, C. G. (2009). Partisans, professionals, and power:
The role of political factors in state higher education funding. The Journal of Higher
Education, 80(6), 686-713.
Michel, J., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., Brockman, W., The Google Books
Team, Pickett, J. P., Hoiberg, D., Clancy, D., Norvig, P., Orwant, J., Pinker, S., Nowak,
M. A., & Aiden, E. L. (2011). Quantitative analysis of culture using millions of digitized
books. Science, 331(6014), 176-182.
Middle States Commission on Higher Education. (2007). Student learning assessment: Options
and resources (2nd ed.). Philadelphia, PA: Middle States Commission on Higher
Education. Retrieved from
http://www.msche.org/publications/SLA_Book_0808080728085320.pdf
Miksa, F. L. (1988). The Columbia School of Library Economy, 1887-1888. Libraries &
Culture, 23(3), 249-280.
Miles, J. A. (2012). Jossey-Bass business and management reader: Management and
organization theory. Hoboken, NJ: Wiley.
Moran, B. B. (2013). From the COA Chair: Perspective. PRISM, 21(2), 5-6. Retrieved from
http://www.ala.org/offices/sites/ala.org.offices/files/content/accreditation/prp/prism/prism
archive/Prism_fall13.pdf
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation.
Nursing Research, 40(1), 120-123.
Mulvaney, P., & O’Connor, D. (2014). Opinion: Rethinking how we rate and rank MLIS
programs. Library Journal, 139(11), 37-40.
NACIQI. (2012). Report to the U.S. Secretary of Education, Higher Education Act
Reauthorization, Accreditation Policy Recommendations. Washington, DC: U.S.
Department of Education. Retrieved from
http://www2.ed.gov/about/bdscomm/list/naciqi-dir/2012-spring/teleconference-
2012/naciqi-final-report.pdf
New Leadership Alliance. (2014). Home. Retrieved from http://www.newleadershipalliance.org/
NILOA. (2012). About us. Champaign, IL: the Author. Retrieved from
http://www.learningoutcomeassessment.org/AboutUs.html
O’Connor, D., & Mulvaney, P. (2013). LIS accountability & accreditation: Collecting and
evaluating more data about ALA-accredited library schools could provide more
credibility for the MLS and its recipients. Library Journal, 138(14), 40-42.
OECD. (2014). Education at a glance 2014: OECD indicators. OECD Publishing. Retrieved
from http://dx.doi.org/10.1787/eag-2014-en
Orlans, H. O. (1974). Private accreditation and public eligibility (vols. 1-2). Washington, DC:
U.S. Office of Education. Retrieved from
http://hdl.handle.net/2027/mdp.39015033366355 and
http://hdl.handle.net/2027/mdp.39015033366363
Pallant, J. (2013). SPSS survival manual: A step by step guide to data analysis using IBM SPSS
(5th ed.). Maidenhead, UK: Open University Press/McGraw-Hill.
Palmer, J. C. (2012). The perennial challenges of accountability. In C. Secolsky & D. B. Denison
(Eds.), Handbook on measurement, assessment, and evaluation in higher education (pp.
57-70). New York, NY: Routledge.
Parsons, J. L. (1975). Accreditation in legal education and in education for librarianship, 1878-
1961. Law Library Journal, 68(2), 137-153.
Perrault, A. H., Gregory, V. L., & Carey, J. O. (2002). The integration of assessment of student
learning outcomes with teaching effectiveness. Journal of Education for Library and
Information Science, 43(4), 270-282.
Pringle, C., & Michel, M. (2007). Assessment practices in AACSB-accredited business schools.
Journal of Education for Business, 82(4), 202-211.
Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational
culture as related to external accreditation. Academic Leadership Journal, 8(2), 1-15.
Rettig, J. (2009). Educating the teachers: Member input sought about task force
recommendations. American Libraries, 40(5), 6.
Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree.
Planning for Higher Education, 40(3), 36-42.
Richman, W. A., & Ariovich, L. (2013). All-in-one: Combining grading, course, program, and
general education outcomes assessment. Champaign, IL: National Institute for Learning
Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/Occasional%20Paper%2019%20
FINAL.pdf
Robinson, W. C. (1993). Academic library collection development and management positions:
Announcements in College & Research Libraries News from 1980 through 1991. Library
Resources & Technical Services, 37(2), 134-146. Retrieved from
http://downloads.alcts.ala.org/lrts/lrtsv37no2.pdf
Ruppert, S. (Ed.) (1994). Charting higher education accountability: A sourcebook on state-level
performance indicators. Denver, CO: Education Commission of the States.
Salkind, N. J. (2011). Statistics for people who (think they) hate statistics (4th ed.). Los Angeles,
CA: Sage Publications, Inc.
Saracevic, T. (1994). Closing of library schools in North America: What role accreditation.
Libri, 44(3), 190-200.
Schwartz, M. (2013, January 4). USC launches master’s in library management. Library Journal.
Retrieved from http://lj.libraryjournal.com/2013/01/library-education/usc-launches-
masters-in-library-management/
Scripps-Hoekstra, L., Carroll, M., & Fotis, T. (2014). Technology competency requirements of
ALA-accredited library science programs: An updated analysis. Journal of Education for
Library and Information Science, 55(1), 40-54.
Seavey, C. A. (1989). Inspection of library training schools, 1914: The missing Robbins report.
Champaign, IL: University of Illinois, Graduate School of Library and Information
Science.
Shannon, D. M. (2008). School library media preparation program review: Perspectives of two
stakeholder groups. Journal of Education for Library and Information Science, 49(1), 23-
42.
Shavelson, R. J. (2007). A brief history of student learning assessment: How we got where we
are and a proposal for where to go next. Washington, DC: Association of American
Colleges and Universities. Retrieved from
http://web.stanford.edu/dept/SUSE/SEAL/Reports_Papers/Shavelson_AcadTransition.pd
f
Shaw, R. (1993). A backward glance: To a time before there was accreditation. North Central
Association Quarterly, 68(2), 323-335.
Simmons, H. L. (2000). Librarian as teacher: A personal view. College & Undergraduate
Libraries, 6(2), 41-44.
Smith, D. (2014). Improving the leadership skills of pre-service school librarians through
leadership pre-assessment. Journal of Education for Library and Information Science,
55(1), 55-68.
Smith, J. (2012, June 8). The best and worst master’s degrees for jobs. Forbes. Retrieved from
http://www.forbes.com/sites/jacquelynsmith/2012/06/08/the-best-and-worst-masters-
degrees-for-jobs-2/
Stansberry, S. L. (2006). Effective assessment of online discourse in LIS courses. Journal of
Education for Library and Information Science, 47(1), 27-37.
Student loan debt by age group. (2013, March 29). Federal Reserve Bank of New York.
Retrieved from http://www.newyorkfed.org/studentloandebt/
Suskie, L. A. (2009). Assessing student learning: A common sense guide (2nd ed.). San
Francisco, CA: Jossey-Bass.
Swigger, B. K. (2010). The MLS project: An assessment after sixty years. Lanham, MD:
Scarecrow Press.
Tammaro, A. M. (2006). Quality assurance in library and information science (LIS) schools:
Major trends and issues. Advances in Librarianship, 30, 389-423.
Taylor, M., & Heath, F. (2012). Assessment and continuous planning: The key to transformation
at the University of Texas Libraries. Journal of Library Administration, 52(5), 424-435.
Trapnell, J. E. (2007). AACSB International accreditation. Journal of Management
Development, 26(1), 67-72.
Trivett, D. A. (1976). Accreditation and institutional eligibility. Washington, DC: American
Association for Higher Education.
USDE. (2006). A test of leadership: Charting the future of U.S. higher education. A report of the
commission appointed by Secretary of Education Margaret Spellings. Washington, DC:
U.S. Department of Education. Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
USDE. (2014a). The Condition of Education. Washington, DC: U.S. Department of Education,
National Center for Education Statistics. Retrieved from
http://nces.ed.gov/programs/coe/indicator_cva.asp
USDE. (2014b). Accreditation in the United States. Washington, DC: U.S. Department of
Education. Retrieved from
http://www2.ed.gov/admins/finaid/accred/accreditation_pg13.html
United States Department of Labor. Bureau of Labor Statistics. (2014). Occupational outlook
handbook. Retrieved from http://www.bls.gov/ooh/home.htm
Vann, S. K. (1961). Training for librarianship before 1923: Education for librarianship prior to
the publication of Williamson’s report on Training for Library Service. Chicago, IL:
American Library Association.
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the impact of
professional accreditation on student experiences and learning outcomes. Research in
Higher Education, 48(2), 251-282.
VSA. (2014). About VSA. Retrieved from http://www.voluntarysystem.org/about
Wallace, D. P. (Ed.). (2012). Library and information science education statistical report 2012.
Chicago, IL: Association for Library and Information Science Education.
Weiss, G. L. (2002). The current status of assessment in sociology departments. Teaching
Sociology, 30(4), 391-402.
Weissburg, P. (2008). Shifting alliances in the accreditation of higher education: On the long
term consequences of the delegation of government authority to self-regulatory
organizations. (Doctoral dissertation). Retrieved from
http://digilib.gmu.edu/dspace/handle/1920/3423
Wergin, J. F. (2005). Waking up to the importance of accreditation. Change, 37(3), 35-41.
Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson & D.
Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality
assurance (pp. 27-38). Washington, DC: Teacher Education Accreditation Council.
White, G. W. (2000). Head of reference positions in academic libraries: A survey of job
announcements from 1990 through 1999. Reference & User Services Quarterly, 39(3),
265-272.
White, L. N., & Blankenship, E. F. (2007). Aligning the assessment process in academic libraries
for improved demonstration and reporting of organizational performance. College &
Undergraduate Libraries, 14(3), 107-119.
White, M. D., & Marsh, E. E. (2006). Content analysis: A flexible methodology. Library Trends,
55(1), 22-45.
White House. (2013a, August 22). Fact Sheet on the President’s Plan to Make College More
Affordable: A Better Bargain for the Middle Class. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan-
make-college-more-affordable-better-bargain-
White House. (2013b, February 12). The President’s plan for a strong middle class and a strong
America. Retrieved from
http://www.whitehouse.gov/sites/default/files/uploads/sotu_2013_blueprint_embargo.pdf
White House. (2013c, February 12). State of the Union Address. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/02/12/president-barack-obamas-state-
union-address
Who pays for public education? (2014, March 3). The Chronicle of Higher Education. Retrieved
from http://chronicle.com/article/Who-Pays-More/145063/
Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions: The
case of university accreditation. Human Organization, 51(4), 398-407.
Wilson, A. M., & Hermanson, R. (1998). Educating and training library practitioners: A
comparative history with trends and recommendations. Library Trends, 46(3), 467-504.
Wilson, V. (2011). Research methods: Content analysis. Evidence Based Library and
Information Practice, 6(4), 177-179.
Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing demands?
In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public,
academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Wolff, R. A. (2013, December 15). Exploding myths: What's right with regional accreditation
(and how we can make it even better). The Presidency. Retrieved from
http://www.acenet.edu/the-presidency/columns-and-features/Pages/Exploding-
Myths.aspx
Woolston, P. J. (2012). The costs of institutional accreditation: A study of direct and indirect
costs (Doctoral dissertation). Retrieved from
http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/51038
Wu, D. D. (2015). Online learning in postsecondary education: A review of the empirical
literature (2013-2014). New York, NY: ITHAKA S+R. Retrieved from
http://sr.ithaka.org/sites/default/files/reports/SR_Report_Online_Learning_Postsecondary
_Education_Review_Wu_031115.pdf
Yungmeyer, E. (1984). The ALA accreditation program. Journal of Education for Library and
Information Science, 25(2), 109-117.
Appendix A
Master’s Programs in Library and Information Studies in Canada and the United States
Appendix B
Survey Invitation Letter
Subject: Information Request on Learning Outcomes Assessment Practice
Date: September 24, 2014
Dear Professor «Director__ALO»:
My name is Win Shih. I am an Ed. D. student at the University of Southern California. I am conducting
research on learning outcomes assessment practices at master of library and information studies
(MLIS) programs in Canada and the United States. As the MLIS program director and/or accreditation
liaison, you have the best knowledge of outcomes assessment practice at your program. This email is an
invitation to participate in this research by completing a short survey on learning outcomes assessment at
your program. If there is another person at your program who is knowledgeable on this topic, please
forward this invitation to the person.
The purpose of my study is to understand outcomes assessment practice at MLIS programs at
the program and course levels; the types of assessment measures used; their association with ALA
accreditation standards; and the perceived value of outcomes assessment. To date, very few studies have
been published on this topic (see, for example, Applegate, 2006; Perrault, Gregory, & Carey, 2002).
Findings of this study will make a significant contribution to the understanding of current assessment
practice and may initiate a dialogue and future plans on effectively assessing student learning at MLIS
programs. The study is IRB-approved. The information you share will be secured locally, and your
confidentiality and anonymity will be strictly maintained.
The survey itself should take no more than 15 minutes to complete. You can access the survey at:
https://usclibraries.az1.qualtrics.com/SE/?SID=SV_4TtyMD4MyBksnat
Your participation is voluntary, and you may decline to answer any question without
explanation. Your assistance will be greatly appreciated. To thank you for your contribution, you can
receive a copy of the findings by contacting me directly. If you have any questions about the survey,
please do not hesitate to contact me at winyuans@usc.edu or 720-936-9097.
Best regards,
Win Shih
Doctor of Education (Ed. D.) Candidate
USC Rossier School of Education
Referenced above:
Applegate, R. (2006). Student learning outcomes assessment and LIS program presentation. Journal of
Education for Library and Information Science, 47(4), 324-336.
Perrault, A. H., Gregory, V. L., & Carey, J. O. (2002). The integration of assessment of student learning
outcomes with teaching effectiveness. Journal of Education for Library and Information Science,
43(4), 270-282.
Appendix C
Survey Invitation Follow-Up
Subject: Follow-up on Information Request on Learning Outcomes Assessment Practice
Date: October 7, 2014
Dear Professor «Director__ALO»:
Two weeks ago, an email was sent to you inviting your participation in a study of learning
outcomes assessment at your program. If you have already completed the survey, thank you
very much for your assistance. The response so far has been quite positive! If you have not yet
had the chance to contribute to the research, please take about 15 minutes to do so now.
You can access and submit the survey at:
https://usclibraries.az1.qualtrics.com/SE/?SID=SV_4TtyMD4MyBksnat
To date, very few studies have been done on this topic; findings of this IRB-approved study will
provide a better understanding of current outcomes assessment practice and its perceived value at
MLIS programs. As the MLIS program director and/or accreditation liaison, you have the best
knowledge of outcomes assessment practice at your program. Your participation is critical to the
success of this study. If another person at your program is more knowledgeable on this topic,
please feel free to forward this invitation to that person. Your confidentiality and anonymity will
be strictly maintained. To thank you for your contribution, you can receive a copy of the findings
by contacting me directly. If you have any questions about the survey, please do not hesitate to
contact me at winyuans@usc.edu or 720-936-9097.
Best regards,
Win Shih
Doctor of Education (Ed. D.) Candidate
USC Rossier School of Education
Appendix D
Survey Instrument
Survey on Learning Outcomes Assessment at MLIS Program
This survey asks accreditation liaison officers or coordinators at Master of Library and
Information Studies (MLIS) programs in the United States and Canada about their outcomes
assessment practices. To date, very few studies have been published on this topic. Findings of
this study will make a significant contribution to the understanding of current assessment
practice and may initiate a dialogue and future plans on effectively assessing student learning at
MLIS programs. The survey should take no more than 15 minutes to complete.
To thank you for your contribution, you can receive a copy of the findings by contacting Win
Shih, Ed. D. student at the University of Southern California, at winyuans@usc.edu.
INFORMED CONSENT FOR NON-MEDICAL RESEARCH
Welcome to the survey!
You are invited to participate in a dissertation research project conducted by Winyuan Shih, a
student in the Ed. D. program at the University of Southern California. The research project will
explore the extent of outcomes assessment practice at master of library and information studies
(MLIS) programs in Canada and the United States. Your participation is voluntary. You should
read the information below, and ask questions about anything you do not understand before
deciding whether to participate. Please take as much time as you need to read the consent
form. If you decide to participate, you will be asked to electronically sign this form. You may
also print a copy of this form.
Procedures
If you volunteer to participate, you will be asked to complete an online survey about outcomes
assessment practice at your MLIS program. All answers to the survey questions are voluntary,
and you can stop the survey at any point. The survey should take approximately 15 minutes to
complete.
Potential Risks and Discomforts
There is minimal risk to you in completing the survey. If you find any of the questions from the
survey uncomfortable you are free to skip them. You are also free to stop the survey at any
point.
Potential Benefits to Participants and/or to Society
There are no direct benefits for participants. However, it is hoped that through your
participation, the researcher will gain a better understanding of how outcomes assessment is
practiced at MLIS programs. To thank you for your contribution, you can receive a copy of the
findings of the study by contacting me directly.
Confidentiality
Your responses are strictly confidential and findings from the survey will only be reported in
aggregate form. Any identifiable information obtained in connection with this study will remain
confidential and will be disclosed only with your permission or as required by law. My
dissertation committee members and I may access the data. However, any identifiable
information will be removed before the committee members access the data.
All data will be collected and stored through Qualtrics’ secure servers. Access to the files
requires an authorized Qualtrics login and password. Users receive either read-only or
read-and-write authorization, depending on need, and access will be strictly limited to me.
Data will be exported via a .sav file and imported into SPSS software for analysis. Only I will
have access to this data. If questions of statistical analysis arise, my dissertation committee
members will be consulted. Upon completion of this research project, the Qualtrics survey and
collected data will be deleted from the Qualtrics servers.
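As an illustration of this export-and-import workflow, the following minimal sketch shows how a .sav export of this kind can be opened and inspected in Python rather than SPSS. The filename survey.sav is a hypothetical assumption, and the pyreadstat library stands in for the SPSS step; the sketch is not part of the study’s actual analysis procedure.

    import pyreadstat

    # Read a Qualtrics .sav export; returns the responses as a pandas
    # DataFrame plus a metadata object carrying variable and value labels.
    # "survey.sav" is a hypothetical filename used only for illustration.
    df, meta = pyreadstat.read_sav("survey.sav")

    # Inspect the question text attached to each variable, analogous to
    # the variable labels documented in the Appendix E codebook.
    print(meta.column_names_to_labels)

    # Frequency table for a single item, e.g., institutional affiliation,
    # coded as "institute" in the codebook (1 = Public, 2 = Private).
    print(df["institute"].value_counts(dropna=False))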
Compensation
There is no direct compensation for completing this survey.
Participation
Your participation is voluntary. Your refusal to participate will involve no penalty or loss of
benefits to which you are otherwise entitled. You may withdraw your consent at any time and
discontinue participation without penalty. You are not waiving any legal claims, rights or
remedies because of your participation in this research study.
Questions about the Research
Should you have any questions about the survey, please email me at winyuans@usc.edu or 720-
936-9097.
I. Demographic Information
institute 1. Which of the following best describes your institution’s affiliation?
Public (1)
Private / Independent (2)
title 2. Would you please indicate your official title:
liaison 3. Are you the accreditation liaison or coordinator at your program?
Yes (1)
No (2)
liaisonlast 4. Were you the accreditation liaison or coordinator at the time of the program’s last
accreditation review?
Yes (1)
No (2)
Other, please specify (3) ____________________
II. Program Goals and Measurable Objectives
setoflos 5. Does your program have a common set of intended learning goals or outcomes that
apply to all your students?
Yes (1)
Under development now (3)
No (2)
Not sure (4)
wasseplan 6. Does your program have a written programmatic assessment plan with stated goals,
desired learning outcomes, and measuring mechanisms?
Yes (1)
Under development now (3)
No (2)
Not sure (4)
(Answer if Question 6 = Yes)
assepolicy 6a. If Yes, has the assessment plan been formally adopted as a policy?
Yes (1)
Under review now (3)
No (2)
Not sure (4)
(Answer if Question 6 = Yes)
drive 6b. Who/what would you say is driving the development of the programmatic assessment
plan at your program?
Dean, Chair, or Program Director (1)
Administrative Officer other than the Dean, Chair, or Program Director (2)
Faculty (3)
Students (4)
ALA accreditation standards (5)
Other accreditation body, please specify: (6) ____________________
Other, please specify (7) ____________________
alacore 7. To what degree has your program adopted ALA’s Core Competencies of
Librarianship?
Not at All (1)
Some (2)
Quite a bit (3)
Very Much (4)
Uncertain (5)
othercomp 8. Please list additional knowledge and competencies statements from other
professional organizations that your MLIS program adopts.
III. Assessment Practice
assecomm 9. Do you have a committee responsible for the practice of learning outcomes
assessment?
Yes. Please provide committee name: (1) ____________________
No (2)
(Answer if Question 9 = Yes)
assecomm 9a. If Yes, what is the composition and number of representatives on the Assessment
Committee?
______ Faculty (1)
______ Students (2)
______ Practitioners (3)
______ Administrators (4)
______ Other, please specify: (5)
(Answer if Question 9 = No)
nassecomm 9b. If No, which of the following are involved in the process of programmatic
assessment at your program, and in what numbers?
______ Faculty (1)
______ Students (2)
______ Practitioners (3)
______ Administrators (4)
______ Other, please specify: (5)
curricomm 10. Do you have a Curriculum / Program Committee?
Yes. Please provide committee name: (1) ____________________
No (2)
(Answer if Question 10 = Yes)
10a. If yes, what is its composition and number of representatives on the Curriculum Committee?
______ Faculty (1)
______ Students (2)
______ Practitioners (3)
______ Administrators (4)
______ Other, please specify: (5)
IV. Assessment Measures
11. How many students in your program participate in the following types of assessments that
you use to assess student learning?
Response scale: None (1), Few (2), Some (3), About Half (4), Most (5), All (6), Uncertain (0)
- Student class assignments or exams (1)
- Capstone or culminating, professionally focused project / demonstration (2)
- Portfolio (a purposeful collection of student work intended to demonstrate achievement of learning objectives) (5)
- Comprehensive examination (oral or written) (3)
- Thesis (4)
- Performance assessments other than grades in field experience (e.g., internship, practicum, student teaching, scenarios, service-learning) (6)
- Standardized certification examinations or professional licensure examinations (e.g., school library media certification) (8)
- Exit survey or interview (9)
- Student surveys, interviews, focus groups, or forums (10)
- National student surveys (e.g., NSSE, CSEQ, SSI). Please specify: (11)
- Other measures, please specify: (12)
12. To what extent has your program used the following measures to assess student or program
learning outcomes?
Response scale: Not at All (1), Some (2), Quite a Bit (3), Very Much (4)
- Rubrics to assess student work (1)
- Alumni surveys, interviews, or focus groups (2)
- Internship supervisor evaluations (3)
- Internship supervisor surveys, interviews, or focus groups (4)
- Employer surveys, interviews, or focus groups (5)
- Retention or graduation rate; length of time to degree (6)
- Job placement rate (7)
- Curriculum mapping (a process for collecting and recording curriculum-related data that identifies core skills and content taught, processes employed, and assessments used) (9)
- Other measures, please specify: (10)
13. To what extent has your program used student learning outcomes results for each of the
following?
Response scale: Not at All (1), Some (2), Quite a Bit (3), Very Much (4)
- Preparing program presentation or self-studies for programmatic or specialized accreditation (e.g., ALA accreditation) (1)
- Preparing program presentation or self-studies for regional institutional accreditation (e.g., North Central Association of Colleges and Schools) (2)
- Evaluating overall program performance (9)
- Revising program learning goals (3)
- Improving program decision making and planning (11)
- Reviewing or revising program curriculum (6)
- Supporting budget requests or allocation (12)
- Improving instruction or pedagogy (7)
- Determining student readiness for admission to the program (4)
- Determining student readiness for later courses in the program (5)
- Designing more effective student orientation or support (13)
- Evaluating faculty or staff performance (8)
- Increasing the connections between in-class and field learning (14)
- Enhancing library resources (16)
- Improving IT support (17)
- Other, if applicable (15)
V. Resources
personnel 14. Does your program have one or more personnel positions (professional or staff)
dedicated to supporting assessment?
Yes, number of FTE (1) ____________________
No (2)
(Answer if Question 14 = Yes)
pnstatus 14a. If Yes, is this person a:
Tenured faculty member (1)
Non-tenure track faculty member (2)
Staff member (3)
Other, please specify (4) ____________________
involve 15. How many of your program faculty and staff are involved in student learning
outcomes assessment activities beyond grading?
None (1)
Few (2)
Some (3)
About Half (4)
Most (5)
All (6)
Uncertain (9)
VI. Accreditation Value
alaccred 16. Is your program accredited by the American Library Association?
Yes, when was the last full ALA-accreditation review (month / year)? (1)
____________________
No (2)
Other, please specify: (3) ____________________
(Answer if Question 16 = Yes)
benefit 16a. If yes, what are the most important benefits to your program of going through the
ALA accreditation process?
(Answer if Question 16 = No)
notaccre 16b. If No, what are your reasons for not choosing to pursue or maintain ALA
accreditation?
otheraccr 17. Is your program accredited by other organization(s)?
Yes (1)
No (2)
Not sure (3)
(Answer if Question 17 = Yes)
otheraccr2 17a. If your program is accredited by other organization(s), please list all and year of
accreditation:
otherslos 18. What else do you think we should know about student learning outcomes
assessment in your program?
Appendix E
Survey on Learning Outcomes Assessment at MLIS Program Codebook
Each entry below gives the item number, variable name, variable label, and coding instructions.

0. ID: Identification number. (Number assigned to each survey)

I. Demographic Information
1. institute: Which of the following best describes your institution’s affiliation? (1 = Public; 2 = Private / Independent)
2. title: Would you please indicate your official title.
3. liaison: Are you the accreditation liaison or coordinator at your program? (0 = No; 1 = Yes)
4. liaisonlast: Were you the accreditation liaison or coordinator at the time of the program’s last accreditation review? (0 = No; 1 = Yes; 2 = Other)
II. Program Goals and Measurable Objectives
5. setoflos: Does your program have a common set of intended learning goals or outcomes that apply to all your students? (0 = No; 1 = Yes; 3 = Under development now; 4 = Not sure)
6. wasseplan: Does your program have a written programmatic assessment plan with stated goals, desired learning outcomes, and measuring mechanisms? (0 = No; 1 = Yes; 3 = Under development now; 4 = Not sure)
6a. assepolicy: If Yes, has the assessment plan been formally adopted as a policy? (0 = No; 1 = Yes; 3 = Under development now; 4 = Not sure)
6b. drive: Who/what would you say is driving the development of the programmatic assessment plan at your program? (1 = Dean, Chair, or Program Director; 2 = Administrative Officer other than the Dean, Chair, or Program Director; 3 = Faculty; 4 = Students; 5 = ALA accreditation standards; 6 = Other accreditation body, please specify; 7 = Other, please specify)
7. alacore: To what degree has your program adopted ALA’s Core Competencies of Librarianship? (1 = Not at all; 2 = Some; 3 = Quite a bit; 4 = Very much; 5 = Uncertain)
8. othcomp: Please list additional knowledge and competencies statements from other professional organizations that your MLIS program adopts. (Open-ended question)
III. Assessment Practices
9. assecomm: Do you have a committee responsible for the practice of learning outcomes assessment? (0 = No; 1 = Yes)
9a. If Yes, what is the composition and number of representatives on the Assessment Committee? (each coded 0 = Not selected; 1 = Selected)
  9aa. acfaculty: Faculty
  9ab. acstudent: Students
  9ac. acpract: Practitioners
  9ad. acadmin: Administrators
  9ae. acother: Other, please specify
9b. If No, which of the following are involved in the process of programmatic assessment at your program, and in what numbers? (each coded 0 = Not selected; 1 = Selected)
  9ba. nacfaculty: Faculty
  9bb. nacstudent: Students
  9bc. nacpract: Practitioners
  9bd. nacadmin: Administrators
  9be. nacother: Other, please specify
10. curricomm: Do you have a Curriculum / Program Committee? (0 = No; 1 = Yes)
10a. If yes, what is the composition and number of representatives on the Curriculum Committee? (each coded 0 = Not selected; 1 = Selected)
  10aa. ccfaculty: Faculty
  10ab. ccstudent: Students
  10ac. ccpract: Practitioners
  10ad. ccadmin: Administrators
  10ae. ccother: Other, please specify
IV. Assessment Measures and Use of Assessment Results
11. How many students in your program participate in the following types of assessments that you use to assess student learning? (each item coded 1 = None; 2 = Few; 3 = Some; 4 = About half; 5 = Most; 6 = All; 9 = Uncertain)
  11a. asigexam: Student class assignments or exams
  11b. capstone: Capstone or culminating, professionally focused project / demonstration
  11c. portfolio: Portfolio (a purposeful collection of student work intended to demonstrate achievement of learning objectives)
  11d. compexam: Comprehensive examination (oral or written)
  11e. thesis: Thesis
  11f. otherpa: Performance assessments other than grades in field experience (e.g., internship, practicum, student teaching, scenarios, service learning)
  11g. certexam: Standardized certification examinations or professional licensure examinations (e.g., school library media certification)
  11h. exitsurv: Exit survey or interview
  11i. surveys: Student surveys, interviews, focus groups, or forums
  11j. natstusur: National student surveys (e.g., NSSE, CSEQ, SSI). Please specify:
  11k. othermea: Other measures, please specify:
12. To what extent has your program used the following measures to assess student or program learning outcomes? (each item coded 1 = Not at all; 2 = Some; 3 = Quite a bit; 4 = Very much)
  12a. rubrics: Rubrics to assess student work
  12b. alumnis: Alumni surveys, interviews or focus groups
  12c. internse: Internship supervisor evaluations
  12d. internsv: Internship supervisor surveys, interviews or focus groups
  12e. employers: Employer surveys, interviews or focus groups
  12f. retenrat: Retention or graduation rate; length of time to degree
  12g. jobrate: Job placement rate
  12h. currmap: Curriculum mapping (a process for collecting and recording curriculum-related data that identifies core skills and content taught, processes employed, and assessments used)
  12i. othermea2: Other measures, please specify:
13. To what extent has your program used student learning outcomes results for each of the following? (each item coded 1 = Not at all; 2 = Some; 3 = Quite a bit; 4 = Very much)
  13a. alapresent: Preparing program presentation or self-studies for programmatic or specialized accreditation (e.g., ALA accreditation)
  13b. instacc: Preparing program presentation or self-studies for regional institutional accreditation (e.g., North Central Association of Colleges and Schools)
  13c. overall: Evaluating overall program performance
  13d. proglg: Revising program learning goals
  13e. decmak: Improving program decision making and planning
  13f. curricul: Reviewing or revising program curriculum
  13g. budget: Supporting budget requests or allocation
  13h. pedagog: Improving instruction or pedagogy
  13i. adminis: Determining student readiness for admission to the program
  13j. latercour: Determining student readiness for later courses in the program
  13k. orientat: Designing more effective student orientation or support
  13l. perform: Evaluating faculty or staff performance
  13m. fieldlearn: Increasing the connections between in-class and field learning
  13n. libresour: Enhancing library resources
V. Organizational Supports
14. personnel: Does your program have one or more personnel positions (professional or staff) dedicated to supporting assessment? (0 = No; 1 = Yes)
14a. pnstatus: If Yes, is this person a: (1 = Tenured faculty member; 2 = Non-tenure track faculty member; 3 = Staff member; 4 = Other, please specify)
15. involve: How many of your program faculty and staff are involved in student learning outcomes assessment activities beyond grading? (1 = None; 2 = Few; 3 = Some; 4 = About half; 5 = Most; 6 = All; 9 = Uncertain)

VI. Benefits of Assessment and Accreditation
16. alaccred: Is your program accredited by the American Library Association? (0 = No; 1 = Yes, when was the last full ALA accreditation review (month / year)?; 2 = Other, please specify)
16a. benefit: If yes, what are the most important benefits to your program of going through the ALA accreditation process? (Open-ended question)
16b. notaccre: If No, what are your reasons for not choosing to pursue or maintain ALA accreditation? (Open-ended question)
17. otheraccr: Is your program accredited by other organization(s)? (0 = No; 1 = Yes; 2 = Not sure)
17a. otheraccr2: If your program is accredited by other organization(s), please list all and year of accreditation: (Open-ended question)
18. otherslos: What else do you think we should know about student learning outcomes assessment in your program? (Open-ended question)
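To illustrate how the codebook’s value assignments could be applied during analysis, the sketch below recodes a few items into human-readable labels with pandas. The example data are invented; only the variable names and value labels are taken from the codebook above.

    import pandas as pd

    # Hypothetical raw responses, coded as specified in the codebook.
    df = pd.DataFrame({
        "liaison":  [1, 0, 1],   # 0 = No, 1 = Yes
        "setoflos": [1, 3, 4],   # 0 = No, 1 = Yes, 3 = Under development now, 4 = Not sure
        "alacore":  [2, 4, 1],   # 1 = Not at all ... 4 = Very much, 5 = Uncertain
    })

    # Value-label dictionaries transcribed from the codebook entries.
    labels = {
        "liaison":  {0: "No", 1: "Yes"},
        "setoflos": {0: "No", 1: "Yes", 3: "Under development now", 4: "Not sure"},
        "alacore":  {1: "Not at all", 2: "Some", 3: "Quite a bit",
                     4: "Very much", 5: "Uncertain"},
    }

    # Map each coded column to a labeled companion column for reporting.
    for col, mapping in labels.items():
        df[col + "_label"] = df[col].map(mapping)

    print(df)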
Appendix F
Program Presentation Analysis Coding Scheme
Each entry below gives the key element, its description, and the section(s) of the program presentation in which it appears.

- Learning outcomes statement: Statements describing the knowledge, skills, and behaviors that students acquire as they progress through the program, e.g., program objectives, learning outcomes, core competencies. (I.2, I.3, II.1, II.7)
- Systematic process: Description of the systematic and ongoing assessment process. (I.1)
- Competencies & Standards: Competencies adopted by the program, e.g., ALA Core Competencies, Society of American Archivists Guidelines, AASL Standards for School Library Media Specialists, SLA Competencies. (II.5)
- Assessment / Curriculum Committee: Description of the assessment or curriculum committee. (I.1, I.3, II.7, IV.6)
- Participants: Participants in the outcomes assessment practices, including membership in the assessment and curriculum committees, e.g., alumni, employers, faculty, internship supervisors, practitioners, students. (II.7, IV.4)
- Measurements: Assessment measures used by the program, e.g., course assignment, rubric, survey, focus group, interview, internship, GPA, retention/graduation rate, placement rate. (IV.4)
- Student evaluation: Description of or statement on the student evaluation policy or process. (IV.4)
- Application of assessments: Applications of assessment results, e.g., revision of curriculum and courses, development of new concentrations, pedagogical improvement, changes to the class schedule. (II.7, IV.6)
- Overall assessment of student achievements (final project): Final projects required of all students, e.g., capstone project, comprehensive exam, internship, portfolio, thesis. (II.7, IV.4)
- Organizational supports: Resources allocated to assessment activities, including personnel. (III.7, V.4)
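As a sketch of how this coding scheme could support the content analysis of the program presentations, the following Python snippet tallies how often each key element appears across a set of coded documents. The coded_presentations data are invented for illustration; only the key element names come from the scheme above.

    from collections import Counter

    # Hypothetical coding results: the key elements from the scheme
    # observed in each program presentation.
    coded_presentations = {
        "Program A": ["Learning outcomes statement", "Measurements", "Participants"],
        "Program B": ["Learning outcomes statement", "Systematic process", "Measurements"],
        "Program C": ["Measurements", "Application of assessments"],
    }

    # Count the number of presentations exhibiting each key element.
    tally = Counter(element
                    for elements in coded_presentations.values()
                    for element in elements)

    for element, count in tally.most_common():
        print(f"{element}: {count} of {len(coded_presentations)} presentations")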
Abstract
There is an increasing emphasis on learning outcomes assessment in the accreditation process in higher education in general and in library education specifically. This mixed-methods study investigated the practice of outcomes assessment at master’s programs in library and information studies (MLIS) accredited by the American Library Association in the United States and Canada. Six salient themes emerged from the survey responses of accreditation liaison officers and the content analysis of 12 program presentations of MLIS programs. First, outcomes assessment has taken hold at MLIS programs: 93% of programs have adopted a common set of learning goals and outcomes, and 79% have developed a written assessment plan. Second, accreditation is the primary driver of MLIS assessment efforts, while program directors, faculty, and assessment and curriculum committees provide leadership in its practice. Third, MLIS programs employed a diverse range of tools for measuring learning outcomes; course assignments, course evaluations, rubrics, internship ratings, portfolios, and surveys are the most commonly used direct and indirect measures. Fourth, MLIS programs applied assessment results extensively for improving programs, curricula, courses, and instruction.